How to evaluate the research level of a paper before any publication?
There are three different cases in which the research level of a paper must be evaluated before publication:
- for the author: to choose the most appropriate journal,
- for the referee: to recommend acceptance or rejection for a given journal,
- for the editor: to make the final decision.
If we compare this process to justice, a referee is like a lawyer and an editor is like a judge.
Question: How can an author, a referee, and an editor (respectively) evaluate the research level of a paper?
We are asking here about the purely research-related level of a paper, so we assume the paper is original, correct, and well written. We also assume the paper is not too specialized if the journal is general-audience, and is on-topic if the journal is specialized (and likewise for any other such requirement). Finally, where it matters, I am mainly interested in mathematical papers.
A utilitarian approach could be to estimate how many papers will cite this paper in the next five years (self-citations excluded). The author could then choose a journal whose Article Influence score (after renormalization) matches this number, and the referee could check whether it matches the chosen journal. But then one would need to know how to make such an estimate... A minimal sketch of the matching step is given below.
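To make the matching idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the journal names, the Article Influence scores, and especially the renormalization constant (a citations-per-AI-point conversion that would have to be calibrated per field). It only illustrates the matching step; the hard part, producing the citation estimate itself, is exactly what remains open.

```python
# Illustrative sketch only: journal names, scores, and the
# renormalization constant below are hypothetical.

# Article Influence scores for some candidate journals (made up).
journals = {
    "Journal A": 3.2,
    "Journal B": 1.5,
    "Journal C": 0.6,
}

# Assumed renormalization: expected 5-year citations per unit of
# Article Influence score. This constant would have to be
# calibrated per field; 4.0 is a placeholder.
CITATIONS_PER_AI_POINT = 4.0

def best_matching_journal(estimated_citations: float) -> str:
    """Return the journal whose renormalized Article Influence score
    is closest to the estimated 5-year citation count (with
    self-citations already excluded from the estimate)."""
    return min(
        journals,
        key=lambda j: abs(journals[j] * CITATIONS_PER_AI_POINT
                          - estimated_citations),
    )

# Example: a paper expected to gather ~7 external citations in 5 years.
print(best_matching_journal(7.0))  # -> "Journal B" (1.5 * 4.0 = 6.0)
```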
Of course, an author/referee/editor can evaluate the paper subjectively, but subjectivity varies with emotions; it can be manipulated, and the process can become political. I wonder whether there is an objective way to proceed, or at least whether we can add a bit of rationality to this process. Consider the process of justice: it undeniably contains a subjective part, but also a rational one, called the law.
peer-review journals mathematics paper-submission evaluation
asked Mar 19 at 12:56 by Sebastien Palcoux
2 Answers
While I think your "justice" metaphor is wrong, the answer in all cases is experience. As a reviewer, you need experience in the field: you need to know what has been done and what important work remains to be done. As an editor, you need either that field experience or experience with your reviewers, knowing who is trustworthy and who is not. As an early-career author you don't have any experience beyond some within the field, but it will grow.
For everyone here, the way you gain experience is to make best-effort attempts at whatever job you have and evaluate the response. As they say in engineering and computer science, "Good design comes from experience. Experience comes from bad design."
If you are looking for an algorithm, then I would say it doesn't exist. It might be possible in theory to construct one with an AI looking at tens of thousands of interactions, but it might show bias, as many such systems have been shown to do.
But the system as a whole just depends on (nearly) everyone trying to do their best with what they have in front of them in a world of imperfect information.
In theory, theory is the same as practice. But not in practice. - Fnord Bjørnberger
answered Mar 19 at 13:19 by Buffy
Indeed, particularly the bit on algorithms. Another good point would be from Deming (although in my experience most quality people forget this one): just because you can’t measure it doesn’t mean you can’t manage it. – Jon Custer, Mar 19 at 13:32
In the vast majority of cases, what a reviewer needs is exactly what you said, "to know what has been done and what is important yet to do." But I've also encountered a few cases where my reaction to a paper is to forget that and just say "wow, what a great idea!" – Andreas Blass, Mar 19 at 22:40
I think you have a misconception here about the point of peer review. The review process is not meant to predict how many citations a paper might get. That said, I have experienced in the past that some journals ask the reviewers, after the review is finished, whether a paper should be highlighted in the current issue of the journal or on the journal website.
As a reviewer, I can make a personal judgement about whether a paper should be highlighted, but science has become so broad, interdisciplinary, and diversified that I often vote not to highlight a paper, because it would be a subjective judgement. I also often leave the decision of whether a manuscript fits the scope of a journal to the editor (and care only about the quality of the reported research); this is literally not my business as an unpaid reviewer. It also reflects my own habit of choosing which papers to read by factors that do not bias me too much, rather than things like the journal impact factor.
It simply follows from scientific history that we often cannot predict the impact of fundamental research. I also think there is no need to do what you ask or suggest for submitted manuscripts before publication, because the timely highlighting of important research happens at conferences and, later, among the peers and readers of the community via review articles or even blogs, where there are many more judges, who are more experienced and more objective, than before publication.
answered Mar 19 at 15:27 by user847982
The most selective journals definitely review on impact. It is not sufficient to have a 'correct' paper scientifically. Editors can do a lot of that gating themselves, but it's typical for them to ask the reviewers, who are more expert in the specific field, for their opinions. – Bryan Krause, Mar 19 at 15:40
@BryanKrause Of course, so it remains a guessing game, and Nature and Science also publish many low-impact papers. But as a reviewer, I don't work, and don't want to work, for the business/impact model of a journal; that's my point here. How prestigious journals legitimize their impact, and whether it really helps the progress of the community or rather creates "citation circles/cartels" among researchers/groups and hyped trends, is another question. – user847982, Mar 19 at 15:49