Peer review is the backbone of high-quality scientific publishing. Although the idea that only articles approved by a set of anonymous nitpickers can ever see the light of publication in "serious" journals is old and perfectible, there is currently no valid alternative for identifying verified, rigorous scientific work, and for filtering out unsubstantiated claims and methodologically unsound results - the scientific analogue of "fake news".
In practice, the method works as follows. A scientific journal receives a submission from a scientist or team of scientists, or identifies suitable authors for a review paper. Once a preliminary version of the article is produced, the editors of the journal look for established scientists who work in the same area of research and ask them to read it. These reviewers must then decide on one of four possible courses:
1 - accept the article for publication;
2 - ask for minor revisions, according to a provided list of questions, comments, and requests for changes;
3 - ask for a major rewrite, a change in methodology, or other quite significant modification of the contents;
4 - reject the article tout court.
In cases 2 and 3 the authors are asked to answer the questions, implement the changes, work on the weak points, and finally resubmit their manuscript for a second round of screening; in cases 1 and 4 the outcome is final and no further iteration takes place.
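For the programmatically inclined, the iteration can be condensed into a minimal sketch in Python. The names here (Decision, handle_manuscript, the callbacks) are my own illustrative inventions, not part of any real editorial system:

```python
from enum import Enum

class Decision(Enum):
    ACCEPT = 1          # publish as-is
    MINOR_REVISION = 2  # answer comments, make small changes, resubmit
    MAJOR_REVISION = 3  # significant rewrite or new methodology, resubmit
    REJECT = 4          # no further iteration

def handle_manuscript(get_reviewer_decision, revise):
    """Loop until the reviewers reach a terminal decision."""
    while True:
        decision = get_reviewer_decision()
        if decision in (Decision.ACCEPT, Decision.REJECT):
            return decision   # cases 1 and 4: the outcome is final
        revise(decision)      # cases 2 and 3: authors rework and resubmit
```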
As you can see, the job of the reviewer is complex: he or she is called to carefully scrutinize the material and think deeply about whether it is sound, whether it can and needs to be improved, or whether it is not worth publishing at all. The choice can be tough, and the time spent reviewing a 10-page document can be quite significant if the job is done accurately.
If you add to the above the fact that usually no compensation is provided to the reviewer of a scientific article, you will not be surprised to learn that scholars quite often refuse to review papers, unless they are very interested in the subject or have some other reason for doing the job. There is also a practical deterrent: the more review requests you accept, the higher the chance that further requests will be directed to you. You can easily end up spending 50% of your time in this unrewarding occupation.
Because of the above, editors have a very hard time finding reviewers. If you manage a journal that publishes several articles per month, this becomes a true nightmare. The folks at Elsevier, the company that publishes a journal I am an editor of (Reviews in Physics), use an application called Evise to manage the paper production process. Using that application, editors can search for suitable reviewers and invite them very quickly.
I think it would be interesting to discuss here whether the model is obsolete or still adequate, and whether there is some better way to ensure that sound scientific information gets properly published and that scientific consensus is used to filter out the background noise. It would also be quite interesting to discuss whether this system is actually an instrument of power, meant to control what counts as scientific consensus on any topic, or whether instead the scientific community is such a well-organized and democratic environment that no such problem exists. But I want to tell you something else here today.
The thing is - the system is fallible. Sometimes, maybe out of frustration or lack of time to do a good job, editors send out many review requests at once, to scholars who are less than qualified for the task at hand. Also, automated programs like Evise, which suggest who could be a good reviewer for a particular paper, do their job by matching keywords to scholars listed in databases. Those databases are in turn built by other automated systems, which may sometimes misclassify a scholar's expertise.
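To make the failure mode concrete, here is a minimal sketch of how keyword-based matching might work, under my own assumptions - this is not Evise's actual algorithm, and the database entries are invented:

```python
# Hypothetical sketch of keyword-based reviewer suggestion - NOT Evise's real code.
def suggest_reviewers(paper_keywords, expertise_db, top_n=5):
    """Rank scholars by how many of the paper's keywords match their profile."""
    scores = {scholar: len(set(paper_keywords) & set(kws))
              for scholar, kws in expertise_db.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A single mislabeled database entry is enough to misdirect an invitation:
expertise_db = {
    "particle_physicist": ["hadron collider", "ATLAS", "diabetes"],  # wrong keyword
    "endocrinologist":    ["diabetes", "insulin", "metabolism"],
}
print(suggest_reviewers(["diabetes", "insulin"], expertise_db))
# -> ['endocrinologist', 'particle_physicist']: the physicist still makes the list.
```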
So it sometimes happens that academics receive invitations to review papers they have no connection with. It happened to me, and it happened to Daniel Whiteson, who besides being a colleague who works in the ATLAS collaboration and a professor at the University of California, Irvine, is the acclaimed author of "We Have No Idea", a successful popular-science book that uses cartoons to explain the current mysteries of fundamental science (take my advice: buy the book).
In this particular instance Daniel was contacted by an Elsevier editor of a journal publishing diabetes research, who asked him to review an article on that subject. He replied somewhat snappily that he was unqualified to review it, and that the fact that he had been picked as a reviewer was tantamount to classifying the journal as garbage. There followed an entertaining exchange with the editor, who claimed that since Daniel was unqualified he should refrain from making assessments of the journal's quality, with some escalation to threats of legal action from the Elsevier side (I hate that).
I believe the whole thing was due to email being such an ineffective medium for conveying sentiments, as well as nuances such as irony. Over a glass of wine at a bar the conversation would have ended quite differently. But I also believe there is truth both in Daniel's assessment (unfortunately) and in the editor's stern defense of the system. The problem is manifold, as I explained above: the system is fallible, editors are pressed to find reviewers, reviewers do not get any recognition for their work, and the result indeed threatens to jeopardize the quality of the publications that appear in print.
My solution? I offered it years ago. I suggested that once a publication is accepted, the reviewer's name should appear alongside it if he or she so desires, and that the number of reviews performed should enter the list of gauges that define a scholar's academic productivity. To make this work, one would need a system to distribute review requests fairly, but this is not too hard. The problem is that we live in a deep potential well of current habits, evaluation systems, and publishing models. On the other hand, things can be changed.
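What could a fair distribution look like? Here is a minimal sketch under my own assumptions - the policy, field names, and matching rule are invented for illustration: among the scholars whose expertise matches the paper, the request goes to the one with the fewest reviews on record, and the completed review is credited publicly.

```python
# Hypothetical sketch of a fair review-assignment policy, not an existing system.
def assign_review(paper_keywords, scholars):
    """scholars: list of dicts with 'name', 'keywords', 'reviews_done'."""
    qualified = [s for s in scholars
                 if set(paper_keywords) & set(s["keywords"])]
    if not qualified:
        return None  # escalate to a human editor instead of spamming invitations
    chosen = min(qualified, key=lambda s: s["reviews_done"])
    chosen["reviews_done"] += 1  # counts toward the scholar's public record
    return chosen["name"]
```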
See, for instance, how open-access publishing has become a thing. Money that was once spent by libraries on subscription-only journals can today be spent by authors to publish their articles in open-access journals. The publisher still makes its profit - often a huge one, but that's another issue - and things work better. The remaining issue is the slowness of funding agencies in realizing that they need to give researchers and scholars an allowance for open-access publishing fees, drawing on the funds allotted to the management of institutional libraries. It was a non-adiabatic transition, but it has taken place anyway.
The debate continues, and the challenges posed by the world of scientific publishing, with its underworld of predatory journals, silly metrics that value citations for their own sake, and groupthink hegemony, will stay with us for the foreseeable future, unless we find a very good way to move to a better overall model.