Understanding the Limits of Peer Review

One of the defining features of academic publishing – and the one that gives it its credibility – is peer review. Few will dispute the importance of this process; however, many agree that the current system has its flaws. As the greater publishing community looks to address some of these shortcomings with new models of peer review, we thought we’d take a look at some of the limitations of the practice to get a realistic sense of what we should and shouldn’t expect of the current system.

What peer reviewers don’t do:

Check Statistics

Peer reviewers are just that: your peers. They are usually not statisticians and thus not experts in all types of statistical analysis. They likely know the common and appropriate tests for the types of data generated in your field, but will generally only be able to identify obvious errors in test choices or results. Peer reviewers don’t re-run calculations (unless a glaring error shows up and they know how to check it quickly), and they are not expected to do so.
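
To make the point concrete, here is a minimal sketch of the kind of re-check that falls outside a reviewer's role but is easy for authors to run themselves before submission. This is purely illustrative: it assumes Python with SciPy installed, and the measurements are made up. It recomputes a two-sample (Welch's) t-test so the output can be compared against the value reported in a manuscript.

    # Hypothetical re-check of a reported two-sample t-test.
    # All measurements below are made-up illustration data.
    from scipy import stats

    control = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
    treated = [4.6, 4.8, 4.5, 4.9, 4.7, 4.4]

    # Welch's t-test, which does not assume equal variances.
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

    # A mismatch between this output and the manuscript's reported
    # p-value is exactly the sort of "glaring error" worth flagging.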

Examine Raw Data

Related to statistics, peer reviewers also do not check raw data. Doing so would make the review process extremely cumbersome and time-consuming, so any issues with the original data may not be apparent in the draft seen by manuscript reviewers. If reviewers do find problems with the statistical analysis, their suggestions for correction may be limited, and the onus is on the author to go back to the original data and fix the issue.

Redo Experiments / Test Reproducibility

Just as it would be unrealistic to ask a reviewer to check all of the raw data, the same goes for redoing experiments. Reviewers do check that a manuscript provides enough detail that they could re-create the experiment if they wanted to, but actually verifying reproducibility is well beyond the scope of peer review.

Verify Author Details

One recent area of concern in scholarly publishing is defining and identifying appropriate authorship. Some manuscripts have been found to list fake authors and affiliations, or authors who didn’t even know they were included on the paper. Peer reviewers may be familiar with the researchers listed on a manuscript, but more often than not they aren’t, or they are reviewing a blinded manuscript. Checking that all listed authors are real and that their affiliations are correct would be a very difficult task, and one that wastes the expertise reviewers offer; as it stands, reviewers rely on authors to be honest in this regard.

Verify Conflicts of Interest

Conflicts of interest strongly affect how readers perceive bias in a study, so it goes without saying that declaring potential COIs is essential for every manuscript. That said, it would be next to impossible for peer reviewers to research and verify the validity of an author’s declarations, or to uncover COIs that the author may have omitted. Reviewers will check that declarations have been made as a matter of course and will flag any related bias they find in the manuscript, but this is the extent of their expected role in this area.

In an ideal world, all of these things would be caught before articles are published, but that just isn’t realistic with the current system in place. As it stands, peer review remains an essential check that strengthens articles and catches major flaws that could influence the interpretation of the results. Keeping these limitations in mind will hopefully help you recognize that no study you read is perfect; the more researchers who publish in an area, the better we will all get at judging which studies carry more weight than others. Read articles with a critical eye, and don’t be quick to attack the entire publishing system when flaws are found post-publication. Slowly but surely, these findings will help build and strengthen new models of peer review, and hopefully, someday, the above list will be a thing of the past.

For more information on new initiatives looking to address these problems, check out some of the links below:

PLOS data policy

The Reproducibility Initiative

Increasing statistical scrutiny at Science and other journals
