“Trust” is the timely theme for this year's Peer Review Week. The COVID-19 pandemic has demonstrated both the need for rapid dissemination of new scientific findings and the need for rigorous peer review of findings that are achingly desired but not supported by good experimental design. The availability of preprints before peer review has accelerated the sharing of experimental hypotheses and findings, and it has also demonstrated the self-correcting nature of research: reliable results make it into the peer-reviewed literature and are replicated, while flawed studies are critiqued, because acting on them prematurely can result in dangerous, or even deadly, public health outcomes. This combination of rapid dissemination and considered peer review is creating a new standard that will promote the free exchange of scientific information while maintaining the strengths of a system that scientists have used to evaluate their research since it was introduced by the Royal Society of Edinburgh almost 300 years ago.
In an emergency like the one we have been living through this year, the flaws of peer review become more obvious: it can be slow, it burdens already overburdened researchers, and it can expose divisions between scientific points of view. Despite these flaws, peer review remains one of the most robust ways to determine whether a study meets the standards of good experimental design and whether the data gathered support the stated conclusions.
At The Journal of Neuroscience, trust in the peer review system is a core value. The goal of peer review at JNeurosci is to provide feedback that authors can use to increase the likelihood that their study will still be considered reliable years into the future. We understand the flaws of peer review (the potential for personal bias, the power of entrenched hypotheses to squelch innovative ideas, and the additional time added to the publication process), but we continue to recognize the strength of giving scientists who are not involved in a study the chance to weigh in on whether the experiments have been designed in a way that makes their outcomes believable. JNeurosci also tries to mitigate some of the shortcomings of the peer review process. At least two editors see and rate every review, evaluating the fairness and usefulness of the written critique and providing a level of oversight meant to counteract bias. Further, when reviewers disagree, editors can call them into an anonymous consultation to determine whether they can come to an agreement about the strengths and weaknesses of a study. Finally, JNeurosci encourages authors to post manuscript drafts as preprints so that the data can be used by the scientific community while peer review goes forward. We think this reliance on review of experimental design is one reason that JNeurosci manuscripts continue to be cited decades after their publication.
Trust in the peer review process requires reliable reviewers who can identify both strengths and flaws in methodology and experimental design, and who can recognize the aspects of a study that may be most useful to other researchers in the field. These skills do not appear magically; they require feedback and training. Initial training in manuscript review often comes from mentors who invite trainees to co-review manuscripts, and we acknowledge this important aspect of reviewer training by publishing the names of all co-reviewers annually. We have also developed a Reviewer Mentor Program that pairs trainees with some of our most frequent and reliable reviewers, many of whom have served on the JNeurosci Editorial Board, to help develop the next generation of trusted reviewers.
Despite its long history, peer review has evolved, and continuing changes are improving the transparency and fairness of the process. Consensus review, as practiced at JNeurosci when editors bring together reviewers who disagree, or at our sister journal eNeuro, where review is fully collaborative and mediated by a handling editor, is used increasingly to identify the aspects of a study that are strongest and those that must be addressed for the findings to be reliable. At its best, peer review provides constructive input that improves the study, and the best reviewers offer suggestions that reveal aspects of a study that were not immediately obvious to the authors. When trusted peer review results in a stronger study, reviewers should be able to share in the pride when the final version of the manuscript is published.
At a time when reproducible science is so badly needed, and when public trust in science and the scientific process varies tremendously, our responsibility is to develop processes, however imperfect, that increase the likelihood that scientific findings are reliable. One way to do this is to establish standards for experimental design that promote reproducibility, as we have done in a number of recommendations from our Editorial Board (https://www.jneurosci.org/collection/experimental-design-editorials). But beyond these recommendations, one of the most effective ways to increase the trustworthiness of a scientific study is to ask at least two scientists who have nothing to do with the work to take the time to read the study carefully and write down their thoughts on its strengths and weaknesses, and then to have the study authors address those thoughts with citations or data. When this is done with strong editorial oversight by working scientists, it is one of the most important components of building trust in the outcomes of scientific work.