Peer review
Written evidence submitted by Richard Horton (PR 02)
1. Peer review is a central issue in many scientific controversies and disputes today. Take climate change. In the Times Higher Education last year, Andrew Montford, author of The Hockey Stick Illusion: Climategate and the Corruption of Science (1), argued that events at the Climatic Research Unit (CRU) at the University of East Anglia had far-reaching implications for the world of scientific peer review and publishing (2). His charge sheet was sharp and precise: that scientists undermined the peer-review process. Implicit in Montford's argument is that peer review is critical to the process of – and thereby public trust in – science. Writing in The Guardian, George Monbiot put it this way: "science happens to be [a] closed world with one of the most effective forms of self-regulation: the peer review process" (3). But for many of us who do peer review, this "most effective" form of self-regulation is often misunderstood and misrepresented.
Peer review: firewall or the weakest link?
2. For scientific journals, peer review is the (confidential) evaluation of a submitted manuscript by one or more individuals who are experts in an aspect of the work under
scrutiny.
3. Who invented peer review? It's hard to be sure, but possibly the prize goes to Ishaq bin Ali Al Rahwi (AD 854-931) (4). In his book, Ethics of the Physician, Al Rahwi apparently encouraged doctors to keep contemporaneous notes on their patients, later to be reviewed by a jury of fellow physicians. But the serious business of journal peer review had to wait another 800 years. Henry Oldenburg, editor of Philosophical Transactions of the Royal Society, was the first modern editor to adopt peer review, in the seventeenth century. He used it to famous effect, provoking often fractious, but illuminating, debates between scientists across Europe.
4. Today, any scientific journal that lays claim to respectability must have a robust peer review process. At The Lancet, the process goes like this. A research paper is submitted electronically to a secure database and allocated by an editor to a colleague. The first or second editor can reject the manuscript at that early stage if the paper is judged to be scientifically poor, unsuitable for the journal's readership, unoriginal, or insufficiently topical. Journals differ here. For The Lancet, around three-quarters of manuscripts are rejected at this point. If a paper survives preliminary editorial review, it is discussed at a pre-review meeting to assess its suitability for external peer review. If judged a potential candidate for publication, the manuscript is sent to three expert advisors, commonly from overseas and representing different methodological dimensions of the research, as well as a statistician. There is always the risk of group-think among experts. That is, there may be an orthodox belief about a particular subject, strongly held, which resists alternative perspectives. Editors try to reduce the risk of group-think by sending papers to different and widely dispersed reviewers, deliberately seeking or even provoking critical reviews (just like Henry Oldenburg). Reviewers are not referees in the sense that they can blow a whistle and call time on the paper. We ask reviewers to provide written comments for authors, confidential comments to editors, and a detailed rating for each section of the paper. Those comments are collected, presented, and discussed at a once-weekly manuscript meeting attended by all the journal's editors.
5. At this stage, a paper can be rejected or we can open negotiations with authors. If we proceed, reviewers' questions and concerns are put to the authors, with appropriate
guidance from editors. The authors will reply by answering each question from
reviewers, submitting a revised manuscript that attempts to respond to the points
raised by editors and reviewers alike. The authors may also disagree with or challenge
reviewers with varying degrees of force. The revised paper is discussed again at a
manuscript meeting. The options at this stage are to reject, accept, go back to the
authors with further requests for clarification, or return to reviewers (old or new) for
additional opinions. We proceed with further revisions of the paper until a final
reject/accept decision is made. We know that with such a high rejection rate we may
get it wrong. To limit errors of omission, we have a formal appeals process where
editors promise to look again at a paper, weigh up the authors' arguments, and
reconsider our decision.
6. Once the paper is provisionally accepted, the peer review process is not over. The
paper is then passed to a scientifically qualified assistant editor who edits the paper's
technical content. Mistakes may still be found at this stage, leading to further editorial
or expert review, even (though rarely) rejection. A lesson learned from sometimes
bitter experience is that a paper is not fully accepted until it is published.
7. Here are some of the commonest questions asked about the peer review process (5).
8. Do reviewers make mistakes in their judgments? Of course, and so do editors. Sadly, the scientific literature is littered with retractions of papers that once passed the test of peer review.
9. Are reviewers objective in their judgments? Pure objectivity is impossible. For some subjects, an editor can predict the judgment of the reviewer based on past experience with that reviewer. But this misses the point of what an editor is seeking. It is not simply the judgment of reject/accept that an editor wants from a reviewer. That
decision is the responsibility of the editor and the editor alone. What an editor really
seeks is a powerful critique of the manuscript – testing each assumption, probing
every method, questioning all results, and sceptically challenging interpretations and
conclusions. Armed with that critique, the editors decide – and take full responsibility
for deciding.
10. Are reviewers willing to accept new ideas? Certainly, they are, although they might question those ideas to destruction. The vast majority of reviewers take their
responsibility as advisors very seriously indeed. They themselves are often on the
receiving end of peer review. Most try to be as open as possible to new findings,
although we encourage them to ask difficult and awkward questions.
11. Despite peer review, are authors able to get away with dishonest or dubious research? Yes, they are. Peer review does not replicate and so validate research. Peer
review does not prove that a piece of research is true. The best it can do is say that, on
the basis of a written account of what was done and some interrogation of the authors,
the research seems on the face of it to be acceptable for publication. This claim for
peer review is much softer than often portrayed to the general public. Experience
shows, for example, that peer review is an extremely unreliable way to detect research
misconduct.
12. Are peer reviewers accountable for what they do? Yes, to the editor. But in a broader sense, to the scientific community and to the public as well. To a large extent, the trust society places in science depends on the scientific process, including peer review and publication, getting it right most of the time.
13. Does peer review improve the quality of published research? In our everyday practice, we see that it does. And research suggests that it does too (6). Peer review
improves discussion of the limitations of research. It emphasises uncertainty. It invites
justification of generalisability. As one study of peer review concluded, "peer review
is a negotiation between authors and journal about the scope of the knowledge claims
that will ultimately appear in print" (9).
14. Is there still a need for peer review, given the extraordinary ability of the Internet to enable continuous open criticism of research once published (that is, surely a thousand readers as reviewers after publication are better than four reviewers selected by editors before publication)?
There is no right answer to this question. Certainly, post-publication peer review adds greatly to the understanding of a piece of research. But watching pre-publication peer review in action - both at the macro level of external expert review and the micro level of technical editing - and seeing the extent to which research papers change (mostly for the better) after peer review, I think that pre-publication review still has an important part to play in science. At its best, pre-publication peer review clarifies, introduces uncertainty, insists on placing new work in the context of the totality of available evidence, demands a careful explanation of limitations, and prevents flights of fanciful over-interpretation.
15. Peer review has changed considerably during the past two decades. First, the stakes are higher. Individual and institutional success depends on getting papers published in high-impact journals. Citation data are now a standard metric for measuring research performance. This trend has increased competition and rivalry for places in the best journals. Second, the globalisation of science has expanded the geographic range of papers submitted to journals. Research originating from China, for example, is now far more common than even five years ago. The
internationalisation of science has further intensified competition for publication. Third, research papers are increasingly multi-disciplinary, requiring a much broader
range of expertise during peer review. Fourth, science is a stronger part of our public
culture now than it once was. What scientists used to write only for other scientists is
today available to – and sometimes read by – non-scientists, policy makers, and the
media. Fifth, the importance of statistics has grown substantially. Whereas twenty years ago The Lancet had no separate statistical peer review process, every paper we now publish has been carefully scrutinised by an independent statistical advisor.
Editors are now far more aware of analytic errors in research. Sixth, to address the
often conflicting results of individual research studies that are trying to answer the
same (or a similar) question, a new type of research method has been devised – the
systematic, as opposed to the narrative, review. Systematic reviews aim to search for
particular types of study (eg, the randomised trial), then select only the best according
to pre-specified criteria, and, if possible, to combine those findings in a statistically
meaningful way (which is called meta-analysis). Examples include the risk of cervical
cancer among women taking hormonal contraceptives (7) and the effects of a class
of medicines on heart disease (8). In biomedicine, the Cochrane Collaboration is the
most mature example of an effort to create a database of systematic reviews on
treatments. Finally, editors have had to face an upsurge in the discovery of episodes of research misconduct (fabrication, falsification, and plagiarism). The increasing awareness of research fraud has led not only to greater vigilance (hopefully not suspicion) among editors but also to the birth of institutional mechanisms to set standards and advise on research practice (eg, the Committee on Publication Ethics).
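The meta-analysis mentioned above can be illustrated, in its simplest (fixed-effect) form, as a weighted average: each study's estimate of the effect is weighted by the inverse of its variance, so that larger and more precise studies contribute more to the pooled result:

\[
\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\operatorname{SE}(\hat{\theta}_i)^2}
\]

where \(\hat{\theta}_i\) is the effect estimate from study \(i\), \(\operatorname{SE}(\hat{\theta}_i)\) is its standard error, and \(k\) is the number of studies combined. This is only an illustrative, standard formulation; random-effects models, which allow for variation between studies, are also widely used.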
16. Because of the faith journal editors have in peer review, together with the empirical evidence we believe exists to support peer review, we take it very seriously indeed (9). That said, editors are well aware that peer review is anything but uncontroversial. Scientific discoveries that later turn out to be flagrant episodes of dishonesty – from Woo-Suk Hwang's fabricated claims in Science about cloning embryonic stem cells, to Andrew Wakefield's falsifications in The Lancet – are not uncommon. They raise troubling questions about the robustness of peer review.
Editors are only too well aware of the limitations of the peer-review system. Authors,
for example, can be deeply resistant to responding to questions from anonymous
critics (this fact at least partly drives the argument for fully transparent peer review,
where reviewers have to disclose their names to authors). The reluctance of some
authors – and some very famous authors, at that – to take the comments of their peers
seriously stems from the fact that they believe they have no peers. As one historian of
peer review put it, somewhat poetically, "anyone who possessed the MD degree had
no reason to defer to any colleague as an expert greater than he or she" (10).
17. So what is peer review in today's scientific culture? Various views have been more or less vividly expressed. Peer review is a sacred academic cow, according to one editor (11). Everyone – scientists, the public, policymakers, politicians – would like to believe that peer review is a firewall between truth and error (or dishonesty) (12). But as the editor of one leading specialist medical journal has rightly pointed out, "There is no question that, when it comes to peer review, the reviewers themselves are the weakest (or strongest) links" (13). This frustration among editors and scientists that peer review cannot always live up to the claims sometimes made for it produces frequent expressions of dismay. Is peer review a castle built on sand or the bedrock of scientific publishing (14)? Is peer review a landmark, landmine, or landfill (15)? Or, put bluntly, is peer review simply in crisis (16)? Is it "a flawed process at the heart of science and journals" (17)?
18. Unfortunately, there is evidence of a lack of evidence for peer review's efficacy. In 2002, Tom Jefferson and colleagues published a startling systematic review of all the evidence about editorial peer review in biomedical journals. Their exhaustive search yielded only a handful of studies. The conclusion? "Editorial peer review, although widely used, is largely untested and its effects are uncertain" (18). They went on, "Given the widespread use of peer review and its importance, it is surprising that so little is known of its effects." Jefferson and his colleagues have confirmed their
observations more recently (19). Their findings have been replicated by others (20).
To be fair, there is some evidence that micro peer review – technical editing – can
improve papers in biomedical journals (21). But, once again, this evidence is not as
robust as one would either like or have expected.
19. Jefferson extended his investigation of peer review by arguing that the objectives of the review process were also unclear (22). Without clear objectives, proving the value of peer review (or not) would be impossible. After almost 350 years of journal peer review, our zeal for and confidence in the peer review process seem inversely
proportional to our knowledge about what it actually does for science. Those who
make big claims for peer review need to face up to this disturbing absence of
evidence.
20. Worse still, what evidence is slowly accumulating should perhaps make scientists,
policymakers, and the public pause. Many who place great weight on the reliability of
the peer-reviewed scientific literature believe that it reflects the judgment of the
scientific community about the quality of research. But evidence suggests that
acceptance of research for publication may well depend on factors other than
scientific quality alone (23). Furthermore, peer reviewers will disagree greatly in their
recommendations to editors about a particular research paper. Yet editors seem to be
significantly influenced by reviewers who, when the quality of their advice is
measured independently, turn out to be extremely unreliable in their overall
judgments (24). Editors, some critics could reasonably argue, need to pay less, not
more, attention to the recommendations of their peer reviewers.
21. Scepticism about peer review is healthy. But every editor knows that peer review can be an indispensable aid to his or her work. Peer review can rescue science from
embarrassment and error. An extreme example goes some way to showing why. Peter
Duesberg is a well-known molecular virologist who believes that HIV is not the cause
of AIDS. In 2009, the journal Medical Hypotheses published a paper by Duesberg arguing that the deaths attributed to AIDS in South Africa were false. The editor of Medical Hypotheses operated an editorial policy of no external peer review. The
justification was that peer review might suppress creative thinking. In the case of the
Duesberg paper, the idea that HIV does not cause AIDS was not new. More
importantly, South Africa is only now reversing its disastrous denialist policies on
HIV-AIDS. To consider Duesberg's old (and discredited) idea at a critical moment for
the country he was writing about would, most reasonable editors might conclude,
require some kind of external peer review to assist decision-making. The editor did
not seek expert reviews. He accepted the paper within a few days of its submission.
Many scientists in the AIDS community were appalled. They wrote to the publishers (Elsevier, also the publishers of The Lancet) to complain. Elsevier removed the paper from its online database pending the results of an independent investigation. The Lancet was asked to review the paper. We did so and the reviews were uniformly and
deeply critical. No journal could have conceivably published the Duesberg paper
based on these reviews. The Duesberg paper remains retracted, excised from the
scientific literature. Here is an example of what can happen when peer review is
excluded from a journal's processes, and why peer review can bring important
information to bear on judgments about the suitability of research for publication.
Thanks to these events, this particular journal will now implement peer review. Meanwhile, the publishers have found a new editor (25).
Peer review under pressure
22. It is common for editors to have multiple, intense, and sometimes sharp interactions with authors and reviewers. Publication matters. Authors and reviewers are frequently passionate in their intellectual combat over a piece of research. The tone of their exchanges and communications with editors can be attacking, accusatory, aggressive, and even personal. If a research paper is especially controversial and word of it is circulating in a particular scientific community, third-party scientists or critics with an interest in the work may get to hear of it and decide to contact the journal. They might wish to warn or encourage editors. This kind of intervention is entirely normal. It is the task of editors to weigh up the passionate opinions of authors and reviewers, and to reflect on the comments (and motivations) of third parties. To an onlooker, these debates may appear as if improper pressure is being exerted on an editor. In fact, this is the ordinary to and fro of scientific debate going on behind the public screen of science. Occasionally, a line might be crossed. We experienced such a border crossing recently, where several reviewers and third parties encouraged us to delay publication of a paper for non-scientific reasons (26). Defining that line is a crucial task for editors.
23. One issue that must be resolved for the peer review process to work effectively is the full disclosure of all financial and relevant non-financial conflicts of interest. If a
research paper about drug A for disease Y is sent to a reviewer who has shares in a
company that makes drug B, also for disease Y, there is a potential for the introduction of bias into that reviewer's advice to the journal – favouring drug B over drug A. The editor may still want and value that reviewer's advice, but s/he needs to know about the reviewer's financial conflict to judge the weight s/he gives to the review. Non-financial conflicts may be even more important. If a scientist has devoted a life's work to theory A about disease Y, then clearly s/he might be biased if s/he is sent a manuscript that criticises theory A and proposes an alternative and compelling theory B for that same disease. Again, the editor would expect the reviewer to declare any non-financial academic or intellectual conflicts that might have the potential to influence that reviewer's critique.
24. It would be wrong for editors not to listen to advice about publication even after
acceptance of a paper. A paper is only fully accepted when it is published. New
information that informs the decision to publish a provisionally accepted paper before
publication can be very valuable. The Lancet has rejected papers in this twilight zone
of peer review. After publication, criticism is common and welcome, even lethal
criticism. This is the much vaunted self-regulation of science – except that sometimes
editors and authors are reluctant to act when things go wrong after publication.
25. Much has been made of whether scientists should or should not take public positions on the meaning of their data, especially if those data relate directly to policy or practice. The reality is that they do, all the time. Science does not exist in a political
vacuum. The idea that scientists are neutral observers, bereft of opinions, is naïve. In biomedical and public health research, scientists are often quick to make statements applying their data to the real world. They will often do so passionately and be well known for those passionate views. Indeed, the current climate of science is such that scientists are encouraged at every stage of their research to consider the impact – economic or human – of what they do, and to trumpet that impact. Research assessments in the future are likely to include a measure of impact when judging the quality of a scientist's work. In relation to peer review, the scientific, policy, or political positions an author, reviewer, or editor may hold could intervene to bias a review in one particular direction. There have been many examples of such conflicts in other scientific disciplines – eg, psychology (27) and genetic epidemiology (28). These episodes are troubling, but an almost inevitable consequence of the way peer review is ordinarily done.
26. The intersection of politics and science is well shown in the field of climate change. The Skeptical Environmentalist, written by Bjørn Lomborg and published by Cambridge University Press, led to huge pressure on the publishers to withdraw the book (29). Although the manuscript was reviewed by four experts who all
recommended publication, the scientific backlash was acute. Letters of protest were
written to newspapers. One scientist refused to work with Cambridge University Press
ever again. Lomborg was attacked physically. Chris Harrison, in his thoughtful reflections as the editor at Cambridge University Press who dealt with Lomborg's book, points out that peer review offers no guarantees of always ensuring the truth (29). But in the case of The Skeptical Environmentalist, the concerns were as much political as scientific. The publication of this book by a respected scholarly press might play to a particular political agenda and could be used and abused by vested corporate and political interests. Harrison rejected the idea that he should have applied these kinds of value judgments in the editorial process. He defended the scholarly publishing industry's commitment to pluralism.
27. This commitment to pluralism would be the likely view of many scientific editors,
even when controversy follows. One might conclude that these kinds of extreme debate, although difficult, are part of the normal fabric of scientific discourse. The question to be answered is: where is the line to be drawn between vigorous scientific exchange and improper attempts to close down debate (these two positions can be remarkably close to one another)? But one should also be conscious of what some observers have described as the "chilling" effect of political controversy on science. A survey of US National Institutes of Health scientists revealed that many engaged in self-censorship after they found themselves the subject of political criticism for their work (30). Political disagreement over science can shape not only the behaviour of scientists but also the future of science itself. Increasingly, commercial, as well as political, interests are also intervening to threaten the integrity of peer review (31).
28. Peer review and publication can provoke important questions about access to data.
During the review process, reviewers may seek more information. Except in
allegations of fraud, it would be highly unusual to provide or request raw data (even
then, journals expect institutions to take responsibility for investigating the
authenticity and reliability of original data). But access to data may be sought after
publication. This is a highly contentious and unresolved issue. In the field of medicine, these issues are currently the subject of much disagreement. While many parties might like to see greater sharing of data, this practice remains unusual. The Wellcome Trust is taking an especially strong interest in data access. It has proposed a code of conduct calling for "maximum public access to data of public health importance." The very fact that this proposal is being made illustrates the point that routine access to data is not a settled issue or a universal norm in science, as some
claim.
29. The issue of retention of records and exclusion of data is also a matter relevant to the peer review process and the ordinary working of journals. Journals do expect records to be kept for limited periods (say, 5 years, although journal practices vary). And they are comfortable with the exclusion of data provided that those exclusions – and the reasons for exclusion – are fully described, with appropriate sensitivity analyses being completed where necessary.
30. Two additional dimensions of peer review must be noted. One relates to confidentiality, the other to uncertainty. Editors send manuscripts to reviewers based on a principle of confidentiality. The author expects the editor to maintain a covenant of trust between the two parties. The editor will not misuse the author's work by circulating it outside of the confidential peer review process. The editor expects that covenant of trust to be honoured by the peer reviewer. No manuscript should be passed to a third party by a reviewer without the permission of the editor; such permission is usually granted on the grounds of improving the quality of the critique of the manuscript by involving a colleague in the review process. A disclosure to a third party without the prior permission of the editor would be a serious violation of the peer review process – a breach of confidentiality. It is also of paramount importance to report fully in all published scientific papers both quantitative and qualitative measures of uncertainty. One of the main benefits of peer review is to focus on areas of potential uncertainty and to ensure that those uncertainties are fully acknowledged, measured, and reported.
The future of peer review
31. Peer review is a human process and so will always contain flaws, produce errors, and occasionally mislead. Given that journals are the gatekeepers of scientific publication, they have enormous – probably too much – influence over the reputations of scientists, research units, and universities. Many measures of academic success
depend upon journal publication – promotion, tenure, grants, fame, and personal
wealth. It is not surprising that journals, and the main decision aid used by journals
(peer review), are the subject of constant tension and occasionally explosive
controversy. At such moments, it is not only essential to be clear (and modest) about
what peer review can do, but also to look for opportunities to do better. Journal articles are highly stylized reports of research. The linear and logical style in which they report research rarely presents a true or accurate picture of how a piece of research was done. As the Nobel laureate Peter Medawar put it in his essay, Is the Scientific Paper a Fraud? (to which he answered that it was) (32):

"[the scientific paper] misrepresents the processes of thought that accompanied or gave rise to the work that is described in the paper... The scientific paper in its orthodox form does embody a totally mistaken conception, even a travesty, of the nature of scientific thought."
32. Medawar's point was that "There is no such thing as unprejudiced observations." To add insult to injury, research papers may not even fully represent the views of the
authors who completed the work (33), and when faults are found after publication
those faults may be completely ignored in the subsequent use of that research (34).
There are actions that the scientific community could take to improve this far from happy state of affairs surrounding one of its foundational processes. First, there are new
opportunities and techniques available to search out, identify, and eliminate (or at
least reduce) unwanted bias in the peer review process (35, 36). Second, all young
scientists should receive formal training – which they currently do not – in the
standards and ethics expected in the peer-review process (37). It is scandalous that
peer review is simply not taken as seriously as it should be in the training of scientists.
The result is that peer review is often idiosyncratic and sometimes unreliable, fuelling
scientific controversies, such as that over climate science, rather than defusing those
controversies. Strengthening the training, standards, and expectations around peer
review would do much to make the quality of peer reviewing part of the formal
appraisal of a scientist's contribution to his or her subject. There is a demand for
training in peer review (38). And the ethical dimensions of the review process are
now sufficiently concerning to scientists that they merit training as much as the more
formal methodological aspects of reviewing (39). Disappointingly, existing training
packages in peer review deliver little benefit to the quality of the peer review process
(40-42). Third, the peer review process is enormously inefficient. Individual journals will undertake peer review and reject manuscripts that will then cycle around other
journals until either the paper is accepted or the authors are sufficiently exhausted that
they abandon attempts at publication. In the face of such gross inefficiencies, some
scientific communities have tried to bring journals together to cooperate and make the
review process not only more efficient, but also less costly on the time and energy of
reviewers, authors, and editors (43). Alternatively, there may be intra-journal
procedures that can be introduced to deliver more efficient peer review (44). Fourth, journal editors should adopt more effective methods to resolve disputes between authors, reviewers, and readers. Within the journal, an ombudsperson operating independently of the editors can be one useful way to resolve intractable disagreements about journal processes (45). If a dispute remains impossible to resolve, journal editors can take their concerns to the Committee on Publication
Ethics, a charity that aims to set standards for journal practices, including peer review.
Journal editors should consider using this facility more often than they currently do –
in some ways, it represents the collective wisdom of a wide range of journal editors, a
collective wisdom that any scientific editor can draw upon in times of crisis. Lastly, peer review should be a subject for research in its own right. Although there is
a small group of scientists who study peer review (a biomedical peer review congress
is held every 4 years), that community is extraordinarily fragile when measured
against the size and importance of the contribution peer review makes to science (46). Historically, science funding bodies have been reluctant to invest in research on peer
review. This reluctance is partly responsible for the present vacuum in our knowledge
about the way scientific knowledge is constructed, reported, and discussed. One
positive result of the debate over the role of CRU scientists in peer review might be to
encourage funding bodies – such as the Medical Research Council and the National
Institute for Health Research – to take the science of peer review far more seriously.
33. Journals have inevitable limitations. When a paper with important policy implications is considered, editors can ask authors to balance their conclusions by putting the work in the context of existing evidence. Or we can commission an editorial that does the same. But a journal cannot adjudicate a public debate, and neither can conventional peer review. For those occasions when science meets (or clashes with) policy, there may be a case for referring that area of controversy to an independent body for a public inquiry - a National Agency for Science and Health, for example.
34. The best one might hope for the future of peer review is to be able to foster an
environment of continuous critique of research papers before and after publication.
Many writers on peer review have made such a proposal, yet no journal has been able
to create the motivation or incentives among scientists to engage in permanent peer
review (47-49). Some observers might worry that extending opportunities for
criticism will only sustain maverick points-of-view. However, experience suggests
that the best science would survive such intensified peer review, while the worst
would find its deserved place at the margins of knowledge. This process of weeding out weak research from the scientific literature can be accelerated through more formal mechanisms, such as the systematic review. A systematic approach to selecting evidence focuses on the quality of scientific methods rather than the reputations of scientists and their institutions. This more rigorous approach to gathering, appraising, and summing up the totality of available evidence has been profoundly valuable to clinical medicine.
35. More importantly, intensified post- as well as pre-publication review would put uncertainty – its extent and boundaries – at the centre of the peer review and publication process. This new emphasis on uncertainty would limit the rhetorical power of the scientific paper (50), and offer an opportunity to make continuous but constructive public criticism of research a new norm of science.
References

1. Montford A. The Hockey Stick Illusion: Climategate and the Corruption of Science (Stacey International, 2010).
2. Montford A. Heated discussions. Times Higher Education March 25, 2010: 43-44.
3. Monbiot G. Our narrow, antiquated school system is at the root of the climate email fiasco. The Guardian April 6, 2010: 25.
4. Spier R. The history of the peer review process. Trends in Biotechnology 2002; 20: 357-58.
5. Hernon P, Schwartz C. Peer review revisited. Library and Information Science Research 2006; 28: 1-3.
6. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med 1994; 121: 11-21.
7. Smith JS, Green J, de Gonzalez AB, et al. Cervical cancer and use of hormonal contraceptives: a systematic review. Lancet 2003; 361: 1159-67.
8. Jun M, Foote C, Lv J, et al. Effects of fibrates on cardiovascular outcomes: a systematic review and meta-analysis. Lancet 2010.
9. Green SM, Callaham ML. Current status of peer review at Annals of Emergency Medicine. Ann Emerg Med 2006; 48: 304-08.
10. Burnham JC. The evolution of editorial peer review. JAMA 1990; 263: 1323-29.
11. Fitzpatrick JJ. Peer review: a 2008 report on the sacred academic cow. Applied Nursing Research 2008; 21: 53.
12. Harms M. Peer review: the firewall of science. Physiotherapy 2006; 92: 193-94.
13. DeMaria AN. Peer review: the weakest link. JACC 2010; 55: 1161-62.
14. Berger E. Peer review: a castle built on sand or the bedrock of scientific publishing? Ann Emerg Med 2006; 47: 157-59.
15. Balistreri WF. Landmark, landmine, or landfill? The role of peer review in assessing manuscripts. J Pediatr 2007; 151: 107-08.
16. Mulligan A. Is peer review in crisis? Oral Oncology 2005; 41: 135-41.
17. Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med 2006; 99: 178-82.
18. Jefferson T, Alderson P, Wager E, Davidoff F. The effects of editorial peer review. JAMA 2002; 287: 2784-86.
19. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev 2007; 2: MR16.
20. Richards D. Little evidence to support the use of editorial peer review to ensure quality of published research. Evid Based Dent 2007; 8: 88-89.
21. Wager E, Middleton P. Technical editing of research reports in biomedical journals. Cochrane Database Syst Rev 2008; 4: MR2.
22. Jefferson T, Wager E, Davidoff F. Measuring the quality of editorial peer review. JAMA 2002; 287: 2786-90.
23. Aarssen LW, Lortie CJ, Budden AE, et al. Does publication in top-tier journals affect reviewer behaviour? PLoS ONE 2009; 4: e6283.
24. Kravitz RL, Franks P, Feldman MD, et al. Editorial peer reviewers' recommendations at a general medical journal: are they reliable and do editors care? PLoS ONE 2010; 5: e10072.
25. Enserink M. Elsevier to Editor: change controversial journal or resign. Science 2010; 327: 1316.
26. Horton R. Maternal mortality: surprise, hope, and urgent action. Lancet 2010.
27. McCarty R. Science, politics, and peer review: an editor's dilemma. American Psychologist 2002; 57: 198-201.
28. Calnan M, Smith GD, Sterne JA. The publication process itself was the major cause of publication bias in genetic epidemiology. J Clin Epidemiol 2006; 59: 1312-18.
29. Harrison C. Peer review, politics, and pluralism. Environmental Science and Policy 2004; 7: 357-68.
30. Kempner J. The chilling effect: how do researchers react to controversy? PLoS Medicine 2008; 5: e222.
31. Curfman GD, Morrissey S, Annas GJ, Drazen JM. Peer review in the balance. N Engl J Med 2008; 358: 2276-77.
32. Medawar PB. The Strange Case of the Spotted Mice (Oxford, 1996).
33. Horton R. The hidden research paper. JAMA 2002; 287: 2775-78.
34. Horton R. Postpublication criticism and the shaping of knowledge. JAMA 2002; 287: 2843-47.
35. Bornmann L, Mutz R, Daniel H-D. How to detect indications of potential sources of bias in peer review. J Informetrics 2008; 2: 280-87.
36. Ross JS, Gross CP, Desai MM, et al. Effect of blinded peer review on abstract acceptance. JAMA 2006; 295: 1675-80.
37. Walbot V. Are we training pit bulls to review our manuscripts? J Biol 2009; 8: 24.
38. Snell L, Spencer J. Reviewers' perceptions of the peer review process for a medical education journal. Med Educ 2005; 39: 90-97.
39. Resnik DB, Gutierrez-Ford C, Peddada S. Perceptions of ethical problems with scientific journal peer review. Sci Eng Ethics 2008; 14: 305-10.
40. Schroter S, Black N, Evans S, et al. Effects of training on quality of peer review. BMJ 2004; doi: 10.1136/bmj.38023.700775.AE.
41. Callaham ML, Tercier J. The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Medicine 2007; 4: e40.
42. Schroter S, Black N, Evans S, et al. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med 2008; 101: 507-14.
43. Saper CB, Maunsell JHR. The Neuroscience Peer Review Consortium. Brain Res 2009; 1272: 1-2.
44. Johnston SC, Lowenstein DH, Ferriero DM, et al. Early editorial manuscript screening versus obligate peer review. Ann Neurol 2007; 61: A10-12.
45. Horton R. The journal ombudsperson: a step toward scientific press oversight. JAMA 1998; 280: 298-99.
46. Linkov F, Lovalekar M, LaPorte R. Scientific journals are "faith based": is there science behind peer review? J R Soc Med 2006; 99: 596-98.
47. von Segesser LK. Peer review versus public review – new possibilities of online publishing! Interactive Cardiovascular and Thoracic Surgery 2002; 1: 61-62.
48. Mandviwalla M, Patnayakuni R, Schuff D. Improving the peer review process with information technology. Decision Support Systems 2008; 46: 29-40.
49. Liesegang TJ. Peer review should continue after publication. Am J Ophthalmol 2010; 149: 359-60.
50. Horton R. The rhetoric of research. BMJ 1995; 310: 985-88.
Note: A longer version of this paper was submitted to the Muir Russell inquiry into the events that took place at the Climatic Research Unit of the University of East Anglia.
Declaration of Interest: I edit a medical journal, The Lancet.
Richard Horton
Editor
The Lancet
9 February 2011