Written evidence submitted by the BMJ
Group (PR 41)
This written evidence on behalf of BMJ Group examines the following aspects of the committee's terms of reference, with particular focus on biomedical publication:
- the strengths and weaknesses of peer review as a quality control mechanism for scientists, publishers and the public;
- measures to strengthen peer review;
- the processes by which reviewers with the requisite skills and knowledge are identified, in particular as the volume of multi-disciplinary research increases;
- the impact of IT and greater use of online resources on the peer review process; and
- possible alternatives to peer review.
1. Peer review and scientific
norms. Peer review embodies all the so-called
Mertonian norms of science.1 Proposed by US sociologist
Robert Merton, these comprise: Communalism (common ownership of
scientific discoveries, where scientists give up intellectual
property rights in exchange for recognition and esteem [Merton
used the term Communism, but did not mean Marxism]), Universalism
(claims to truth are evaluated in terms of universal or impersonal
criteria and not on the basis of race, class, gender, religion,
or nationality), Disinterestedness (scientists are rewarded for
acting in apparently selfless ways), and organized Skepticism
(all ideas must be tested and subjected to rigorous, structured
community scrutiny).
2. Uses of peer review
in science. Peer review provides scrutiny
to support many elements of academic discovery: approval and funding
of research studies; regulation and approval of new drugs and
medical technologies; selection of research for presentation at
conferences and for publication; and rating and funding of academic
staff and departments.
3. Norms for peer review
at biomedical journals. The International
Committee of Medical Journal Editors (ICMJE) defines journal peer
review as "unbiased, independent, critical assessment
by
experts who are not part of the editorial staff" and deems
it an intrinsic part of all scholarly work. In biomedical publishing, several international organisations offer guidance to editors, including the ICMJE (http://www.icmje.org), the World Association of Medical
Editors (WAME http://www.wame.org/), the Council of Science Editors
(CSE http://www.councilscienceeditors.org/), and the Committee
on Publication Ethics (COPE http://publicationethics.org/). Each
develops and promotes regularly updated policies and guidelines
on fair, professional, and efficient editorial and peer review
practices. Many biomedical editors are doctors or scientists with
little relevant experience or training before taking on the role,
so publishers and journal owners should point new editors to such
guidance and support them while they learn.
4. Costs of peer review. Peer reviewers
are rarely paid by publishers, and their work is often done out
of hours. Nevertheless, in 2010 a report for JISC Collections,
the organisation which supports the procurement of digital content
for education and research in the UK, estimated that UK higher
education institutions (HEIs) spend - in terms of staff time -
between £110 million and £165 million per year on peer
review and up to £30 million per year on the work of editors
and editorial boards.2 The authors of this report
also cited a study estimating that, worldwide, peer review costs
£1.9 billion annually and accounts for about a quarter of
the overall costs of scholarly publishing and distribution.3
Whether such expenditure represents good value for money is unclear,
but the conduct and quality of peer review have been evaluated, and it is on that evidence that we will focus.
BIOMEDICAL GRANT
REVIEW
5. Current status of grant
review. A 2009-10 survey of 28 public
and private organisations that give grants for biomedical research
in 19 countries, and their reviewers, reported a growing workload
of biomedical grant proposals that is becoming harder to peer review.4
Organisations reported these problems as frequent or very frequent:
declined review requests, late reports, administrative burden,
difficulty finding new reviewers, and reviewers not following
guidelines. The administrative burden of the process had increased
over the past five years. About half the responding organisations
expressed interest in the development of uniform requirements
for conducting grant review and for formatting grant proposals.
In a sub-study 258 reviewers from 22 countries reported inadequate
support for conducting grant review. Around half said their institutions
encouraged grant review, yet only 7% got protected time and 74%
received no academic recognition for this work. Reviewers rated
these factors as extremely or very important in deciding to review
proposals: desire to support external fairness, professional duty,
relevance of the proposal's topic, wanting to keep up to date,
desire to avoid suppression of innovation. Most had not been trained
in grant review and many wanted such training.
Strengths and weaknesses of journal peer review
as a quality control mechanism for scientists, publishers and
the public
6. Strengths: journal peer review has an extensive evidence base.
Appraisal of articles submitted to journals is probably the oldest
form of formal peer review. It was used by Europe's first scientific
journals - the Journal des Sçavans (later renamed
Journal des Savants) and the Philosophical Transactions
of the Royal Society of London - when they launched in 1665.
Journal peer review is also the most evaluated form, particularly in medical publishing. Many of these evaluations have been presented at the International Congresses on Peer Review and Biomedical Publication (held every four years since 1986) and subsequently published in peer reviewed journals; at the 2009 congress more than 100 studies on peer review were presented (http://www.ama-assn.org/public/peer/peerhome.htm).
However, most of this evidence is about identifying the weaknesses
of peer review and evaluating its different methods.
7. Strengths: peer review engenders trust.
The Science and Technology Committee (Commons) concluded in 2004,
when considering developments in open access publishing, that
there were "at least three strong arguments, however, for
keeping the system of peer review intact. Firstly, volume ...
academics are producing more research articles than ever before:
output increases by approximately 3% per year ... Secondly,
peer review gives successful articles a mark of distinction that
helps to provide a measure of the academic's and their department's
level of achievement ... Thirdly, peer review gives the lay
reader an indication of the extent to which they can trust each
article." [5, para 205] Peer review remains the best way
to engender such trust in scholarly work.
8. Strengths: peer review improves manuscripts.
Anecdotally, we know from authors and editors that peer review
tends to make articles clearer and more accurate. And now that
many journals - including BMJ Open (bmjopen.bmj.com)
and some BMC journals (BioMed Central. BMC series journals: peer
review processes. www.biomedcentral.com/info/authors/bmcseries)
- post reviewers' reports and previous manuscript versions on
their websites alongside the published articles, readers can see
how reviewers' comments lead to revisions. The effects of peer
review on manuscript quality have not, however, been much researched.
Jefferson and colleagues' 2007 Cochrane systematic review of 28 studies of editorial [journal] peer review reported that it
"appears to make papers more readable and improve the general
quality of reporting (two studies), but the evidence for this
has very limited generalisability".6 Moreover,
they found only one small study testing the validity of peer review.
They concluded that "little empirical evidence is available
to support the use of editorial peer review as a mechanism to
ensure quality of biomedical research. However, the methodological
problems in studying peer review are many and complex ... the
absence of evidence on efficacy and effectiveness cannot be interpreted
as evidence of their absence. A large, well-funded programme of
research on the effects of editorial peer review should be urgently
launched." Editorial research continues but - as it is mostly
conducted by editors interested in their own journals' practices
- it is haphazard, unfunded, and focused on processes rather than
outcomes.
9. Weaknesses. We know from experience
and evidence that journal peer review has many potential limitations.7
Studies have shown peer review to be too slow; overly conservative;
unreliable;8-11 poor at detecting errors and misconduct;
open to abuse; skewed towards research with positive results;12
biased by conflicts of interest; and systematically biased against
authors' ideas, reputations, locations, and even gender.13,14
Much submitted work is uncontentious, but controversial work
within polarised debates often poses particular challenges to
editors trying to find balanced reviews. However, many journals
now have policies and practices aimed at overcoming such problems
(see below), and those that conduct peer review research should,
arguably, focus now on its impact on the quality of published
content rather than the quality of their processes.
Measures to strengthen peer review
10. Choosing the right reviewers. For
many journals, online manuscript handling systems have greatly
facilitated the search for and selection of reviewers, making
reviewers' and editors' decisions quicker and easier to share.
These systems usually have a single database that includes invited
reviewers, volunteer reviewers, and everyone who has submitted
an article via that journal's online system. Indeed,
a survey of more than 3000 academics in 2007 showed that more
than 90% of authors were also reviewers.15 From
several blinded studies we know which types of reviewer tend to
deliver the best opinions for medical journals: those who work
in reputable institutions, understand statistics and epidemiology,
and - perhaps counter-intuitively - are aged under 40 and are
not yet in the most senior posts.16 Asking authors
to suggest reviewers helps to extend a journal's pool of reviewers
and is often invaluable. But editors should note evidence showing
that author-selected reviewers - while producing reviews of similar
quality - are more likely than editor-selected reviewers to recommend
acceptance.17
11. Managing reviewers' behaviour. The
tenth report of the Science and Technology Committee, 2003-4 session, Scientific publications: free for all? (para 206), cited measures used by four high-profile journals - Cell, The Lancet, Science and Nature - to ensure the integrity of the peer review process.5 At these and many other journals (including those of the BMJ Group) such measures include
not using reviewers with potential conflicts of interest (see
para 13 below); having clear policies that reviewers should disqualify
themselves on the basis of conflicts of interest; using several
reviewers per article to allow for the moderation of opinions;
editors' tracking of reviews submitted by a particular reviewer
to monitor consistency; editors' evaluations and actions regarding
claims of reviewer bias or misconduct; and having formal appeals
procedures for authors of rejected articles.
12. Ensuring scientific transparency in authors'
articles and reviewers' reports. The Committee on Publication
Ethics (COPE) expects the editors of its more than 4,000 member journals worldwide to provide detailed advice on
conducting high quality peer review (http://publicationethics.org/files/u2/Best_Practice.pdf).
For authors the EQUATOR website (Enhancing the QUAlity and Transparency
Of health Research www.equator-network.org/)
hosts a wide range of freely available guidance on writing research
papers, called "reporting guidelines". Reporting guidelines
specify the minimum sets of items required to give a clear and
transparent account of the design, conduct, and findings for each
type of study in biomedical research. At the BMJ we do
not send research articles for external review until they have
been reported in line with the appropriate reporting guideline,
thus helping reviewers, editors, and readers to fully evaluate
and understand the methods and results and any limitations and
biases within the research.
13. Declaring conflicts of interest. The
ICMJE Uniform Requirements for Manuscripts Submitted to Biomedical Journals require that all participants in the peer review and publication process disclose all relationships that could be viewed as
potential conflicts of interest, and recommend that journals publish
authors' statements of competing interests when these might affect
the way the work is judged by readers. ICMJE now provides a single
disclosure form that has been adopted by all of its 12 member
journals, including the BMJ (www.icmje.org/coi_disclosure.pdf).
It is important to also ask reviewers to declare conflicts of
interest, and in journals such as BMJ Open - which post
reviewers' reports online next to the accepted articles - these
declarations are visible to all. Some conflicts may be unavoidable
and acceptable, but when reviewers' declared interests conflict
significantly with the content or authorship of particular articles
they should either decline to review or should not be chosen by
editors for that assignment.
14. Blinded peer review. Journals have
tried several ways to minimise bias in peer review. Most keep
reviewers' identities secret from authors (single-blind review),
so that reviewers can freely express their views without fear
or favour. To reduce the risk of reviewer bias against particular
authors or institutions, some journals have also removed authors'
names and addresses from manuscripts (double-blind review). Few
journals use such double-blind review, however: it is hard to do well and, in any case, studies have shown that around a third of
the time reviewers correctly guess authors' identities.18,19
Furthermore, Jefferson and colleagues' Cochrane review found
"no clear-cut evidence of effect of the well-researched practice
of reviewer and/or author concealment on the outcome of the quality
assessment process".6
15. Potential for open (signed) peer review.
Another approach is to ask reviewers to sign their reports
and to reveal the identities of reviewers, editors, and authors
to each other. Responses to a 2009 survey of more than 4000 science
reviewers suggest, however, that reviewers prefer anonymity: 76%
favoured the double-blind system where only the editor knows who
the reviewers and authors are.20 This built on a 2007
survey of around 3000 academics and editors around the world (of
whom about 10% worked in UK HEIs and 18% were working in clinical
medicine or nursing) which found relatively little support for
open review as an alternative to single- or double-blinded review.15
Respondents did, however, show considerable enthusiasm for trying
different approaches including post-publication review, though
mainly as a supplement to formal peer review.
16. Evidence on open (signed) peer review.
The surveys reported above support the common view that peer
reviewers will either refuse to take part in open review or will
provide only bland and uncritical comments, because they fear
reprisals for criticising other researchers' work openly. But
in a randomised controlled trial, where reviewers invited in the
usual way to review for the BMJ were allocated randomly
to single-blind review or to open (signed) review, signing did
not reduce the extent or quality of reviewers' reports and it
improved their tone.21 Another randomised trial, at
the British Journal of Psychiatry, showed that such open
review was feasible even in a specialist field - where professional
rivalries might be stronger than in general medicine.22
On the basis of its trial the BMJ mandated signed open
review, and the journal has used this for more than a decade with
no significant problems. PLoS Medicine, however, tried
and then discontinued this practice in late 2007 citing reviewers'
reluctance to sign their reports - perhaps because at that time
it was publishing a lot of laboratory-based research, which is
arguably more competitive than clinical research.
17. Evidence on open review with reviewers'
signed reports posted online alongside published articles.
At the BMJ we have evaluated an extended kind of open review:
making reviewers' reports (with their consent) available to readers
as part of an online pre-publication history alongside each research
paper.23 We aim to roll this out in late 2011, and
have already done so in our new sister journal BMJ Open
(bmjopen.bmj.com). Meanwhile the medical journals in the BMC series
published by BioMed Central have been using this approach for
many years.
Impact of IT and greater use of online resources
on the peer review process
18. Evidence on online community open review.
In other experiments the Medical Journal of Australia24
and Nature25 made articles openly available
online during, rather than after, the peer review process and
invited free comments from readers. But responses from their communities
were limited and both journals concluded that this was no substitute
for formal, traditional peer review by experts. In the Nature
trial, which ran for four months in 2006, papers that survived
initial editorial assessment were hosted on an open server for
moderated public comment as well as simultaneous standard peer
review. During the study only 5% of authors agreed to take part
and, of the 71 displayed papers, only half received comments.
19. Sharing and reviewing raw data. The
Wellcome Trust and other major international funders have called
for public health researchers to make studies' raw data available.26
Annals of Internal Medicine, the BMJ, BMJ
Open, the PLoS journals and several BMC journals - among others - actively
encourage authors to share data in online repositories with necessary
safeguards to protect patient confidentiality. As yet, there has
been no real debate on whether or how such datasets should be
peer reviewed.
Possible alternatives to peer review
20. Invited moderation rather than peer review.
PLoS Currents: Influenza (http://knol.google.com/k/plos/plos-currents-influenza)
is an open access online journal that uses the Google Knol application (http://knol.google.com/k) to post informal articles or knols
("units of knowledge") that readers can rate and comment
on. The journal describes itself as "a website for rapid
and open sharing of useful new scientific data, analyses, and
ideas in all aspects of influenza research [where] ... all content
is moderated by an expert group of influenza researchers, but
in the interest of timeliness, does not undergo in-depth peer
review."
21. Spontaneous post-publication comment.
Many online journals encourage continuing discussion of their
content. The BMJ's Rapid Responses or eletters, posted
daily, provide a voluminous, lively, and often scholarly discourse
and constitute an important source of ongoing peer review (http://www.bmj.com/letters).
Twitter has also entered the fray: although its 140-character limit allows only the briefest of comments, tweets are facilitating rapid
and widespread sharing of links to articles and other online content
and can, it seems, quickly expose failings in peer review.27
22. Post-publication measures of quality and
impact. The web continues to bring new ways of rating and commenting on research articles and other scholarly publications. These include journal-specific measures of articles' usage and reach, eg the article-level metrics provided by PLoS journals (http://article-level-metrics.plos.org/)
and the annual audit conducted by the BMJ (http://resources.bmj.com/bmj/authors/bmj-papers-audit-1);
the independent rating of articles by services such as Faculty
of 1000 (http://f1000.com/) and McMaster/BMJ EvidenceUpdates (http://plus.mcmaster.ca/EvidenceUpdates/);
and - yet to come - the Impact Assessment for research within
the new UK Research Excellence Framework (http://hefce.ac.uk/research/ref/).
CONCLUSIONS
Peer review is an art rather than a science. It can
improve the trustworthiness and clarity of scholarly publications,
and its known limitations can be minimised. While there are many
ways to conduct and improve peer review, evidence shows it can
be an open and transparent endeavour without compromising the
quality of the process.
Trish Groves
Deputy Editor BMJ, on behalf of BMJ Group
9 March 2011
Competing interests: Trish Groves is the senior research
editor at the BMJ and is responsible for running the journal's
peer review processes. She is also Editor-in-Chief of BMJ Open.
REFERENCES
1. Merton RK (1942). The Normative Structure of Science. In: Merton RK (1973). The Sociology of Science: Theoretical and Empirical Investigations. Chicago: University of Chicago Press. ISBN 9780226520919. OCLC 755754.
2. Look H, Sparks S. The value of UK HEIs contribution
to the publishing process: summary report. Rightscom Ltd, 2010.
http://www.jisc-collections.ac.uk/Reports/valueofukhe/
3. Cambridge Economic Policy Associates for the
Research Information Network (2008). Activities, costs and funding
flows in the scholarly communications system in the UK. http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/activities-costs-and-funding-flows-scholarly-commu
4. Schroter S, Groves T, Højgaard L. BMC Medicine 2010;8:62. http://www.biomedcentral.com/1741-7015/8/62
5. Tenth report of the Science and Technology Committee, 2003-4 session. Scientific publications: free for all?
http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/39902.htm
6. Jefferson T, Rudin M,
Brodney Folse S, Davidoff F. Editorial peer review for improving
the quality of reports of biomedical studies. Cochrane Database
Syst Rev. 2007 Apr 18;(2):MR000016.
7. Godlee F, Jefferson T,
eds. Peer review in health sciences (second edition). BMJ
Books, 2003.
8. Ingelfinger F J. Peer review in biomedical
publication. Am J Med 1974;56:686-92.
9. Cole S, Cole J, Simon G. Chance and consensus
in peer review. Science 1981;214:881-6.
10. Hodgson C. How reliable is peer review? A
comparison of operating grant proposals simultaneously submitted
to two similar peer review systems. J Clin Epidemiol 1997;50:1189-95.
11. Fletcher R H, Fletcher S W. The effectiveness
of editorial peer review. In: Peer review in health sciences.
Godlee F, Jefferson T, eds. London: BMJ Publishing Group, 1999:45-56.
12. Hopewell S, Loudon K, Clarke M J, Oxman A
D, Dickersin K. Publication bias in clinical trials due to statistical
significance or direction of trial results. Cochrane Database
of Systematic Reviews 2009, Issue 1. Art. No.: MR000006. DOI:10.1002/14651858.MR000006.pub3.
13. Merton R K. The Matthew Effect in Science.
Science 1968;159:56-63.
14. Wenneras C, Wold A. Nepotism and sexism in
peer review. Nature 1997;387:341-3
15. Mark Ware Consulting & Mark Monkman.
Media for the Publishing Research Consortium (2007). Peer review
in scholarly journals: Perspective of the scholarly community:
an international study.
http://www.publishingresearch.net/documents/PeerReviewFullPRCReport-final.pdf
16. Black N, van Rooyen S, Godlee F, Smith R,
Evans S. What makes a good reviewer and a good review in a general
medical journal. JAMA 1998;280:231-3.
17. Schroter S, Tite L, Hutchings A, Black N:
Differences in review quality and recommendations for publication
between reviewers suggested by authors or by editors. JAMA
2006, 295:314-317.
18. Justice A C, Cho M K, Winker M, Berlin J
A, Rennie D, the PEER investigators. Does masking author identity
improve peer review quality? A randomised controlled trial. JAMA
1998; 280: 240-242.
19. Cho M K, Justice A C, Winker M A, Berlin
J A, Waekerle J F, Callaham M L, et al. Masking author identity
in peer review: what factors influence masking success? JAMA
1998; 280: 243-245.
20. Sense about Science. Peer
Review Survey 2009: preliminary findings http://www.senseaboutscience.org.uk/index.php/site/project/395
21. van Rooyen S, Godlee F, Evans S, Black N,
Smith R. Effect of open peer review on quality of reviews and
on reviewers' recommendations: a randomised trial. BMJ
1999; 318: 23-7
22. Walsh E, Rooney M, Appleby L, Wilkinson G.
Open peer review: a randomised controlled trial. Br J Psychiatry 2000;
176:47-51.
23. van Rooyen S, Delamothe T, Evans S. Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ 2010;341:c5729.
24. Bingham C M, Higgins G, Coleman R, Van Der
Weyden M B. Medical Journal of Australia internet peer-review
study. Lancet 1998; 352:441-5.
25. Greaves S, Scott J, Clarke M, Miller L, Hannay T, Thomas A, Campbell P. Nature's trial of open peer review. Nature 2006. doi:10.1038/nature05535. http://www.nature.com/nature/peerreview/debate/nature05535.html
26. Walport M, Brest P. Sharing research data
to improve public health. Lancet 2011;377:537-9.
27. Mandavilli A. Trial by Twitter. Nature
2011; 469: 286-7.
http://www.nature.com/news/2011/110119/full/469286a.html