Peer review
204. Many of the arguments in the debate on scientific
publications focus on the issue of peer review: do new developments
in the publishing market put it at risk? As is outlined in paragraphs
169-174, we have concluded that they must not. A factor
in this debate is the scientific community's capacity for self-policing.
All of the academics that we spoke to were confident that they
could determine the quality of a research article for themselves.
This stands to reason, given that it is the same academics
who carry out the function of peer review. Ironically, it is this
facility for self-regulation that calls peer review into question.
If academics can distinguish a good article from a bad one by
themselves, why do they need another academic to carry out this
function for them? From this argument stems the view that peer
review is unnecessarily censorious.
205. There are at least three strong arguments, however,
for keeping the system of peer review intact. Firstly, volume.
As has already been outlined, academics are producing more research
articles than ever before: output increases by approximately 3%
per year. Whilst academics might have the acumen to determine
which of these articles are worth reading, they probably do not
have the time to search through the entire output in order to
achieve this. The peer review services provided by publishers
act as a filter, saving academics time and thus also saving public
money. Secondly, peer review gives successful articles a mark
of distinction that helps to provide a measure of the academic's
and their department's level of achievement. As Procurement for
Libraries notes, for the academic, "scholarly publishing
in academic journals is essentially about validation of results
through the editorial and peer-review process".[351]
We heard that the main motivations for academics to publish were
based on career, funding and reputation. These incentives to publish
would be significantly reduced were the mark of achievement conferred
by passing successfully through the peer review process to be
abandoned. Thirdly, peer review gives the lay reader an indication
of the extent to which they can trust each article (see paragraph
132).
206. The usefulness of peer review to the scientific
process is not a guarantee of its quality. We wrote to the Editors
of four high-profile journals, Cell, The Lancet,
Science and Nature, to ascertain what measures they
used to ensure the integrity of the peer review process. Collectively
the Editors cited the following measures:
- Authors are given the opportunity
to exclude from consideration any reviewers who are affected by
a potential conflict of interest;
- Reviewers are given the opportunity to disqualify
themselves on the basis of a conflict of interest;
- Articles are sent to several reviewers: Cell, for example,
uses three reviewers per article and The Lancet uses four.
This allows their findings to be moderated against one another;
- Editors track all the reviews submitted by a
particular reviewer for consistency. Any comments that are judged
to be unduly harsh or lenient within that context are noted;
- Editors evaluate all claims of reviewer bias
or misconduct and appropriate action is taken; and
- Journals have a formal appeals procedure available
for all rejected articles.[352]
207. In addition, peer reviewers have no responsibility
for making the final decision about which articles are published,
and most of them are unpaid, ensuring that they retain a degree
of detachment from the publishing process. All of the above measures
attempt to minimise the risk of a compromise to the peer review
system. However, as Richard Horton, Editor-in-Chief of The
Lancet, pointed out in his response, "these processes
rely on the integrity of the individuals involved, and we rely
on trust between editors, reviewers, and authors".[353]
As is the case with any process, peer review is not an infallible
system and to a large extent depends on the integrity and competence
of the people involved and the degree of editorial oversight and
quality assurance of the peer review process itself. Nonetheless
we are satisfied that publishers are taking reasonable measures
to maintain high standards of peer review. Peer review is an issue
of considerable importance and complexity and the Committee plans
to pursue it in more detail in a future inquiry.
The Research Assessment Exercise
208. The Research Assessment Exercise (RAE) is used
as a means of implementing a policy of selective funding for universities.
It aims to measure the quality of research in different departments,
rewarding excellence where it occurs and encouraging its development
elsewhere. The rating awarded to a department by the RAE helps
to determine levels of funding. As one of the most readily identifiable
and quantifiable research outputs, journal articles are a key
measure used by the RAE. What follows is a brief analysis of the
impact of the RAE on STM publishing trends. We will examine wider
issues concerning the RAE in a forthcoming Report.[354]
209. Publication enhances career and reputation in
a general sense: academics do not publish their research findings
simply because of the RAE. As Rama Thirunamachandran pointed out
in oral evidence, "if you look at other countries which do
not have an RAE, people still want to publish in Nature".[355]
Nonetheless, we received evidence to suggest that the measures
used in the RAE distorted authors' choice of where to publish.
Although RAE panels are supposed to assess the quality of the
content of each journal article submitted for assessment, we reported
in 2002 that "there is still the suspicion that place of
publication was given greater weight than the papers' content".[356]
This is certainly how the RAE was perceived to operate by the
panel of academics we saw on 21 April. Professor Williams told
us that he chose to publish in journals with high impact factors
because "that is how I am measured every three years or every
five years; RAE or a review, it is the quality of the journals
on that list".[357]
Similarly Professor Crabbe stated that "the driver is finance.
The driver is the Research Assessment Exercise. Impact factors,
the half-life of journals are what drives us, I am afraid".[358]
In both oral and written evidence, HEFCE denied that journal impact
factors formed the basis for an assessment of the quality of articles
submitted to the RAE.
210. Whether or not RAE panels use journal impact
factors as an indication of the quality of the articles that they
assess, the perception that this is the case causes a bias amongst
UK authors towards journals with higher impact factors. This in
turn increases the journal's impact factor still further. In this
way, regrettably, the RAE indirectly supports a hierarchy of journals,
making it difficult for new and little-known journals, including
some author-pays journals that have appeared only recently,
to compete. The Open University told us
that "Government should encourage the RAE to develop new
quality indicators so that articles published in new open access
journals can be evaluated in an even-handed manner in the Research
Assessment Exercise".[359]
However, the current system, which does not formally take account
of impact factors, should already ensure that this is the case.
The perception that the RAE rewards publication in journals
with high impact factors is affecting decisions made by authors
about where to publish. We urge HEFCE to remind RAE panels that
they are obliged to assess the quality of the content of individual
articles, not the reputation of the journal in which they are
published.
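Since the self-reinforcing effect described in paragraph 210 turns on how an impact factor is calculated, it may help to sketch the conventional two-year definition used by citation indexers; the census year 2004 below is purely illustrative and does not come from the evidence we received:
\[
\mathrm{IF}_{2004}(J) = \frac{\text{citations received in 2004 by articles published by } J \text{ in 2002 and 2003}}{\text{number of citable articles published by } J \text{ in 2002 and 2003}}
\]
On this definition, a flow of well-cited submissions towards already favoured journals raises their numerator in later years, which is how the perceived bias reinforces the existing hierarchy.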
351 Ev 153
352 Ev 427-8
353 Ev 430
354 HC 586. See also the Second Report of the Science and Technology Committee, Session 2001-02, The Research Assessment Exercise (HC 507)
355 Q 397
356 Second Report of the Science and Technology Committee, Session 2001-02, p 17
357 Q 285
358 Q 286
359 Ev 323