2 Peer review in publishing
11. Peer review, in the context of publishing,
can take place before or after an article is published. The first
records of journal pre-publication peer review date back to the
17th century, when the Royal Society's Secretary, Henry
Oldenburg, adopted it as editor of the journal, Philosophical
Transactions of the Royal Society.[18]
The concept of peer review, however, may be even older. The Syrian
physician, Ishaq bin Ali Al Rahwi (AD 854-931) is thought to have
first described the concept in his book, Ethics of the Physician.[19]
Al Rahwi apparently "encouraged doctors to keep contemporaneous
notes on their patients, later to be reviewed by a jury of fellow
physicians".[20]
12. The Association of Learned and Professional
Society Publishers (ALPSP) explained that "peer review varies
considerably between scientific disciplines; it is not a one-size-fits-all
process. It has evolved to meet the needs of individual scientific
communities".[21]
Peer review originally evolved in a piecemeal and haphazard way
and did not become standard practice in publishing until the middle
of the 20th century.[22]
As pointed out by numerous individuals and organisations, peer
review is by no means a perfect system.[23]
The Publishers Association described peer review as a system "based
on human endeavour" which therefore "cannot be perfect
or infallible".[24]
Professor John Pethica, Physical Secretary and Vice President
of the Royal Society, surmised: "Given that there is no perfect
system, we have to devise a system which optimises the process".[25]
The traditional peer-review process
13. The key features in the peer-review process
in scholarly publishing are summarised in the figure below:
14. Authors submit a manuscript to their chosen
journal, usually via a web-based system. It is not unusual
for manuscripts to be sent to a few journals before being accepted
for publication, although authors are, by convention, only allowed to
send their manuscripts to one journal at a time. Initial in-house
checks are carried out by part of the editorial team. These will
include basic checks (for completeness and adherence to journal
policies) as well as editorial checks (for scope, novelty,
quality and interest to journal readership). At this stage, manuscripts
may be returned to authors for completion and resubmission if
the technical omissions are extensive; in minor cases, authors
may just be asked to provide the missing items. Manuscripts can
also be rejected at this stage on editorial grounds, without being
sent out for external peer review. This decision is made by the
journal editors. In some top journals, the rejection rate at this
stage can be very high. For example, editors at Nature
"reject 70-80% of submitted papers (the exact proportion
varies with discipline) on purely editorial grounds".[26]
Manuscripts that pass the initial checks are sent to external
reviewers, usually two or more. The reviewers assess, and report
back to the editors on issues such as:
- Study design and methodology;
- Soundness of work and results;
- Presentation and clarity of data;
- Interpretation of results;
- Whether research objectives have been met;
- Whether a study is incomplete or too preliminary;
- Novelty and significance;
- Ethical issues; and
- Other journal-specific issues.
The reviewers' role at this stage is to provide a
critical appraisal, advise and make recommendations on the manuscript.
Editors take the final decision as to whether or not to accept
the manuscript for publication. The decision is then communicated
to the author. This will generally be one of the following: accept;
accept with revision (minor or major); reject but encourage resubmission;
or reject.
TYPES OF PEER REVIEW
15. There are three main types of peer review
in use. They are: "single-blind review", "double-blind
review" and "open review". The Royal Society explained
that:
By far the commonest system in use is "single
blind" peer review in which the author's name and institution
is known to the reviewer, but the reviewer's name is not provided
to the author.
A number of journals instead choose to operate a
"double blind" peer review system which is fully anonymised
(i.e. the author(s) are unaware of the identity of the reviewer(s)
and vice versa).
Recently, there have been some experiments with a
third type, "open" peer review, in which the authors'
and reviewers' names are revealed to each other. [...] Open
peer review can be reasonably described as an experimental system
at this stage and is far from common.[27]
16. During the course of this inquiry we heard
that the Institute of Physics (IOP), the Royal Society and the
Royal Society of Chemistry (RSC) use single-blind review.[28]
The publisher, John Wiley & Sons, also primarily uses single-blind
review.[29] It is the
commonest system in scientific journals. In the social sciences,
peer review "is almost invariably a double-blind process".[30]
Some journals, such as the BMJ, choose to use open peer
review.[31]
17. The BMJ Group explained that:
Responses to a 2009 survey of more than 4000 science
reviewers suggest, however, that reviewers prefer anonymity: 76%
favoured the double blind system where only the editor knows who
the reviewers and authors are.[32]
This built on a 2007 survey of around 3000 academics
and editors around the world (of whom about 10% worked in UK [Higher
Education Institutions] and 18% were working in clinical medicine
or nursing) which found relatively little support for open review
as an alternative to single- or double-blinded review.[33]
18. It is sometimes suggested that bias in the
peer-review process (see paragraphs 42-43) could be reduced by
using the double-blind approach.[34]
However, Dr Nicola Gulley, Editorial Director at IOP Publishing
Ltd, explained that this is not always practical:
Some of the research communities that I work with
particularly are very small, so doing double-blind refereeing
where neither the author nor the referee knows who each other
is defeats the object because, generally, the referees will know
who the author is from the subject area that they are working
in or from the references and things like that. It varies very
much between different subject areas.[35]
Others also acknowledged the problem of authors guessing
the names of reviewers and vice versa in double-blind peer review.[36]
19. Dr Liz Wager, Chair of the Committee on Publication
Ethics (COPE), told us that COPE does not recommend one system
or another. The reason given was that:
some editors have said to us, "We work in a
very narrow field. Everybody knows everybody else. It just would
not work to have this open peer review." There are different
options. [...] My opinion is that it depends on the discipline.
With a discipline as big as medicine, where there are hundreds
of thousands of people all around the world you can ask and they
probably don't bump into each other the next day, open peer review
seems to work. In much narrower and more specialised fields, it
perhaps does not, and the traditional system of the blinded review
is perhaps better.[37]
20. We conclude that different
types of peer review are suited to different disciplines and
research communities. We consider that publishers should ensure
that the communities they serve are satisfied with their choice
of peer-review methodology. Publishers should keep them updated
on new developments and help them experiment with different systems
they feel may be beneficial.
Assessing manuscripts
21. The core of the traditional peer-review process
is the critical appraisal of the work and its reporting. The Public
Library of Science (PLoS) explained that:
It is helpful to divide [peer review's] functions
into two broad areas: technical and impact assessment. Whereas
technical assessment tends to be objective and provides greater
confidence in (although cannot assure) the reliability of published
findings, impact assessment is subjective and its role is less
clear-cut.[38]
22. The value of the technical assessment is
seldom questioned. Dr Michaela Torkar, Editorial Director at BioMed
Central, was of the view that:
It is fairly straightforward to think about scientific
soundness because it should be the fundamental goal of the peer
review process that we ensure all the publications are well controlled,
that the conclusions are supported and that the study design is
appropriate.[39]
We also heard from a number of witnesses that there
is evidence that many authors feel that peer review improves the
quality of the articles that they publish.[40]
23. Questions are, however, often raised about
the impact assessment. The impact assessment can be thought of
as the means by which an editorial decision is taken to publish
or not publish a manuscript. It is based on various factors, for
example, whether the subject of the manuscript will be of interest
to the journal readership or whether the research is perceived
to represent a ground-breaking discovery. Dr Nicola Gulley of
the IOP explained that peer review in this respect acts as a "filter",
helping scientists find the information that is of interest to
them.[41] Dr Mark Patterson,
Director of Publishing at the PLoS, explained the scale of the
current situation:
About 1.5 million [peer-reviewed] articles are
published every year. Before any of them are published, they are
sorted into 25,000 different journals. So the journals are like
a massive filtering and sorting process that goes on before publication.
The question we have been thinking about is whether that is the
right way to organise research.[42]
24. Professor Teresa Rees CBE, former Pro-Vice-Chancellor
at Cardiff University, added that:
We have an expanding number of journals [...] and
there is increasing pressure to publish. I think there is a question
of whether academics can keep up with reading all the material
in the growing number of journals. One might want to have a debate
at some stage about whether that is the most effective and efficient
way of managing all the potential research that can be published.[43]
25. Published research is currently organised
and sorted into thousands of journals. The impact or perceived
importance of a published article is often judged by the "Impact
Factor" of the journal in which it appears. A journal's Impact
Factor is calculated annually by Thomson Reuters. It is "a
measure of the frequency with which the 'average article' in a
journal has been cited in a particular year or period".[44]
It is, however, a measure of the journal and not of each individual
article. It should also be noted that there are many peer-reviewed
journals which are not indexed by Thomson Reuters and therefore
do not have an Impact Factor; the Thomson Reuters 2010 Journal
Citation Reports contains data for 10,196 journals.[45]
Impact Factors and high-impact journals are covered in more detail
in paragraph 59.
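The calculation behind the figure can be written out explicitly. The two-year citation window below reflects the standard Journal Citation Reports methodology; the numbers in the worked example are purely illustrative:

```latex
% Impact Factor for year Y (standard two-year citation window)
\[
\mathrm{IF}_{Y} =
\frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
     {\text{citable items published in years } Y-1 \text{ and } Y-2}
\]
% Illustrative example: a journal that published 200 citable items
% in 2008-2009, which between them attract 600 citations in 2010, has
\[
\mathrm{IF}_{2010} = \frac{600}{200} = 3.0
\]
```

Because the numerator counts citations to all of a journal's recent content, a handful of highly cited papers can dominate the figure, which is one reason the Impact Factor measures the journal rather than any individual article.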
26. The question that arises when assessing the
merits of the impact assessment made during the peer-review process
is: how do journal editors or reviewers judge whether a particular
piece of work is important? Professor Ian Walmsley, Pro-Vice-Chancellor
at the University of Oxford, told us that this was "a very
difficult thing to do".[46]
He added that:
In many ways [impact] is something best assessed
post facto; that is, the impact of this work is: how many
other people find it a fruitful thing on which to build? How many
people find it a productive way to direct their research as a
consequence?[47]
Dr Rebecca Lawrence, Director of New Product Development
at Faculty of 1000 Ltd, agreed that:
often it is not known immediately how important something
is. In fact, it takes quite a while to understand its impact.
Also, what is important to some people may not be to others. A
small piece of research may be very important if you are working
in that key area. Therefore, the impact side of it is very subjective.[48]
Dr Michaela Torkar of BioMed Central was also of
the opinion that "the assessment of what is important can
be quite subjective".[49]
27. Dr Mark Patterson, from PLoS, gave his view
on the traditional process and how things may begin to change:
Traditionally, technical assessment and impact assessment
are wrapped up in a single process that happens before publication.
We think there is an opportunity and, potentially, a lot to be
gained from decoupling these two processes into processes best
carried out before publication and those better left until after
publication. [...] There are benefits to focusing on just the
technical assessment before publication and the impact assessment
after publication. That becomes possible because of the medium
that we have to use now. The 25,000 journal system is basically
one that has evolved and adapted in a print medium. Online we
have the opportunity to rethink, completely, how that works. Both
[technical and impact assessment] are important, but we think
that, potentially, they can be decoupled.[50]
28. Dr Malcolm Read OBE, Executive Secretary
of the Joint Information Systems Committee (JISC), agreed that
"separating the two is important because of the time scale
over which you get your answer".[51]
29. The importance of a pre-publication
technical assessment is clear to us. It should be a fundamental
aim of the peer-review process that all publications are scientifically
sound. Assessing the impact or perceived importance of research
before it is published will always require subjective judgement
and mistakes will inevitably be made. We welcome new approaches
that focus on carrying out a technical assessment prior to publication
and making an assessment of impact after publication.
Common criticisms
30. As explained in paragraph 12, peer review
is by no means a perfect system. Professor Sir John Beddington,
Government Chief Scientific Adviser, stated that:
If you posed the question, "Is the peer review
process fundamentally flawed?" I would say absolutely not.
If you asked, "Are there flaws in the peer review process
which can be appropriately drawn to the attention of the community?"
the answer is yes.[52]
However, as pointed out by Dr Fiona Godlee, Editor-in-Chief
of BMJ Group, "we have to acknowledge that there is a huge
variety in the quality of peer review across the publishing sector".[53]
Though there is variation in quality across the publishing sector,
it is important to note that "peer review is independent
of the business model applied to the journal".[54]
In particular, we heard that "it is terribly important to
put to bed the misconception that open access [see paragraph 79]
somehow does not use peer review. If it is done properly, it uses
peer review very well".[55]
In this section we explore some of the common criticisms of the
peer-review process.
STIFLES INNOVATION
31. A common criticism of peer review is that
in some cases "there may be a tendency towards conservative
judgements".[56]
The UK Research Integrity Office Ltd (see paragraph 254) went
so far as to suggest that "there is a danger that the peer-review
process can stifle innovation and perpetuate the status quo".[57]
In response to this, Dr Malcolm Read, JISC, stated: "that
sounds a bit overstated as peer review, in one form or another,
has been an underpinning aspect of research, arguably even
before journals as we know them existed".[58]
32. Dr Gulley from IOP Publishing Ltd told us
that "there is more conservatism in some research areas than
there is in other areas".[59]
Professor Ron Laskey, Vice President of the Academy of Medical
Sciences, elaborated with an example:
It can be more difficult to establish a novel and
completely unexpected new branch of science if editors of journals
are not alert to the fact that it is coming. There are one or
two recent examples. One that springs to mind is a study in plant
sciences which concerned resistance to viral infection in plants.
That has given rise to a completely new area of understanding
of a group of molecules that turn out to be important in all cells,
not just in viral defence mechanisms against plants but because
they change fundamentally in certain types of cancer. That was
a small niche of advance that has suddenly become a front-line
subject, but it would have been very difficult to publish that
in a front-line journal at the time the work was being done.[60]
33. Dr Robert Parker, Interim Chief Executive
of the Royal Society of Chemistry (RSC), added that "knowing
the right people to ask about research that looks slightly different"
was important in the peer review of unexpected or unusual research.[61]
He added that the RSC "found, from doing studies on the articles
that we reject, that most of them end up being published somewhere
else. There are very few articles that we receive that are scientifically
completely wrong. Usually, there is some merit in them".[62]
Dr Malcolm Read, JISC, agreed, stating that this "cuts against
the conservatism".[63]
34. Dr Philip Campbell, Editor-in-Chief of Nature
and Nature Publishing Group, expressed the view that Nature
was open to bold new research. He told us that Nature "would
love to publish something that strongly made a provocative case
[...] that is not because we want to be sensationalist but
because [...] it needs to be out there and we would like to
be the place to publish it".[64]
35. Robert Campbell, Senior Publisher at Wiley-Blackwell,
agreed that it was not in a journal's best interest to be overly
conservative. He stated that:
If you have a very conservative editorial board,
the journal will suffer. It is a market; the more proactive entrepreneurial
editorial teams will win out and build better, more successful
journals. It is a very dynamic market. A conservative editorial
board wouldn't last long.[65]
36. Publishers are becoming increasingly
entrepreneurial and innovative. Authors now have the option of
avoiding a conservative editorial judgement on provocative research
by submitting their manuscript to one of an increasing number
of online repository-type journals, such as PLoS ONE. These
journals assess only the technical merit of the manuscript and
are discussed in more detail in paragraphs 79-89.
37. However, it is not always simply an issue
of the research being too "provocative". Dr Philip
Campbell, Nature, explained that:
sometimes [bold new claims] are too easily said and
not backed up well enough. A journal, which also has a magazine
role in Nature, has one of the most critical audiences
in the world. They love to be stimulated but they also want to
make damned sure that the evidence on which we base the stuff
we publish is reasonably strong.[66]
As the Royal Society summarised, it seems that "in
general, an extraordinary claim requires extraordinary evidence".[67]
That is, a piece of research with potentially controversial impact
would likely be more rigorously tested than research making a
lesser claim.
38. Dr Philip Campbell, Nature, expanded
on the need to rigorously assess research:
Another use of the word "conservative"
concerns robustness. For us, peer review helps us deliver robust
publications. We, at Nature, if anything, are more conservative
than other journals. We make researchers go the extra mile to
demonstrate what they are saying. I also celebrate the fact that
we do not want to be conservative with papers that go against
the status quo. We want to encourage radical discoveries.[68]
39. Dr Godlee, BMJ Group, agreed that "conservatism
is not a bad thing in science or medicine in terms of making sure
that what we publish is robust, relevant and properly quality
controlled".[69]
BIASED
40. In addition to a perceived bias toward conservative
judgements, Dr Liz Wager explained that "there are other
kinds of biases as well, but a well set-up system and a good editor
will minimise those biases".[70]
41. Professor Teresa Rees described the problem
of gender bias in peer review:
Do people operate with a preconceived notion of quality?
There is a whole series of studies about this. For example, evidence
from the States suggests that if John Mackay or Jean Mackay submits
an article it will be peer reviewed more favourably if it is by
John Mackay. There is a whole series of papers to that effect.
How do we deal with this? I add that this is discriminatory behaviour
by both men and women. It seems to me that in the selection of
reviewers to serve on research council boards, journals or promotion
panels we need transparency so that people can apply and be assessed
against merits to gain those positions, and we need turnover so
it is not the same people doing that assessment for 20 or 30 years.
We might want [...] double-blind reviewing so you don't know
the sex.[71]
The Committee on Publication Ethics (COPE) also acknowledged
the problem of bias but added that "the evidence is not clear-cut
and, in some cases, is contradictory".[72]
42. Professor Teresa Rees highlighted another
similar problem: that of "unconscious bias against people
with foreign-sounding names". She stated that:
Brazil's science minister is very concerned about
this and has encouraged academics there to co-author with people
from the US or Europe who may have a surname that is more familiar
to reviewers. Double-blind marking would deal with that unconscious
bias that affects peer reviewers as it does any other member of
the public.[73]
43. The BMJ Group added that studies have shown
peer review to also be systematically biased against authors'
ideas, reputations and locations.[74]
The use of double-blind peer review is one way to minimise bias,
but there are practical issues relating to its use, as described
in paragraph 18. COPE explained that "it is probably impossible
to eliminate all bias from peer review but good editors endeavour
to minimize it".[75]
The role of the editor is further explored in chapter 3.
Poor assessment of multidisciplinary work
44. It has also been suggested that peer review
is biased against multidisciplinary research.[76]
The Society for General Microbiology and the John Innes Centre
expressed the concern that with the rise in multidisciplinary
research it may sometimes be difficult to find reviewers with
the right skills and expertise needed to assess multidisciplinary
projects.[77]
45. Both PLoS and the UK Research Integrity Office
Ltd (UKRIO) recommended that if the work is multidisciplinary,
it may be necessary to seek the opinions of a larger number of
reviewers.[78] This is
the approach taken by the Royal Society, as described by Professor
John Pethica:
The process in the [Royal] Society is, essentially,
to increase greatly the number of referees and reviewers. Six
or seven would be common, whereas two or three might be the number
you would have within a well-defined subject, to try and ensure
you get that coverage for a number of broad views. [...] In
general, one is obliged to do that simply because there may be
a few people who have the vast and broad knowledge required, but
in truly interdisciplinary areas, which really span gaps, you
have to get a broad perspective and that means using more people,
including from a variety of countries, environments and so forth.[79]
EXPENSIVE
46. Another common criticism of peer review is
that it is expensive. In 2008, a Research Information Network
report estimated that the unpaid non-cash costs of peer review,
undertaken in the main by academics, amount to £1.9 billion globally
each year.[80] In 2010,
a report commissioned by JISC Collections brought together evidence
from a number of studies.[81]
It concluded that it costs UK Higher Education Institutions (HEIs),
in terms of staff time, between £110 million and £165
million per year for peer review and up to £30 million per
year for the work done by editors and editorial boards.[82]
The BMJ Group pointed out that "peer reviewers are rarely
paid by publishers, and their work is often done out of hours".[83]
The financial and personal burden on reviewers is discussed below.
47. The cost of peer review does not, however,
fall solely on reviewers and HEIs. Elsevier explained that "publishers
have [also] made significant investments into the peer review
system to improve [its] efficiency, speed, and quality".[84]
We explored this in further detail with Mayur Amin, Senior Vice
President of Research & Academic Relations at Elsevier, who
told us that:
Overall, one of the biggest investments for everyone
in the publishing industry in the last decade or so has been migration
to some of the electronic platforms. Across the industry, our
estimate is that somewhere in the order of £2 billion of
investment has been made. That includes the technologies at the
back end to publish the materials as well. The technology has
included submission systems, electronic editorial systems, peer
review support systems, tracking systems and systems that enable
editors to find reviewers.[85]
48. Elsevier later explained that the £2
billion estimate was based on a detailed review of Elsevier's
own technology investments (£600 million between 2000 and
2010), which were then extrapolated to the entire industry.[86]
The areas of investment are summarised in the table below:
Technology investment areas (2000-2010)

| Area | Industry estimate |
| Author submission & editorial systems | >£70m |
| e-journals and reference works back files | >£150m |
| Production Tracking Systems | >£50m |
| Electronic Warehousing | >£60m |
| Electronic Publishing Platforms, incl. search and discovery platforms | >£1500m |
| Other related back-office and cross-industry systems, e.g. digital preservation, Crossref for linking, CrossCheck for plagiarism detection, creation of special font sets, development of technical standards | >£300m |

Data provided by Elsevier[87]
BURDENSOME
49. Related to cost issues is criticism of the
perceived burden on academics involved in the peer-review process,
particularly in the role of reviewer. Vitae, the UK organisation
championing the personal, professional and career development
of doctoral researchers and research staff, stated that:
Most researchers will experience both authoring and
reviewing papers during their careers and therefore have a vested
interest in the system being as robust, ethical and equitable
as possible. [...] There is an expectation that researchers will
contribute to sustaining the peer review system by participating
as reviewers. This is predominantly without financial or formal
recognition, except for members of editorial boards (or grant
review panels). [Peer review] is rarely acknowledged as part of
the formal workload of an academic researcher. [...] Reviewing
is often an 'out of normal hours' activity and therefore adds
additional burdens on researchers [...] 'Good' reviewers are more
likely to be invited to do more reviewing, thereby adding to their
workloads.[88]
The "burden" on peer reviewers is discussed
in more detail in chapter 3.
LACK OF EVIDENCE OF EFFICACY
50. Despite these criticisms, the disappearance
of pre-publication peer review tomorrow would represent a "danger"
to the scientific record.[89]
Research Councils UK stated that "the strengths of peer review
far outweigh the weaknesses".[90]
Professor Ron Laskey of the Academy of Medical Sciences informed
us that in the absence of peer review a "particular problem"
in the biomedical sciences would be "sorting the wheat from
the chaff and knowing what information could be depended on".[91]
Tracey Brown, Managing Director of Sense About Science, used the
analogy of a "sea of material" that needs to be sorted,
one way or another.[92]
She added that:
The important thing with a system that produces 1.3
million papers a year is that it is self-reflective. A lot of
study goes on [...] looking at the fate of papers that aren't
published and looking, just generally, at trends across the system.
So long as that is going on and patterns of behaviour can be spotted,
then the system can be self-correcting.[93]
51. Sir Mark Walport highlighted a recent study
by the Wellcome Trust:
We do conduct studies of peer review. The Wellcome
Trust published a paper in PLoS ONE a couple of years ago
in which we took a cohort of papers that had been published. We
post-publication peer-reviewed them and then we watched to see
how they behaved against the peer review in bibliometrics. There
was a pretty good correlation, although there were differences.
Experiments of one sort or another are always going on.[94]
David Sweeney, Director for Research, Innovation
and Skills at the Higher Education Funding Council for England
(HEFCE), added that:
Through [HEFCE's] funding of JISC and [...] the
Research Information Network, much work has been carried out [looking
at peer review] and we remain interested in further work being
carried out where the objectives are clear.[95]
52. The BMJ Group, however, was of the view that
"little empirical evidence is available to support the use
of editorial peer review".[96]
The little evidence there is on editorial peer review is inconclusive.[97]
Richard Horton, Editor-in-Chief of The Lancet, explained
that Tom Jefferson and colleagues concluded in their review of
the evidence that:
"Editorial peer review, although widely used,
is largely untested and its effects are uncertain". [Jefferson
and colleagues] went on, "Given the widespread use of peer
review and its importance, it is surprising that so little is
known of its effects."[98]
53. In a recent article in the journal, Breast
Cancer Research, Dr Richard Smith, former Editor of the BMJ,
referred to a quote by Drummond Rennie, deputy editor of the Journal
of the American Medical Association, who once said "If peer
review was a drug it would never be allowed onto the market".[99]
Dr Smith added:
not only do scientists know little about the evidence
on peer review but most continue to believe in peer review, thinking
it essential for the progress of science. Ironically, a faith
based rather than an evidence based process lies at the heart
of science.[100]
54. COPE, however, noted that:
lack of evidence of efficacy is not the same as saying
there is evidence that it does not work. Peer review is difficult
to study, partly because its functions have not always been clearly
defined.[101]
55. Dr Godlee, BMJ Group, suggested a way forward:
The overall level of evaluation of peer review is
very poor [...] The UK could lead on [a programme of research].
Funding [for this] should come from [...] a combination of the
journal publishing world, the grant-giving world, industry, but
also public funding.[102]
56. Professor Rick Rylance told us that Research
Councils UK "would be open to trying to think about how that
might be researched".[103]
However, when we asked Professor Sir Adrian Smith, Director General
of Knowledge and Innovation in the Department for Business, Innovation
and Skills (BIS), whether there was a need for a programme of
research to test the evidence for justifying the use and optimisation
of peer review in evaluating science, he responded:
The short answer is no. [Peer review] is an essential
part of the scientific process, the scientific sociology and scientific
organisation that scientists judge each other's work. It is the
way that science works. You produce ideas and you get them challenged
by those who are capable of challenging them. You modify them
and you go round in those kinds of circles. I don't see how you
could step outside of the community itself and its expertise to
do anything other.[104]
57. In summary, the peer-review process, as used
by most traditional journals prior to publication, is not perfect.
We have heard that there are a number of criticisms of it, including
that: it has a tendency towards publishing conservative research
(although this should not be confused with robustness); it does
not adequately guard against bias; it is expensive; and it represents
a huge burden on researchers. Despite these criticisms editorial
peer review is viewed by many as important. However, there is
little solid evidence on its efficacy.
58. We recommend that publishers,
research funders and the users of research outputs (such as industry
and Government) work together to identify how best to evaluate
current peer-review practices so that they can be optimised and
innovations introduced, and the impact of the common criticisms
of peer review minimised. We consider that this would also help
address any differences in the quality of peer review that exist.
We encourage increased recognition that peer-review quality is
independent of journal business model; for example, there is a
"misconception that open access somehow does not use peer
review".
High-impact journals
59. Impact Factor was defined in paragraph 25
as "a measure of the frequency with which the 'average
article' in a journal has been cited in a particular year
or period".[105]
As we have noted, a journal's Impact Factor is calculated annually
by Thomson Reuters and it often serves as a proxy measure for
the impact or perceived importance of an article published in
that journal. As such, publishing in a high-impact journal is
traditionally perceived to represent a big achievement and is
often used as a proxy measure for assessing both the work of researchers
and research institutions. This is discussed in further detail
in paragraphs 165-177.
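As an illustration, the standard two-year calculation used by Thomson Reuters can be written as follows (this is the generally published definition, not a formula set out in the evidence):

```latex
\mathrm{IF}_{Y} \;=\;
  \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in years } Y-1 \text{ and } Y-2}
```

So, for example, a journal that published 200 citable items across 2009 and 2010, and whose items received 500 citations during 2011, would have a 2011 Impact Factor of 2.5.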
60. Elsevier told us that approximately 3 million
manuscripts are submitted to journals every year. Of these, around
half are rejected. It explained that "rejection rates vary
by journal, for example titles such as Cell and The
Lancet, which have extremely high publication impact [...]
have rejection rates of 95%".[106]
We questioned a group of publishers about why rejection rates
are so high. Dr Andrew Sugden, Deputy Editor and International
Managing Editor at Science (where more than 90% of the
submissions are rejected),[107]
explained that:
Part of it is simply that they are weekly magazines
with a print budget. We are publishing 20 papers [...] a week,
and a lot of people want to be published in them. We are receiving
10 times as many, roughly. [...] We want to showcase the best
across the range of fields in which we publish, so we have to
be highly selective to do that.[108]
61. Dr Philip Campbell of Nature suggested
that as journals increase their presence online and print journals
decline, the "pressure is lessened".[109]
He added, however, that Nature would probably still publish
the same number of papers.[110]
Dr Fiona Godlee, BMJ Group, agreed that printing journals
is no longer a constraint, but explained that editorial resource
is.[111] She added
that journals often find that "if they reduce the number
of research papers they publish, their Impact Factor creeps up
quicker. That is a commercial reputational issue".[112]
62. While high Impact Factors may be good for
journals, the British Antarctic Survey told us that authors are
known to complain that "for the very high profile journals
with high Impact Factors, competition for space is fierce, and
decisions about which papers are accepted can seem rather random".[113]
It noted, however, that:
these decisions are often editorial ones based on
topicality, and not on peer review; and [...] papers rejected
from such journals will generally be published elsewhere. If they
are of sufficient importance this will usually be recognised by
high citation numbers wherever they are published.[114]
The need to publish in high-impact journals and the
effect this has on researchers and research careers is discussed
in paragraphs 165-177.
63. Authors are faced with a vast range of journals
in which to publish if they fail to get into a high-impact journal.
We were told that peer review "has led to the development
of a pecking order for journals".[115]
Manuscripts that are rejected from a high-impact journal will
often make their way down the pecking order until they find a
home in a journal. This can be a time-consuming process; at each
stage the manuscript is first assessed by editors who determine
whether it fits the scope of the journal before potentially being
sent out for external peer review. Dr Godlee explained that:
increasingly people are going straight into one of
the big open access journals, such as PLoS ONE. [...]
A lot of the publishers are beginning to open up so that people
can get speedy publication if they haven't got into the journal
of their choice. That is a good thing. That means we will see
authors being able to move on to the next thing rather than spending
a lot of their time adapting a paper for yet another journal which
is going to reject it and then move on.[116]
64. The PLoS ONE journal model is discussed
in further detail in paragraphs 79-89. Another method for reducing
the burden of resubmitting rejected manuscripts to new journals,
with fresh rounds of review, is the cascading system of review,
which is covered in paragraphs 146-152.
Innovation in peer review
65. Recent years have seen experiments with deviations
from the traditional peer-review process, some more successful
than others. In this section we discuss three well-known
examples: pre-print servers; experiments in open peer review;
and the move towards repository journals.
PRE-PRINT SERVERS
66. An innovative approach to peer review that
has worked well for the physics community is the use of a pre-print
server. Dr Nicola Gulley of IOP Publishing Ltd explained that
the "arXiv" pre-print server was set up to allow authors
to submit work that is "at a very preliminary stage".[117]
The physics community is then able to access this work and comment
on it. Dr Gulley explained that arXiv:
originated from the high energy physics area where
they had a need to be able to discuss the results across the international
collaborations. A lot of the work that is posted, particularly
from areas such as high energy physics, also goes through internal
peer review within the research facilities as well before it is
posted.[118]
67. Some of the benefits of the arXiv system
were described by the Royal Society: it "allows the scientists
to publish research quickly and get informal feedback and identify
any weaknesses. This is then followed by formal peer review in
a journal".[119]
Dr Gulley explained that "a high percentage of articles that
are pre-prints are eventually submitted to journals and get published
in journals [...] so there is still that requirement for the
independent peer review".[120]
She added that:
We make it very easy for authors to be able to submit
from the arXiv into our journals, for example, and this is common
across many physics publishers, where the arXiv number can be
used when submitting the article to a journal. Authors are encouraged
to update their versions as well. From the publishing side, we
encourage them to update the citations so that the link goes back
to the final version of record once it has been peer reviewed
and published.[121]
68. The IOP provided further details of how it
makes this easy for authors:
Within our online submission form there is an option
for authors to enter their arXiv reference number when they submit
the article to be considered for publication. This number enables
us to locate the article in question and automatically upload
the files from arXiv to our peer review system for processing.[122]
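The lookup described above can be sketched in code. The function below is hypothetical (the IOP's actual system is not described in the evidence): it validates a modern arXiv reference number and builds the URLs from which a publisher's submission system could retrieve the article, using arXiv's public export API.

```python
import re

# Modern arXiv identifiers look like "1101.1234" or "1101.12345v2".
# Older-style identifiers such as "hep-ph/9901234" are not handled here.
ARXIV_ID = re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$")

def arxiv_urls(arxiv_id: str) -> dict:
    """Given an arXiv reference number, return the URLs a publisher's
    peer-review system could use to locate and fetch the article.

    Raises ValueError for identifiers that don't match the modern format.
    """
    if not ARXIV_ID.match(arxiv_id):
        raise ValueError(f"not a recognised arXiv identifier: {arxiv_id!r}")
    return {
        # Human-readable abstract page
        "abstract": f"https://arxiv.org/abs/{arxiv_id}",
        # Machine-readable metadata (Atom feed) via the public export API
        "metadata": f"http://export.arxiv.org/api/query?id_list={arxiv_id}",
    }
```

A submission form would call `arxiv_urls` with the author-supplied number and then download the files from the returned locations; the validation step catches mistyped identifiers before any fetch is attempted.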
69. While physics publishers are clearly well
linked into the arXiv server and it appears to be a system that
works well for the physics community, it is not necessarily the
best model for all disciplines. Dr Robert Parker of the RSC told
us that this system was "not popular with chemistry because
there is very often the possibility that an author will take out
a patent on what they are producing. Putting your results out
there in a pre-printed form is risking losing priority on them".[123]
Professor Ron Laskey indicated that a pre-print server would also
not be suitable for biomedical sciences.[124]
He described two worries from the Academy of Medical Sciences
submission to this inquiry:
One is that biomedical sciences are more prone to
inaccurate interpretations [...] There is a worry that, if
you extended the pre-publication model to the biomedical sciences
without any attempt to peer review, a lot of half-truths would
creep into the literature.
The second problem is the appetite of the media for
some aspects of biomedical science. Without peer review we would
get a storm, frankly, of incorrect headlines.[125]
70. Sir Mark Walport, from the Wellcome Trust,
reinforced Professor Laskey's point:
One of the issues in the biological sciences is that
the volume of research is extremely high. An important issue in
the medical sciences is that an ill-performed study can have harmful
consequences for patients. Therefore, there need to be filtering
mechanisms to make sure that things are not published that are,
frankly, wrong, misconceived, the evidence is bad and conclusions
are drawn which means that patients could be harmed. Different
communities require slightly different models.[126]
71. Professor John Pethica of the Royal Society
suggested that pure mathematics is a "good example of an
area" which might benefit from the pre-print server model
because "it can take a very long time for the assessment
of theorems to become correct".[127]
He added that this was in contrast with areas such as engineering,
where there is an immediate technological impact.[128]
72. We conclude that pre-print
servers can be an effective way of allowing researchers to share
and get early feedback on preliminary research. The system is
well established in the physics community, and works particularly
well, co-existing with more traditional publication in journals.
We encourage exploration in other fields. We note, however, that
pre-print servers may not work in fields where commercialisation
and patentability are issues, or in the biomedical sciences, where
publication of badly performed studies could have harmful consequences
and could be open to misinterpretation.
OPEN PEER REVIEW
73. Open peer review has traditionally been defined
as review in which the authors' and reviewers' names are revealed
to each other. This system has been used successfully by the BMJ
for more than a decade with no significant problems.[129]
BMJ Group told us that:
PLoS Medicine, however, tried and then discontinued
this practice in late 2007, citing reviewers' reluctance to sign
their reports, perhaps because at that time it was publishing
a lot of laboratory-based research, which is arguably more competitive
than clinical research.[130]
74. A more recent and much broader definition
can also cover cases where: reviewers' names are publicly disclosed;
the reviews are also published; and/or the community can take
part or comment. Dr Philip Campbell explained the well-known Nature
experiment in open peer review:
In 2006, Nature ran an experiment in open
peer review, in which over a period of four months, submitting
authors were invited to post their papers on an open website for
open assessment by peers. Their papers were also peer-reviewed
in the usual way.
[...] In brief, the take-up by authors was low,
as was the amount of open commenting. Furthermore it was judged
by the editors that the comments added little to the assessment
of the paper.
It is my view, consistent with this outcome, that
scientists are much better motivated to comment on an interesting
paper when directly requested to do so by an editor.[131]
As a result, Nature chose not to implement
open peer review more widely.[132]
75. Elsevier described the process operated by
another journal, Atmospheric Chemistry and Physics, that
uses an innovative type of open peer review:
Following initial review by an editor to assess alignment
with the title's coverage the manuscript is published online (usually
two to eight weeks after submission). Comments and discussion
by members of the public and select reviewers then take place
for an eight-week period. The author responds to comments within
four weeks, and then prepares a final revised article. The editor
then decides whether to accept the paper. The original paper,
comments, and final paper are all permanently archived and remain
accessible. Other than comments from invited reviewers, spontaneous
comments from members of the scientific community have been relatively
low.[133]
76. The "transparent" approach, used
by the EMBO Journal, which is published by the Nature Publishing
Group, features "the online display of anonymized referees
and editors/authors' correspondence after publication, alongside
the paper",[134]
provided as a "Peer Review Process File".[135]
However, Dr Philip Campbell informed us that:
Nature
and the Nature journals have so far not gone down this route.
This reluctance is partly based on a precautionary fear that it
might upset the relationship between editors and referees. Moreover,
the documents reflect only a part of the process of discussions
within the editorial team, between the editors and the referees,
and between the editors and the authors. There is also a belief
that few people will want to wade through this copious information.
Nevertheless, transparency has its own virtues, and
we are keeping this policy under review.[136]
The BioMed Central medical journals also provide
this sort of "pre-publication history".[137]
Dr Michaela Torkar, from BioMed Central, told us that this was
"a very transparent way of seeing how the system works and
the sort of records we keep".[138]
77. Others are now also seeing the virtues of
transparency, particularly where issues have arisen relating to
dissatisfaction with reviews. A recent example of this was the
open letter by 14 leading stem cell researchers to senior editors
of peer-reviewed journals publishing in their field:
Peer review is the guardian of scientific legitimacy
and should be both rigorous and constructive. Indeed most scientists
spend considerable time and thought reviewing manuscripts. As
authors we have all benefited from insightful referee reports
that have improved our papers. We have also on occasion experienced
unreasonable or obstructive reviews.
We suggest a simple step that would greatly improve
transparency, fairness and accountability: when a paper is published,
the reviews, response to reviews and associated editorial correspondence
could be provided as Supplementary Information, while preserving
anonymity of the referees.[139]
The letter went on to urge adoption of the EMBO
Journal model.
78. The principles of openness
and transparency in open peer review are attractive, and it is
clear that there is an increasing range of possibilities. There
are mixed results in terms of acceptance amongst researchers and
publishers, although some researchers are keen to see greater
transparency in their fields. We encourage publishers to experiment
with the various models of open peer review and transparency, and
to engage researchers actively in taking part.
ONLINE REPOSITORY JOURNALS
79. The constraints of print journals and the
challenges associated with authors striving to publish in high-impact
journals have been described in paragraphs 59-64. Authors are
now able to submit their manuscripts to one of an increasing number
of online repository-type journals. One such example is the journal,
PLoS ONE, published by the "open access" publisher,
the Public Library of Science (PLoS). "Open access"
is defined as the removal of all barriers (for example, subscription
costs) to access and reuse of the literature. To provide open
access, PLoS journals use a business model in which expenses are
recovered "in part by charging a publication fee to the authors
or research sponsors for each article they publish".[140]
This model is potentially open to abuse if the peer-review process
is not robust and if publishers view it mostly as a revenue-generating
venture.[141] However,
in the case of PLoS ONE, the goal is to publish "all
rigorous science",[142]
placing an "emphasis on research validity over potential
impact".[143]
The Wellcome Trust stated that:
The approach adopted by PLoS ONE (where
the peer review process focuses solely on whether the findings
and conclusions are justified by the results and methodology presented,
rather than on assessment of the relative importance of the research
or perceived level of interest it will generate) has both
reduced the burden on the reviewer and the time it takes to get
a paper published.[144]
80. Dr Mark Patterson, Director of Publishing
at PLoS, explained that "PLoS ONE was launched in
December 2006, [it] published about 4,000 articles in 2009 and
6,700 last year, so it became the biggest peer-reviewed journal
in existence in four years".[145]
The popularity of PLoS ONE has spurred the launch of a
host of similar journals, as described by Dr Patterson:
The American Institute of Physics and the American
Physical Society have both launched physical science versions;
Sage has launched a social science version; the BMJ group, who
were actually the first, last year launched a clinical research
version of PLoS ONE; Nature has launched a natural
science version of PLoS ONE, and on it goes. The model
is getting that level of endorsement from major publishers and
I think, again, that is probably helping to make researchers very
comfortable with the way in which PLoS ONE works.[146]
81. He added that:
if another 10, 20 or 30 of these are launched over
the next one to two years, which I think is quite likely [...]
that could make some fairly substantial changes in the way the
prepublication peer review process works. [...] The
benefit will be the acceleration of research communication because
you avoid bouncing from one journal to another until you eventually
get published. That is a tremendous potential benefit.[147]
82. Professor Ron Laskey, from the Academy of
Medical Sciences, explained that:
initially, people envisaged PLoS ONE as a
journal they would submit to only if their paper was having severe
criticism from other higher impact journals. Now, important research
has been submitted to get it on the record quickly before it is
scooped by someone else who has a smoother path through the refereeing
jungle.[148]
83. Dr Philip Campbell, of Nature, added
that:
there are people who are sick to death of editors
and who value something like [PLoS ONE, or in Nature's
case] Scientific Reports, which have [...] no editorial
threshold but do have a peer review process just for the validity
aspect.[149]
84. Dr Patterson explained in further detail
the way in which PLoS ONE achieved quicker publication
times than traditional journals:
the real benefit in PLoS ONE, which is relevant
to speed, is that authors won't be asked to revise their manuscripts
to raise them up a level or two. With a lot of journals, you get
asked to do more experiments to raise it up to the standard that
particular journal wants. That doesn't and shouldn't happen at
PLoS ONE. As long as the work is judged to be rigorous,
it is fine. The amount of revision can be quite a lot less because
authors are asked to do it in that way and that can really reduce
the overall time from submission to publication.
There is another way in which I think PLoS ONE
accelerates research communication generally. Often, articles
are submitted to journal A and are rejected as not being
up to standard. They go to journal B and then journal C
and, eventually, are published. If you have a robust piece of
work it will be published in PLoS ONE as long as it passes
the criteria for publication. You will not have to fight with
editors who are trying to argue for a certain standard. I think
those two other things really have the potential to accelerate
research communication broadly.[150]
85. The speed between submission, acceptance
and publication has led some commentators to suggest that the
PLoS ONE peer-review process is "light".[151]
Dr Patterson was asked whether he would describe it as "light
touch" and replied "no, not at all", and then went
on to describe the peer-review process at PLoS ONE.[152]
The Wellcome Trust also defended the peer-review process used
by PLoS ONE:
PLoS ONE has
very good peer review. Sometimes there is a confusion between
open access publishing and peer review. Open access publishing
uses peer review in exactly the same way as other journals. PLoS
ONE is reviewed. They have a somewhat different set of criteria,
so the PLoS ONE criteria are not, "Is this in the
top 5% of research discoveries ever made?" but, "Is
the work soundly done? Are the conclusions of the paper supported
by the experimental evidence? Are the methods robust?" It
is a well peer-reviewed journal but it does not limit its publication
to those papers that are seen to be stunning advances in new knowledge.[153]
86. PLoS ONE
publishes 69% of its submissions.[154]
However, Dr Patterson explained that this does not necessarily
mean that 31% are rejected.[155]
He told us:
Some of them are "lost" in the sense that
they may be sent back for revision (maybe 5% to 10% are sent
back for revision) and the others are rejected, as they should
be, on the grounds that they don't satisfy technical requirements.
[...] We did some author research in the last couple of years
and we have seen that, in both cases, according to the authors'
responses, about 40% of rejected manuscripts have been accepted
for publication in another journal.[156]
87. There has also been speculation about the
level of copyediting that occurs at PLoS ONE. Richard Poynder,
a journalist with an interest in publishing, wrote:
PLoS ONE
does not copyedit [this is the work that an editor does to improve
the formatting, language and accuracy of text] the papers it publishes,
only the abstracts. But it would appear that even this minimal
service is not always provided. [...] When I contacted [Peter]
Binfield [PLoS ONE Publisher] [...] he said: "Speaking
for PLoS ONE we do not copyedit content (other than a very
light clean up). We do a light (but real) copyedit on the abstract;
and at time of submission one of our (many) Quality Control checks
is on the quality of the English. However, as a general rule,
if the language is intelligible, and passes QC and passes peer
review etc., then it will be published as is".[157]
88. We put some of these concerns to Dr Patterson,
who explained that:
In our production process we focus on delivering
really well structured files that will be computable, for example.
We don't expend effort in changing the narrative. Scientific articles
aren't works of literature. That is not to say it wouldn't be
nice if, sometimes, a bit more attention was paid to that. It
is also true that one of the criteria for PLoS ONE is that
the work is in intelligible English. If an editor or reviewer
thinks that something is just not good enough and they can't really
see what is happening, it will be returned to the author.[158]
89. We are impressed by the
success of PLoS ONE and welcome the wider growth
of quality online repository journals. These will accelerate the
pace of research communication and ensure that all work that is
scientifically sound is published, regardless of its perceived
importance. However, we recognise that this is a relatively new
and rapidly evolving model, and potentially open to abuse because
publication fees are involved. It is important that a high quality
of peer review is maintained across all repository-style journals.
18  Ev w4, para 3 [Richard Horton]; Ev 101, para 2 [Royal Society]
19  "The history of peer review", Elsevier, www.elsevier.com
20  Ev w4, para 3 [Richard Horton]
21  Ev w119, para 3
22  "The history of peer review", Elsevier, www.elsevier.com
23  For example: Ev w36, para 1 [Lawrence Souder]; Ev w72 [Political Studies Association]; Ev w77, para 3 [Royal Meteorological Society]; Ev w95, para 19 [British Antarctic Survey]; Ev w105, para 6 [Publishers Association]; Ev 82, para 2 [Wellcome Trust]; Ev 104, para 16 [Royal Society]; and Ev 115, para 7 [Elsevier]
24  Ev w105, para 6
25  Q 5
26  Ev 89, para 53 [Philip Campbell, Nature]
27  Ev 101, para 5
28  Q 7 [Dr Nicola Gulley, Dr Robert Parker and Professor John Pethica]
29  Ev 66, para 8.1
30  Ev w57, para 3 [Academy of Social Sciences]
31  Ev 72, para 16
32  Ev 72, para 15 and the original 2009 survey: "Peer Review Survey 2009: preliminary findings", Sense About Science, www.senseaboutscience.org.uk/index.php/site/project/395
33  Ev 72, para 15 and the original 2007 survey: Mark Ware Consulting, Peer Review in Scholarly Journals - perspective of the scholarly community: an international study, January 2008
34  Ev w95, para 21 [British Antarctic Survey]
35  Q 8
36  For example: Ev w47, para 10 [Professor R I Tricker]; Ev 72, para 14 [BMJ Group]; Ev w99, para 3 [International Bee Research Association]; and Ev w130, para 2.6 [Dr Thomas J Webb]
37  Q 88
38  Ev 80, para 32
39  Q 162
40  Q 2 [Nicola Gulley]; Q 95 [Mayur Amin]; Ev w5, para 13 [Richard Horton]; and Goodman SN, Berlin J, Fletcher SW, Fletcher RH, Manuscript quality before and after peer review and editing at Annals of Internal Medicine, Ann Intern Med, 1994, vol 121, pp 11-21
41  Q 2
42  Q 162
43  Q 218
44  "The Thomson Reuters Impact Factor", Thomson Reuters, www.thomsonreuters.com
45  "Thomson Reuters releases journal citation reports for 2010", Thomson Reuters Press Releases, www.thomsonreuters.com, 28 June 2011
46  Q 217
47  As above
48  Q 162
49  As above
50  Q 162
51  As above
52  Q 294
53  Q 97
54  Ev w107, para 16 [The Publishers Association]
55  Q 253 [Sir Mark Walport]
56  Ev w44, para 5 [Professor John Scott, University of Plymouth]
57  Ev 124, para 1.4
58  Q 163; and Q 163 [Dr Mark Patterson]
59  Q 3
60  As above
61  As above
62  Q 6
63  Q 163
64  Q 98
65  Q 96
66  Q 99
67  Ev 103, para 9
68  Q 97
69  As above
70  Q 64
71  Q 247
72  Ev 67, para 3.0
73  Q 247
74  Ev 71, para 9; Merton R K, The Matthew Effect in Science, Science, 1968, vol 159, pp 56-63; and Wenneras C, Wold A, Nepotism and sexism in peer review, Nature, 1997, vol 387, pp 341-43
75  Ev 67, para 3.0
76  Ev w79 [Professor Grazia Ietto-Gillies]
77  Ev w91 and Ev w133, para 1.2.2
78  Ev 78, para 13, and Ev 125, para 6
79  Qq 16-17
80  Research Information Network, Activities, costs and funding flows in the scholarly communications system in the UK, May 2008
81  JISC Collections, The value of UK HEI's to the publishing process, June 2010
82  JISC Collections, The value of UK HEI's to the publishing process, June 2010, Summary p 2
83  Ev 70, para 4
84  Ev 114, para 5
85  Q 103
86  Ev 118
87  As above
88  Ev 146, paras 6-7
89  Q 2 [Robert Parker, Royal Society of Chemistry]
90  Ev 96, para 5
91  Q 2
92  Q 63
93  Q 65
94  Q 251
95  As above
96  Ev 71, para 8
97  Ev w6, para 18 [Richard Horton] and Ev 66, para 1.0 [Committee on Publication Ethics]
98  Ev w6, para 18 [Richard Horton] and original quotes from: Jefferson T, Alderson P, Wager E, Davidoff F, The effects of editorial peer review, JAMA, 2002, vol 287, pp 2784-86
99  Richard Smith, Classical peer review: an empty gun, Breast Cancer Research, 2010, 12(Suppl 4): S13
100  As above
101  Ev 66, para 1.0
102  Q 105
103  Q 251
104  Q 290
105  "The Thomson Reuters Impact Factor", Thomson Reuters, www.thomsonreuters.com
106  Ev 115, para 18
107  Ev 138, para 2
108  Q 116
109  As above
110  As above
111  Q 118
112  Q 118
113  Ev w95, para 18
114  As above
115  Ev w99, para 7 [International Bee Research Association]
116  Q 118
117  Q 11
118  As above
119  Ev 103, para 13
120  Q 11
121  As above
122  Ev 94, para 1
123  Q 8
124  Q 15
125  As above
126  Q 254
127  Q 12
128  As above
129  Ev 72, para 16 [BMJ Group]
130  Ev 72, para 16
131  Ev 88, paras 42-45
132  "Overview: Nature's peer review trial", Nature Online, www.nature.com/nature/peerreview/debate/nature05535.html
133  Ev 117, para 30(a)
134  Ev 89, para 49 [Philip Campbell, Nature]
135  "Editorial Process", The EMBO Journal, www.nature.com/emboj/about/process.html
136  Ev 89, paras 50-51
137  Q 192 [Dr Michaela Torkar]
138  Q 192
139  "Open letter to Senior Editors of peer-review journals publishing in the field of stem cell biology", EuroStemCell, www.eurostemcell.org, 10 July 2009
140  "About PLoS ONE", PLoS ONE, www.plosone.org
141  "Open Access Publisher Accepts Nonsense Manuscript for Dollars", The Scholarly Kitchen, 10 June 2009, http://scholarlykitchen.sspnet.org/2009/06/10/nonsense-for-dollars/
142  Q 171 [Dr Mark Patterson]
143  Ev 135, para 2 [Academy of Medical Sciences]
144  Ev 83, para 8
145  Q 170
146  Q 171
147  Q 173
148  Q 6
149  Q 121
150  Q 166
151  For example: D. Butler, "PLoS stays afloat with bulk publishing", Nature, 2008, vol 454, p 11
152  Q 176
153  Q 253
154  "PLoS ONE Editorial and Peer-Review Process", Public Library of Science, www.plosone.org
155  Q 164
156  As above
157  Richard Poynder, "PLoS ONE, Open Access, and the Future of Scholarly Publishing", 7 March 2011, http://richardpoynder.co.uk/PLoS_ONE.pdf, p 24
158  Q 167