Peer review
Written evidence submitted by the British Antarctic Survey (PR 40)
1.
Peer review is extensively used in the scientific world to assess proposals for research funding, to assess individuals for promotions and awards, to assess departments for research quality, and to assess scientific papers for publication. We understand that the present inquiry refers only to the last of these and our evidence will focus on that; however, we point out as an aside that there is considerable concern in the scientific community about the effectiveness of peer review as a tool for the assessment of research funding.
2.
The British Antarctic Survey (BAS) is a world-leading research institute, among whose staff are authors, reviewers and journal editors. Our evidence is therefore based on our experience in these different roles. Our staff submitted 273 papers to peer-reviewed journals in 2009.
3.
It is important to recognise the purpose and capabilities of peer review. In our view, peer review is intended to identify manuscripts in which the science is of insufficient quality for publication, or in which the science is insufficiently novel or interesting to warrant publication in the particular journal; where manuscripts pass this test, the goal of peer review is to improve them in terms both of scientific method and clarity of explanation.
4.
An implicit purpose of peer review is of course to provide a quality mark that tells less expert readers that the contents should be taken seriously. However, no-one should assume that passing the peer-review process guarantees the paper is "correct". The whole evolution of science is that new papers overturn and refine existing ideas, and most papers, however good, will eventually be superseded by something with a more precise result, a better explanation, or a more convincing piece of evidence.
5.
Peer review is not the method by which it is decided whether a paper is within the scope of a journal: that is the job of editors making an initial or post-review decision based on many other criteria.
6.
Peer review is only capable of detecting scientific fraud in the most obvious cases. Reviewers cannot be expected (and in most cases are not able) to repeat experiments. Avoiding fraud requires suitable procedures in laboratories and vigilance by co-authors.
7.
It is also not a primary purpose of peer review to detect plagiarism, although many publishers are now filtering submissions with software to detect instances of plagiarism before peer review.
8.
In determining whether a paper is of suitable quality for publication, there are actually two nested questions: (a) is the paper suitable for publication as a peer-reviewed article anywhere, and (b) is it suitable for publication in this journal? We identify this distinction because, although a significant percentage of papers are rejected from their journal of first choice, the overwhelming majority of papers are eventually published in the peer-reviewed literature. It would be incorrect to interpret the number of manuscripts rejected by peer review as indicating the number of manuscripts containing poor science.
9.
In our experience, for most papers that are rejected from a particular journal, the reasons given are that (a) the paper appears of good quality but is not of sufficiently wide interest for this journal, and should be sent to a more specialist journal, or (b) it has significant flaws that will require extensive further work, and should be submitted again to this or another journal when the identified flaws have been corrected. Both these verdicts are the common currency of scientific publication; most authors receiving such a verdict will first be annoyed, but will then react by improving their paper and submitting it elsewhere. In recent times there has been a trend among some journals that sell themselves as providing a quick turn-around for manuscripts: these journals will often reject a manuscript that is actually fairly close to acceptable, but suggest re-submission of a revised manuscript rather than requesting revisions to the present version. Such "soft rejection" means that the quoted time from submission to acceptance of the final version can be presented as shorter than could otherwise be achieved.
10.
In most cases a paper will eventually find a journal whose reviewers and editor believe it should be published, though they may identify flaws in logic or explanation that require correction before publication.
11.
In very rare cases (in our experience as journal editors, we would estimate less than 5%), papers have such a fundamental flaw that the verdict is that they should not be published at all, and cannot be salvaged by revision. This can be for a number of reasons: the experimental method may simply be unsuitable for making the inferences in the paper; a crucial equation may be incorrectly derived; the paper may be based on a misinterpretation of previous work. Reviewers are of course inclined to apply a higher standard of evidence to papers that appear inconsistent with previously-published evidence, and this can make it harder for papers with sound scientific method but a novel hypothesis to gain acceptance in their journal of first choice. However, despite some well-publicised complaints, especially in the area of climate science, we see no evidence that scientifically sound papers are unable to find an outlet.
12.
Having discussed the general nature of peer review, we turn to its operation in practice, viewed from the perspective of editors, reviewers and authors.
13.
From our experience as journal editors, the first difficulty is often to find suitable reviewers. In most cases, editors are sufficiently knowledgeable about a field to know the identities of experts who would be suitable, and the resources available through publishers and the Internet in general make it relatively simple to find additional experts. However, many potential reviewers turn down an invitation to review (often for legitimate reasons, as we will discuss later), and it can be hard to ensure that the two or more reviewers of a particular paper have the balance of expertise required to assess all aspects of it. Nonetheless, this is the editor’s job, and we do not believe that it has become impossible to achieve.
14.
There is a perception that it is becoming harder to find sufficient peer reviewers. Those of us with editorial responsibilities are unconvinced by this. Our experience is that the introduction of automated review request letters has made it easier for reviewers to ignore or reject requests, and that a personal approach from an editor generally elicits a more positive response. Moreover, a growing literature implies a growing cohort of authors, who also represent a growing cohort of potential reviewers.
15.
However, the availability of reviewers does rely on a particular model of science where it is understood by employers that unpaid work as a peer-reviewer is part of the obligation of every scientist. This is certainly our belief at BAS, where we expect that each scientist will do their "fair share" of reviewing. We are not aware of cases where employing institutes have set limits on this activity, and we would not welcome any such restraint.
16.
In our experience as peer reviewers, a typical review requires between 2 and 8 hours, depending on the length of the paper and the experience of the reviewer. It is a simple sum to see that, since each submitted paper typically receives two or more reviews, a typical scientist publishing 1 or 2 papers as first author each year should expect to review on average 3 to 8 papers each year. However, many senior high-profile scientists receive a disproportionate number of requests (as many as 50 per year in some cases, including funding and tenure reviews), and have to manage their workload carefully.
17.
As authors, our perception of the peer review system is mainly positive. It weeds out the very worst papers, and generally improves our work. The biggest complaint of authors is simply the time it takes for a paper to get through peer review (often this is many months). However, it must be admitted that authors who are slow in making revisions are as much at fault as editors, reviewers and publishing houses.
18.
The second complaint of authors is that, for the very high profile journals with high impact factors, competition for space is fierce, and decisions about which papers are accepted can seem rather random. However we note that these decisions are often editorial ones based on topicality, and not on peer review; and that papers rejected from such journals will generally be published elsewhere. If they are of sufficient importance this will usually be recognised by high citation numbers wherever they are published.
19.
Peer review is certainly not perfect. Because a paper is often assessed by only two reviewers, some aspects of it may not receive sufficient attention, and preconceived attitudes may come into play. However, as part of a system of quality control that should, and usually does, involve the lead author, their co-authors, their colleagues and institutes, and the editor and reviewers, we think that it does a good job of ensuring that the peer-reviewed literature is of high quality. Peer review is not perfect, but like democracy, it is better than all the alternatives. It should not be seen as a stamp of perfection, but as a barrier that imposes a minimum standard of logic and plausibility.
20.
Although we support the peer review system, it is of course essential to assess whether it can be improved. In the final paragraphs, we discuss one or two of the refinements that have been proposed.
21.
Peer reviewers are generally anonymous by default, while authors are identified. It is sometimes suggested that bias, whether conscious or unconscious, could be reduced by "double-blind" reviewing in which the authors are also anonymous. While this might be an option in some fields, we do not believe it is realistic in most of the fields of science in which we work. Most papers are based on fieldwork or particular models that would immediately identify the authors to any well-informed reviewer; often work has already been presented at open symposia.
22.
An alternative way to even up the process is to require reviewers to reveal their identities: in reality many reviewers already take the option to be named to the authors. However, evidence suggests that making this compulsory would make it harder to find reviewers, presumably because they value the fact that anonymity allows them to comment, whether negatively or positively, on the work of colleagues without having to confront them as individuals. We would not be completely averse to a compulsory signed-review system, but are not convinced in our own fields that there is a problem that requires such a solution.
23.
An increasing number of journals with which we are involved operate an open reviewing system, in which, although reviews may remain anonymous, they, and the authors’ responses to them, are published on the internet. We believe that this does offer some advantages in ensuring a professional quality to reviews and responses, and in offering assurance to others that a thorough review and response process has been achieved.
24.
Despite our obvious support for the peer review system, it does raise ethical and professional issues that are rarely discussed. The RCUK Policy and Code of Conduct on the Governance of Good Research Conduct provides general guidance about the areas where problems might arise. We believe that it would be valuable for publishers to host fora for editors, reviewers and authors at which these issues could be openly discussed.
25.
Finally, it is notable that there is actually very little training available for new reviewers, and since review is generally expected to be a solitary activity, it is difficult to learn best practice. It is, perhaps, worth considering whether training in the skill of review should form part of the training of postgraduate students in scientific disciplines.
British Antarctic Survey
Professor Alan Rodger, Head of Science Strategy
Professor Eric Wolff FRS, Science Leader Chemistry and Past Climate programme
Professor David Vaughan, Science Leader, Ice Sheets programme
8 March 2011