Session 2010-11
Peer review
Written evidence submitted by Dr E J P Marshall (PR 50)

Personal comments from the Editor-in-Chief of Weed Research, the journal of the European Weed Research Society, published by Wiley-Blackwell (http://onlinelibrary.wiley.com/journal/10.1111/%28ISSN%291365-3180).

Weed Research is an international multi-disciplinary peer-reviewed journal that publishes topical and innovative papers on all aspects of weeds, defined as plants that impact adversely on economic, aesthetic or environmental aspects of any system. Topics include weed biology and control, herbicides, invasive plant species in all environments, population and spatial biology, modelling, genetics, biodiversity and parasitic plants. The journal welcomes submissions on work carried out in any part of the world. In addition to research papers, the journal seeks review articles and shorter insights papers covering personal views, new methods and breaking news in weed science. The journal's ISI Impact Factor is 2.033 and it is ranked 13th out of 61 rated agronomy journals.

The comments below cover the experience of peer review within the journal (Sections 1-5), followed by comments on some of the wider issues (Sections 6-9).

1. The principles of peer review in Weed Research

Peer review is the process by which Weed Research seeks opinions on the scientific research submitted for publication. The process has several aims. The first is to make sure that the science has been done well, the approach is sound, the designs and statistical analyses are appropriate and the conclusions are soundly based on the work presented. Subsidiary aims are to help the submitting scientist improve their written communication and to make sure that the published paper is easily understandable to an international readership. The majority of submissions are from scientists for whom English is a second language.

2. The mechanics of peer review used by Weed Research

The journal has a number of checks and balances that aim to arrive at fair, accurate and helpful decisions for authors. Papers are initially examined by the Editor-in-Chief (EiC). If the papers are within the scope of the journal and appropriately presented, they are allocated to an expert Subject Editor; if not, they are rejected with a report from the EiC. The journal has an Editorial Board made up of 30 Subject Editors, four Statistical Consultants and the EiC. The Subject Editor seeks two review reports from independent researchers on the submitted manuscript. The Subject Editor considers the recommendations and reports of the reviewers, writes their own report and makes their recommendation to the EiC. Every decision sent to an author is made by the EiC only, having taken account of a) the recommendation and report of the Subject Editor and b) the reports and recommendations of the reviewers. Thus review reports are reviewed by an editor and the editor's report is reviewed by the EiC. Editors can overrule reviewers and the EiC can overrule all.

The process we use is single-blind reviewing, with the option available for reviewers and editors to reveal their identities to authors. A number do. The number of scientists broadly involved in weed science around the globe is not large. Nevertheless, the journal uses over 300 different reviewers in a year, from over 30 countries. We use an online system of manuscript submission and review (Manuscript Central from ScholarOne; http://mc.manuscriptcentral.com/wre).
This facilitates submission from anywhere and likewise access to reviewers. Since the advent of online review, our reviewer base has increased enormously. Submissions run at approximately 250 manuscripts a year.

3. Strengths

The three-level process that Weed Research uses is generally robust. Papers published in the journal contribute to an increasing Impact Factor and to its citation longevity (10 years). Papers are widely cited and contribute to advancing sustainable crop management, the management of invasive plant species and vegetation management in its widest sense. The standard of review reports can vary, but the roles of the Subject Editor and EiC ensure that reports to authors are clear. The journal uses a scoring technique (out of 3) to rate review reports, which aids editors in selecting reviewers. In addition, reviewers see their own report and those of the EiC, the Subject Editor and the other reviewer, as they are blind-copied into the decision e-mail. They can then evaluate their own efforts against those of their peers. In the five-plus years I have been the EiC, we have had no papers retracted, just five papers withdrawn by authors following review and only two papers where there was disputed authorship (unconnected with peer review and ultimately resolved).

4. Weaknesses

The process depends to a degree on the Subject Editor having some competence in the subject of the manuscript and on the reviewers also having appropriate expertise and providing a good report. Thus allocation to an editor and the selection of reviewers are areas of potential weakness in the peer review process. It is the case that some review reports are not up to standard. As Weed Research is a specialist journal, problems of self-interest, e.g. reviewers unfairly criticising competitors, are rare and editors are generally aware of such risks. The opposite scenario, of papers gaining acceptance through weak review, is also possible, but is likely to be very rare with the three-level process we use.

5. Current problems with peer review in Weed Research

Increasingly, editors are finding it harder to find willing and appropriate reviewers. In general, we achieve two review reports and an editor report. However, this difficulty is reported by many journals and I have drawn attention to it while acknowledging the work of our own reviewers (Weed Research [2010] 50, 648-649). I quote: "Within academic publishing, the importance of reviewers is well-understood. However, we notice a worrying trend that academic institutions are increasingly not supporting this traditional role, but at the same time are demanding that their staff continue to publish in academic journals. We gratefully acknowledge the work of the reviewers for Weed Research and we urge all reviewers to lobby their institutions to have reviewing recognised as a measure of esteem. It is both an honour and a task to review the work of our peers and it ought to be recognised appropriately in scientific life".

Academic institutions cannot have it both ways. If they demand that their scientists publish in the best journals, then they should recognise that reviewing is an essential role in academic life and give it formal recognition. This could be done by recording each year the number of papers reviewed, or the journals reviewed for, in a similar manner to the number of invited lectures.

6. Trends in global publishing standards

It is the case that there have been significant changes in academic publishing over recent years.
The original publishing model of hardcopy journals has moved on to mixed print and online copy, and there is a trend towards online-only publishing. At the same time, there has been a significant increase in the number of journals available. This has come about partly through commercial publishers encouraging new journals, partly through demand from scientists and academic societies around the world, and partly through the internet facilitating open access and online-only models. Although a generalisation, it seems that a dichotomy may be developing between journals that run peer review properly, pay attention to journal citation indices and often retain a print run, and those that do not. It is certainly the case that a number of open access, online-only publications in agronomy are publishing work that is poorly executed and written, with only limited applicability and sometimes no validity. I suspect that only lip-service is being paid to peer review in many such cases.

7. Weighting evidence and evaluating publications

The implications for science and society of these publishing trends are potentially serious. On what basis should evidence be weighted? Clearly, published work should be accorded higher value than unpublished work. But how does society evaluate the publications? Currently, academic publications are assessed using citation metrics (ISI Impact Factor etc.) and such measures are being applied to individual academics in the form of H-scores. I have been a critic of citation metrics in the past, because they were commercially derived, varied hugely between subject areas and were misused in evaluating scientists and their research areas. However, the metrics are evolving and now include 5-year impact, longevity and so on. Further measures, for example a combination of journal and author metrics, might be developed. Within subjects, citation metrics do provide a measure of use and therefore of importance, which can be used to weight evidence.

There does need to be some consideration of content in relation to international and national relevance. Internationally relevant science published in international journals can typically be generalised beyond the confines of the work reported, based for example on replication in time and space. More locally relevant work is less readily published in international journals, but is more often found in local journals. By definition, these attract fewer citations and have lower impact measures. It could be argued that for some evidence, such publications should have higher weightings. However, I would suggest that context and case-by-case evaluation are needed for rating such evidence.
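By way of illustration of the author-level metrics mentioned above, the short sketch below shows one way an H-score (H-index) can be computed from a list of per-paper citation counts. It is a minimal sketch only: the language (Python), the function name and the citation figures are chosen for illustration and are not drawn from this submission or from any real citation record.

    # Illustrative sketch: compute an H-index (H-score) from per-paper
    # citation counts. The counts used below are hypothetical.

    def h_index(citation_counts):
        """Return the largest h such that at least h papers have h or more citations."""
        ranked = sorted(citation_counts, reverse=True)  # most-cited papers first
        h = 0
        for rank, citations in enumerate(ranked, start=1):
            if citations >= rank:
                h = rank      # this paper still meets the threshold
            else:
                break         # later papers cannot, as counts only decrease
        return h

    # Ten hypothetical papers: six of them have at least six citations each,
    # so the H-index is 6.
    print(h_index([48, 35, 20, 12, 9, 6, 4, 3, 1, 0]))  # -> 6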
8. Peer review and alternative models of evaluating science

Peer review is a voluntary process, involving good will on the part of the reviewer, trust on the part of the editors, and the expectation that the task is undertaken responsibly. It is the case that there is room for error and abuse in the process. Selecting the appropriate reviewer is not easy. The reviewer can abuse the trust placed in them by being overly or insufficiently critical, or by giving a poorly judged or poorly written report. How else might an editor judge the value, importance, novelty, validity and clarity of a submission? They might try to do it themselves, but no one can be an expert in everything.

For a multi-disciplinary journal like Weed Research, we need access to molecular biologists, geneticists, cell biologists, physiologists, agronomists, ecologists, entomologists, plant pathologists, statisticians, spatial biologists and even experts in remote sensing from space. Access to experts is essential. But how might that access be organised? If every journal were to appoint an expert for every topic, would that improve the process? I doubt it. Science moves on and new areas develop, so any list of experts will always be incomplete, and the individuals on it must accept that their expertise is limited. Therefore, several opinions on a piece of work or manuscript seem essential.

Might there be alternative ways of conducting peer review? One proposal has been a sort of pre-publication publication to gain voluntary opinions on a piece. The internet facilitates this sort of open approach, but it would a) be more open to abuse from unqualified commentators and b) suffer from inertia in gaining comments. On the latter point, consider that I have an H-index of 21, derived from 72 publications of which half have fewer than 5 citations. If citation reflects likely comments, most material would not be examined, and one has to ask why a scientist would spend time looking at pre-acceptance material. Could one incentivise peer review? I would suggest that having paid professional reviewers would be worse than the current system. It would be difficult to regulate, as it would need to be a global system, and it would have less flexibility in new areas of science.

In principle, peer review seems to be the best approach. Nevertheless, it is not the easiest system to run and there are threats, associated with the volume of material, the time available and the support or otherwise given to the process by institutions. Therefore, I welcome the Committee's initiative to examine the process.

9. Conclusions

My personal view is that peer review is the best system we have, but it requires more support than it is getting. As a system based on responsibility and trust, it needs checks and balances, and there needs to be universal acknowledgement and recognition of the responsibilities involved. Checks and balances can be achieved by different levels of responsibility within an individual journal and by adjusting the number of review reports required. Very high impact journals tend to have up to five review reports. Nevertheless, I would advocate at least three levels of reviewing: reviewers, editor and either an Editor-in-Chief or a Board. There also need to be methods of evaluating review reports and of training reviewers. Universities do teach critical evaluation, but formal teaching of science postgraduates in manuscript reviewing (and the correct use of English grammar and punctuation) would be a welcome initiative.

However, the most important support required for peer review is for academic institutions to once again recognise that reviewing is an honour, a duty, a responsibility and indeed a measure of esteem. Academic life is more than the number of students taught or the size of research grants won. Why is a member of staff asked to review? The answer is that they have expertise; that expertise lies within the institution and should be recognised appropriately, to avoid the perception that peer review is a time-wasting chore. Peer review is important for the advancement of science.

Dr E J P Marshall
Editor-in-Chief, Weed Research

9 March 2011
© Parliamentary copyright. Prepared 17th March 2011.