5 Post-publication approaches
Post-publication review and commentary
204. In addition to the checks and balances carried
out in pre-publication peer review, the "wider scientific
scrutiny post-publication is as important […] indeed, this
is a form of secondary peer review".[366]
The British Sociological Association considered that:
Peer review is in fact a layered process in which
initial peer review of proposals leads into peer review of publications
and thence into post-publication peer review (the latter is sometimes
referred to as academic impact). The two are related and equally
necessary processes.[367]
205. Review after publication can be carried
out in a number of ways. Historically, where fellow researchers
either agreed or disagreed with an author's findings, they would
publish their own manuscripts or correspondence with the relevant
journal in order to progress scientific understanding in their
field. Professor John Pethica, Physical Secretary and Vice President
of the Royal Society, told us that:
[Post-publication review] is implicit in the fact
that people publish subsequent papers saying, "X was right,
Y was wrong, and we did this and produced that." That is
implicit in the whole structure of scientific papers and there
is a preamble about what has happened so far.[368]
206. In recent years, with the growth of online
communication systems, publishers have started to introduce more
formal processes for rapid responses to published articles. BMJ
Group explained that:
Many online journals encourage continuing discussion
of their content. The BMJ's Rapid Responses or eletters, posted
daily, provide a voluminous, lively, and often scholarly discourse
and constitute an important source of ongoing peer review.[369]
207. While the BMJ Group reports "voluminous"
commenting, others have been less successful with this approach.
The Royal Society has an e-Letters system, which allows researchers
to comment directly on a published article; the comment is then
linked to the article for others to see.[370]
This has not proven to be particularly popular as "remarkably
few people choose to use it".[371]
Other learned society publishers we consulted did not have any
formal processes for post-publication review and commentary.[372]
208. Other more informal approaches, such as
the use of online blogs and social networking tools like Twitter,
are becoming more widespread. Sir Mark Walport, Director of the
Wellcome Trust, told us that:
Web-based publishing brings new opportunities, because
it brings the opportunity for post-publication peer review and
for bloggers to comment. […] This is a fast-evolving space.
As the new generation of scientists comes through who are more
familiar with social networking tools, it is likely that Twitter
may find more valuable uses in terms of, "Gosh, isn't this
an interesting article?" All sorts of things are happening.
It is quite difficult to predict the future. It can only be an
enhancement to have the opportunity for post-publication peer
review.[373]
209. The BMJ Group added that with Twitter, even
though "their [character limit] allow[s] only the briefest comment,
tweets are facilitating rapid and widespread sharing of links
to articles and other online content and can, it seems, quickly
expose failings in peer review".[374]
For example, in December 2010, "many scientists blogged immediate
criticisms of [a] widely publicized paper […] heralding bacteria
that the authors claimed use arsenic rather than phosphorus in
their DNA backbone".[375]
Many of the initial criticisms came from "the scientific
blogosphere".[376]
Since then, "Science, the journal that published the
original paper, has published eight papers criticising it, as
well as a response by the original researchers"; the debate
continues.[377]
210. We questioned whether a potential growth
in post-publication review and commentary would lead to declining
expectations of pre-publication peer review among publishers. Dr Andrew
Sugden, Deputy Editor & International Managing Editor at Science,
did not believe this would happen.[378]
Mayur Amin, Senior Vice President of Research & Academic Relations
at Elsevier, agreed, adding that post-publication review and commentary
would not "act as a substitute" for peer review.[379]
211. Post-publication review
in an era of new media and social networking tools, such as Twitter,
is very powerful. The widespread sharing of links to articles
ensures that research, both accurate and potentially misleading,
is rapidly spread across the world. Failings in peer review can,
rightly, be quickly exposed. However, there is no guarantee that
false accusations of failings will not also be spread. Pre-publication
peer review still has an important role to play, particularly
in relation to assessing whether manuscripts are technically sound
prior to publication. However, we encourage the prudent use of
online tools for post-publication review and commentary as a means
of supplementing pre-publication review.
ENCOURAGING PARTICIPATION
212. One of the reasons that post-publication
review and commenting is not yet considered to be a viable replacement
for pre-publication peer review is that the numbers participating
in it are low. The publishers, John Wiley & Sons, told us
that:
Evidence for the efficacy and usefulness of post-publication
comment is not yet convincing, both in terms of the quantity and
quality of such comments, although we expect to see links to blogs
and other post-publication comments as standard practice, and
our systems and processes will accommodate this if the academic
and professional communities whom we serve want it. Post-publication
comment is likely to be a supplement to pre-publication review
rather than a substitute for it.[380]
213. Dr Philip Campbell, Editor-in-Chief of Nature
and Nature Publishing Group, explained that the lack of commenting
might be because "there is no prestige or credit attached
[to it], there is the risk of alienating colleagues by public
criticism, and everyone is busy".[381]
Sir Mark agreed that academics do not like to "write critical
comments of each other alongside the articles".[382]
He added, however, that:
There are some very interesting community issues
here. In the humanities, there is a long tradition of writing
book reviews where one academic is scathingly rude about another
academic. […] In the case of the scientific world, that tearing
apart is done at conferences and at journal clubs. The scientific
community does not have a culture of writing nasty things about
each other.[383]
214. One of the main challenges is therefore
to get post-publication commenting tools more widely used in order
to "get the critical views across" and "encourage
people to air their criticisms and put their names to them without
fear of any repercussions".[384]
215. The issue is not just to get more researchers
participating in public commentary; it is also essential that
comments be fairly represented online. Dr Fiona Godlee, Editor-in-Chief
of BMJ and BMJ Group, explained that:
There are great variations [in journal practices].
Some journals exercise a liberal view, which is the BMJ's
view. Others have a much more editorially tight control over what
gets written, post-publication. In some cases that I am aware
of, critical comment about papers does not get out into the public
domain. The other problem is that even when it does, the authors
often don't respond. One is left with a situation that is far
from perfect. There is a lot of progress with the Internet but
it is still not perfect.[385]
216. However, the system could be considered
to be "self-correcting" as "a scientist who wrote
something that was particularly egregious would be subject to
the peer review of their own community".[386]
Filtering content
217. While post-publication review and commentary
can be used to further improve the technical assessment of published
research, it can also be utilised to fulfil another one of the
functions of peer review: to filter research publications and
act as a guide for what readers might find interesting.
218. The extreme situation one could envisage
would be that in which all research is published and then filtered,
an approach advocated by Dr Richard Smith, former Editor of the
BMJ.[387] However,
we have already discussed why publishing research prior to reviewing
it could be problematic, in particular for the biomedical sciences
(see paragraphs 69-70). Mayur Amin, from Elsevier, explained the
consequences of such an approach: "Where everything is published
before it gets its first peer review filter, we may end up with
a system where it is hard to differentiate between evidence-based
conclusions and conclusion-based evidence."[388]
219. However, with the growth of online repository
journals (see paragraph 80) and the development of more advanced
tools for post-publication review and commentary, the role of
the publisher in filtering research prior to publication is diminishing.
Professor Ron Laskey, Vice President of the Academy of Medical
Sciences, told us that "if there is a move towards publication
in journals such as PLoS ONE and where impact is less important,
then a subsequent impact assessment such as the Faculty of 1000
could become increasingly important".[389]
220. Faculty of 1000 Ltd (F1000) is an online
service that collects the comments of selected experts on research
articles that have already been published in biology and medical
journals. F1000 told us that:
Our Faculties of 10,000 experts across biology and
medicine are asked to highlight those publications that they believe
to be particularly important, irrespective of where they are published
(the majority of our evaluations, 86%, are not
from what are often thought of as the top-tier journals, e.g.
Nature, Science, Cell, NEJM, JAMA,
Lancet, BMJ). Faculty Members are asked to provide
a rating (recommended; must read; or exceptional) and then provide
a short commentary ("evaluation") on why they believe
the article to be so interesting and how it might impact their
own research or specialty, and their names are listed against
this. These evaluations are effectively short open referee reports
and the service acts as a positive filtering service.
Multiple Faculty Members can evaluate the same article,
providing a combined higher rating, or can write a dissent if
they disagree with an existing evaluation. The authors of the
article can write a comment in response to the evaluation, and
registered users can also write comments.[390]
221. F1000 has policies to prevent bias in expert
commentary; for example, the service is currently adding a specific
declaration that Faculty Members will confirm for every evaluation
they carry out. This declaration will state:
This work has been selected for evaluation entirely
on its scientific merit. Neither I nor my co-evaluators (where
applicable) have collaborated with the authors in the past year
or been influenced in the selection of this work directly or indirectly
by the author/s or by any third party. This evaluation presents
my opinions and those of any listed co-evaluators.[391]
222. Feedback on the usefulness of F1000 was
limited. Professor Ron Laskey told us that "its use is patchy
but it is recognised as providing a valuable service".[392]
Dr Robert Parker, Interim Chief Executive of the Royal Society
of Chemistry, added that it was generally a positive thing.[393]
At present this service is limited to biology and medicine.
223. While it is too early to
make a judgement on post-publication filtering mechanisms, such
as Faculty of 1000 Ltd, we recognise that such a system could
offer a valuable service if widely used. It is likely that such
services will become more important with the growth of repository-type
journals.
Measuring impact
224. The post-publication filtering of which
articles might be of particular interest and subsequent commenting
on those articles could be considered to be the foundation of
a new model for measuring impact. Indeed, by assessing a specific
article in this way, the status quo of using a journal's Impact
Factor to assess impact may be threatened. The Public Library
of Science (PLoS) told us that:
a new paradigm is emerging and is being tested in
several fields whereby articles are subject only to technical
assessment (by peer review) before publication, and impact assessment
takes place during the post-publication phase, which can broaden
the assessment of the work (by peers) to a much wider constituency
than can take place before publication.
[…] Rather than relying on the journal in which
an article is published, it is now possible to focus on the merits
of the article itself. An array of article-level metrics and indicators
can be deployed to filter and assess content. Coupled with tools
for post-publication commentary and addition of value, there are
tremendous prospects for replacing the current impact assessment
function of pre-publication peer review with a post-publication
system that has the potential to be more efficient and effective.[394]
225. Dr Mark Patterson, Director of Publishing
at PLoS, explained that:
It is not just about a blog comment […] There
is a whole range of metrics and indicators, including resources
like Faculty of 1000, which can be brought to bear on the question
of research assessment. […] We want to provide an indication
when [readers] come to [a] paper of how important [it] is and
what impact it has had through usage data, citation information,
blogosphere coverage and social bookmarking. There are so many
possibilities.
We have moved in that direction by providing those
kinds of metrics and indicators on every article that we publish (we
are not the only people doing this but we have probably taken
it further than most) to try to move people away from thinking
about the merits of an article on the basis of the journal it
was published in to thinking about the merits of the work in and
of itself. Indicators and metrics can help with that. They aren't
the answer to the question but they will help. Ultimately, there
is really no substitute for reading it and forming your own opinion.[395]
226. David Sweeney, Director for Research, Innovation
and Skills at HEFCE, was not convinced that such "article
level metrics […] necessarily captured the intrinsic metric"
of a published article. He added:
I remain of the view that there will be no magic
number or even a set of numbers that does capture intrinsic merit,
but one's judgment about the quality of the work, which may well
be, […] in the eye of the beholder, may be informed by a
range of metrics.[396]
Sir Mark Walport agreed with Dr Patterson's final
point that "if you want to assess the value of an individual
article, I am afraid that there is no substitute for holding it
in front of your eyes and reading it".[397]
366 Ev w77, para 4 [Royal Meteorological Society]
367 Ev w111, para 4
368 Q 56
369 Ev 73, para 21
370 "eLetters", Philosophical Transactions of the Royal Society B, http://rstb.royalsocietypublishing.org
371 Q 55 [Professor John Pethica]
372 Q 55 [Dr Nicola Gulley and Dr Robert Parker]
373 Q 282
374 Ev 73, para 21
375 A. Mandavilli, "Peer review: Trial by Twitter", Nature, 2011, vol 469, pp 286-87
376 "Arsenic-based bacteria: Fact or fiction?", New Scientist Online, 27 May 2011
377 As above
378 Q 159
379 Q 160
380 Ev 66, para 8.1
381 Ev 89, para 47
382 Q 282
383 Q 284
384 Q 212 [Dr Michaela Torkar]
385 Q 160
386 Q 286 [Sir Mark Walport]
387 "Richard Smith: Scrap peer review and beware of 'top journals'", BMJ Blogs Online, 22 March 2010, http://blogs.bmj.com
388 Q 95
389 Q 57
390 Ev 143, paras 3-4
391 Ev 144 [Faculty of 1000 Ltd]
392 Q 58
393 Q 59
394 Ev 80, paras 33-34
395 Q 209
396 Q 281
397 As above