Examination of Witnesses (Question numbers
Good morning. Can I thank you for coming in this morning? Just
to start, would the two of you be kind enough to introduce yourselves?
Tracey Brown: I
am Tracey Brown, Director of Sense About Science.
Dr Wager: I am
Liz Wager. I am the
Chair of COPE, the Committee on Publication Ethics. My position
is slightly complicated; I am wearing two hats today, because
I am also an adviser to UKRIO, the UK Research Integrity Office,
where I represent COPE. I am primarily speaking for COPE, but
as the representative of UKRIO was not able to be here and as
a lot of our policies are rather similar, I will also try to represent them.
Thank you. Peer review is perceived to be "critical to effective
scholarly communication". If it disappeared tomorrow, what
would be the consequences?
Tracey Brown: Perhaps
the best way to understand it is this. We are faced with a sea
of material from which something has to determine what is going
to grab our attention, what needs our attention and what is important.
Something will do that, no matter what, because the reality is
that we cannot sift that sea of stuff ourselves, whether we are
researchers, members of the public or in policy. There are, fundamentally,
three ways that that can happen. You can slice up the sea of stuff.
For example, you can say, "We'll just look at clinical reports
and not apply particularly strict quality control to that. We
just want to look at a very narrow part of it." Or you can
try to implement something with aspirations to objectivity, which
we have for the peer review system, which is, "Is this valid,
significant and original?" and try to apply a fair test to it.
The third alternative is some form of patronage.
These days we often hear people talk about alternatives to peer
review and they sound really groovy because they talk about online
publication and getting people spontaneously to respond, but the
reality is that if we had no system for determining what is important
and worthy of attention, then something else would determine that,
and it would be some form of patronage. It would be the university
with the biggest PR department or those researchers who have the
best clubby contact books who would get their material recognised.
That is the choice that is faced in terms of whatever system is adopted.
Dr Wager: I think
if it disappeared it would be reinvented with a subtly different
name. There is great utility to it. As Tracey said, most researchers
are swamped by information. They don't know where to turn, so
they use filtering systems, various selective systems, to
decide what is reliable and what to read. As I said, it would
change. I came up with an analogy for peer review which the journal
editors may not like, but it works for me. It is a little bit
like the MOT system for cars. It is designed to keep the traffic
flowing, reduce accidents and make cars roadworthy. It does not,
though, guarantee that every car on the road is going to run tomorrow.
We could increase its frequency, and we could say that every car owner must
have their car checked once a month, but that would be disproportionate;
that would be unreasonable. Similarly, you could have more draconian
methods and say, "Journals must review the raw evidence"
and so on, but that would be disproportionate. It is a reasonable compromise.
Another analogy that works is that peer review does
not necessarily spot that a car has had its milometer clocked
(i.e. put back to show fewer miles than it has actually travelled)
and that it is a bit different from what it looks like. It does
not pick up major fraud all the time. It also does not necessarily
tell you whether you are dealing with a Rolls-Royce or a white
van. Different journals are looking for different things. It is
a useful system. It is not a panacea, but in the way that the
MOT is helpful to the police, to motorists and to various people,
peer review is helpful to society as it keeps things rolling.
However, other systems are also needed.
I will quote from the memorandum that we had from UKRIO: "There
is a danger that the peer review process can stifle innovation
and perpetuate the status quo. Peer reviewers, for example, are
more likely to reject a paper or research grant if it challenges
their own belief system." Can you elaborate on how big a
problem that is for the progression of science and what can we
do about it?
Dr Wager: There
is some quite nicely crafted evidence that that is true. In general,
peer reviewers prefer positive findings. They prefer findings
that confirm their own hypotheses and so on. That is just human
nature. One of the very important roles of editors, though, is
reducing that kind of bias as well as other kinds of bias. One
of the things that COPE encourages is to make sure that systems
are as objective as possible, to make sure, for example, that
journals publish criticism, especially of things that they have
published in their own journal, to make sure that they are willing
to listen to alternative views and so on. There is a danger of
bias towards the status quo. There are other kinds of biases as
well, but a well set-up system and a good editor will minimise it.
Let me just put that in a slightly different way. Is enough being
done about it now?
Dr Wager: In the
last few years the opportunities to publish have greatly increased,
so we do have the less selective journals. In the days when journals
were limited by the cost of print and paper and when page space
was very limited, it was probably much harder to publish. There
are so many more journals now that are less selective. People
have done studies at the more selective journals to see what happens
to the papers they reject, and they found that about 80% of the
studies get published somewhere else. What happened to the other
20%? Maybe they shouldn't have been published at all because they
really were misleading or completely whacky. It is difficult to
tell. The opportunities to publish have increased. I don't know
whether it is too far skewed.
Tracey Brown: May
I add something to that? This Committee knows well that research
is a dynamic beast. You would expect publishing to reflect that.
Sometimes you get fields of research which ossify and stagnate.
Therefore you would perhaps expect some of the discussion in those
fields to reflect that. Similarly, what happens then is that people
go off and form new collaborations in more dynamic fields and
set up new journals, or they come into old, stagnating journals
and realise that the reviewers are few, and so widen the pool
of reviewers. You would expect to see it almost become a mirror
image of what happens to research generally. We see departments
in universities stagnate and then get taken over by something
more dynamic. That is what happens.
The important thing with a system that produces 1.3
million papers a year is that it is self-reflective. A lot of
study goes on, as Liz has said, looking at the fate of papers
that aren't published and looking, just generally, at trends across
the system. So long as that is going on and patterns of behaviour
can be spotted, then the system can be self-correcting.
Q66 Pamela Nash:
Doctor, can I ask you to put on both your hats this morning and
explain to us more about the roles of both COPE and UKRIO, and
perhaps touch on why we are in need of both organisations?
Dr Wager: Sure.
COPE is the Committee on Publication Ethics. We have quite a narrow
focus. We were set up in 1997, originally by quite a small group
of about a dozen, mainly UK, medical journal editors. It has grown
hugely since then. We now have 6,500 members, all of whom are
journal editors or publishers. We are just looking at publication
ethics. We are not looking at research misconduct in the broader
field. Our members are not research institutions and so on; they
are journals and their publishers. We are a registered charity.
We are international. We provide advice to the editors, which
they are free to ignore. We don't have any particular powers,
except that we also provide a code of conduct and we ask all our
members to adhere to that code. If they don't adhere to that code,
then anybody, be it an author, another editor or a member of the
public, can bring a complaint to COPE against a member. We don't
get many complaints, but we get a few every year and we hear them,
so we feel that there is some accountability as well. That is
COPE. We look at the publication ethics issues, like plagiarism,
authorship issues and reviewer misconduct.
UKRIO, the Research Integrity Office, is
a more recent organisation. It was set up to address the concern
that there was no national body in the UK to look at research
integrity in the broad sense. It, too, is advisory, so it has
no statutory powers, but it is working more with institutions
and research bodies. It is providing codes of conduct about how,
for example, to conduct an inquiry into alleged misconduct, which
is the role of the university or the hospital rather than the
journal, whereas at COPE we are guiding the journal editors on
how to handle a specific issue. Very often we say to the editors,
"You shouldn't be judge and jury. You should hand it on to
the institution." So UKRIO is mainly working with the institutions.
It is on the broader spectrum, so it would look at all kinds of
research misconduct and not just the publication ethics aspects.
Q67 Pamela Nash:
Just to be clear, COPE deals more with the actual publication
and the journals, but UKRIO is talking about the research that
is done before.
Dr Wager: Precisely.
Q68 Pamela Nash:
Do you think there is any case for merging the two organisations?
Dr Wager: We work
very closely together. The fact is that I am one of the advisers
and have been on the board. COPE gave a small amount of money
at the outset to help UKRIO get established. There is a sufficient
difference that they go along quite nicely. There is some overlap;
we do work together. For example, COPE produced some guidelines
on how editors should handle retractions when a publication is
considered so unreliable that you need to withdraw it from publication.
UKRIO produced a complementary set of guidelines heavily referencing
the COPE ones, informing researchers and institutions of what
their responsibilities were on retraction. We have subtly different roles.
Q69 Pamela Nash:
This Committee has received some evidence from the Academy of
Medical Sciences that has alluded to some of the problems that
UKRIO has had with funding, which it says stems from broadening
the remit of UKRIO from just medical science. Can you clarify
what the current situation is regarding funding for UKRIO?
Dr Wager: Yes.
James Parry, who is the managing officer for UKRIO,
would like to give some supplementary written evidence on that
point to give you some detail.
Yes, it is true that it started by looking more at biomedical research, with
some funding, for example, from the Department of Health and so
on. They also had some broader funding from organisations such
as Universities UK and Research Councils UK. UKRIO's aim is very
much to cover all the disciplines. That would be a great strength
if it wasn't sub-divided. One of the problems with the US system
is that there are so many different bodies you have to go to,
depending on whether it is physics or medical research. One of
the strengths of UKRIO is that it was going to be broad. With
the current climate and lack of funding in universities and so
on, for whatever reasons, RCUK and UUK decided that they did not
want to fund UKRIO at the moment. That is the current situation.
They are looking at alternative models of funding.
Q70 Pamela Nash:
Apart from the organisations that you just mentioned, are there
any other potential funders in the pipeline, for instance,
from the private sector, perhaps?
Dr Wager: Yes.
In the past it did get some funding from the Association of the
British Pharmaceutical Industry. That is one area it would look
at. As I said, I would suggest that if James Parry can give you
some detail on which funders they are planning to approach, that
would probably be more appropriate. COPE, on the other hand, gets
its money mainly from the publishers, who pay for their journals
to be members. So we are getting our funding, effectively, from
the private sector.
Q71 Pamela Nash:
My next line of questioning was to ask if any of those sources
would compromise the independence of UKRIO, but that might be
something on which James would want to give us some detail.
Dr Wager: That
is an important issue. One of the strengths of UKRIO is being
independent. If you are funded by a particular body, you may not
feel so comfortable in going to that body and asking questions,
whereas if it is seen as an independent organisation, that would
be a great strength. The Research Integrity Futures Working Group
made some recommendations last year with which UKRIO was very
happy, and it was happy to morph into whatever it recommended,
and it strongly recommended an independent body, that is, independent
of RCUK and of all the different funders, but,
sadly, it hasn't happened.
Q72 Pamela Nash:
Finally, do you think there is any case for UKRIO becoming a regulatory
body with full legal powers?
Dr Wager: This
is an interesting one. I have spoken to the people at UKRIO to
make sure that I am representing their views correctly. They are
not against there being a regulatory body, but they don't want
to be it; I think that would be the best way of putting it. They
still think that an advisory and voluntary group would have its
uses. That is their position. There has certainly been criticism
and people saying, "We do need a body with more teeth, with
some statutory powers", yes.
Q73 Graham Stringer:
How can we keep the different people involved in the peer review
process honestthe editors, reviewers and authors?
Dr Wager: A lot
of trust is involved, and that is necessary. How do we keep them
honest? There are various checks and balances. That is why COPE
works with editors on, sometimes, seemingly quite small changes
to processes that can make a big difference, such as asking reviewers
to declare their conflicts of interest and asking the authors
to declare their conflicts. Increasingly, though, technology is
being used. Publishers are able to use things like CrossCheck,
which is this very powerful text-matching software. It can pick
up plagiarism and duplication. Publishers are also using software
to pick up manipulated images and so on. While software
has made it easier to commit fraud in the first place, it
has also made it easier to detect it. Coming back to my MOT
analogy, it needs to be proportionate. You don't want to put yet
more barriers in people's way, but, equally, you don't want to
mistrust everybody and assume that you can't trust anything.
Q74 Graham Stringer:
You have done some research, have you, about the integrity of
reviewers and editors in this area?
Dr Wager: I don't
think there has been much research on the integrity of reviewers
or editors. Much more research has focused on misconduct by authors.
There have been some cases of reviewer misconduct. It is something
that COPE picks up now and again. I have done a survey of journal
editors to find out how big a problem they thought reviewer misconduct
was, and it came pretty low on their list. COPE has produced a
flowchart about how to handle allegations and how they should
be investigated, because a classic complaint by an author would
be, "Someone stole my idea," but that is really pretty
uncommon. I don't think it is a huge problem. Signing up to COPE
and getting the complaints procedure working will be one mechanism,
we hope, to deal with misconduct by editors.
Tracey Brown: Could
I answer that? It is important to separate out what can reasonably
be achieved through the peer review process, in terms of reviewers
looking at a paper and sending comments to an editor, and what
journals might try to achieve more broadly. It would be unreasonable
to ask reviewers to spot fraud or plagiarism on a systematic basis,
although, of course, there are cases where reviewers are quite
well placed to notice such things. Their main consideration is
whether the paper is valid, significant and original and whether
it provides the basis on which others can understand what has
taken place and, therefore, replicate or investigate those results.
There are other things that editors and publishers
can put in place to which Liz is referring. We perhaps need to
make a separation, rather than suggest that the process of other
researchers publishing in the field and reviewing the material
is falling down just because it doesn't always spot those things.
I would also draw attention to the fact that when things do go
wrong, particularly on a significant issue that, perhaps, has
implications for wider society, there is a blaze of publicity
and discussion. That is, perhaps, testament to how unacceptable
it is. When we had the controversy around the stem cell work,
for example, that was something that was being discussed on radio
programmes and across the newspapers and had been caught and addressed.
That tells you that there are ways in which these things get noticed
and cause quite a lot of self-reflection within the system.
Q75 Graham Stringer:
In your submission, you seem to imply that the research institutes
themselves should take responsibility if there are allegations
of fraud or misconduct on behalf of the authors. Do you think
they have the resources to do that? Is there not a conflict of
interest? I understand what you say about the stem cell research
case, but if you take the Andrew Wakefield case, which got a huge
amount of publicity, the institute itself, the co-researcher who
seemed to have been involved and the journal weren't interested.
The hero of the hour, or the 10 years it took, was a journalist.
What do you learn from that and do you really think that research
institutes are going to be the answer?
Tracey Brown: A
lot of people along the way have learned a lot, including journalists,
publishers and editors. The Wakefield case may be an example of
bad cases making bad law, in the sense that that was a pretty
exceptional set of circumstances. There is, obviously, a big debate
about why that paper was published and also the lack of clarity
on allegations that were made in the context of a press conference
around that paper rather than within the paper itself. This is
something where it would be very hard to set out a one-size-fits-all
approach. This is much more Liz's area, but it seems to me that
the role of editors in evaluating what is taken up within the
journal and what needs to be taken up within the institution is
very important to that.
Dr Wager: I would
like to add to that by commenting specifically on the Wakefield
case. There is clear evidence that the institution did not fulfil
its duty in that case. It should have done a proper investigation.
Whatever its reasons were for not doing it, it was shoddy. It
was not properly done. It has now recognised that, and I believe
it is looking into its processes.
You asked if the institutions have a conflict of
interest. That is something that concerns me because, yes, they
do. Institutions don't like to proclaim when things go wrong.
I would like to campaign for a change, so that rather than a misconduct
finding against a university being a black mark, it is seen as
a badge of honour. You should say, "Don't go to a university
that hasn't had at least one person fired for misconduct, because
it means they are not looking for it properly." I come back
to you and ask: are the institutions well resourced and the right
places to do it? They are certainly better resourced and better
placed than the journals. It is not appropriate for the journals
to be doing that.
There is a great debate about how common misconduct
is. The evidence is that it is probably more common than we think, at
least the questionable practices. If you are the University of
London with however many thousands of researchers, you are going
to expect a few bad apples and you need some systems that can
sort them out. I would like to see support for that system and,
perhaps, yes, a greater level of regulation. In the Wakefield
case, the institution clearly didn't do a proper investigation.
Some pressure should be brought to bear.
Even in the US, which has a more heavily regulated
system (you are probably familiar with the fact that they
have their Office of Research Integrity), the ORI doesn't
do the investigations. The institutions actually do them, but
with the ORI pushing and gently nudging them to do the right thing.
Q76 Graham Stringer: If
we moved to either voluntary or statutory regulation in that area,
do you think there should be an obligation on the institutes to
publish any findings that they make? Sometimes when there are
investigations by institutes, they say, "We have investigated
it", and that is all you find out.
Dr Wager: Sure.
I would welcome greater transparency. That is an issue that journal
editors have sometimes. They will go to an institution with an
allegation or a suspicion of misconduct and the institution will
say, "Oh, we can't tell you. It's confidential." The
journal editor may be put in a very difficult position, because
if, for example, they have published something, they need to know
whether to retract it or whether to publish an expression of concern.
That is an area where transparency would be a great advantage.
It would also help public confidence. The public are concerned
when they feel there is a cover-up. There is concern when they
feel that people are getting away with it. They would accept that
things go wrong sometimes, but if you don't react to them, or
don't react to them properly, that is when the problems occur.
Q77 Graham Stringer:
What are the consequences for editors and reviewers if they are
found to be behaving unethically? We have an idea of what happens
to scientists who produce fraudulent papers. What happens to reviewers and editors?
Dr Wager: Editors
tend to get fired if they fall out with the society or the publisher.
The publishers and the learned societies have an important role.
They employ the editors, albeit usually on a part-time basis,
so there is a contract. If the editor really steps out of line,
they can lose their editorial position. Obviously, that would
be quite public.
In terms of reviewer misconduct, which is relatively
rare but does occur, initially, they might well be sanctioned
by their employer. If an editor found that a reviewer acting for
a journal had acted improperly, they would report that to the
institution. There could be an academic or employment case against
them because that would be seen as professional misconduct. In
terms of the journal, they would probably not use that reviewer
again, or, if the reviewer had done something without realising
it was not allowed, they would perhaps
provide some more guidance and so on. It could be taken up.
Dealing more, perhaps, with grant applications rather
than journal submissions, if somebody steals somebody's idea from
a grant application, then both the funder and the institution
would certainly take disciplinary measures against that person.
Tracey Brown: Could
I add a postscript to the question of transparency in publication?
As this Committee, I am sure, is aware, the Government are developing
proposals to reform the libel laws at the moment, in part in response
to threats received by scientists and publishers of scientific
information. One of the areas on which Sense About Science has
received evidence is a fear of publishing information about investigations
into research conduct. Even just publishing news items or discussions
on those things raises a fear of libel action. That is something
that we hope is going to be addressed in new legislation, but
it is something on which the Committee may want to comment because
it does limit the ability to put these things in the public domain.
Q78 Graham Stringer:
I have a final question to follow up on Pamela's question. You
recommend there being a research integrity officer within institutes
to look at these things. How would that operate? Is there any
evidence or experience?
Dr Wager: There
is certainly experience. That is how it is done in the US. Any
institution that receives federal funding must have an appointed
research integrity officer. It has various benefits.
One is the simple, practical matter of knowing who to contact.
It can be very difficult for a whistleblower, a member of the
public, or even a journal editor to try and find out who to contact.
That person acts as the point of contact.
It also means that somebody has, as part of their
job description, the responsibility of taking an active interest
in making sure that the institution is doing the right thing,
conducting inquiries appropriately and so on. It has benefits.
It can also be helpful in this way. Sometimes journal editors
say to us, "I've tried to go up the hierarchy of this institution."
We had a classic one not long ago. They described this terrible
situation. There were very serious concerns about the author.
We said, "This is obvious. You need to go to the institution."
There was a pause (the man was on the telephone from another
country, calling COPE) and he said, "Ah. The author
is the president of the institution." That is a very extreme
example. If the person to whom you are trying to go is the head
of department, they have a stronger conflict of interest for covering
it up and keeping it local than a neutral body. Let's say you
have a concern in the physics department. If you can go to the
research integrity officer, who happens to be from humanities,
archaeology or something, they are, perhaps, more likely to deal
with the problem in a properly impartial way. If the person was
the head of the department involved, there would be a vice-research
integrity officer who would deal with it if they had a conflict
of interest. There is a clear structure involved.
Q79 David Morris:
This question is directed to Tracey Brown. Last week Dr Robert
Parker said that the public "probably don't care" about
peer review. What is your view on this?
Tracey Brown: The
context for that is, when a story takes off in the mainstream
media, whether people ask questions about where that story has
come from in terms of the integrity or validity of the science.
Sense About Science, as I am sure the Committee is aware, published
the leaflet that became the public guide to Peer Review.
We were rather taken aback; we published 10,000 and then we found
ourselves, 500,000 copies later, realising that there was something
of an appetite to understand not just the content of the findings
of a particular paper but its status. There are many user groups
of information. There are policy makers, journalists looking to
decide which papers are worthy of discussion, and health service
providers, libraries, teachers and information providers right
across society, who are looking to understand, when a story says
that Alzheimer's is being caused by aluminium foil, whether it
is based on peer-reviewed research published in a journal known
in the field and what others in the field say about it so that
they can begin to interrogate it on that basis.
We found that people, for want of a better word,
find this quite an empowering line of questioning. To take the
Wakefield example, you are not going to turn yourself into a gastroenterologist
overnight in order to assess whether you are going to vaccinate
your child or whether there is any credibility to the stories.
What you can do is ask questions such as, "How has this information
come forward? What do others in the field say about it? What status
should I give this?"
One of the reasons why we started doing this was
in the field of policy making. We were frustrated that when Government
consultations were under way it appeared to us that they were,
literally, weighing evidence (you would have five submissions
on this side and five submissions on that side; one side suggested
one thing about the disposal of nuclear waste, for example, and
the other suggested another) rather than looking at what
status those different studies had. To take an extreme, is a study
a review of all the published papers on the subject or is it a
set of results that some bloke has got from doing an experiment
in his garage around mobile phone safety? Those are the extremes.
We are asking people to ask those questions. You can ask questions
about where something has come from.
Q80 David Morris:
At what stage of general school education do you think the concept
and understanding of peer review should be introduced?
Tracey Brown: There
has been some success already in introducing it at key stage 4.
The new Twenty First Century Science curriculum in schools has
had a mixed reception. One of its features that people most seem
to like is that it develops discussion around what science is,
and the nature of scientific ideas and information. We ourselves
ended up becoming involved with the people who were developing
that curriculum in order to take what is in the public guide and
bring it to life by talking about how research reaches the public
domain. There is certainly an appetite for that.
It also seems to chime with the point in education
where kids are doing experiments in which they might get different
results, and they are starting to ask themselves, "Why did
Jim and Joe get one set of results, and my experiments come out
with a different set of results?" It picks up on the ability
to step back from your own experience and evaluate what is going
on. They can see that mirrored in a much bigger system.
Q81 David Morris:
Is the kitemark of peer review really a gold standard that tells
the public and policy makers a particular piece of research is sound?
Tracey Brown: I
don't think it is. For the reasons that Liz and I have already
outlined, it is a dynamic system that has all the benefits of
human judgment, in that it can recognise good ideas. It can sometimes
recognise ideas of which even the authors themselves don't recognise
the full implications. It has all the downsides of the system
with human judgment, in that it doesn't always recognise good
ideas and sometimes it can be a bit shoddy. It has all those benefits
to it. I don't think it is something that is a stamp of approval
beyond which we ask no further questions. It is seen by the scientific
community as the basis on which we select those things that are
worthy of further attention, but I would emphasise "further".
"Peer reviewed equals true" is not something
that would get us very far. We were concerned about that, as
were many scientists, when we began popularising an understanding
of peer review. Would it be seen as, "Well, it has been peer
reviewed so therefore it must be true"? I am rather pleased
to report that the public seem a bit more subtle in understanding
that. To refer to the example that Liz gave of the MOT, people
know that if you give something a standard it doesn't necessarily
guarantee that that is going to be good and true for ever. It
simply tells you that it has passed an initial assessment.
Q82 David Morris:
You feel that a single peer-reviewed article may disagree with previously published research?
Tracey Brown: Yes,
absolutely. It may disagree because it develops the science further
or because it is not taking into account the work of others. Asking
people to ask the question, "Is this peer reviewed?",
invites further questions, such as, "What do others in the
field say about it, where does it sit in the wider consensus,
and where is this research field going?" It opens up that
line of questioning rather than closing it down.
Q83 David Morris:
Does the publication of fraudulent or incorrect papers that have
been through the peer review process damage public perception
of peer review as a mark of quality?
Tracey Brown: Inevitably,
the high-profile discussion about fraudulent activity in particular
damages not just the peer review process or publishing but also
science as a whole, which is why so many of us are concerned with
addressing those issues and there is such vigilance around them.
It is inevitably going to happen. The more that people can understand
the system through which scientific research results are generated
and come into the public domain, the more we can understand why
those things happen. You cannot build a world that is immune to
fraudsters. Not in any part of life can you build a world that
is immune to fraudsters. We have to accept that that is the case
and hope that we have systems that detect those as early as possible.
Q84 David Morris:
What differences are there in the way in which peer review is
perceived by the public outside the UK? Do any countries have
organisations or schemes for informing the public about peer review
from which the UK would possibly benefit?
Tracey Brown: We
are experiencing the opposite at the moment, which is quite challenging
for a small charitable organisation like ours. We are experiencing
a lot of demand internationally to make use of this and to turn it
into something culturally specific to other societies.
There is a lot of interest. We have been working with people in
the US and recently with journalists and scientists in China to
develop similar things. There is certainly a recognition of the
need to build understanding about the context and status of research
results. The global discussions about climate change have particularly
underlined that. We have found that that has rapidly increased
the demand for this.
The answer to your question is that initiatives are
being thought of and are under way in a range of places. They
are, perhaps, not as far under way in the European Commission
as I would like them to be, because recognition of that in the
calibre of research used in European policy making would be very
useful. Elsewhere, people are recognising the need to build that
understanding.
Q85 Stephen Metcalfe:
Good morning. We all accept that there is a limit to the peer
review process. You have said that the public accept that and
are able to understand that just because something has a mark
of quality, it does not necessarily mean it is true. Do you think
that is communicated across the whole of the public? You talked
about the public; presumably, that is the part of the public that
takes an interest in science as opposed to just having it fed
to them. Do you think that is communicated widely enough? When
you talk about it being taught at key stage 4, are they also teaching
the fact that it is a limited process?
Tracey Brown: The
simple answer to your second question is yes. I would be very
happy to supply further information about that peer review resource
that was developed. I believe the University of Reading is going
to be working on taking that forward. I will send some more information
about that to you.
The way to understand it is not to think of society
as those people interested in science and those people not interested
in science; lots of different organisations have a role in mediating
information and ideas to others. That goes right the way through
to, for example, midwives on their morning rounds, who, faced
with a story in today's newspapers about the fact that exposure
to very hot sun will harm an unborn baby, get questions from new
parents asking, "Is this true?" Their professional organisations
can have a role in helping them to understand, "Where has
this story come from? Does it come from a reputable study? What
do others in the field say about it?" They can mediate that
information through to midwives, who then mediate that information
through. It is a much broader process. I don't think those mothers
would identify themselves necessarily as being interested in science
by asking that question, but it is significant much more widely.
Q86 Stephen Metcalfe:
It is whether or not they understand that just because they have
read it in a newspaper, it doesn't necessarily make it true. It
is that wider approach. We get fed a lot of science because journalists
are interested in it. At times, as in the example that you have
quoted, it can distress people. How do we make sure that they
understand their limitations?
Tracey Brown: Let
me give you a good example. We published a booklet called I've
got nothing to lose by trying it, which is for people with
chronic diseases who go looking for miracle cures and then are
trying to work out, "Is this based on any kind of science
or not?" We had a really big post bag from that. People were
saying how much it really helps them to be able to ward off those
friendly neighbours who come round with press cuttings or something
off the internet saying, "You must try this diet". They
were able to say, "Actually, that has not been through any
kind of study. I can't find any published research that suggests
that that is good." People can use it in that way.
Going back to the question of whether a person asks
if it is peer reviewed, there is the potential (we have seen
it take off in a number of places) for a bit of a virtuous
circle to take place. If, in a Radio 2 programme in the afternoon,
the interviewer is equipped to ask the scientist (this question
was not asked in the Wakefield case), "Which of these
claims has been published and peer reviewed? Do you have a study
that backs this up?", the more that question gets asked,
the more the listening audience expects that to be one of the
interrogatory questions. The more that the listening audience
expects that to be an interrogatory question, the more the radio
interviewer feels that they, representing their listening public,
must ask that question. We have seen these improvements. For example,
in its online material, the BBC always makes reference to where
a study has been published. We have worked very closely with science
journalists over recent years. That is now the case in many of
the newspapers as well, and certainly with online publication
that facilitates making links to where research is published.
Q87 Stephen Metcalfe:
Where that process goes wrong and fraudulent
or incorrect papers have been published, what lessons have been
learnt from that? What information is then fed back to the editors,
reviewers and the authors about how they can learn from these
things? Is there a two-way communication?
Tracey Brown: There
is very dynamic discussion around these things. My experience
is much narrower than Liz's. Within publishing circles and within
the scientific community there is very dynamic discussion of this.
For example, most publishers have editorial conferences on an
annual basis, if not more regularly, through which they can reflect
upon those kinds of experiences. There are also the popular publications
within science, which include journals like the New Scientist
and the science pages in the newspapers, but also the news, views
and comments sections of some of the journals that are published
that don't just have peer-reviewed content but also have discussion
content. Those are also places through which people discuss and
debate matters. At a general level, it is widely discussed.
In terms of specific learning, take something like
vested interests. I looked at the work that has been done over
the last 20 years for a paper that I wrote recently on vested
interests; it was very informally determined previously in publications.
How do you express whether you have a vested interest in the field
that might influence what you said in your paper or the way in
which you review a paper? It has now been much more formalised.
Post-Wakefield, most journals have much better ways of asking
people to express their vested interests or potential conflicts.
Over the past few decades there has been a general move towards
getting away from the informal, "Yes, they'll mention it
if it is a problem", towards a much more formal set of questions
and guidance to authors, reviewers and editors.
Dr Wager: Could
I add something on that?
Stephen Metcalfe: Yes.
Dr Wager: You were
asking about feedback to the authors. Dr Hwang Woo-Suk is no longer
the hero that he was in Korea. Jobs get lost. If there is a really
major case of fraud and a paper is retracted, there can often
be very serious consequences for the authors, which is why editors,
sometimes, are a little bit reluctant to set the ball rolling,
because they do know it can be serious.
The journals will also usually do some heart-searching
and say, "Was there a problem? What went wrong in this case?"
If you look at the retractions (this is interesting), the
prominent journals with excellent peer review systems like Science,
Nature, The Lancet and so on publish more retractions or retract
more articles than the slightly lower-tier journals. It could
be because they are publishing more controversial research. It
could also be because they are better at spotting the problems.
I don't think there is any question, though, of their systems
being at fault. If you look at them, there is not generally a
systemic problem. There may have been occasional issues, but I
don't think there is a correlation. There has been a big increase
in retractions; this is something I have studied. They have gone
up about tenfold, in fact, if you look at the big medical
databases. It is because we are better at correcting the mistakes.
I don't think there is any evidence that that correlates with
a systemic problem in peer review. As I said, peer review is not
very good at spotting the major fraud, but some journals and publishers
are good at making sure that they clear up the mess when it does happen.
Q88 Stephen Metcalfe:
The UKRIO submission said that the process of peer review needs
to be confidential. Can you explain why that is important?
Dr Wager: This
is not necessarily something that COPE agrees with. In the traditional
system of peer review, the author does not know who the peer reviewers
are. The idea is that you protect the reviewers' identity so that
they are free to say whatever they want to say. A junior can criticise
a senior person and there is no fear of retribution if you bump
into that person or you apply for a job, and they say, "You're
the person who killed my paper." That is the idea behind
the blind peer review.
It sometimes goes one stage further and the author's
name may be removed from the paper. The idea of that is more to
reduce bias, so that you don't look at it and say, "This
is from Professor So-and-So at Oxford. It's bound to be good."
That is trying to reduce the reviewer bias. With some journals,
the author doesn't know who the reviewers are and the reviewers
don't know who the author is.
Some journals have said, "That's not such a
great system", because as an author you are being criticised
anonymously, and you think, "Isn't transparency a good idea?"
Particularly in the medical journals, some of them now operate
open peer review. The reviews are signed. The peer reviewer puts
their name on to the review. Before they launched these systems,
they tested them to make sure that it was feasible because there
was a concern that reviewers would say, "No way. I'm not
putting my name on this", and to see if it had an effect.
They hoped it would improve the quality of the review. There is
some quite nice research. It did not improve the quality but it
didn't lessen it either, so they decided it was feasible and practical.
Some of the medical journals use this open review, so it is by
no means confidential.
Some of them have gone one stage even further and
publish the reviewers' comments. BioMed Central has been doing
that. You can click on "Publication History" where you
can see the submitted version, the reviewers' comments, how the
author has responded to it and then the revised version. That
is totally open. If you want to criticise somebody, that is good
because you will be able to say, "I know who that person
works for or who they are funded by" and so on. The conflicts
of interest are all out in the open.
The reason why COPE does not necessarily recommend
one system or another is because some editors have said to us,
"We work in a very narrow field. Everybody knows everybody
else. It just would not work to have this open peer review."
There are different options. UKRIO is referring to the fact that,
if you have a blind system of peer review, then, of course, it
is important to keep the names confidential. Obviously, on the
other hand, if you have an open peer review system, it is not
going to be kept confidential. There is contradictory evidence.
My opinion is that it depends on the discipline. With a discipline
as big as medicine, where there are hundreds of thousands of people
all around the world whom you can ask, and they probably don't bump
into each other the next day, open peer review seems to work.
In much narrower and more specialised fields, it perhaps does
not, and the traditional system of blinded review is perhaps preferable.
The narrower the discipline gets, the more likely it is that all
the parties will know each other anyway.
Dr Wager: You are
absolutely right. There are also some nice studies showing that
taking the names off doesn't necessarily prevent people from
working out who is involved, in either direction.
You can work it out from the methodology that has been applied.
Dr Wager: You know
who is doing the research. For most authors, the first papers
they cite are their previous work, so you look at the references
and you can see whose paper it is. Some journals go to the length
of removing the author's papers from it. There is evidence that
sometimes it is a waste of time.
Tracey Brown: One
of the biggest concerns is what reviewers feel comfortable with;
there have to be enough reviewers attracted to reviewing. There
are very few incentives to review in the university system; there
is no time given for it and no recognition of it in your career.
These things need to be dealt with, but that is the current position.
If you have something that puts people off reviewing, then that
is ultimately going to cause the whole system to fall down. Sense
About Science ran the biggest ever international survey of authors
and reviewers in 2009 because of this perceived crisis in the
future of peer review and we wanted to look into it. It found
that 76% of the people responding who have reviewed papers said
that they feel most comfortable with, or described as best, the
double blind system that Liz described, but as she has said, there
can also be a lot of openness in doing things in other ways.
Q91 Stephen Metcalfe:
You touched on bias; presumably, the double blind eliminates almost
all the bias, doesn't it?
Dr Wager: That
is the idea, although if you know who the person is anyway, even
if they have had their name taken off, it does not. That is the
theory behind it. It was brought in for very good motives, but
it is not clear that it is a great mechanism.
Q92 Stephen Metcalfe:
Finally, if you were to have a more open system, can anything
further be done to minimise bias? Once the names are in the public
domain, it is too late.
Dr Wager: Quite
interestingly, recently concern was expressed by some stem cell
scientists who felt that there were cliques and groups and there
was bias in the system. One journal's response was to
publish the names of the peer reviewers.
They did move towards a more open system, which is interesting.
In a lot of disciplines, the open system does work well and transparency
can be helpful. Training is also important. If you are recruiting
and employing someone, you go through training to make sure that
you have proper employment practices and that you are aware of
anti-discrimination laws, diversity and that sort of thing. Sometimes
it is a case of making sure that you are doing that. Editors have
a fair idea. Sometimes they will pick reviewers because they know
they will disagree. That, in a way, balances out the bias.
Chair: Thank you very
much. I hope that not too many vice-chancellors take your message
about sacking some academics this afternoon too seriously. Thank
you very much for your evidence.
Examination of Witnesses
Witnesses: Mayur Amin,
Senior Vice President, Research and Academic Relations, Elsevier,
Dr Philip Campbell, Editor-in-Chief, Nature Publishing
Group, Robert Campbell, Senior Publisher, Wiley-Blackwell,
Dr Fiona Godlee, Editor-in-Chief, BMJ Group, and Dr
Andrew Sugden, Deputy Editor and International Managing Editor,
Science, gave evidence.
I thank the panel for coming in this morning. We have rather a
lot to cover in a relatively short time, so please feel free to
send us any supplementary notes if we cannot get your particular
answer to a question. May I ask the five of you to start off by introducing yourselves?
Mayur Amin: I am
Mayur Amin. I work at Elsevier and I head up a research and academic relations department.
Dr Campbell: I
am Philip Campbell. I am editor-in-chief of Nature and
of the Nature Publishing Group.
Robert Campbell: I am Bob Campbell. I am senior publisher at John Wiley and Sons.
Dr Godlee: I am
Fiona Godlee, editor-in-chief of the BMJ and the BMJ Publishing Group.
Dr Sugden: I am
Andrew Sugden. I am international managing editor of Science
magazine at Cambridge.
Thank you very much. You will have heard me ask this same question
to the first panel. Peer review is regarded as "fundamental
to academia and research". What happens if it disappears tomorrow?
Dr Campbell: There
would be a sudden decline in trust in what they are reading among
academics, among those in the media and among those members of
the public who, rightly, take the literature seriously. Increasing
numbers of the public do engage with the literature. That, to
me, is one of the most important aspects of what you would lose.
Dr Godlee: It is
important to distinguish (I am sure others will do this) between
pre-publication peer review and peer review generally. Pre-publication
peer review is only one aspect of the peer review process, which
begins with grant-funding peer review, ethics committees, the
pre-publication process, the editing process and then the peer
review that goes on after publication. Then there is correction
and, in some rare cases, retraction. All of those systems constitute peer review.
If you are talking about the decline or the loss
of pre-publication peer review, there are some areas in science
and medicine where that would be a problem, as Phil has said,
and others where it might be a benefit. The balance between the
benefit and harm of peer review is still very poorly experimented on.
If we look at the evidence that Richard Smith, the ex-editor of
the BMJ, sent us, he suggested moving from "filter,
then publish" to "publish everything, then filter."
Is there any sense in that approach?
He is ignoring the other very important part of peer review, which
is improving the article. Especially in some disciplines, that
is a lot of what peer review is about. It is not just filtering
but going back to the author, making revisions and even doing
new experiments. It is only taking one part of peer review.
Mayur Amin: In
the Sense About Science study that Tracey Brown mentioned, 91%
of the authors said that the peer review process helped to improve
their paper. Where everything is published before it gets its
first peer review filter, we may end up with a system where it
is hard to differentiate between evidence-based conclusions and
conclusion-based evidence. We end up in a situation where there
is a lot of noise and uncertainty as to whether it is credible or not.
Dr Campbell: My
other reaction is that all the experience of allowing people to
comment online and our experience of open peer review in an experiment
that we did at Nature suggests that people are much more
motivated to comment and assess a paper if asked by an editor
before it is published than they are in any other way.
Dr Sugden: I would
endorse that, and add the fact that peer review is a system very
much for improvement of papers as well as filtering.
Mr Campbell said that part of the process is to improve the paper,
but some of the evidence we have had suggests that the process
has a rather conservative impact on the science. Is there not
a problem in that respect?
I don't see it as particularly conservative. A good editor will
encourage the author to write a better paper, develop those ideas
better and get them over more effectively than in the first draft.
It is a positive process. If you have a very conservative editorial
board, the journal will suffer. It is a market; the more proactive
entrepreneurial editorial teams will win out and build better,
more successful journals. It is a very dynamic market. A conservative
editorial board wouldn't last long.
Do you think that that process militates against the creation
of a risk-averse culture?
Yes. I don't see it as risk averse, no. There are some editorial
boards that are, perhaps, more conservative than we would like.
On the whole, they are trying to publish better papers each year
and achieve higher impact factor scores. There is the reverse side,
which you picked up: it tends to be the more radical and original
article that will win more citations.
Dr Campbell: I
completely agree with that use of the word "conservative".
Another use of the word "conservative" concerns robustness.
For us, peer review helps us deliver robust publications. We,
at Nature, if anything, are more conservative than other
journals. We make researchers go the extra mile to demonstrate
what they are saying. I also celebrate the fact that we do not
want to be conservative with papers that go against the status
quo. We want to encourage radical discoveries.
Dr Godlee: We have
to acknowledge that there is a huge variety in the quality of
peer review across the publishing sector. Journals like Nature,
BMJ and The Lancet, which have big editorial teams
within them, do a very different type of peer review from those
with much less resource. At its very worst, peer review has been
described (many will have heard this list) as slow,
expensive, biased, open to abuse, stifling of innovation, bad at detecting
errors and hopeless at detecting fraud. At its best, I think we
would all agree that it does improve the quality of scientific
reporting and that it can improve, through the pressure of the
journal, the quality of the science itself and how it is performed,
putting pressure back on the funders and the ethics committees as well.
We have to acknowledge that scientific communication
has changed enormously with the increased volume and sub-specialisation.
Technology has changed the equation. The economics of scientific
publishing has completely changed with the internet. There may
be better ways of speeding up innovation, dissemination and quality
control. We should not be frightened of those. We need to experiment with them.
I would agree that conservatism is not a bad thing
in science or medicine in terms of making sure that what we publish
is robust, relevant and properly quality controlled. That is absolutely
crucial, but I don't think we should be conservative in how we
go about achieving that.
Chair: We also have different
meanings of the word in this place as well.
Q98 Graham Stringer:
Let me follow that up. One of the submissions we have had from
a Cambridge professor shows that the criticism of conservatism
is stronger than that. He says he tried to get a paper published
that showed that a large percentage of work in nanotechnology
was never going to result in any practical application. He found
it extremely difficult to get it published, and his view was that
it was because it was running against the interests both of
other authors and of the publications. Is that a criticism
that you have come across? Do you think it is a fair criticism?
Dr Campbell: We
would love to publish something that strongly made a provocative
case of that sort. That is not because we want to be sensationalist
but because, if there is a good reason to say that, it needs to
be out there and we would like to be the place to publish it.
Q99 Graham Stringer:
He should have come to Nature and not to a nanotechnology journal?
Dr Campbell: Of
course. In the same breath as conservatism, sometimes things like
that are too easily said and not backed up well enough. A journal
that also has a magazine role, as Nature does, has one of the
most critical audiences in the world. They love to be stimulated
but they also want to make damned sure that the evidence on which
we base the stuff we publish is reasonably strong.
Q100 Gavin Barwell:
In 2008, a Research Information Network report estimated that
peer review costs about £1.9 billion annually. Would the
panel consider that to be a fair estimate?
Mayur Amin: It
is an estimate that was made. It is an estimate of the non-cash
costs: the cost of the reviewers' time. Yes, on the basis
of an estimate, it is a reasonable estimate. The issue is that
it is the time spent by reviewers on behalf of others in the academic
community. It is a cost that is neither paid for nor charged for
in the system. It is a service to the academic community as a whole.
Q101 Gavin Barwell:
Does everyone take a similar view to that?
Dr Godlee: I have
no doubt that peer review is an enormously expensive process.
It is expensive for publishers and it is an investment that is
made with a return on the investment expected through a number
of revenue streams and also through the reputation of the journals they publish.
The unaccounted cost is the peer reviewers' time. One of the questions
is how we make that more of a professional activity for which
they get academic credit rather than something that gets no credit.
We need to make sure that it is understood to be part of an academic's
role in contributing to the forwarding of science. That is largely
how it is viewed, but it may not get the credit that it deserves.
Dr Campbell: The
Nature journals are working on giving more credit privately
to referees directly at the end of every year, letting them know
what they have done for us on the record. In my conversations
with senior people in universities, they recognise that they could
do more to give their academics credit. Academics themselves don't
think about it much. They do take it very much for granted. In
a very competitive academic world, when you are going for tenure
or for some other promotion, to be able to have something like
that stated on the record is helpful.
Q102 Gavin Barwell:
That leads me quite neatly into my next question. In 2010, the
Joint Information System Committee reported that UK higher education
institutions spend, in terms of staff time, between £110
million and £165 million per year on peer review and about
£30 million on the work of editors and editorial boards.
Does the panel think it is fair that higher education institutions
absorb this cost on behalf of publishers? Should reviewers be
paid for their time?
Dr Campbell: Yes,
I do, but I would change the question. It is on behalf of everybody.
Of course, you could get into a situation where publishers would
start being the intermediaries that pay, but we don't charge authors
to submit papers. At least, there are some systems where that
happens, but we don't hand on the cost of peer review, in so far
as it costs us anything. Were that to come in as a charging system,
there is no way that the publishers could absorb that. I return
to my primary point that everybody sees all sorts of peer review,
for journals, funding agencies and, informally, between colleagues,
as part of the business of doing science.
Dr Sugden: It is,
essentially, a reciprocal process. Authors are reviewers as well.
It is two sides of the same coin, essentially.
Mayur Amin: It
is a service that the higher education system provides to others
within the higher education system globally. It is not a countrywide
system. In the UK, for example, and certainly within Elsevier,
we find that about 6% of the papers we publish come out of the UK
and about 6% of the reviewers are from the UK. So there
is a balance. It is a service to the community itself.
Q103 Gavin Barwell:
To pick up on the point that Elsevier stated in the memorandum
it submitted to the Committee, it says: "Publishers have
made significant investments into the peer review system to improve
efficiency, speed and quality." Can you give the Committee
an idea of the scale of those investments in recent years and
the kind of things you were referring to?
Mayur Amin: Overall,
one of the biggest investments for everyone in the publishing
industry in the last decade or so has been migration to some of
the electronic platforms. Across the industry, our estimate is
that somewhere in the order of £2 billion of investment has
been made. That includes the technologies at the back end to publish
the materials as well. The technology has included submission
systems, electronic editorial systems, peer review support systems,
tracking systems and systems that enable editors to find reviewers.
It is not just a question of their friends; they have systems
so that they can find newer reviewers that they don't know about.
There are also support systems, in terms of guidelines and signing
up editors to committees like the Committee on Publication Ethics.
There are a number of different ways, such as training sessions
and workshops for authors, editors and reviewers. Those are some
of the ways.
To clarify, did you say £2 billion?
Mayur Amin: Across
the industry, in terms of all the technology investments.
Q105 Gavin Barwell:
My final question is, particularly, for Dr Godlee. The BMJ Group
told us in their submission that "little empirical evidence
is available to support the use of editorial peer review".
How should a programme of such research be organised, and who
would fund it?
Dr Godlee: It has
long been felt that a system as important as peer review to most
known science is remarkably under-evaluated. There have been studies.
There has been an editorially led or research-led approach to
this, and some of that funding has come from the NIHR in the UK.
We have been very grateful for that. The overall level of evaluation
of peer review is very poor: not only journal editorial
peer review, but grant peer review, which is right at the beginning
of this process and has an enormous amount of influence on what
does and doesn't get funded. I am sure we should have it. The
UK could lead on this. As to where the funding should come from,
you could say that it is a combination of the journal publishing
world, the grant-giving world, industry, but also public funding.
It is a very important part of what we do. We can improve it;
there are huge flaws. Lots of good things are going on and there
are many new experimental ways of going about things. We need
to evaluate these so that different specialty areas can take on
different approaches as appropriate. A lot could be done with
some decent funding.
Q106 Stephen Metcalfe: Good
morning. Dr Campbell, you mentioned that you wanted a robust peer
review system. What do each of your individual journals do to
ensure that the process of peer review is both robust and delivers quality?
Dr Campbell: We
talk to each other a lot about the way we do the process. Senior
editors on Nature (this would happen equivalently
on the other journals) and I will look at individual manuscripts.
Whenever there is any sort of complaint, I take personal responsibility
for ensuring that it is looked into. In terms of external responses,
we always respond as quickly as we can. In terms of due diligence
internally, we have the discussion groups and we will look at
particular cases where a manuscript may have caused certain types
of difficulty. Above all, I rely on the team editors to be looking
at every decision that is in any way controversial. Several editors
will be involved in discussions about their position. Within the
team, there is quite a degree of transparency and oversight.
Dr Sugden: The
system at Science is very similar. Editors will always
confer with each other about any decision. No decision is made alone.
Dr Godlee: The
same is true at the BMJ. Any of the large journals with
a big internal team, as we, Nature and Science have,
will have a similar process. There is a lot of consultation, and
a lot of expertise is brought in from outside, through a series
of stages, trying to make sure that we reject papers that are
not for us very quickly, so as not to delay their moving on elsewhere
and to keep the science moving, but also to make sure that those
we do pass through to final stages get very heavily scrutinised.
I want to say here (it may come up later) that
we are reliant on what the authors send us. We have to acknowledge
that peer review is extremely limited in what it can do. We are
sent an article, effectively, sometimes with datasheets attached.
We have to go with what is sent to us. A vast amount of data do
not get through to journals. We know that there is under-reporting,
misreporting and a whole host of problems, and journals are not
adequate to the task that they are being given to deal with at the moment.
Q107 Stephen Metcalfe:
Do you sometimes send those back, and does the reviewer say, "Can
you do some more work or experiments on this?"
Dr Godlee: This
is not true of all peer review systems, but in the systems
you are hearing about here, all papers will be revised before publication.
Q108 Stephen Metcalfe:
Are the reasons why you are asking for the additional information,
experiments to be conducted, et cetera, always made clear to the
researchers who are doing the work?
Dr Sugden: Yes.
It is made clear to them through the reports that they get from
the reviewers and the editors and accompanying recommendations
that go with that. They will always know why they are being expected
to do something.
Q109 Stephen Metcalfe:
Do they get an opportunity to challenge back and say, "I
don't think this is worthwhile"?
Dr Sugden: Yes.
Dr Campbell: There
was a recent discussion in the pages of Nature. Somebody
whom we published said that editors on journals such as Nature
can be rather supine in accepting the demands of a peer reviewer
and not protecting an author from excessive demands of that sort.
I went back to all of my editors and asked for examples where
we have not been supine: recent publications which had had
to be revised, but where we had made a judgment that in this particular
case this request for extra work was not required. That is an
example of the robustness of the discussions that take place.
Dr Sugden: Often
you will get two or three referees' reports on a paper, but those
referees may not agree with each other. It is the editor's job,
if they consider the paper worth pursuing, to then make a recommendation
as to which of those referees' revisions they should follow and
which they should not, and maybe do some extra ones, too.
Mayur Amin: In
addition to the vigilance of the editorial teams, there are in-house
editorial teams on large journals or editorial boards within smaller
journals. Certainly within Elsevier (and I think other publishers
do the same) I do that, because it is my responsibility to
get feedback from the researchers, authors, reviewers and the
editors on the processes. We have so far collected something like
a million items of response from the community. That gives us
another measure of whether reviewers, authors and even editors
find that certain aspects of the processes are failing. So, as
publishers, we can take that on board and present it to an editor
or a journal and say, "Look, a whole lot of authors are getting
displeased about the way the process is working. We need to modify
the process." That is another process-level procedure that
we have in place.
Q110 Stephen Metcalfe:
Mr Amin, is it correct that, prior to 2005, you had a number of
publications that looked like journals and sounded like journals
but in fact were a collection of re-published papers that had
been sponsored by pharmaceutical companies, and that the data
and the articles within the publications came out in support of
those particular sponsors? Is that true?
Mayur Amin: Yes.
That was a case from early 2000 to 2005 in a division of Elsevier
that is not part of the formal peer review process. That was the
custom publication division in Australia. I would say that the
failure there was of the publishers not to hold the standards
that we have. I stress that it was not a peer-reviewed journal.
The issue was that there was not sufficient disclosure or sufficient
clarity about what the nature of the publication was. When we
found out, we acknowledged that. An internal review was done and
a completely revised procedure was communicated internally and
also externally. It is available on our site.
Q111 Stephen Metcalfe:
So you would say that that would have fallen short of publication ethics?
Mayur Amin: It
fell short of custom publication ethics. It was not a peer reviewed
journal at all.
Q112 Stephen Metcalfe:
How do you imagine that happened within a respectable and large
organisation? You said "failure", but it must have been
a systemic failure.
Mayur Amin: From
our investigations, it is a relatively isolated case. I suspect
in any human endeavour, in a large organisation, that there will
be some failings. The important thing is what we do when we recognise
and identify those failings. We have taken action to put procedures
in place to minimise those in the future, and also we went public
about this as well.
Q113 Stephen Metcalfe:
What are those procedures to minimise it in the future?
Mayur Amin: I don't
know every single one off the top of my head, but they are in
the public domain and I am happy to circulate those procedures.
Q114 Stephen Metcalfe:
That moves us on to the wider point of where sponsorship comes
into this. Presumably, there are people who still want to get
sponsored publications out into the public domain. How do you
identify those? How do you make sure that there is a clear difference
between something that is a peer-reviewed journal and something
that is sponsored by someone who wants to get a message across?
Mayur Amin: Our
guidelines are all about total transparency and total disclosure
about any such sponsorship. That is what it clearly states. There
must be total transparency.
Dr Godlee: I think
we enter a very tricky area here. We have to acknowledge, and
I am sure my colleagues on this panel will be willing to acknowledge
this, that the publishing industry has a number of revenue streams,
one of which, certainly in medicine, is the pharmaceutical industry.
The pharmaceutical industry, for every good reason and lots of
bad reasons, wants to get their results out into the public domain.
The journals provide them with a very efficient route to do that.
Depending on the rigour of their attempts to prevent this, journals
are variously used by the pharmaceutical industry, the devices
industry and other industries to get their points across. That
is not to say that there aren't hugely wonderful things going
on in the pharmaceutical industry that need to be disseminated,
and perhaps we could do a better job where those are concerned,
but there are also extremely dubious practices. The journals are
largely naïve about them. We do our best. I don't know the extent
to which this happens in biomedicine as opposed to clinical medicine,
but it is certainly a major problem in clinical medicine. Sponsored
publications can be very blurred at the edges. The reader may
not be aware that this has been conjured up within industry and
then sold to a publisher to publish to clinicians and others.
Even if the publisher tries to make it obvious, it may not be
as obvious as they think.
Even on the peer-reviewed side of things, it has
been said that the journals are the marketing arm of the pharmaceutical
industry. That is not untrue. To a large extent, that is true.
Much as I hate to say this and much as it distresses me, we, as
a publishing industry, have to acknowledge that and must put
in place far better systems for making that clear to clinicians and
preventing it from happening on the scale that it is happening
at the moment.
Q115 Stephen Metcalfe:
Can you give an example of how that situation might be addressed?
What sort of things should be done?
Dr Godlee: All
efforts for transparency are good. Some people think that people
pushing for this have gone too far. I personally don't think we
have gone far enough. We need centralised systems for conflicts
of interest to be declared. In the States, for example, if you
are at the Mayo Clinic or the Cleveland Clinic as a clinician
or researcher, your conflicts of interest are posted and updated
every year. It becomes much easier for people to become accountable
for the funding they might get. I don't think we have such good
systems in the UK. Obviously, journals and journal editors need
to be vigilant about this. Open access to research and data deposition,
mandated, eventually, if we could find good systems for doing
that, will help. Trial registration has been very important, but
we need to push further on that so that the results are made available.
This is a big conversation to be had. It is absolutely
not in the pharmaceutical or device industries' best interests
in the long term to be involved in the scandals that have been
a major part of their lives, certainly in the States; less so
here, but is that because the practices aren't happening here
or because we don't know about them? It may be a combination of
both of those things. It is not in the industries' best interests
and it is certainly not in the public interest that data on patients
and the public as participants are not made available, are not
properly reported and are misreported. Evidence-based practice,
which we all want to see in medicine, becomes impossible if guidelines
have been created based on distorted evidence. It sounds an extreme
point of view. The evidence shows that it is not extreme and we
have to begin to acknowledge this situation and take action to address it.
Dr Campbell: In
the areas in which we publish, we have some clinical review journals,
but predominantly, we are in the life sciences and physical sciences.
I wouldn't use that language at all. I feel very secure in the
internal boundaries and in the transparency that we try to instil.
We have internal guidelines that we make available to people.
This is not in relation to original research, but where there
is sponsored publication there are absolute and rigorous Chinese
walls between the interested parties. Editors have the final say
on what is published. We have statements that make that absolutely
clear. It is essential. I recognise completely that in the clinical
world, the pressures and boundaries can be far more difficult to manage.
We heard from COPE that there has been a huge change in the last
10 years. There is much greater awareness throughout the editorial
and peer review community. There has been a very good editorial
in Learned Publishing just last month by Diane Scott-Lichter,
drawing an analogy between the mitigation of cancer and publishing
ethics: better education and better screening can reduce the incidence
of cancer, and she applied the same argument to publishing.
more in terms of education, screening and training, we will reduce
the problems later on.
Chair: I am afraid that
we are going to have to move on. This is a fascinating area, but
we have huge other areas to cover.
Q116 Roger Williams: We
are told that some of the top journals may reject 95% of the papers
that are submitted to them. Can you tell us why journals like
the ones that you edit are so selective in dealing with these papers?
Dr Sugden: Part
of it is simply that they are weekly magazines with a print budget.
We are publishing 20 papers, say, a week, and a lot of people
want to be published in them. We are receiving 10 times as many,
roughly. That is the straightforward answer. We need to publish
in a timely manner. We want to showcase the best across the range
of fields in which we publish, so we have to be highly selective
to do that.
Dr Campbell: That
is an interesting question. As we move online and as the prospect
of the decline of the print journal happens, that pressure is
lessened. I still think that we would publish the same number
of papers that we publish, pretty much. We are receiving increased
numbers of submissions because the output of the scientific community
is going up. It might go up for that reason, but the proportion
would stay the same. It is to do with our judgment of what is important.
Q117 Roger Williams:
With so many journals publishing peer-reviewed work, does almost
all research get into a peer-reviewed journal at some stage?
Dr Sugden: The
evidence is that it does. We heard that said earlier. More than
80% of what passes through our hands will get published somewhere,
and mostly somewhere quite good.
Q118 Roger Williams:
Does everybody agree with that? Do researchers have multiple
submissions? Are they allowed to submit to more than one journal
at a time?
Mayur Amin: No.
Dr Sugden: That
is absolutely not on.
Dr Campbell: Not
at the same time. I have no idea (it would be an interesting
statistic; maybe someone else on the panel knows), if you
looked across the UK research community, what the average number
of submissions per paper is before it gets published.
Dr Godlee: Just
to go back, the reasons for publishing so few have changed. As
Phil says, print is no longer the constraint. Editorial resource
is obviously a constraint, and for a general journal, so is wanting
to capture the very top (what we consider to be the top).
Impact factor is an issue. Certainly a lot of journals find that
if they reduce the number of research papers they publish, their
impact factor creeps up quicker. That is a commercial and reputational issue.
As to the question about where stuff goes if it doesn't
get into one of the high-end journals, increasingly people are
going straight into one of the big open access journals, such
as PLoS ONE. BioMed Central has one. BMJ Group has one,
as does Nature Communications. A lot of the publishers are beginning
to open up so that people can get speedy publication if they haven't
got into the journal of their choice. That is a good thing. That
means we will see authors being able to move on to the next thing
rather than spending a lot of their time adapting a paper for
yet another journal which is going to reject it and then move
on. That is an improvement, in my view.
Q119 Roger Williams:
Do you, as individual journals, have some sort of time target
by which you will reject their articles, to be fair to the people
who are submitting?
Dr Godlee: Absolutely.
Again, it is a market. We will try and be as quick as we can so
that authors want to send us their next paper. That is an author
service that we want to provide.
Editors are screening a higher percentage. Where initially they
are saying, "This is out of scope for the journal",
they send it straight back, so the author is only losing days.
Mayur Amin: Ultimately,
good science will find an outlet. To follow up on Fiona Godlee's
point, the important thing is to speed up the process: the
waiting time in going from one journal to another.
Q120 Roger Williams:
So most of your initial decisions are based on an editorial view
(on what will have the biggest impact and interest) rather than
on the quality of the science.
Dr Sugden: On both.
Q121 Roger Williams:
You have already talked about the model that will publish everything
that is scientifically sound, regardless of impact and interest.
Is there any evidence that that is expanding, in terms of the
opportunities for research?
Dr Godlee: I don't
know about Scientific Reports, which Nature is launching,
but the model of BioMed Central, PLoS ONE, BMJ Open and
other people who are doing that is very much to say, "We,
the editorial group managing these bigger online repository-type
journals, will not make a decision about editorial relevance."
If it is relevant to two people in the world and can help them
with their work, then that is fine, with no limitation on space.
We want to make sure that it is properly reported and is valid
science. That is the bar that peer review will help us to achieve.
It is not an editorial decision but a science decision.
Dr Campbell: In
my conversations with scientists, there are people who are sick
to death of editors and who value something like, in our case,
Scientific Reports, which has, as Fiona said, no editorial threshold
but does have a peer review process just for the validity aspect
of it. There are others who want to be a part of the "badge
of honour", if you want to use that phrase, of one of the
big journals. They will therefore submit themselves to editors.
Q122 Roger Williams:
In this initial sorting out of submitted papers, what are the
benefits and disadvantages of editorial boards against staff editors?
Dr Godlee: Cost.
Dr Sugden: We don't
pay our editorial boards. Most of our submissions will go to one
or more members of the board in the first week they arrive. Then
the staff editors will make their decision based partly on that advice.
Q123 Roger Williams:
Do you all have editorial boards as well as staff editors?
Dr Campbell: Nature
and the journals do not have editorial boards. We make extensive
use of the peer review advice, of course, that we get. We never
have had editorial boards. I guess, therefore, that I haven't
lived with an editorial board. All I can say is that our ability
to act quickly is helped by the fact that we develop our own standards
and depend on them.
Dr Godlee: The
BMJ has a similar process.
Q124 Roger Williams:
As staff editors, you have built up terrific expertise and a broad
knowledge, but you are miles away from having done the science.
Is that an advantage or a disadvantage? Does it give you more objectivity?
Dr Campbell: We
are miles away from having done a very particular piece of science,
but we have well over 150 post-doctoral editors working
for us. They have all done research. They all go to meetings and
to labs for several weeks in a year. I think they have a better
overview and a better sensitivity to what is important, but we
absolutely depend on the peer review expertise even of distinguished
scientists. We are more likely to want to go to the post-doc in
the lab of a distinguished scientist, because they are the people
right now at the cutting edge of fast-moving techniques.
Q125 Stephen Mosley:
In previous evidence we have heard claims that the peer review
system is in crisis. Professors Fox and Petchey said: "Scientists
face strong incentives to submit papers, but little incentive
to review." Would you agree with those sentiments?
There is no quantitative evidence that it is in crisis. I think
the peer review system, as a whole, is more robust than ever.
In our submission, we gave you some data that in 2010 we had about
12% more submissions. There was no impact on publishing schedules
and no added delays, although we only published 2% more articles,
so the rate of rejection was higher. A study has been published
in Nature by Tim Vines and colleagues where they did try
to quantify this issue and tracked all the reviewers. They found
that the population of reviewers is increasing with the 3% to
4% increase in the research community, as you would expect. Therefore
the load on each reviewer is, if anything, slightly less than
10 years ago.
Q126 Stephen Mosley:
I know that in the written evidence, Dr Sugden, you put forward
some evidence which said: "For an editor, the process of
finding referees can be time-consuming", et cetera. You implied
that it can sometimes be difficult to find reviewers. Is that the case?
Dr Sugden: Yes.
It is usually because they are over-committed. It is not usually
because of an underlying unwillingness to review or about not
having an incentive to review. It is simply because they are doing
too many other things at the time. It may take us a week or two
to find the three referees that we need for a paper sometimes.
It is rare that it takes much longer than that.
Q127 Stephen Mosley:
You say they are over-committed. Has that changed in recent years
or is it the case that it has always been that way?
Dr Sugden: I don't
have quantitative data on that. I haven't noticed a particular
change in the situation. Others may have.
Mayur Amin: I would
agree with Bob Campbell that the potential pool of reviewers has
increased in proportion to the number of researchers, because
the reviewers come from that research community. There may be
issues, I suspect, with geographical imbalances. If you take somewhere
like the USA, which produces about 20% of the output of papers,
it conducts something like 32% of the reviews in the world, whereas
China is producing something like 12% to 15% of the output of
papers but is probably only conducting about 4% to 5% of the reviews.
This is just a transitional thing. China and India have grown
very fast in the last few years; there are a lot of young researchers
who will come up and take their place in peer review and start
peer reviewing papers. It is incumbent upon publishers to help
out here, both in terms of technical infrastructure to help editors
find a broader pool of reviewers, and also in terms of training
needs, appointing editorial board members in those developing
countries as well as running workshops and providing literature
to help train new and young reviewers to come on to the system.
Dr Campbell: As
I said in our submission, we are not experiencing problems in
finding reviewers for the most part. Interestingly, Nature and
the Royal Society co-hosted a discussion of Royal Society research
fellows. They are the young researchers who have been given prestigious
positions by the Royal Society. There was definitely a sense that
their lives were getting more burdensome. Although the numbers
are indeed growing, and although some of us are not having this
difficulty, the time that academics have available for refereeing
is under pressure. That is, therefore, all the more reason for
us to support peer review by giving appropriate credit and so on.
Q128 Stephen Mosley:
Does the type of peer review that you do have any impact on the
number of reviewers you have? I know that the BMJ uses signed
open peer review. Other organisations, like PLoS Medicine, tried
it and then discontinued it a few years ago. I know that in the
BMJ evidence, you talk about a survey that says that 76% did prefer
the double blind system. Does the type of peer review have an
impact on the supply and number of people who are willing to review?
Dr Godlee: On that,
we found that reviewers are willing to review openly and sign
their reviews, that authors very much appreciate that and like
it. It has been helpful in revealing some undeclared conflicts
of interest amongst reviewers. It is a very important process
that works well for us. But we are a general medical journal;
the point was made in the last session that specialist journals
might find it more difficult. We do have people who decline to
review for us openly, which is fine, but we haven't found it a
problem in terms of recruiting reviewers. One of the aspects of
the open review process is that it is part of the credit system.
We are beginning to post those online as well so that the reviewers
get a credit for that.
Probably it does add a burden. It means they have
to do a better job, which is why we do it; that is a good thing.
I take Phil's point entirely: scientists are under a lot of pressure
on a whole host of things, such as getting funding and the bureaucracy
surrounding scientific research, and peer review is just one other
thing. Going back to the previous point, the more we can do to
make it something that they gain proper recognition for, the better.
Q129 Stephen Mosley:
We are going to be moving on to that point in the next question.
I will move slightly away from that. We have had some conflicting
evidence on cascading of reviews between publications. Do you
have any strong views one way or the other?
Dr Sugden: In the
sense of sharing reviews between us?
Stephen Mosley: Yes.
Dr Sugden: We haven't
done that so far, but we have had conversations with other journals
about possibly doing it. We have not taken that leap so far. Within
Science and its two sister journals, there is the possibility of doing so.
Dr Godlee: Within
the BMJ and its sister journals, we do the sharing. Some
journals are a bit squeamish about the idea of acknowledging that
the paper went somewhere else before it came on to them and would
rather not know, but we are very happy to receive a paper. If
it has been elsewhere and it is a good paper, we would like to
see the reviewers' comments from the previous journal. We also
would probably seek our own comments. There is no doubt that there
is duplication of effort. That is the point of the question, I assume.
Dr Campbell: The
sharp edge of this issue is whether competing publishers are willing
to share their signed referees' reports internally, even if they
don't reveal to the authors who the referee was. We have a journal,
Nature Neuroscience, that has participated in such an experiment.
The neuroscience community has done so. We did it with some misgivings
because, as I said in our submission, we invest a lot in getting
editors out into the field and using referees whom we value because
of the relationships that we have developed with them. To hand
on, as it were, the outcome of that relationship to a competing
publisher is something that hurts slightly. At the same time,
you do have this competing interest of the research community
to save people work. We found that the uptake of this facility,
where authors can elect to have the referees' reports of the rejecting
journal handed on to the next publisher, is not very great.
Dr Godlee: They
are hoping that the next reviewer will be more positive. That
is the answer.
Dr Campbell: Of
course, they may decide that they want to have a different set
of reviewers anyway.
Mayur Amin: We
participate in that same neuroscience consortium. Yes, the results
are mixed. There is generally willingness amongst the publishers
and editors to participate, but the authors are somewhat reluctant
at the moment. There are also some successes. PLoS ONE
is a good example of one where they are cascading material from
their other PLoS journals into it. There are other journals
such as Cell. I think Nature practises it. Internal
cascading is working. We are trying out a number of areas, largely
to reduce the burden on referees and reduce that time.
Chair: We must move on
fairly rapidly because we are going to lose a few Members to Welsh
Questions; they have come up in the ballot today.
Q130 Gavin Barwell:
I want to pick up on an issue that has just been touched on in
response to Stephen's questions and to my earlier question as
well. Some of the people who submitted to us said that a lack
of formal accreditation for peer review is a problem. Several
times, in answer to other questions, it has been touched on that
some way of recognising those people who are giving their time
to this process would be a good thing. Dr Parker of the Royal
Society of Chemistry told us last week that, because of the very
large numbers of reviewers that journals use, it would be very
"challenging" to have an accreditation system. What
do the panel members think about that?
Dr Campbell: In
principle, I don't think it is. A manuscript tracking system can
be easily programmed. If what is needed is that the referees themselves
get a proper statement of credit, that is fine. It is equally
easy for a journal to decide to publish a list of everyone who
has peer reviewed for them over a particular period. Again, a
manuscript tracking system should be able to do that very easily.
I don't think in principle it is difficult.
Mayur Amin: I would
agree. Individual journals practise this already in terms of listing
the referees that they have used over the year, particularly recognising
the ones who have done a lot of work. Some I know recognise them
at conferences and they acknowledge their efforts. With the advent
of ORCID, which is this unique author identifier, publishers are
all working together to support this system. That may give us
an opportunity also to be able to track with a unique identifier
those people who have refereed and acted as referees. That may
help to provide a stronger accreditation platform than is currently available.
Dr Sugden: In a
journal for which I used to work, I published a list of referees
at the end of the year and received a rather anguished phone call
from one of them saying, "Now the author", whoever it
was, "will know it is me." There can be a downside to that.
Q131 Gavin Barwell:
Dr Sugden, can I pick up next on something you said in your submission?
You said: "We would recommend that journal editors and academies
work together to produce guiding principles for the peer review
process that can be adopted and used for instruction at the institutional
level." Do you think there is a will among publishing organisations
to work together to do that?
Dr Sugden: I don't
know. It is something that we think would be a good idea. I don't
know whether there is a wider desire for that. It springs from
the evidence that we have that the quality of peer reviewing is
quite variable. That may well have its roots in the quality of
training that scientists get, not just between countries but within
countries as well. I know that some institutions and some publishers
are working on this kind of thing. There was some evidence from
the Institute of Physics last week, wasn't there, on this matter?
It is a general recommendation.
Mayur Amin: We
are already carrying out workshops and trying experiments of training
and support. We would welcome and be supportive of any guidelines
that come from the industry.
It is happening internationally. The International Council for
Science is running a meeting later this month on peer review and
how it can be improved. The debate is pretty active.
Q132 Gavin Barwell:
Mr Amin, Elsevier mentioned in their submission their Reviewer
Mentor Programme. How well received was that by higher education
institutes? If it was well received, what plans do you have to
scale up that pilot?
Mayur Amin: There
was a small-scale pilot where one or two editors and a single
institution took on a few post-docs and encouraged them, in a
test environment, to peer review and then they were given guidance.
That was a manual hands-on approach. That pilot was received very
well at that institution and by the people involved. We are now
currently looking at how to scale that up and make it much more
of an electronic and online system. We are hoping that by early
next year we might well have a system to be able to start scaling it up.
Q133 Gavin Barwell:
A final question from me, Chairman. Lots of people who gave evidence
referred to the way in which peer review publication is being
used as a metric in the Research Excellence Framework. Does that
put undue pressure on publishing organisations? Has it affected
the number and quality of submissions that you have received?
Is it a concern on which any of you would like to comment?
Dr Godlee: We definitely
see a spike in the months before the deadline. In that sense,
yes. We welcome it. From our point of view, it has not been an
overwhelming burden. These are good UK papers. All of us would
say that we want to attract the best papers and this is a route
to doing that. From our point of view, the answer is that it is
not a problem.
Dr Campbell: Without
wishing to seem flippant, the biggest pressure point of that sort
comes in the summer when everybody sends their paper in, goes
off on holiday and is therefore unavailable to peer review.
Q134 Graham Stringer:
Dr Campbell, a Nobel Laureate has said in the literature that,
in this commercially competitive world, top journals such as Nature
and Science are "cutting corners" in looking
for positive reviewers of the articles. Is that fair? What are
your comments about that?
Dr Campbell: That
is completely wrong. I totally refute that statement, as you would
expect me to, I am sure. It is not in our interests to cut corners.
As I said before, we have one of the most critical audiences in
the world, and any paper that makes a strong claim is going to
be absolutely hammered in the form of testing in the laboratory
or scrutinised in terms of discussions at journal clubs, within
universities and so on. It is simply not in our interest, for
our reputation in the long run, to publish papers that have any
degree of cutting of corners in the assessment process. I am not
sure if it was the same person, but someone else also said that
we would select reviewers, because we wanted to publish the paper,
who would help us publish the paper by being soft. That, again,
I refute in exactly the same terms.
Q135 Graham Stringer:
Staying on this line for a moment, if you get a hot paper - maybe
something confirming cold fusion - which would have worldwide
interest, how does that affect your sales?
Dr Campbell: It
doesn't have a direct effect on sales. It is another hot paper.
Of course, if there is an immediate stream of interest, the chances
that people will subscribe to Nature or buy a copy of that
paper may go up. In no sense, even implicitly within the company,
is that particular sort of relationship seen as a measure of success.
There is a big barrier of independence, institutionalised within
the company, in fact, between the commercial side and the editorial
side. I am absolutely charged with making sure that the reputation
of the journal is upheld at whatever cost.
Q136 Graham Stringer:
We have had discussions in this Committee about published articles.
It is fundamental to science that the science that is done is
reproducible, yet we found in other inquiries that computer codes
are not always available. What is the attitude of the different
journals represented here to the complete reproducibility of the
science that is described in articles?
Dr Campbell: This
is a hot issue as far as I am concerned and it is one where we
do need to do some work with the communities. Journals like Science
and Nature will work with the research communities
to enforce deposition in databases, for example, if they are publicly
available. When it comes to something like software, if you take
a discipline like climate change...
Graham Stringer: That
is the debate we were having.
Dr Campbell: Right.
I was talking to a researcher the other day and he had been asked
to make his code accessible. He had had to go to the Department
of Energy for a grant to make it so. He was asking for $300,000,
which was the cost of making that code completely accessible and
usable by others. In that particular case the grant was not given.
It is a big challenge in computer software and we need to do better
than we are doing.
Q137 Graham Stringer:
It rather undermines the science if it can't be reproduced, doesn't it?
Dr Campbell: Yes,
but there are other ways of doing that. You can allow people to
come into your laboratory and use the computer system and test it.
Q138 Graham Stringer:
Do you believe that all journals should publish a publication ethics policy?
Dr Campbell: Yes.
Q139 Graham Stringer:
Dr Campbell: If
you look in our Guide to Authors, we certainly do have
statements about ethics in terms of declarations of conflicts
of interests and such things.
Dr Godlee: The
BMJ Publishing Group has a policy of transparency. I know that
Wiley-Blackwell has an openly published policy. We all hope for
a forward-looking, rigorous and ethical policy on transparency.
That is one of the big things that journals and publishers should
take on as their responsibility because we have the ability to
put pressure on the research community to raise their game in
a whole host of ways.
Q140 Graham Stringer:
Should there be consequences if the policies are not followed?
Dr Godlee: Yes,
and there are consequences.
Q141 Graham Stringer:
What are the consequences?
Dr Godlee: It would
depend on the ethical breach. If it was plagiarism, the
paper might be retracted or there might be a statement of the
offence. The institution would be informed. The author would be
penalised via the institution. If it was a duplicate publication
or an undeclared conflict of interest, all of these things
have very straightforward remedies both through the journal and
through the institution. The understanding of how to deal with
what are now pretty standard ethical breaches is very well developed.
More difficult is what you were discussing earlier where institutions
or journals fail to pursue something adequately. The scientific
community is probably not doing enough. There may be a further
discussion, but the fact that we don't have a proper research
integrity oversight body in the UK is a real scandal.
We will see publishers investing more in higher ethical standards
because, as we have to set apart what we are publishing from all
the social media initiatives and the anarchic approach there,
the way we can justify what we are doing and what we are charging
for it is to have much higher publishing standards. It is something
we will all be investing in.
Dr Sugden: For
some years now we have asked all authors to declare all conflicts
of interest before we can even accept a paper for publication.
That is quite a tight transparency requirement in our author instructions.
Mayur Amin: I would
agree. We have similar policies that are made publicly available.
There are, again, consequences where people flout those policies.
There are retractions and removals in occasional cases, but we
have public retractions so that they are visible and the reasons
for the retraction of that article are known publicly.
Dr Campbell: If
somebody hasn't declared a conflict of interest and it is subsequently
uncovered or if somebody does not fulfil one of the conditions
of our publication, which is that you will make as much of your
research materials as is reasonable available to others, then
we will publish a statement next to that paper that makes that
clear. In really egregious cases we will go to the head of an
institution that employs the scientist concerned.
Q142 Graham Stringer:
Let me be clear. If you have plagiarism, fraudulent claims or
people not declaring important conflicts of interest, will you
always publish that in subsequent journals?
Mayur Amin: If
it comes to our attention, absolutely, yes.
Dr Godlee: We would
publish a correction.
Q143 Graham Stringer:
Is that standard throughout the industry?
I think it will be. The industry is developing - you may have
come across it in the submissions - a new project called CrossMark.
Every paper that has gone through the peer review process has
the ongoing stewardship of the publisher picking up on retractions
or corrections. By clicking on to the CrossMark logo, you can
go to the metadata and find out if there have been any updates
or even retractions. That is a technical solution which is being
launched this year.
Dr Campbell: One
of the ways in which you can highlight misconduct is to write
about it in our magazine pages. We are constrained in that respect.
In a recent case, a retraction had to be issued and the author
of the paper wanted to highlight the fact that the reason for
the retraction was a misconduct case that had been investigated
by the university. We published the retraction but we found that
we were not able to include the material about why because of
the current libel laws. I do want to impress on this Committee,
given the draft Defamation Bill that is under consideration, that
it is something that really does affect us in many ways.
Dr Godlee: I would
like to make a brief point about the hot papers. I agree with
Phil that it is in no journal's interest to publish hot papers
that turn out to be invalid. Editorial decisions are too often
directly influenced by reprint revenue. Medical journals publish
articles which then get sold on. I defy any editor who is presented
with a large drug trial not to know, as they are accepting that
trial, that it will generate revenue for their journal. It is
an enormous industry. It is an enormous part of the revenue streams
of publishers both in the US and the UK. I would say less so for
the BMJ but it is an issue. Something that would be really interesting
for this Committee to look at would be what is reprint revenue,
how does it influence editorial decisions and is it a good thing?
Publishers benefit, but I don't think science benefits.
Chair: That takes us neatly
to a question that Stephen is going to ask.
Q144 Stephen Metcalfe: Dr
Sugden, in your submission, you referred to the fact that the
US Congress has codified the use of peer review in Government
regulations. Can you explain how that works and what the consequences are?
Dr Sugden: You
have got me more or less at the limits of my knowledge on that,
I am afraid. This was something that came in, I think, in the
early 1990s, with the case of Daubert v. Merrell Dow Pharmaceuticals.
The result of that was that the Supreme Court decided on the
standards of scientific evidence - I am not sure if I am going
to get this right - that should be applied in court. That
standard was defined, partly, on the basis of it being peer reviewed.
I can find out more.
Q145 Stephen Metcalfe:
That may well be useful. Do you think there is a need to do something
similar here in the UK?
Dr Sugden: I am
not sure that it is for me to say. Perhaps my colleague would
know if there is anything of that kind here. I am not aware that
there is, but I think it would be useful.
Q146 Stephen Metcalfe:
You think it would be useful.
Dr Sugden: I think
it would be useful, yes.
Q147 Stephen Metcalfe:
Do you think it would have any effect on the quality of the publications
if you know that your articles are then being peer reviewed but
they can then be used in Government regulations or the courtroom?
Do you think that drives standards?
Dr Sugden: I honestly
don't know. I am not sure that it would affect it, but I don't
know, because you don't know what future cases that evidence might
be used in.
Q148 Stephen Metcalfe:
As far as you are aware at the moment, are any of the UK scientific
advisory groups mandated to use peer-reviewed literature?
Dr Sugden: Not
that I am aware of.
Stephen Metcalfe: Perhaps
it is an area that we need to look at in some detail elsewhere.
Q149 Pamela Nash:
I am aware that we have only a few minutes left, so I will try
and put all my questions into one and I would ask you to keep
your answers quite brief. I want to move on to international issues
with peer review. Is there any difference between the standards
of peer review, both in terms of the journals and the referees,
in different countries and areas of the world? Also, do you have
any experience of there being an additional burden placed on peer
reviewers, either in the UK or in other established scientific
communities with the increase in research that is coming from
emerging scientific nations, such as China? Are any of your publications
involved in training with reviewers from overseas? I know you
touched on that earlier, Dr Campbell.
Dr Godlee: One
of the issues that I am most aware of, and this is a brief point,
is that American peer reviewers are prone to publish and to push
for American work. There is a terrific American bias in PubMed,
which is hard to address. There are differences in attitude to
research in different countries. In terms of the quality, that
is a matter of resourcing. Many countries in the world cannot
afford the kind of publication processes that we are talking about.
That is a big problem. As was mentioned, there will be a transition
where the developing world will rely on the developed world for
peer review for a while until systems get developed.
We have been carrying out a lot of training since 2005 in China,
particularly in chemistry. We are increasing the percentage of
peer reviewing from China now. It is still not parity but it is
moving towards 20% of our papers. I am sure that the others are
doing the same thing.
Dr Godlee: Yes.
We are involved closely in training in Africa, China and India
at the moment. It is exactly similar.
Mayur Amin: I would
not say, necessarily, that the standards themselves vary internationally
across regions, but maybe the practices do. Maybe that is partly
to do with experience. Interestingly, in the case of one particular
journal, I have an anecdotal piece of evidence. There was a sense
that, if we appointed members of the board of a journal in China
to peer review material in China, they might be softer on that
material. In fact the contrary was the case. Reviewers in China
are harder on material that comes out from China than, say, people
in the UK were. There is a tendency for people in the UK and the
US to be seen to be not overly critical of material that is coming
out of scientifically developing nations. My sense is that the
developing nations and other nations will come up to a level of
practice that is seen in the UK and the US. Certainly publishers
and all participants have a role to play in training and also
supporting that mechanism.
Dr Sugden: The
increased mobility of scientists over the past couple of decades
has evened out the quality, in terms of the peer review we get.
We try very hard to recruit referees from any good scientific
centre, wherever it is.
Duplication is also a problem where English is the second or third
language. Authors are more inclined to copy text as it gets their
message over much more easily than they can by re-writing it.
We do pick up more duplication from some areas overseas. As you
will have read in the submissions, publishers have set up a system
called CrossCheck for picking up duplication. That is being taken
up at a good speed. About 20,000 submissions a month are now being
processed through CrossCheck. By the end of this year, about 10%
of all submissions will be scrutinised through CrossCheck for
duplication, which can mean plagiarism.
Dr Campbell: I
wouldn't deny that the countries in which our referees are working
are hugely skewed towards the developed scientific nations. I
guess that is because that's where we feel safe. Nationality and
the point of origin is never an issue in the choice of a referee.
There is no question about that. Also, I am sure we are all aware
of the growth of science in China and the way in which that is
being spurred by people coming over, having spent time in other
countries. We are engaging with the Chinese community in a way
that will increase referees from there, especially.
I have a final couple of questions. Dr Sugden, you said that there
is a challenge in providing confidential access to large and complex
datasets during peer review. You touched on this slightly with
Graham's questions about large datasets. Why are there currently
no databases that allow for secure posting during the peer review process?
Dr Sugden: I am
not sure that I can answer that. The challenge is, essentially,
because we use a blind peer review system. We don't want the author
to know who the referee is. If the author is the person who is
hosting the dataset, that can be an issue.
There are ways round that, surely.
Dr Sugden: There
are, but it can be time-consuming.
Even in the cases of very large, voluminous datasets, they may
not easily be uploaded online, but a DVD could be sent to the
publisher and that could be put on a secure site.
Dr Sugden: Yes.
There are a number of ways in which it can be done.
So there is an answer to the question that Graham raised about
the specific issue that cropped up in the climate change inquiries.
There would be a way mechanically of doing that, would there not?
One of you mentioned a $300,000 grant.
Dr Campbell: That
was for software. I understood that question to be about software
and not data.
What I couldn't understand about your answer was that that software
must exist, otherwise the researcher couldn't have read his own data.
Dr Campbell: Of
course you can just send people the software, but you will find
that this is not off-the-shelf software. This has been specifically
built for the system. You can't just transport it elsewhere without
doing extra work to make it transportable.
That applies just as much for any piece of laboratory equipment.
Dr Campbell: Yes.
Lots of laboratory equipment is custom made. You can describe
it in your text.
Dr Campbell: You
can describe it, absolutely. The policy that we have with a computer
code is that you do have to describe the algorithm. We do have
a policy of that sort.
Dr Godlee: For
clinical data we have a big challenge, but it is one that we must
face head-on. The journals must move to a mandatory approach.
Presumably, part of the challenge in clinical data is because
of patient confidentiality.
Dr Godlee: That
is a challenge, but when one is talking about large datasets,
confidentiality has already been dealt with, and we should not
use that as an excuse for not looking at this. There are no doubt
practical issues, but it would be great if this Committee were
to give a push to the idea that, nationally, we ought to have
systems for data deposition. The practical
problems will be resolved, as with trial registration, which seemed
impossible five or 10 years ago, and it is now routine.
Do you all offer post-publication commenting for all of your journals?
Dr Godlee: Yes.
Dr Campbell: Only
some of our journals at the moment. We are introducing it.
If there were a growth in post-publication reviews, would there
be a lower expectation of pre-publication review?
Dr Sugden: No.
One doesn't cancel out the other.
Dr Sugden: No,
I don't think so.
Mayur Amin: There
is a fundamental difference between post-publication
commentary as a supplement to the peer review process and
post-publication commentary as a substitute for the peer review
process. I don't think it will act as a substitute because peer
review doesn't just comment on the paper; it helps to improve
the paper. Otherwise you will end up with lower-quality or even
bad science being made public. People may not comment on it, so
a lack of commentary doesn't tell you whether the paper is good
or bad. It will just stay in the public domain.
Dr Godlee: I wouldn't
want this Committee to go away with the view that because we all
nod dutifully and say that we have post-publication peer review,
that is the case across the industry. There are great variations.
Some journals exercise a liberal view, which is the BMJ's
view. Others have a much more editorially tight control over what
gets written, post-publication. In some cases that I am aware
of, critical comment about papers does not get out into the public
domain. The other problem is that even when it does, the authors
often don't respond. One is left with a situation that is far
from perfect. There is a lot of progress with the internet but
it is still not perfect.
Chair: Thank you very
much. We have at least a couple of promises for some additional
information from Mr Amin and Dr Sugden. That would be extremely
helpful. Any other comments that you would like to add would be
extremely helpful. It has been a very interesting morning. Thank
you very much for your contributions.
1 Note by witness: My full name is Dr Elizabeth Wager.
I should have used this on my written submission so that it was
clear when I was citing my own work (which is published as E Wager).
Note by witness: James Parry's correct title is Acting Head of
Ev 126
Note by witness: Research Integrity Futures Working Group report
is available at http://www.rcuk.ac.uk/documents/documents/ReportUKResearchIntegrityFutures2010.pdf
Note by witness: The name of the guide is "I Don't Know
What to Believe..." - Making sense of science stories.
Note by witness: I referred to a journal that had responded to
stem cell scientists' allegations of bias by publishing the names
of the peer reviewers. This was incorrect. The journal in question
is the European Molecular Biology Organization (EMBO) Journal.
It does not publish the names of its peer reviewers, but it has
started to publish the peer reviews. Further detail is available