Peer review in scientific publications - Science and Technology Committee Contents

Examination of Witnesses (Questions 249-319)


Q249   Chair: Good morning, gentlemen. I would be grateful if you would just introduce yourselves for the record.

Professor Rylance: I am Rick Rylance. I am the Chief Executive of the Arts and Humanities Research Council, and the Chair-elect of the Executive Group of the RCUK.

David Sweeney: I am David Sweeney. I am the Director for Research, Innovation and Skills for the Higher Education Funding Council for England.

Sir Mark Walport: I am Mark Walport. I am the Director of the Wellcome Trust.

Q250   Chair: Thank you. Evaluation of editorial peer review is poor. Should you, as funders of research, contribute towards a programme of research to, perhaps, justify the use of peer review in publication and find out how it could be optimised? Could that be something that you could usefully do among yourselves?

Sir Mark Walport: It all depends what you mean by "research". It is quite important to have a very straightforward understanding of what peer review is. Peer review is no more and no less than review by experts. I am not sure that we would want to do a comparison of a review by experts with a review by ignoramuses.

Q251   Chair: That's not very nice, is it?

Sir Mark Walport: Having said that, we do conduct studies of peer review. The Wellcome Trust published a paper in PLoS ONE a couple of years ago in which we took a cohort of papers that had been published. We post-publication peer-reviewed them and then we watched to see how they behaved against the peer review in bibliometrics. There was a pretty good correlation, although there were differences. Experiments of one sort or another are always going on, but the fundamental question of whether you should compare expert review with just randomly publishing stuff I don't think is something that anyone would be very keen to do. It lacks equipoise.

David Sweeney: Through our funding of JISC and through our funding of the Research Information Network, much work has been carried out in this area and we remain interested in further work being carried out where the objectives are clear.

Professor Rylance: Yes. We, too, would be open to trying to think about how that might be researched. We have to bear in mind that peer review is not a single phenomenon. It is peer review in relation to publication, grant awards, REF and so on. Again, there are differences between the natural sciences, the social sciences and the humanities. You would have to define the task a bit more carefully. We do, from time to time, fund research on, for example, the influence of bibliometrics and its relationship to peer review, so work is going on in that way.

Q252   Chair: The Wellcome Trust highlighted a common criticism of peer review by saying: "It can sometimes slow or limit the emergence of new ideas that challenge established norms in a field." Do the others agree and what can be done about this?

Professor Rylance: Churchill once said that democracy was the worst system in the world apart from all the others. I think the same about peer review. Peer review is absolutely crucial, but, of course, it carries limitations of one kind or another in that it can slow things down. The volume of workload and so on and so forth is increasing but, none the less, we need to remain committed to the principle of doing peer review because, in the end, it is always the first and last resort of quality.

David Sweeney: We think that there is a risk, but we also look at the many experiments that are going on with social networking and modern technological constructs. We hope that the broad view that is taken of those will mitigate the risks which the Trust identified.

Sir Mark Walport: To be clear, the Wellcome Trust, in our submission, said: "Other commonly raised criticisms of peer review are…" We didn't say that we agreed with that criticism. The issue is that peer review or expert review is as good as the people who do it. That is the key challenge. It has to be used wisely. It is about how the judgment of experts is used. It is about balancing one expert opinion against another. The challenge is not whether peer review is an essential aspect of scholarship because there is no alternative to having experts look at things and make judgments.

Q253   Chair: If that common criticism has validity, is the growth of online repository journals like PLoS ONE technically sound?

Sir Mark Walport: It is entirely sound. PLoS ONE has very good peer review. Sometimes there is a confusion between open access publishing and peer review. Open access publishing uses peer review in exactly the same way as other journals. PLoS ONE is reviewed. They have a somewhat different set of criteria, so the PLoS ONE criteria are not, "Is this in the top 5% of research discoveries ever made?" but, "Is the work soundly done? Are the conclusions of the paper supported by the experimental evidence? Are the methods robust?" It is a well peer-reviewed journal but it does not limit its publication to those papers that are seen to be stunning advances in new knowledge. It is terribly important to put to bed the misconception that open access somehow does not use peer review. If it is done properly, it uses peer review very well.

Professor Rylance: It is important to distinguish between peer review that is looking at a threshold standard, i.e. "Is this worthy of publication?" and peer review that is trying to say, "What are the best?" when you are over-subscribed in terms of the things you can publish.

Q254   Chair: Should other journals adopt this methodology?

Sir Mark Walport: Other journals are beginning to. Different communities behave in different ways. For example, the physics community has pre-print servers. They put papers out online and those are reviewed. When they have been peer-reviewed by the community to some extent, they are eventually published in their final format. One of the issues in the biological sciences is that the volume of research is extremely high. An important issue in the medical sciences is that an ill-performed study can have harmful consequences for patients. Therefore, there need to be filtering mechanisms to make sure that things are not published that are, frankly, wrong or misconceived, where the evidence is bad and conclusions are drawn that could harm patients. Different communities require slightly different models.

Q255   Stephen Mosley: We have heard that the quality of journals, often determined by the impact factor of those journals, is becoming a proxy measure for research quality. Would you tend to agree with that assessment?

David Sweeney: With regard to our assessment of research, previously through the Research Assessment Exercise and now through the Research Excellence Framework, we are very clear that we do not use journal impact factors as a proxy measure for assessing quality. Our assessment panels are banned from so doing. That is not a contentious issue at all.

Sir Mark Walport: I would agree with that. Impact factors are a rather lazy surrogate. We all know that papers are published in the "very best" journals that are never cited by anyone ever again. Equally, papers are published in journals that are viewed as less prestigious, which have a very large impact. We would always argue that there is no substitute for reading the publication and finding out what it says, rather than either reading the title of the paper or the title of the journal.

Professor Rylance: I would like to endorse both of those comments. I was the chair of an RAE panel in 2008. There is no absolute correlation between quality and place of publication, in either direction. That is, you cannot infer from a high-prestige journal that the work is going to be good but, even worse, you cannot infer from a low-prestige one that it is going to be weak. Capturing that strength in hidden places is absolutely crucial.

Q256   Stephen Mosley: We have had some very good feedback about the RAE process in 2008 and the fact that assessors did read the papers, did understand them and were able to make a subjective decision based on that. But we have had concerns. I know that Dr Robert Parker from the Royal Society of Chemistry has expressed a concern that the Research Excellence Framework panels in the next assessment in 2014 might not operate in the same way. Can you reassure us that they will be looking at and reading each individual paper and will not just be relying on the impact?

David Sweeney: I can assure you that they will not be relying on the impact. The panels are meeting now to develop their detailed criteria, but it is an underpinning element of the exercise that journal impact factors will not be used. We were very interested to see that in Australia, where they conceived an exercise that was heavily dependent on journal rankings, they decided, after carrying out the first exercise, that alternative ways of assessing quality were desirable. That is a very major change for them, and it leaves them far more aligned with the way we do things in this country.

Q257   Stephen Mosley: That is a fairly conclusive response, is it not? Lastly, you were talking about PLoS ONE in answering the Chair's questions. From what you were saying, there is a difference in standard: papers in PLoS ONE might not be in that 5% most excellent bracket but, so long as the work is technically sound and correct, they are in there without being excellent. With the impact factor of those repository journals gradually increasing, does it mean that the proxy use of peer-reviewed publications is an even less valid approach to assessing the quality of research in institutions in the future?

David Sweeney: I think we just don't do that. We are not keen to do that. Every few years we assess how much we can use bibliometrics in a robust way, particularly as you aggregate the information over a large number of publications. At present we do not feel that its role should go beyond informing the expert judgments that are made by panels. We are very conscious of the fact that our research assessment exercise has to go across all disciplines. There would be little argument that the use of metric information is really quite difficult in many disciplines. We are trying to have a consistent way of doing things. We are very keen to be abreast of the latest research but confident that peer review should remain the underpinning element.

Sir Mark Walport: If you are assessing an individual, there is simply no substitute for looking at their best output. If you are assessing a field, that is when you can start using statistical measures. You can start using things like the number of citations. If you look at most funders, they are very focused on asking people to tell them what their best publications are, sometimes limiting the numbers. For our Investigator Awards, we limit the number of publications to people's best 20.

Professor Rylance: Following on from David's point, in my field, in the humanities, the majority of publications are not in journals. They are in other forms like books or chapters in books and so on. There simply is not the bibliometric apparatus to derive sound conclusions for that reason.

Q258   Pamela Nash: Given the importance of peer review in both academic research and publishing, do you think that formal training in conducting peer review should become a compulsory part of gaining a PhD?

Sir Mark Walport: Part of the training of a scientist is peer review. For example, journal clubs, which are an almost ubiquitous part of the training of scientists, bring people together to criticise a piece of published work. That is a training in peer review. Can more be done to train peer reviewers? Yes, I think it probably can. PhD courses increasingly have a significant generic element to them. It is reasonable that peer review should be part of that. People sometimes talk about the opportunity cost of peer review. Peer review is a form of continuous professional development. It forces people to read the scientific literature and it gives a privileged insight into work that is not yet published. Most laboratories would involve, if not their PhD students, their early post-docs in peer review work.

Professor Rylance: I would echo and support that. It seems to me that research is a collective enterprise and that anyone who wishes to enter that field either as an academic or in some other capacity needs to understand that. So an engagement with the work of others of a judgmental or other kind is really quite important as part of that process.

Q259   Pamela Nash: I am aware that the "Roberts funding" provided training for PhD students until recently. Would any of you have any ideas on who could be responsible for continuing that funding for that training?

Sir Mark Walport: That funding is available. For example, the Wellcome Trust funds four-year PhD programmes, so we are providing funding for a longer period. The research councils can speak for themselves, but the four-year model of the PhD is becoming well established and that gives universities the opportunity to provide that transferable skills training.

Q260   Pamela Nash: But should specific peer review training be recommended when that funding is given?

Sir Mark Walport: We are not prescriptive in what universities teach. As I said, that would be a reasonable component of it.

Professor Rylance: Shall I say something about the Roberts funding?

Pamela Nash: Yes, please.

Professor Rylance: The amount we are giving to universities for training and developing postgraduate research will increase, and it will include components which replace part of the Roberts funding. The issue we have to think about is that, on average, around only 25% of the UK postgraduate population are funded through agencies like the research councils. The rest of it is coming through other sorts of routes. How are universities going to provide a system for three quarters of the population who are not getting money from us? There has to be a joined-up conversation about how we develop that.

Q261   Pamela Nash: Thank you. Both Research Councils UK and the Wellcome Trust mentioned in their contributions to this inquiry that it would be favourable to reduce the burden—the bulk of the work—on referees of the peer review process. What would each of you propose to help streamline that process and reduce the burden on referees?

Professor Rylance: I would identify three things and I will say a little bit about each one of them. One thing you can do is manage demand. The burden is increasing, and we recognise that, in terms of both volume and frequency; if you start to reduce the number of applications, that workload starts to reduce and, presumably, the quality of peer review goes up. So how do you manage demand in that situation? You could do it in a draconian way. You could, for example, say, "The quota for this university is whatever it is", based on historic performance. You could do it developmentally, working with universities to filter their own application processes, so that applications which are not going to go anywhere in any reasonable scheme are filtered out at an early stage. Or you could go for what, in the jargon, are called "triage" processes when you receive them: you do a relatively light-touch first-stage application and then filter out the rest.

My personal view—there are differences of opinion about this—is that measures like quotas have quite significant downsides, of which probably the most significant is that they would discourage adventurous, speculative, blue skies applications because, naturally, if you have a quota, people tend to be conservative about what they are putting in in order to try and gain the best advantage. The future of this lies in the direction of dialogue with universities, in trying to develop their processes, share good practice and work with the research councils, other funders and HEIs who are trying to do it. After all, in the end, it is in nobody's best interest to continue in this way. It cannot be the case. We must collectively try and make some headway with this.

David Sweeney: For us it is a volume problem. Obviously, more research is being done and more findings are being produced. We think that the amount that needs to go through the full weight of the peer review system need not continue to increase. Indeed, we are seeing initiatives in that direction. As part of our assessment exercise, we require four pieces of work over seven years from academics. In most disciplines, they will publish much more than that, but they do not submit it to the exercise because we are interested in selectively looking at only the best work. We would want to encourage academics to disseminate much of their work in as low-burden a way as possible, but submit the very best work for peer review, both through journals and then, subsequently, to our system. That is the only way to control the cost of the publication system. We must look for variegated ways of disseminating and quality-assuring the results.

Sir Mark Walport: The first thing is that the academic community is still highly supportive of the fact that peer review is an intrinsic part of the scholarly endeavour. To put some numbers on it, between 2006 and 2010 the Wellcome Trust made about 90,000 requests for peer review. We got about 50% usable responses. The response rate was a bit higher but not every referee's report added value. That is a pretty good response rate, and much of that was international. We used the global scientific community to help review and they do that very willingly. People who are in environments where they know they cannot themselves get a Wellcome Trust grant are, nevertheless, willing to referee for us.

We work hard to reduce the burden. For example, we do some shortlisting of grants by expert committees. So rather than sending out the grants to lots of people to have written comments, we bring an expert committee round a table like this and they do the shortlisting. When you get down to the shortlist, there really is no substitute for written peer review. That is where we use that. We use things increasingly like a college of referees. Instead of every grant going to completely different people, you would have one or two people who would look at several grants. We are constantly trying to make the process more efficient. At the end of the day, the system is not broken, it is working and it is an important part of scholarship and research.

Q262   Pamela Nash: As a result of those answers, can I ask two more specific things? Would you say that reducing the burden is just a matter for the entire academic community or is there one group of people who are particularly responsible for that? Also, in the last few weeks, we have heard from publications that are using the cascade system to pass on submissions that they might not publish themselves. Do you think there is any value in that in reducing the burden for academics?

Sir Mark Walport: If I may start on that, peer review, of course, is used for different purposes. The predominant reason why the Wellcome Trust uses peer review is to make decisions about whether to fund a grant or not. Clearly, a cascade system is not appropriate for that. In terms of journal publishing, for a publisher that has a stable of journals, that may work, but then it is up to the authors of the work as to whether they want their paper, if it is rejected by journal A, to go to journal C in that series or whether they would rather try a different publisher. In principle, it is a good idea, but I think its effectiveness is yet to be fully tested.

Professor Rylance: I would like to amplify that. What we are talking about from the research councils' point of view is how you decide to fund this grant application rather than that one. We are not deciding "Should this be published or should that be published?" Again, if the focus is on journals, it is important to recognise that some disciplines don't primarily publish through journals and do engage with publishers about, for example, books. There are IP issues and the rest of it to do with cascade systems. It seems to me that that is an issue for the publishers rather than ourselves.

David Sweeney: We have a proper concern because of the cost of the system to universities and the way in which that is inflating above most other costs in universities. We are very keen to look at every way there is of reducing the burden. The cascade system is interesting, but we do not think that this is something where our funders should be prescriptive. It is a collaborative exercise between the community, the funders and the universities, and we should work together on that.

Q263   Pamela Nash: You have been very diplomatic. I wanted your opinion, although you are not directly responsible. Finally, do any of you have any ideas about how taking part in peer review can be formally recognised? Would you support a form of accreditation? Do you think there are benefits in that?

Professor Rylance: There are two points. One is whether I think that peer review should be part of professional development for researchers. The answer is, resoundingly, yes, because that is the world they are moving in. It is quite important that their employers recognise just how much labour is put into it and how important it is, not just to researchers' personal benefit but to the general benefit. Should it be accredited? There would have to be quite a complicated cost-benefit analysis on that. My instinct is that it is probably not worth whatever cost, plus the distraction and labour time, such an exercise would produce.

David Sweeney: I agree.

Sir Mark Walport: I think this is one of those things where it is easy to say that you need to give people recognition for peer review. The reality is: are you going to promote someone from a lectureship to a senior lectureship, or from a senior lectureship to a readership, on the basis of peer review? You are not going to do that. You are going to do it on the core scholarly activities, which are education and the research itself. It is something that the community has to recognise. It is beneficial to do peer review. As I said before, it is part of your continuous professional development. It is about keeping up to date with the field. It is not broken. I think that system works. You could ask the question, "Should one pay directly for peer review?" That is difficult. Academics do all sorts of things and they are not paid per unit item that they do.

Professor Rylance: Within the academic and research community at large there is a broad consensus that if you did not do certain kinds of activity—peer review would be one and external examining would be another—the whole system would be placed in jeopardy. So there is a general recognition of that, and a willingness to support it.

Q264   David Morris: Professor Rylance, have Research Councils UK and Universities UK withdrawn funding from UKRIO, and, if so, why?

Professor Rylance: It is quite a complicated tale, so forgive me. The original RIO was set up primarily with a remit for the biomedical sciences. It was set up on a fixed-term basis through a multi-agency system, which I am sure you are aware of, that included not just the funding councils, research councils and the Department of Health, but also Wellcome and other bodies. When that came to the end of its term, we had to make a decision about whether to continue. In other words, funding had stopped. It was not a question of withdrawing it. Do we continue that funding or do we not?

There was a sense of two things. One is that it was really important to establish a body with a remit covering a broader range of disciplines than was the case with the original RIO. Secondly, we needed to disentangle various sorts of functions which were caught up within that original body. Could one be, for example, both a funder and an assurer, because you are clearly in quite a complicated relationship? Also, could you be both an assurer and an adviser, because, clearly, if you are giving advice which then turns out to be wrong, you would then be policing your own mistake at some level.

We had quite a hard look at this in tandem with all of these bodies. Dame Janet Finch chaired a body that has produced a report on it. The general conclusion was that, in its format at that stage, RIO was not going to meet the sorts of needs that I have just described. We continued its funding for a little while and we are now thinking about different ways in which we can put together a collective agreement on research integrity in the UK, probably through a concordat-style arrangement. The key player in this, just to complete the story, will be Universities UK. The reason why Universities UK are key to this is that they are not themselves funders of research.

Q265   David Morris: Are you saying that it is moving more towards the subscription funding model? Is this a necessary change?

Professor Rylance: It will be a subscription model in the sense that it will involve a series of agencies that will participate in the funding of it. Will that be a subscription based on very specific activities, where you can buy in for this bit but not that bit? We are a long way from agreeing that at the moment. There is a genuine sense among the bodies that I have just described that we need a cross-disciplinary and cross-organisational arrangement to provide assurance and link up the various sorts of assurance mechanisms that each funder has, to look at consistency and so on and so forth. That will be done, as I have described, through a concordat arrangement largely run through UUK, but that is as far as we have got at the moment.

Sir Mark Walport: May I comment on that? The Wellcome Trust was fully supportive of Research Councils UK on this matter. Research integrity is important. There is no argument and no debate about that. The question is where the responsibilities lie for ensuring that it happens. We believe very strongly that the responsibility for the integrity of researchers lies with the employers, so by and large that is the universities for university academics. It is clearly the research institutes for people employed by research institutes. That is why we support moving to a concordat between research funders and the employers whose researchers we fund that it is their responsibility, in the same way that health and safety is a responsibility that is delegated to employers. Frankly, we did not believe that UKRIO in the form that it was constituted was delivering what we needed.

David Sweeney: We are entirely supportive of that. This is something we have got to get right. We can only get it right by being collaborative. I do not think that funding is the core issue. Research Councils UK, working with Wellcome and the UK funding bodies, will support universities through what is needed. Of course, we have a broader assurance role in regard to universities. We are very keen that that should play in full support of the work that we are doing collectively.

Q266   David Morris: What do you think about the recommendation to create a new research integrity body, when one already exists, and should the body in the UK responsible for research integrity be a regulatory body with formal legal powers? Do you think that should be the case?

Sir Mark Walport: No. Let me be clear. UKRIO was not delivering what we needed.

Q267   David Morris: What are the potential future sources of funding for an organisation such as UKRIO? Could any of those sources compromise its independence?

Sir Mark Walport: But it is not clear, with respect, whether a third party body is needed to do this. This is an intrinsic responsibility of an employer. It is not something they should be delegating to somebody else. The integrity of the research is absolutely intrinsic to the good functioning of the university or the research institute. This is a responsibility that they must have.

Professor Rylance: The issue is not whether assurance should or should not happen. Clearly, it has to. If you give money to a body to do certain things, you must have steps in place to test that that is being used appropriately so there are assurance mechanisms. The issue is how we get consistency and joining-up between the different funders and agencies. That is the problem at the moment. There is no appetite for regulation in this at the moment for the various reasons that people have given.

Q268   Chair: Can I just be clear? In the report of the UK Research Integrity Futures Working Group there was a recommendation to create a single independent body to lead on "research integrity across all disciplines"—I think that was the phrase—and across all research establishments. Are you supportive of that concept?

Sir Mark Walport: No, I am not sure that I am. I believe that this is a responsibility of individual establishments.

Q269   Chair: I thought that is what you were saying. What about research councils?

Professor Rylance: We want a framework that is applicable in its different modes to different sorts of projects and disciplines. The situation in the old RIO—there is a successor body—was that it was only affecting a part of the community. Increasingly, there are cross-disciplinary projects which need attention across the piece. That is our anxiety.

Q270   Chair: The report, of course, was published by RCUK and UUK in September, but there are differences of opinion about what should happen. Where is there agreement? Is there agreement that it should be dealt with by a single body, or is that controversial?

Sir Mark Walport: Let me try. First, there is agreement that research integrity is extremely important. There is no argument about that. There is also agreement that this is a fundamental institutional responsibility. As to whether an added body is needed, the question is what form that takes. There is certainly a need for a common repository of skills and information as to the processes you might go through. Whether there is any need for some sort of external quasi-judicial body, there may not be agreement on that. As I have said to you, my opinion is that that is not needed.

Professor Rylance: I entirely support that last point. There is no appetite for trying to find a regulatory body. But there is significant progress in two respects. One is towards developing this concordat that could then shape the activities of the various responsible agencies. The second is in terms of data and process-sharing between the various parties. If we, in RCUK, are doing a certain kind of thing that we would like to commend to other people, we will share that with others. Whether they choose to take it up is then their own business.

Q271   Gavin Barwell: Can I just ask a question on the timing? If I heard you correctly, when you were responding to Mr Morris, you said that you are still quite a long way from a final solution to this. You have stopped funding UKRIO at this point. What is the timeline?

Professor Rylance: We have provided some transition funding for UKRIO. I cannot remember exactly when that ran out, but it was about the end of the last calendar year or thereabouts. At the moment, there is the continuing activity by each of the separate funders monitoring their own projects. An early meeting, which included representatives from all three parties here, agreed on these core principles about information-sharing and concordat. The second meeting to try and work out the details of that is currently in the planning stage. That is where we are at the moment.

Q272   Gavin Barwell: Could you give the Committee an idea of when, roughly, you think this might be finalised and the concordat will be in place? I know you cannot predict exactly but could you give a ballpark idea?

Professor Rylance: I would be disappointed if we were getting too far into the late autumn and this thing was not in at least a fit-for-purpose stage.

David Sweeney: That is my understanding.

Professor Rylance: Let me stress that the one body that is charged, for the reasons I have given, with taking this forward organisationally is not here, which is UUK.

Q273   Graham Stringer: Sir Mark, can I follow up your answer to David? At our last evidence session we had the Pro-Vice-Chancellor responsible for research at Oxford here—I could give you the exact quote but I will not read it—who, basically, said that in his experience there had not been an occasion when they had had to investigate somebody for fiddling their results for fraudulent practices in research. On the other hand, we had another witness who told us that, if research institutions had not sacked at least one person, then they were not trying. Taking Oxford as an example, if you take your assertion that it should be the employers, that indicates that the employers are not carrying out that job. Certainly, in the case of Wakefield with the MMR scandal, the employers of Wakefield did nothing. I will now come to my question. Doesn't that mean to say that there has to be a huge change in employers' practices if your view was to be maintained?

Sir Mark Walport: Employers are responsible for the integrity of their employees in all sorts of aspects of life. They are responsible in business for making sure that they do not commit fraud and that the accounting is done well. I can't possibly comment on whether individual universities are immune from the malpractice of their employees. I do not think it alters the fact that, as in health and safety, and all sorts of other aspects, such as the good behaviour of employers in respect of how they deal with students, this is an employer's responsibility. Increasingly, universities are taking this very seriously. Of course, you can pick examples of where things go wrong. You can pick examples of where peer review hasn't worked well. The Wakefield sad story is a very good example of that. That paper should never have been published. But that is not an argument against organisations doing it well. In a sense, the importance of the concordat will be that it sets out in extremely clear terms what the relationship is and what the roles and responsibilities of universities as employers are for the integrity of their employees.

Q274   Chair: It is clear that the universities would have responsibilities, but, taking your two examples of health and safety or fraud in conducting their business, in both of those instances there is an external regulator with statutory powers.

Graham Stringer: Precisely.

Sir Mark Walport: The question is what those statutory powers should be. Ultimately, it is clear that a scientist who has committed some form of scientific fraud, if I can put it that way, should lose their job. Does that then fall under some other regulator? Is it something that the courts should deal with? Probably not very often. In the case of medical research, Andrew Wakefield eventually met his come-uppance at the General Medical Council. There are ways of doing this.

Q275   Graham Stringer: But he did not, did he? He was struck off for bad ethical practice. The General Medical Council did not deal with whether his research was fraudulent or not. In a sense that is a bad example. If I can repeat Andrew's point, yes, it is the employers' responsibility, but who is going to keep the employers good?

Sir Mark Walport: That is where the funders will play a very serious role. We take research integrity very seriously as well. It is a grant condition that the work is done properly. From our perspective, in relation to an institution that failed to manage research integrity properly, we would have to question whether that was an institution at which we could fund research. It is not that we don't take it seriously, but we believe that the mechanism for dealing with this has to be through the employer. Frankly, if the employer is unaware of things going wrong in the research, it is difficult to see how others would be aware and the employer would be completely unaware. There are also whistleblowing procedures. As I say, a well-constructed concordat should make it absolutely transparently clear what the responsibilities of the employer are, whoever it is. We need to make sure that the employer takes that seriously, as they take all other aspects of employees' behaviour.

David Sweeney: In England, as the charities' regulator for most universities and as a regulator under the Act, universities are required to report incidents to us and we monitor the way in which they handle incidents. Actually it is routine.

Q276   Pamela Nash: If I could take up that point, without an external regulator—you have just said that funders have a responsibility here on who they fund—surely, that is then an incentive for an academic institution to keep things quiet so that they don't lose funding.

Sir Mark Walport: Not at all. It is the nature particularly of scientific research that errors are found out, and it can't be in the interests of any good university not to have the research done to the highest possible standard. As David has pointed out, there is a regulator. There are major funding sources that have substantial sanctions. There is no incentive to cover up.

Professor Rylance: I would like to make two quick points. One is that the public visibility of data and research is quite important. It is one good argument for open access, in my view. The second issue is that in the 18 months or so that I have been part of the AHRC I have had, perhaps, two or three occasions where relatively minor malpractice has been reported. The institutions involved have acted very readily. There is a working system between the funders and the institutions.

Q277   Graham Stringer: That is precisely the point I was going to move on to, which is access to data. Can I do it by reading a quote from last week's Scientific American, which makes the point really well? I would be grateful for your comments. It is by John P.A. Ioannidis: "The best way to ensure that test results are verified would be for scientists to register their detailed experimental protocols before starting their research and disclose full results and data when the research is done. At the moment, results are often selectively reported, emphasising the most exciting among them, and outsiders frequently do not have access to what they need to replicate studies. Journals and funding agencies should strongly encourage full public availability of all data and analytical methods for each published paper." Do you agree with that and do you follow those policies?

Professor Rylance: I do not work in a science area so I will defer to my colleagues here. The answer is yes; I endorse the broad principles of that. The one slight reservation I would have is that, quite often, research is a process of discovery and you don't quite know at the beginning what the protocols and procedures are that you are going to use, particularly in my domain. I would have a slight reservation about that, but the principles are right.

Graham Stringer: Fair point.

Sir Mark Walport: This is one of the arguments in favour of good peer review, because a good peer reviewer when reviewing a scientific paper actually probes and says, "Where are the controls? Where is the missing data?" That is the first thing. Secondly, we do explicitly ask investigators when they are generating datasets how they will handle the data. In general terms, we do encourage openness. In fact, at the moment there is a Royal Society inquiry on openness in science which is looking at the whole issue of openness of data. One has to recognise that there are both real costs and opportunity costs. Data is not an unalloyed good, as it were. It is something that has to be interpretable. It is quite easy to bamboozle by just putting out billions of numbers. It is actually a question of presenting the data in a way that is usable by others. But the principles of openness in science, of making data available and open, are something that the Wellcome Trust and other funders of biomedical research around the world are fully behind and completely supportive of.

Q278   Graham Stringer: Is what lies underneath that answer that you believe that codes, computer programs and all the data that would enable other researchers to replicate the work should be made available publicly?

Sir Mark Walport: Bearing in mind the feasibility and garbage in/garbage out, one has to be careful that the data is usable. Yes, increasingly very large datasets are generated. We want to maximise the value of the research that we fund. Therefore, openness is a very important principle. There are some other issues that need to be dealt with as well, so if you are dealing with clinical material then the confidentiality of participants is paramount. You have to manage data so that they are appropriately anonymised and people cannot be revealed. It has to be in the general interest of the advancement of science and knowledge. As you say, science is validated by its reproducibility. If you cannot see the data, that is a problem. Of course, the revolution of the power of the internet to make data available has meant that it is possible to put out data in ways that were never possible before.

There are no new principles. The way a scientific paper is structured is that it has a materials and methods section, which should set out the work in sufficient detail for anyone else to be able to reproduce it. There is nothing new here. Broadly, it makes complete sense to make as much data available in as usable a form as possible. That is something that we strongly support. It is why the funding of institutions like the European Bioinformatics Institute, which is housed at Hinxton, is so important. The UK Government has a good track record in supporting the EBI and funding has recently been announced for an extension there as part of the European ELIXIR project. Making data available is something that is incredibly important.

David Sweeney: We believe in openness and efficiency in publicly funded research. Dr Malcolm Read took you through some of the issues at a previous hearing. We have funded and continue to fund projects that will push this area forward—UKRDS—and now some projects are looking at how cloud computing can help. Of course, we have learnt a lot from the research councils; the ESRC data archive has been a stunning success over many years. As Sir Mark says, the principles are all there. Technology is now allowing us to make advances, and through the work we fund we will learn a lot. Our objective is openness.

Q279   Graham Stringer: Where research is publicly funded, if I can paraphrase what you say, you are saying that the data should be publicly available. If there are good reasons for it being confidential, do you think it should be made available in a confidential depository to the reviewers and, potentially, for other researchers so that it is available in some form?

David Sweeney: That requires consideration of the particular circumstances and the sensitivity. Reviewers should have access to all the information. They need to assure themselves of the quality.

Professor Rylance: You start from that principle and then think about why it is that you should not reveal something, rather than starting from the position that you should not make it publicly available and then thinking of exceptions.

Q280   Graham Stringer: You have mentioned that you could have a huge dataset. Some of it may be good data and some of it may be rubbish. Are there real problems of costs and, if there are, who should pay for those costs of storage? Are there any other practical problems of storing huge datasets?

Sir Mark Walport: There are very major costs. For example, the Sanger Institute this year alone has generated 1,000 human genome sequences. That is a massive data burden. Indeed, the costs of storing the data may in the future exceed the costs of generating it. Who should be responsible for doing that? It is, ultimately, a research funder issue, because we fund the research and so we have to help with the storage. It is like all of these things. Our funding is a partnership between the charity sector and the Government and it is a shared expenditure.

Professor Rylance: There are issues as well about obsolescence. At what point does this data become simply not relevant any more? The length of time for that will be discipline-specific and so on. There are a whole host of practical issues about how you do this. IP—intellectual property—is one, particularly in my area, to do with creative works, for instance.

Q281   Stephen Metcalfe: I would like to turn now to the importance of articles versus journals, if I may. As I know you are aware, PLoS ONE instituted a programme of article level metrics. Do you believe that that is a good way to judge a piece of published science and, therefore, you are judging it on its intrinsic merit rather than the basis of the publication that it is in?

Professor Rylance: Yes, absolutely. To echo what we were saying earlier on, it is intrinsic merit that we are after. It is not reputational or associational value.

David Sweeney: I am not entirely sure that I would say that article level metrics necessarily capture intrinsic merit. We should look at metrics of all kinds and try and judge where the collection and development of the metric does add value. As you drill down to individual articles, some metrics really are not entirely helpful. We have seen that with some solid evidence in bibliometrics. Equally, we can see, with some of the networking metrics, that they may provide helpful information. I remain of the view that there will be no magic number or even a set of numbers that does capture intrinsic merit, but one's judgment about the quality of the work, which may well be in the eye of the beholder, may be informed by a range of metrics.

Sir Mark Walport: I completely agree with David Sweeney on that. You can alter the number of times that an article is downloaded merely by putting some words in the title. There is good evidence that the content of the title influences the number of times that something is downloaded, so measuring download metrics can be very misleading. Different fields have different types of usage. Methods papers, typically, are extraordinarily heavily cited. There can be a long time before the importance of a paper is picked up. It is like all of these things; at a mass scale the statistics are helpful. If you want to assess the value of an individual article, I am afraid that there is no substitute for holding it in front of your eyes and reading it.

Q282   Stephen Metcalfe: You don't see the article level metrics as a potential threat to the more established high impact journals.

Sir Mark Walport: They are not a threat. Web-based publishing brings new opportunities, because it brings the opportunity for post-publication peer review and for bloggers to comment. There are things like the Faculty of 1000, which provides commentaries on papers. There are more and more ways of finding papers among a long tail of publications. This is a fast-evolving space. As the new generation of scientists comes through who are more familiar with social networking tools, it is likely that Twitter may find more valuable uses in terms of, "Gosh, isn't this an interesting article?" All sorts of things are happening. It is quite difficult to predict the future. It can only be an enhancement to have the opportunity for post-publication peer review. It has turned out to be quite disappointing in that scientists have been surprisingly unwilling to post detailed comments. When the Public Library of Science started, it had plenty of space where you could comment. Academics are remarkably loath to write critical comments about each other alongside the articles.

Q283   Stephen Metcalfe: Does anyone else want to add to that? Is this move away from traditional publishing a threat to the journals themselves?

Professor Rylance: No. I, personally, do not think it is a threat. There are two issues here. One is the recognition of merit. I entirely agree with my colleagues that, in the end, you have got to read the bloomin' thing to see whether that is true. Then there is the issue about how people gain access to the good and the strong. That is a slightly different question.

David Sweeney: I don't care if they are a threat to the base journals because the journal ecology will develop based on competition and alternative ways of doing things. I am sure they will respond. In some ways, I hope they are a threat.

Q284   Stephen Metcalfe: You touched upon scientists being unwilling to get heavily involved in post-publication peer review. Philip Campbell from Nature told us that that may well be—I am summarising here—because there is no prestige or credit attached to that particular role and there is the risk of alienating colleagues by public criticism. Do you agree with that? Do you think that there should be a system of crediting people?

Sir Mark Walport: There are two separate issues. There are some very interesting community issues here. In the humanities, there is a long tradition of writing book reviews where one academic is scathingly rude about another academic.

Professor Rylance: That is constructive.

Sir Mark Walport: That is felt to be more constructive than being insipid. In the case of the scientific world, that tearing apart is done at conferences and at journal clubs. The scientific community does not have a culture of writing nasty things about each other. This is an evolving world.

Q285   Stephen Metcalfe: So introducing a system of credit—

Sir Mark Walport: On credit, I think one has to be realistic. Are you going to promote someone on the basis of the fact that they wrote a series of comments on other scientific articles? The hard reality is that the core activities of an academic in terms of their promotion and pay recognition are going to be around their own scholarship and their own educational activities. It can only be at the margins that you will get brownie points for having done post-publication peer review.

Q286   Stephen Metcalfe: Finally, if post-publication commentary were to grow, are you concerned about how you could ensure that there was no bias in that commentary, either positive or negative, either those wanting to build up someone's reputation or those wanting to tear it down without anyone actually challenging them?

Sir Mark Walport: It is quite clearly a risk. We see that in every other walk of activity on the internet. You have only got to look at the world of blogs, Twitter or anything else. Openness brings its own risks. If anyone can comment, then they can all say what they want, so of course there are risks like that.

Professor Rylance: You could end up in a rather ludicrous regress of having to peer-review the post-review, and the rest of it, to find out whether it has worth. Sir Mark was talking about the way the humanities review each other's work in print. Of course, one function of the journals that do that is to act as a quality filter to make sure that nothing defamatory, inaccurate or prejudiced is being said. Clearly, if those filters are removed, there is a danger that people will be relatively unbuttoned about things.

Sir Mark Walport: It is self-correcting in that the scientific community is constantly scrutinising each other. A scientist who wrote something that was particularly egregious would be subject to the peer review of their own community.

David Sweeney: I think those risks exist but there are benefits. We will have to adjust to the use of social networking in this area.

Stephen Metcalfe: Thank you very much.

Chair: I am sure, gentlemen, that a lot of what we say in here will be subject to comment in the social media as well. Thank you very much for a valuable session.

Examination of Witnesses

Witnesses: Professor Sir John Beddington, Government Chief Scientific Adviser, and Professor Sir Adrian Smith, Director General, Knowledge and Innovation, Department for Business, Innovation and Skills, gave evidence.

Q287   Chair: Good morning, gentlemen. Sir John and Sir Adrian, thank you very much for coming in this morning. You are familiar with the piece of work that we are undertaking. We have heard that researchers perceive peer review to be "fundamental to scholarly communications". Is peer-reviewed literature also fundamental to the formation of Government policy?

Sir John Beddington: Good morning, everyone. The answer to that question is that scientific evidence is clearly fundamental to Government policy and peer review is a fundamental part of scientific evidence. That is not meant to be a cute response, but it is absolutely clear that the process of science involves peer review, and properly so, and that scientific evidence is essential for the evidence-based policy of the Government.

Q288   Chair: Is the proxy use of the impact factor of peer-reviewed publications to assess the quality of researchers and institutions a useful approach? Does it result in pressure on researchers to publish in high impact journals? Is it good for science?

Sir John Beddington: I would turn to Adrian to comment on that.

Sir Adrian Smith: It is a little circular, is it not, because why would a journal be designated as high impact? It will be related to the quality of the journal, which, in some sense, will be related to the selectivity of the journal, which will be related to the fact that it is sifting out, to some extent, the cream of the things that are submitted to it. I do not think any of the processes that we have relating to the RAE and so on actually builds in, in any formal sense, some kind of measure of impact factors. In different disciplines and communities, there will be a very clear peer group sense of the ranking of journals, which ones are more difficult to get published in and so on and so forth. They are all related back, essentially, to quality as perceived by the peer group.

Q289   Chair: Do you see the failure to get published in a high impact journal as a failing on the part of the researcher?

Sir Adrian Smith: It is a rationing process, is it not? If you take conventional journals, each issue will have a certain number of pages and a certain amount of space, so the editorial board will be sifting the best of what it has. It does not mean at all that the one that did not get in might not be a very valuable paper. There is, certainly, a knowledge in most disciplines of which journals are more selective and harder to get into than others.

Q290   Chair: Evaluation of editorial peer review is poor. Do you think that there is a need for a programme of research in this area to test the evidence for justifying the use and optimisation of peer review in evaluating science?

Sir Adrian Smith: The short answer is no. It is an essential part of the scientific process, the scientific sociology and scientific organisation that scientists judge each other's work. It is the way that science works. You produce ideas and you get them challenged by those who are capable of challenging them. You modify them and you go round in those kinds of circles. I don't see how you could step outside the community itself and its expertise to do anything else. You have probably had it quoted to you already, but there was a paper in Nature in October 2010 in which six Nobel Prize winners were asked to comment on how they saw the peer review process. Basically, it was the old Churchillian thing that there are all sorts of problems with it but it is absolutely the best thing we have.

Sir John Beddington: Peter Agre makes that point in that same article, saying: "I think that scientific peer review, like democracy, is a very poor system but better than all others."

Q291   Chair: That is twice that that has come up today.

Sir John Beddington: Sorry.

Sir Adrian Smith: That is no reflection on the Committee.

Sir John Beddington: Absolutely not; perish the thought.

Q292   Stephen McPartland: I would like to ask you about Government use of peer review research. The US Congress has codified the use of peer review in Government regulations using the "Daubert Standard". In the US, the Supreme Court codified their use in the courtroom. Have you had any discussions with your American counterparts regarding how this works and what any of the benefits are?

Sir John Beddington: I think I probably could answer this. We would not see particular merit in excluding non-peer-reviewed information, because we have to recognise that there is a whole set of information that comes in as Government makes policy, some of it via the media—for example, evidence that is coming in to deal with emergencies. A blanket exclusion of that kind I don't think would be helpful. The issue is obviously going to be that, when we provide scientific advice to Government, there will be a weighing of that advice, and the fact that certain advice is peer-reviewed and appropriately so, or indeed has been highly cited in a praiseworthy way, will go into the balance of that advice. I think I would advise against a piece of legislation saying that only peer-reviewed evidence would be considered. One would also have to question the definition of peer review and so on. I don't think it would be something that I would be recommending to Government to think about adopting.

Q293   Chair: In the case of an emergency—I do not know how you are gathering evidence about, for example, the E.coli outbreak—that is happening in real time and, presumably, cannot be subject to any form of peer review. You have to make judgments on it.

Sir John Beddington: Very much so, Chair. That will always be the case. In other sessions of this Committee we have talked about scientific advice in emergencies. What is important is that the basis of that scientific advice is transparent after the event, but when events are happening in real time we are not going to be able to get a proper peer review of DNA sequencing of this new E.coli outbreak.

Sir Adrian Smith: There is an implicit peer review, however, because the individuals on whose judgments you draw for that short-term thing when you are not doing a proper peer review in some sense have risen to the surface as the experts through the fact that they have been peer-reviewed to death in their normal working scientific life and have emerged as the people with tremendous track records. There is an implicit peer review filtering of who you get the advice from.

Q294   Stephen McPartland: Do you believe that a test should be developed to identify whether or not peer review is reliable? This Committee recommended in 2005, in a report entitled Forensic Science On Trial, that a test for expert evidence should be developed, building on the US Daubert test, and the Law Commission has now built on that and published a draft Criminal Evidence (Experts) Bill.

Sir John Beddington: I would think that this has to be thought about on a case-by-case basis. Peer review is not a homogeneous activity. If one is starting to see that there are, for example, problems of peer review in a particular journal or in a particular area of science, that needs to be addressed by that journal and by the people who work in that particular area of science. If you posed the question, "Is the peer review process fundamentally flawed?" I would say absolutely not. If you asked, "Are there flaws in the peer review process which can be appropriately drawn to the attention of the community?" the answer is yes. From time to time that will happen and that's the way to do it.

Sir Adrian Smith: And there will, from time to time, be misjudgments in that system. You can distinguish the system from particular cases within the system.

Q295   Stephen McPartland: Are UK scientific advisory groups mandated to use peer review?

Sir John Beddington: No, for the very reasons I gave in my answer to the Chair's earlier questions. We would certainly always take into account peer-reviewed information in providing advice to Government. I don't think we would ever exclude it, but that would not be the sole evidence. In fact, some of the evidence that would come in would depend on the area of science. For example, in a large part of social science the scholarship is developed by the production of books, quite often well after the event. Yet social research is extremely important to Government policy. We would have this but it would not necessarily have been published in a social research journal. By contrast, for example, if we are thinking in the context of some work on genomics, then one would be expecting that to have been peer-reviewed and that would be going into the evidence. Again, I just don't think that one would seek to make regulation. I emphasise again that the scientific evidence we use, including social research evidence, will sometimes be peer reviewed. Obviously, we would not seek to exclude peer-reviewed material, but we would not wish to exclude material that had not been peer reviewed, for these sorts of reasons.

Q296   Roger Williams: This is a fundamental question. In your opinion how well does the peer review process validate the assertions made in articles put forward for publication?

Sir John Beddington: In a sense, both Adrian and I have answered that question earlier. Peer review does not guarantee that the results are correct. Science moves on by its use of scepticism and challenge. We see all the time in the journals that are published this week that there will be people who have challenged peer-reviewed papers that were published some years ago and pointed out fundamental flaws in them or new evidence that undermines the conclusions of those papers. That is the progress of science. We can't say that it is a guarantee, and manifestly not.

  We can say that it is an awful lot better than bare assertion without evidence. Particularly when you are looking at scientific issues that are fundamental to policy—I have talked about this to this Committee before—the emergence of scientific consensus is very important. That is not to say you do not have sceptics or appropriate challenges, but peer review does not guarantee that and it never could.

Sir Adrian Smith: It does have a lot of checks and balances in the system. In a past life I spent a lot of time refereeing mathematics papers for journals. In some sense, your own personal reputation does depend, as a reviewer, on not letting through things which are incorrect. The whole system and the direction of travel is to filter and get it as correct as possible, but it can never be a guarantee that you don't miss something.

Q297   Roger Williams: Today, and increasingly, I guess, in the future, submissions in science will be accompanied by very large and complex sets of data. Do you think that the reviewers should be assessing that underlying data as well as the article that is being produced?

Sir Adrian Smith: In an ideal world, but that is rather difficult, is it not, because data will come out of laboratories and field studies. As a reviewer, you can't go off and replicate that. If you are trying to study somebody's derivation of a mathematical formula, you can replicate it. The scientific argument and the data are rather different things, but the protocols that are in place for collecting data—for example, in medicine, in conducting proper clinical trials and all the rest of it—are in an environment where all the pressures and checks and balances are to get that right.

Q298   Roger Williams: The problem is getting access to that rather than the burden that is put on the reviewer in doing it.

Sir Adrian Smith: Yes. There is a great movement now and a recognition of openness and transparency, which has always been implicit as a fundamental element of the scientific process. But the more we collect large datasets, the more you have to give other people, as part of the challenge process, the ability to revisit that data and see what they make of it with openness and transparency. There is general support these days for the presumption that the research, the associated data and, if you have written computer code to assess it, the code should all be available and up for challenge, testing and validation. In fact, the Research Councils explicitly encourage that, as Government Departments do. However, there can be complex and legitimate reasons for not necessarily, at least in the short term, being that transparent. An awful lot of policy in recent years has meant that we have been trying to lever more out of public investment by joint working with business and industry and levering additional funding. Once you get into that territory, you do have commercial and intellectual property constraints, on a temporary basis at least, on openness and transparency. The presumption is that, unless there is a strong reason otherwise, everything should be out there and available.

Sir John Beddington: Adrian has made a good point that in some of the areas some things are, arguably, not even replicable. For example, field studies are taken at a particular point in time and things may move on. In that case, the first key is to examine the basic methodology for the study and that would be subject to peer review. But in terms of saying, "Did they really do what they said they did in the methodology?" it is impossible to do that in certain areas of science. On the other hand, if something is coming up in a very odd way, it is highly likely, over a period of time, to be very significantly challenged.

Q299   Roger Williams: Sir John, the Government is, obviously, a very substantial funder of science. Should it, as a matter of principle, require that all this raw data should be made available?

Sir John Beddington: Adrian has made a parallel point. With Government-funded science, the push is to have data out in the open. There are some areas, for example shared data, where you have a mix of data and some of the ownership of that data lies outside the UK. You cannot make a hard and fast rule. In principle, though, the answer is that the more people who will look at the scientific problems from which we are wanting to get evidence, the better. Therefore, transparency is, obviously, extremely attractive. From time to time, there will be timing issues, IP issues and so on, which will mean that transparency can be problematic. In the area we were looking at—the community of Chief Scientific Advisers deals with this a lot of the time—we would be looking at material, and if it was not out in the open they would ask why not. If there is no good reason, they would urge that it be put out into the open. Indeed, the Research Councils push exactly along these lines.

Sir Adrian Smith: There will always be issues of personal data protection, commercial interests, intellectual property and national security, so the situation is quite complex. I understand that the Royal Society will be doing a study some time over the next 12 months that the Committee may well be interested in.

Q300   Roger Williams: I think there is agreement that this data should be made available, subject to all the concerns that you have expressed about IP and commercial interests. Another matter is the cost of all this. Who should bear that cost if it is going to happen on a greater scale than it has in the past?

Sir Adrian Smith: That is one of the issues that the Royal Society may well look at. Different communities, different cultures and different forms of data pose different issues, but there is a real problem. Yes, you are right.

Q301   David Morris: Gentlemen, should taking part in formal peer review training be a requirement of gaining a PhD, and who do you think should pay for that training now that the "Roberts funding" has ended?

Sir Adrian Smith: I don't think that one size fits all in this situation. We have to allow a lot of scope for particular research organisations or supervisors to decide on what is appropriate. Peer review training is already part of the Research Councils' postgraduate training. There is a formal expectation that students—I am going to quote—"obtain an understanding of the processes for funding and evaluating research." The terms and conditions of training grants actually put some of this in. If you think about it, if you are doing a PhD, you have to read and access a lot of literature and synthesise it. In fact, it is part of what I said earlier. It is an inherent part of the scientific process itself that you are constantly peer reviewing in a way. Every time you read something you are re-evaluating it and seeing how it fits into what you are doing. I, personally, do not believe that any form of accreditation scheme is necessary. The effort that has gone in over recent years on the part of the Research Councils to codify better their expectations of what research training should consist of, and to make that part of the conditions when they give out either doctoral training grants or research grants, takes us most of the way. I do not think there is much that we could do in going further.

Sir John Beddington: I would add that a number of universities have exercises where PhD students and some academics examine individual papers. In that case, everybody goes away, reads a paper over the weekend and then they have a meeting and discuss and critically appraise that paper. That is part of the process. Obviously, that practice will differ between universities and subject areas.

Q302   David Morris: What do you think the Government can do to encourage formal recognition of the peer review element of researchers' workloads? Should a formal accreditation scheme be introduced, in your opinion?

Sir Adrian Smith: In my opinion, no.

Sir John Beddington: I agree. I don't think there is much merit in that.

Q303   David Morris: Do you think that steps should be taken to streamline the peer review process and help reduce the burden on researchers? If you do, who is responsible for ensuring that this burden is reduced?

Sir Adrian Smith: I would take issue with the words you are using. I do not regard peer review as a burden which is somehow additional and keeping fabulous researchers away from their day job. Peer review is an integral part of the scientific and research process and is part of the day job.

Sir John Beddington: Yes, I would agree with that.

Sir Adrian Smith: Far from being a kind of inefficient extra burden, if you think about it, if every individual researcher had to start from scratch with everything that was produced by somebody else and review it as their own individual reviewer, you would have a mountain of work to do. A system where, in submitting a paper to a journal, one of us takes it upon ourselves to review it and quality assure it for the rest of the community reduces that kind of burden incredibly. It is not only not a burden but—

Q304   Chair: It is a burden in the sense that it is time-consuming and labour-intensive.

Sir Adrian Smith: It is time-consuming and labour-intensive, but that is doing science. Doing science is time-consuming and labour-intensive. This is an essential part of the process. Peer review for journals is an incredibly efficient way of divvying up the labour so that each of us has less of a burden, in your language.

Q305   Gavin Barwell: I want to ask about research integrity and issues around misconduct. The report of the UK Research Integrity Futures Working Group was published in September of last year. It recommended, and I quote, that "the UK should have a single body to lead on the common issues of research integrity across all disciplines, all types of research, and all research establishments". Do you agree?

Sir Adrian Smith: Yes, that is what it concluded. What happened subsequently was that that analysis and those recommendations were carefully considered by the RCUK executive group. There were a number of people involved in those discussions. Basically, their conclusion was that they could not go in that direction because they thought that there had not been sufficient attention paid to the appropriate relationship between advisory and assurance functions and the need to keep those apart, or to the opportunity and operational costs of implementation, and, in fact, that there were some substantial divergences of opinion between the partners involved in those studies on what is best for research in terms of assurance. In the current climate of fiscal austerity, it was not thought that that was the optimal way to go. Personally, the direction of travel in RCUK and the way they are trying to take this forward reassure me that, at this time, we are doing enough. I don't think you should take the fact that that particular recommendation wasn't taken forward in that particular way to mean that the spirit of what we are trying to do is not being taken forward.

Sir John Beddington: I have nothing to add to that.

Q306   Gavin Barwell: I am not sure if you had the chance to hear the evidence that we took in the previous session but, essentially, the point came out that what they seem to be looking at is some kind of concordat where the primary responsibility would lie with the employer. Reference was made to the parallel with health and safety where the employer has a responsibility. The point that a number of members of this Committee made to the previous witnesses was that in those situations there is a statutory regulator. There is somebody above the employer who has the responsibility for checking assurance. The Government does not believe there is any necessity for that in relation to research integrity.

Sir Adrian Smith: Without going through the full whack of regulation, we do have the UK Research Integrity Office, which is arm's length.

Q307   Gavin Barwell: But its funding has now come to an end, has it not?

Sir Adrian Smith: The funding for the group that Sir Ian Kennedy was involved with.

Q308   Gavin Barwell: UKRIO.

Sir Adrian Smith: I guess the stuff you got was from Rick Rylance, who has been running this. He would have said that the matters that fell under UKRIO—so you are actually trying to mimic some of that—are what the RCUK is trying to take forward in a different form in line with the spirit of the age and the sense of direction. If we can avoid getting into a heavy-handed regulatory framework, most of us would prefer to see if we could do it in another way.

Sir John Beddington: I would add that in terms of the role of the Chief Scientific Advisers, and indeed Government analysts more generally, the key thing is to make certain that the research is of a high quality and has been assessed under peer review, as we have already discussed, and has also been examined to see whether it is good, bad or uncertain. In my time as Chief Scientific Adviser I have not come across papers that have been going into evidence when there is some significant problem of research integrity. I have seen submissions from organisations that are not entirely scientific where I would query the integrity of the research behind them, but that is perhaps another matter.

Q309   Gavin Barwell: Coming on to that issue, Dr Liz Wager, who was speaking to us in her role on the Committee on Publication Ethics, told us—I quote again—"if a university hasn't fired at least one person for misconduct, they aren't looking for it properly". Do you agree with that?

Sir John Beddington: I was not present to hear the exact evidence she gave. Fraudulent activity in a research community is absolutely something that we have to stamp out and stop. For example, let's take a largish research group in which, perhaps, the head of the group is depending on material that has been done by post-docs or PhD students, and one of those post-docs or PhD students does something that is completely fraudulent. It is perfectly reasonable to give a fairly hard time to the head of that research group and say, "Why was this process not picked up?" I think that is a perfectly reasonable line of inquiry. The individual who has committed the fraud is the one that is culpable and the failure to detect the fraud has a degree of culpability. We should be thinking about learning from that. That being said, the detected incidents are pretty low.

Sir Adrian Smith: An awful lot of research is done in big teams. There are hierarchies in teams. There are principal investigators and so on. There could be things lower down the chain which are hard to spot higher up. A case like the one in Korea, of fraudulent experimentation being paraded at principal investigator level, is pretty rare. You have the checks and balances in big groups. You have a hierarchy of researchers working together. For any one individual to do something that leads to disaster is pretty unlikely. It happens, but you are not going to be able to regulate it out of existence.

Q310   Gavin Barwell: In the past there has been a perception that publication fraud or misconduct has not always been investigated by the institutions in a timely fashion. Wakefield and MMR is an example. Should there be a legal requirement on institutions to conduct a timely inquiry and to publish the full findings of that inquiry and any disciplinary action that is taken?

Sir Adrian Smith: I don't know whether you need to go to what "legal" means, but, if you think of the funding that goes into universities, some of it will come through the Funding Council, for instance through the QR stream, and some through research grants. With both the research councils and the Higher Education Funding Council, conditions of grant are attached which make it clear what the expectations of behaviour are. I think those are sufficient sanctions in themselves. An institution that did not follow up properly would be putting at risk its funding from HEFCE and the research councils.

Q311   Gavin Barwell: Are there specific conditions relating to what institutions should do if there is a suggestion that misconduct was taking place?

Sir Adrian Smith: Probably not.

Q312   Gavin Barwell: Do you think there ought to be?

Sir Adrian Smith: It is an interesting thing to discuss. My own view, having run a university for 10 years, is that the constraints you are under in terms of conditions from the many funders that one has are quite sufficient to frighten one into doing appropriate things.

Sir John Beddington: The RCUK's code of conduct, too, is a good guideline in terms of conflicts of interest and appropriate behaviour. Given that universities depend on a significant income from the research councils, they would be extremely unwise not to take forward very quickly any issues where they had detected fraud. The media would be commenting on it and other people in the same scientific area would be commenting on it. There would be a very substantial incentive for the universities to take this forward rather quickly.

Q313   Graham Stringer: If I could follow up Gavin's questions, in terms of fraudulent behaviour, we are in an area where we don't know what we don't know, really. There is a certain amount of evidence that very little fraud is detected in universities and major research institutions in this country. Do you think we should be doing more to try and detect that, because in one sense there is an interest within those bodies not to discover or expose the problems they have, to sweep it under the carpet, isn't there? If you are running a university and you find you have a researcher who just writes down his figures without doing the work, which has happened in one or two cases, the university doesn't want to say that it has been employing a fraudster for 10 years, does it?

Sir Adrian Smith: I would disagree. When I ran a university, I would have put it exactly the other way round. The institutional reputation will suffer much more long-term harm if you allow fraudsters to exist and do nothing about it. In fact, I think you would get a lot of brownie points in many communities if you publicly identified such people and threw them out. I think the incentives are all in the opposite direction.

Q314   Graham Stringer: It is surprising, therefore, is it not, to follow Gavin's question, that there are no cases in Oxford, as the Pro Vice Chancellor told us, and that there are very few cases in other universities and research institutes where people have found fraudulent behaviour? In the case of Wakefield, even when fraudulent behaviour was found out, the institution investigated itself and found nothing wrong. The evidence we have is in the other direction, isn't it?

Sir John Beddington: I would not seek to comment on the Wakefield case. The issue here is that there are so many checks and balances in the way that science operates that fraudulent behaviour is highly likely to be detected by, initially, I suspect, gossip and then increasing concern that there is something wrong. That will happen. It may happen in the community, and attention will then be drawn to the university, and it would be very unwise for the university to ignore that information. I have not experienced it in 25 years at Imperial College.

Q315   Graham Stringer: Can I ask why you won't comment on Wakefield, because it is one of the great scandals of the last 10 or 12 years? It was not dealt with very well. Are there not things to be learnt from that?

Sir John Beddington: Yes, there are. My reason for not commenting is that I haven't read into it for a while, and I would like to re-familiarise myself before I commented, Mr Stringer, rather than any shyness on my part. I am not on top of the detail.

Q316   Graham Stringer: That's fair enough. That's fraud. Are there problems with peer review in other areas? For instance, there is a huge amount of research sponsored by pharmaceutical companies and companies that produce biomedical products. Do you believe that a lot of researchers in those areas are biased towards the products that those companies are selling?

Sir Adrian Smith: I will make an initial comment. I don't think a lot of the research itself is biased. There are biased reporting effects, because if you are doing clinical trials and you get negative results, there isn't a journal for clinical trials that didn't work. It is the ones that work that get published. There is a selection bias in that sense. Do not forget that at the end of the day these things have to get through the FDA or the drug regulatory authorities if they are to come on to the market. Then you have incredibly close scrutiny of the protocols, the trials that were done, the conditions under which they were done, and so on and so forth. I think there are tremendous checks and balances in the system against that.

Q317   Graham Stringer: Are there structural problems where there are only three experts in a particular field, so that they are, effectively, all peer reviewing each other and they either agree or disagree? In one sense, that was the major criticism made by those who criticised the researchers at the University of East Anglia for their research, was it not? There is a very small pool of researchers in that area.

Sir John Beddington: Yes, you have that, but people are always moving out of their own fields. There is academic interchange. If things are of sufficient importance, they are likely to get challenged, not necessarily by the top two experts in the field but by others who are around the fringes, particularly if they are of significant interest. That is what one would expect to happen. There are cases, for example, where journals have difficulty finding sufficient people in a particular area of expertise to provide assessments. In that case, the usual practice, it seems to me, is to go rather outside the field so you get challenged from different directions. That is quite common, in my experience.

Q318   Graham Stringer: To finish on a fairly obvious question, nearly all of our witnesses have used the Churchillian quote, but when you get fraudulent papers that have been through the best process we have of peer review, do you think that peer review is damaged in that process? Getting back to Wakefield, his paper was peer reviewed. Do you think the peer review process has been damaged?

Sir Adrian Smith: How far do you want to take the Churchillian democracy analogy? There are bad things that happen within the peer review system. Not every MP who has been elected has behaved totally honourably.

Graham Stringer: What a shocking thing to say.

Sir Adrian Smith: You would not abandon the democratic process, presumably.

Graham Stringer: No. That would be terrible. Thank you.

Q319   Chair: Finally, are you aware of RCUK ever having cut funding because of fraud or allegations of fraud? If so, could you give us any examples?

Sir Adrian Smith: I would have to go back and look through the archives, as it were, and directly ask that of chief executives. I am not directly aware of a case.

Sir John Beddington: I have no experience of it.

Chair: Thank you very much indeed.

© Parliamentary copyright 2011
Prepared 28 July 2011