76.In 2011 our predecessor Committee explored the extent to which the peer review process can reasonably be expected to identify misconduct. It concluded that “the integrity of the peer-review process can only ever be as robust as the integrity of the people involved. […] Although peer review is not designed to systematically identify fraud or misconduct, it does, on occasion, identify suspicious cases”. In our current inquiry, Dr Wager told us that “conventional peer review done by journals does not involve scrutiny of raw data and so cannot be expected to detect most cases of fabrication or falsification”. Similarly, Professor Leyser believed that peer review “is never going to be very good at picking up fraud […] It is not the job of the system to spot fabrication”.
77.Our predecessors concluded that “in addition to relying on the vigilance of the people involved in the process, publishers must continue to invest in new technology that helps to identify wrongdoings.” In 2012 the InterAcademy Council also recommended that journals should use technological means to protect the integrity of the research literature, noting that “an increasing number of journals are using software to guard against plagiarism and the inappropriate manipulation of figures”.
78.The BMJ listed some of the techniques that journals use, including “statistical analysis of patterns in datasets, image checking tools, linguistic analysis, investigative journalism, post-publication peer review, and policies that require full reporting of methods and results (using reporting guidelines such as the CONSORT 2010 Statement for clinical trials), and data sharing”. They told us that “none of these approaches is perfect or foolproof but each has its merits and deserves further evaluation”.
79.One example of using software to detect errors is ‘Statcheck’, a program built on the statistical package ‘R’ and designed to automatically identify statistics reported in journal articles and re-compute them independently to check for certain kinds of errors. Statcheck was initially used to assess what proportion of psychology journal papers that included a ‘null hypothesis significance test’ contained a statistical error. Half of the papers assessed by the program were found to contain at least one problem, with one in eight containing “a grossly inconsistent p-value that may have affected the statistical conclusion”. Later, Statcheck was used to identify individual papers containing potential errors and automatically contact the authors. Professor David Hand, representing the Royal Statistical Society, told us that, while fraud was “particularly pernicious”, more common problems with data were “oversights in pre-processing the data; ignoring missing values or inadequate ways of handling them; introducing errors when pre-processing the data, which happens quite often; and misunderstanding the statistical tools you are using”.
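Statcheck’s consistency check can be illustrated in outline. The sketch below is our own illustration, not the tool’s actual code (Statcheck is an R package covering t, F, correlation and χ² tests); it handles only results reported as z-tests. It parses a reported test statistic and p-value, recomputes the two-tailed p-value, and flags an inconsistency as ‘gross’ when the recomputed value would change the significance decision:

```python
import math
import re

def two_tailed_p(z):
    """Two-tailed p-value for a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

def check_result(text, alpha=0.05):
    """Parse a reported result such as 'z = 1.96, p = .05', recompute the
    p-value, and classify the report, Statcheck-style (z-tests only)."""
    match = re.search(r"z\s*=\s*(-?[\d.]+),\s*p\s*=\s*(\.\d+)", text)
    if match is None:
        return "unparsed"
    z = float(match.group(1))
    reported = float(match.group(2))
    computed = two_tailed_p(z)
    decimals = len(match.group(2)) - 1  # precision of the reported value
    if round(computed, decimals) == reported:
        return "consistent"
    # 'Grossly inconsistent': the recomputed p-value flips the
    # significance decision implied by the reported one.
    if (reported < alpha) != (computed < alpha):
        return "grossly inconsistent"
    return "inconsistent"

print(check_result("z = 1.96, p = .05"))  # consistent
print(check_result("z = 1.70, p = .03"))  # grossly inconsistent
```

Because such a check only confirms that the reported numbers are internally consistent, it cannot detect the deeper pre-processing and analysis problems Professor Hand describes.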
80.Professor Hand told us that while Statcheck was capable of checking for a particular kind of problem, the range of potential problems was so broad that checking for errors could not be entirely automated. A move towards more datasets being available for secondary analysis nevertheless presents a greater opportunity to use such statistical tools to check for potential problems, albeit with the caveats we explore in Chapter 4.
81.Other techniques we heard about during our inquiry included software for detecting image manipulation (see Chapter 1). Damian Pattinson told us that Research Square was working on software to help identify “pixellated areas”, although the automation was still at an early stage of development. He told us that investing in manual checking of images for signs of manipulation was expensive, and said that
Some publishers I work with question whether it is cost-effective to spend that money. An outlay of between $20 and $30 [per paper for scrutiny of images] is significant when it is not quite clear what the repercussion is. […] A journal may have to retract a paper that has clear problems with images, but that is about as bad as it gets for them. Journals often feel that that is not enough of a threat to them to require a million-dollar investment in fixing the problem.
82.There is a continuing need for publishers to invest in techniques and technologies to spot problems with research papers. While the purpose of peer review is not to detect fraud, the sector’s responsibility for the integrity of the research base includes taking reasonable steps to ensure that technology to detect problems is developed and put to good use. This may be an area in which market forces do not obviously support such investment. A Concordat-style set of commitments in the academic publishing community to invest jointly in software for the detection of image manipulation—or common standards for checking images—may be required. We recommend that UKRIO convene a discussion with publishers to explore this.
83.The Concordat to Support Research Integrity explains that the primary responsibility for investigating allegations of misconduct rests with the employers of the researchers involved. It notes that employers of researchers should already, as a condition of the grants they receive, have “robust, transparent and fair processes for dealing with allegations of misconduct that reflect best practice”.
84.We were concerned by a perception in the submissions we received that institutions were in effect ‘policing themselves’ when responding to allegations, and asked witnesses about practice in relation to external input to the process. Professor Sir Ian Diamond, representing UUK, told us that he was “comfortable” with the process being undertaken by colleagues within the same organisation, although he would be “pretty uncomfortable if it was someone from the same laboratory or something like that […] I do not have a problem with someone being brought in from outside, but I do not think the way the system is at the moment is broken”.
85.UKRIO provides guidance to institutions on procedures for investigating misconduct allegations. These include a ‘screening’ stage as a precursor to the ‘investigation panel’; the guidance states that a Screening Panel should consist of at least three senior people, and that “It is desirable, but not essential, that one or more members of the Screening Panel be selected from outside the organisation, rather than members drawn from within the organisation. Allegations that involve senior staff and/or that are judged to be especially serious, complex or controversial may particularly benefit from the presence of someone external to the organisation on the Screening Panel”. However, if an allegation progresses to the investigation stage, “it is a requirement that one or more members of the Investigation Panel be selected from outside the organisation”.
86.Wendy Appleby, the Concordat-recommended ‘named person’ responsible for research integrity at UCL, explained the processes followed, and the stages at which external panel members are sought:
Within our procedure, as named person, my responsibility is to oversee the operation of the procedure. I make judgments in the initial stages of the procedure and help to provide advice on its operation, but I do not make judgments on the latter stages. […] It is a three-stage process, which is standard in the [UKRIO] guideline procedure.
The initial stage of the process is a preliminary assessment, which is the stage I take. Effectively, it asks, “does the allegation of misconduct fit within the definition of research misconduct, or should it be dealt with under a different process—for example, financial problems or a staffing process?”.
If I decide that it fits within the definition, the next step is for it to go to screening. At UCL, we establish a screening panel, which is effectively a peer review. […] Typically, our screening panel is three individuals drawn from within UCL, but we are very careful to check that there is no conflict of interest with the research or the researcher where there is concern. […] Screening is very much about saying, is there meat on the bones of the allegation? Is there prima facie evidence of research misconduct? The important thing to emphasise is that it is about an intention to deceive, because things can go wrong.
[…] If it goes to the third stage, which is the research misconduct investigation panel, we establish a fresh panel of a minimum of three people, which will include an external member—I recall one panel where the membership was entirely external—and that panel will conduct an in-depth investigation.
[…] It is not all self-investigation; indeed, in our screening panel, our procedure allows us to use an external member if we wish. It might be that we are seeking a particular form of expertise; it might be a very complex case. If we wish to, we can do it at screening level as well.
87.However, she suggested that the UKRIO guidance on screening panels might not be being followed at every institution:
I have noticed that in some universities’ procedures they still refer screening to the head of the department where the allegation sits, or a single person. There is greater danger for conflict of interest there, but panellists tend to operate in very independent ways, and use the process and their expertise in forming judgments. In an internal employment process, say a disciplinary, a grievance or something like that, typically an internal set of staff would be involved in hearing that.
88.Universities and other employers of researchers need to be able to demonstrate that they are following best practice in the way that investigations are conducted. The annual narrative report recommended by the Concordat (see Chapter 3) is one opportunity for institutions to review their processes and set out whether they reflect UKRIO’s guidance. Any suggestion that best practice is not being followed is a concern, particularly given the reputational risk of, for example, not using external panel members in some stages of the process. UKRIO’s guidance on misconduct processes was published in 2008; it is worrying that, ten years on, some institutions may not yet have acted on it. We recommend that following best practice in the use of external panel members should form an explicit part of a strengthened Concordat.
89.We also received evidence on some of the additional steps that institutions take in complex and high-profile cases, going beyond the standard UKRIO model. Box 2 provides an example of this at UCL.
Box 2: The case of Paolo Macchiarini
In January 2016, Swedish Television broadcast a three-part documentary, Experimenten (The Experiments), exposing several examples of misconduct concerning transplantations performed by Paolo Macchiarini, a visiting professor at Karolinska Institutet (KI). During his tenure at KI, Macchiarini performed synthetic trachea transplantations in three patients at the Karolinska University Hospital. A Guardian article from September 2017 states that the documentary “argued convincingly that Macchiarini’s artificial windpipes were not the life-saving wonders we’d all been led to believe. On the contrary, they seemed to do more harm than good—something that Macchiarini had for years concealed or downplayed in his scientific articles, press releases and interviews”. A written submission to us from two academics at the University of Liverpool explains that part of the misconduct lay in ‘over-hyping’ patient outcomes:
The Swedish Central Ethics Review Board has recently published its report on research misconduct relating to scientific articles authored by Macchiarini and co-workers, the conclusion being that a series of six papers should be retracted. A key problem identified in the report was that the scientific articles contained over-hyped descriptions of patient outcomes, which gave the impression that the health benefits of the synthetic tracheas were much greater than they actually were.
Macchiarini was also a visiting professor at UCL until 2014, and a collaborator of UCL academics. Given Macchiarini’s connection to UCL and related research undertaken there, we asked Wendy Appleby (the university’s ‘named person’ for research integrity) to describe the “special inquiry” process that UCL followed to explore a range of allegations of misconduct in relation to regenerative medicine. She explained that “a number of allegations came in on the area of regenerative medicine research, focusing particularly on some of the sorts of methods Macchiarini was using, with slightly different angles in each allegation and lots of questions. One interesting thing about the way a research misconduct process works, or the stages within the overall procedure, is that it relies on allegation, a respondent and so forth. At UCL, we felt we had had a connection with Macchiarini, even though it was not current. We were doing research and working in the area and we had received a number of slightly different variants of misconduct allegations. We felt that we needed to step back from the approach where you need an allegation to look into a specific thing, and take a more generic approach, which was why we did the special inquiry, and that that should be independent. We had an entirely external panel for the special inquiry, with separate legal advisers we appointed and paid for, to help them in their work”.
Wendy Appleby told us that the UCL Special Inquiry “made a number of very helpful recommendations; indeed some of them were around the operation of our overall research misconduct procedure, and you can see them. Some of them were around scientific practice, and working appropriately within the regulatory environment. There are lessons in terms of how we work with research councils, [including] balancing the interests of the individual and their rights and the expectations of funding councils, and the contract we have with them. Finally, within UCL itself, we are looking at the body of activity. It is a very wide-ranging area of research activity, and includes about 1,000 individuals, a huge number of staff across a wide range of organisational units. We were looking at whether we had the governance right, so that if ever there were a rogue action in the future we would have more robust oversight of it”.
90.Publishers have a key role to play in maintaining the integrity of the research record through retracting problematic articles. The British Medical Journal (BMJ) outlined the various steps that publishers can take when problems are detected:
For proven misconduct a journal may publish a correction or notice of concern about the article; retract the article; publish commentaries about the case; tighten its peer review, statistical review, and publishing policies; ban authors; and/or ask the authors’ institution (and any institutional review board or ethics committees involved that approved the research) to investigate.
The Publishers Association added that:
Once a query about an article is received, journals will investigate and decide upon the appropriate action to take in accordance with the Committee on Publication Ethics guidelines. This process can be incredibly detailed and time intensive, but is crucial to the integrity of research and the reputation of the publisher of that research.
91.Dr Elizabeth Moylan, representing the Committee on Publication Ethics, outlined some of the problems that publishers encounter when raising issues with an institution:
If an issue arises and is brought to a journal’s attention, perhaps on a published paper, in the first instance we go to the authors and ask for an explanation, and we loop in their institutions. That can be quite tricky sometimes, because some institutions can come down on people quite harshly, and some institutions might not respond. […] The publisher does not have the tools to do that investigation and the published article is, effectively, on hold until the investigation is completed.
That is where it is tricky, because publishers have a responsibility for the integrity of the published literature. What do they do in the interim? Often, people put an expression of concern on a published article or an editor’s note, because they are waiting for the outcome of an investigation that might determine whether the paper is corrected or retracted. […] The publisher is waiting for the institution to get back to them.
92.Dr Trish Groves, the Director of Academic Outreach at the BMJ, told us that in her 28 years at the journal, no university had ever proactively contacted it regarding the outcome of a misconduct investigation to suggest that articles might need to be retracted; instead, “we are often the ones banging on the door of the institution”. Journals, in contrast, do talk to each other, she told us: “if one journal retracts, it often contacts other journals”.
93.The BMJ suggested that publishing corrections to papers was not always an effective way to correct the record, since “the original, erroneous versions of papers that have subsequent published corrigenda are cited at roughly the same rate as the corrected versions”. Similarly, a recent article for Wired magazine notes that:
For every retracted paper, the original unmarked copy still lives on in print (where you might have read it in the first place). And if you have cited that paper in your own work, you don’t receive an alert that one of your citations has just imploded. Which means that you might be totally in the dark. […] One stem cell paper published in 2005 and retracted in 2010 has been cited 667 times — so far. Nearly half of those citations occurred after the retraction was made official. Here we are, in 2017, seven years after its retraction, and authors continue to refer to it as if nothing happened (including half a dozen times in the past couple of months alone). Nobody knows the extent to which the mistakes in that paper have affected any of those papers downstream.
94.Dr Trish Groves, representing the BMJ, suggested that it was the responsibility of authors to ensure that they check the references they are citing:
We know anecdotally that a lot of people put references in their papers without actually reading the papers they cite in the reference list. They have not bothered to check at the journal website or in an index, such as PubMed or MEDLINE, that it has a big thing that says, “Look out. Retraction.” They do not check. That is initially the responsibility of authors. Some journals have systems where, when a paper is to be published, during the technical/copy editing phase all the references are checked. At that point, a good copy editor ought to pick it up and say, “Hang on. This one’s been corrected,” and it should come back to the handling editor and the author, but I do not know how often that happens.
Catriona Fennell added that there was a “lag” in authors adjusting to a paper having been retracted:
If a paper is retracted, papers may have already been written that cite it; they are in the editorial process, and do not come out for maybe five or six months. It could be that the person was not aware of it at the time they wrote it, and we would hope to try to catch it in the editorial process. After about a year and a half, if I remember the data, you see the citations drop off, because it becomes well known that the paper is retracted.
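The reference checks described by our witnesses could in principle be automated within a journal’s production workflow. As an illustrative sketch only (this is our illustration, not any journal’s actual system, and the DOIs are made up), a manuscript’s cited DOIs could be cross-checked against a locally held list of retracted papers, such as an export of the Retraction Watch Database:

```python
def flag_retracted(cited_dois, retracted_dois):
    """Return the cited DOIs that appear in a retraction list.
    Comparison is case-insensitive, as DOI names are defined to be."""
    retracted = {doi.lower() for doi in retracted_dois}
    return [doi for doi in cited_dois if doi.lower() in retracted]

# Hypothetical reference list and retraction export (made-up DOIs):
references = ["10.1000/example.001", "10.1000/example.002"]
retractions = ["10.1000/EXAMPLE.002", "10.1000/other.999"]
print(flag_retracted(references, retractions))  # ['10.1000/example.002']
```

A copy editor running such a check at the technical-editing stage would still need to judge each flagged citation on its merits, since a paper may legitimately cite a retracted article in order to discuss the retraction itself.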
95.The case of Paolo Macchiarini and UCL’s special inquiry into research integrity (see Box 2) has some implications for our predecessor’s 2017 report on regenerative medicine. The Committee’s report noted that:
In 2008, MRC-funded researchers at University College London carried out the first transplant of a human trachea (wind pipe) reconstructed using stem cells. By 2013, the group were ready to build on this success by developing the first clinical trials of a stem cell-derived larynx transplant in a project known as “RegenVOX”. The RegenVOX procedure involves preparing a reconstructed larynx made from the patient’s own stem cells and a donor larynx. The team removes the cells from the donor larynx, leaving behind a scaffold onto which the patient’s stem cells are grafted. This means that the new larynx will not be rejected by the immune system so patients do not need immunosuppressant medication.
The Committee’s report also quoted a witness as referring to “the first successful transplant of a tissue-engineered trachea, utilising the patient’s own stem cells”.
96.Since then, misconduct processes have revealed that the research on using stem cells to support artificial trachea transplants is not reliable, and is based on exaggerated patient outcomes (see Box 2). The ‘RegenVOX’ clinical trial of stem cell-based tissue-engineered laryngeal implants referred to above is now listed as ‘withdrawn’ on the ClinicalTrials.gov website. Having explored the issue of correcting the research record with our witnesses, we resolved to find a way of flagging, for readers of the Committee’s earlier report, the evidence it received that is now contested. We have arranged for a note to be attached at the relevant places in the online report, with a forward reference to this inquiry. Our intention is to help readers of that earlier report to find further relevant information, not to alter the formal record of our predecessor’s work.
97.Dr Elizabeth Wager noted that universities may have an interest in keeping the outcome of misconduct investigations quiet, which could lead to fraud occurring at other institutions in the future:
If results of investigations are kept confidential, or worse, if deals are made so that researchers are “let go quietly” with favourable or neutral references to avoid perceived bad publicity surrounding a proper investigation, researchers are likely to move to other institutions which are unaware of their track record, and the chance to rehabilitate or retrain them will be missed.
There are examples of this happening in the UK; Dr Wager highlighted the case of neuroscientist Jatinder Ahluwalia as an example of poor communication between institutions and inadequate checking of references. According to a summary published by Times Higher Education (THE), Ahluwalia was dismissed from the University of Cambridge’s doctoral programme in 1998 for suspected research misconduct, and subsequently completed a PhD at Imperial College London in 2002. He then took a postdoctoral position at University College London (UCL), working with Professor Anthony Segal. THE reported that in 2004, Professor Segal attempted to repeat Ahluwalia’s experiments, after a paper from another group contradicted their findings, and was unable to reproduce them. Ahluwalia left UCL in 2007, but in 2008 UCL started a misconduct investigation, which concluded in 2010 that “it was beyond reasonable doubt that Ahluwalia had misrepresented his experiments […] deliberately” and “that it was likely that he had […] deliberately contaminated chemicals used in colleagues’ experiments ‘so as to falsify the results of those experiments in order to conceal the falsification by him of the results of his own experiments’”. By then, Ahluwalia had moved to the University of East London (UEL). Following this revelation, he was dismissed from UEL in 2011. Dr Wager summed up this example as “four very reputable British universities not sharing information”.
98.Professor C.K. Gunsalus noted that researchers found guilty of misconduct may also move from country to country. She knew of “five cases where individuals who were found to have committed research misconduct in the United States moved to the United Kingdom and are active researchers there, and vice versa—people who got into trouble in the UK and moved to the US and started anew. Recidivism is a fairly serious problem”.
99.Dr Wager suggested that there was a need for ‘blacklisting’ of researchers, or a licence to practise, to combat ‘serial fraudsters’:
I think the idea of some kind of licence or public list is a good one. They do it in Pakistan; if you get caught for plagiarism, there is a public website I can look at to find out if I want to employ you or not. I think that is a real area of concern.
However, Dr Tony Peatfield, representing RCUK, was sceptical about maintaining a blacklist of researchers, on legal grounds:
You [would] have to have a process for striking off people. Somebody would have to complain, and then you go through a legal process to strike them off, because you are depriving somebody of the right to work. My personal view is that it would be extremely bureaucratic and expensive to set up and probably will not work very well. […] I am not a lawyer, but I understand that it may be illegal to blacklist people if it stops them working, so blacklists per se are not an option.
Sir Mark Walport, the Chief Executive of UKRI, was similarly cautious about the legalities of maintaining a blacklist in relation to data handling, but commented that “subject to it being legal, I can see a good argument for doing it”.
100.Dr Alyson Fox from the Wellcome Trust indicated that funders may in practice have their own blacklists:
Typically, if someone has been found guilty of research misconduct, we, as a funder, would no longer receive any applications from them for funding for life, because we think it is serious. That is what we do.
Dr Peatfield suggested that it was for the new employer to be diligent in its hiring process:
It crops up occasionally where somebody just moves from one institution to another. A report by Science Europe last year recommended that universities employing researchers should ask the question at interview, “Have any cases of misconduct been held against you?” If that person lies, that would be a reason for subsequent dismissal if they were then appointed. There is an onus on the new employer, or any employing institution, to ask those who are applying for jobs what their history has been.
101.Cases of researchers committing misconduct at a string of institutions suggest that some universities are either using non-disclosure agreements to keep misconduct quiet or not being sufficiently diligent in checking references when hiring researchers. Hiding misconduct through non-disclosure agreements is not acceptable, not least because it effectively makes the institution complicit in future misconduct by that individual. The Government should ask UKRI to consider how this practice by institutions receiving public funds can be effectively banned, and statements to this effect should be included in a strengthened Concordat (see Chapter 3). Meanwhile, there is a need for greater diligence by employers in checking for past misconduct, and for previous employers to disclose such information fully.
102.Dr Wager suggested that there were currently problems with the various parts of the system not communicating properly with each other when investigating or responding to research integrity problems. She described “systematic failings to alert readers to potentially or actually unreliable research reports. This may be due to journals being reluctant to issue Expressions of Concern or to retract articles, or institutions being reluctant to investigate cases, or failing to investigate them properly, or failing to inform journals about investigations or their findings”.
103.Dr Fox said that the Wellcome Trust’s grant conditions required institutions that it funds to report any investigations to them at the screening stage. Wendy Appleby, the Registrar at UCL, also commented on the information flow between research institutions and funders:
A topic of discussion between funders and universities is about when you disclose to a funder an allegation of research misconduct. We need to be very clear about the stage for that and about what a funder is going to do with it. Understandably, if an allegation is to be dealt with at one of the earlier stages, and is not going to go through to proven misconduct, researchers are naturally concerned about their funders being informed of that. They have certain rights in terms of confidentiality as well. Clearer protocols and mechanisms for dealing with these things generally would be very useful.
104.We were also alerted to the complexities of handling investigations that may span multiple research institutions, such as when a researcher moves to another university or when a project is undertaken at several locations. As with the interactions between funders, employers and publishers, such investigations also raise questions about when and how information is shared, how confidentiality is handled during the process, and how the risk of duplication of effort is managed. The Russell Group Research Integrity Forum has recently produced a ‘Statement of cooperation in respect of cross-institutional research misconduct allegations’, emphasising the need to provide clarity on when a researcher’s right to confidentiality “might be overridden by an institution’s duty to uphold the integrity of research carried out in its name”. The statement commits Russell Group members to contacting associated parties at the outset and agreeing with them how to proceed.
105.Although the Concordat to Support Research Integrity includes a commitment to deal with allegations of misconduct using “transparent, robust and fair” processes, it does not discuss the liaison required between different parties that may be involved, beyond stating that employers of researchers should provide information to funders “as required by their conditions of grant and other legal, professional and statutory obligations”.
106.Researcher mobility means that research misconduct investigations may require coordination between current and former employers, and between journals and funders. We are encouraged to see the Russell Group developing protocols for communicating with related parties when dealing with allegations that cross institutional boundaries. There is a need for all parts of the system to work together—including employers, funders and publishers of research outputs—but there appear to be problems with the required sharing of confidential information. We recommend that employers, funders and publishers of research work together to agree a protocol for information-sharing on researchers involved in research integrity problems in a way that meets employment protection legislation. Commitments in this vein could form part of a tightened Concordat (see Chapter 3).
141 Dr Elizabeth Wager () para 4.4
143 InterAcademy Council, Responsible Conduct in the Global Research Enterprise: A Policy Report (2012), pp31–32
144 BMJ () para 3.a.i
145 BMJ () para 3.a.i
146 Nuijten M. B. et al, , Behavioural Research (2016)
147 Nuijten M. B. et al, , Behavioural Research (2016)
148 Stephen Buranyi, , The Guardian, 1 February 2017
155 Professor Carl Heneghan () para 3.2
157 UK Research Integrity Office, Procedure for the Investigation of Misconduct in Research (August 2008)
158 UK Research Integrity Office, Procedure for the Investigation of Misconduct in Research (August 2008), p40
159 UK Research Integrity Office, Procedure for the Investigation of Misconduct in Research, (August 2008), p44
162 “”, The Guardian, 1 September 2017
163 Professor Patricia Murray and Raphael Levy ()
164 [Wendy Appleby]
167 BMJ () para 3.a.vii
168 The Publishers Association () para 23
172 BMJ () para 1.a.iii
173 Jerome Samson, , Wired (accessed 5 June 2018)
178 NIH US National Library of Medicine, , ClinicalTrials.gov, accessed 12 June 2018
179 Dr Elizabeth Wager () para 3.1
180 Jump, P., , Times Higher Education (2012)
181 Jump, P., , Times Higher Education (2012)
182 Jump, P., , Times Higher Education (2012)
185 Higher Education Commission, Pakistan, , accessed 11 May 2018
191 Dr Elizabeth Wager () para 4.4
194 Russell Group ()
Published: 11 July 2018