Reproducibility and Research Integrity

This is a House of Commons Committee report, with recommendations to government. The Government has two months to respond.

Sixth Report of Session 2022–23

Author: Science, Innovation and Technology Committee

Related inquiry: Reproducibility and research integrity

Date Published: 10 May 2023

1 Introduction

1. As the UK emerges from the covid-19 pandemic and seeks to revive the economy, research and innovation present excellent opportunities to drive growth and recovery. A 2020 UK Research and Innovation (UKRI) review of research and development infrastructure said that: “Every £1 spent on public R&D unlocks £1.40 of private R&D investment, together delivering £7 of net-economic benefit to the UK”.1 With Government spending on research and development set to total £39.8 billion between 2022 and 2025,2 research and innovation therefore appear well placed to deliver a significant economic boost. However, a growing evidence base shows that the integrity of research may be at risk from what some have termed a ‘reproducibility crisis’, a phenomenon observed in research where it turns out to be very difficult or impossible to reproduce—in other words ‘corroborate’—the claimed findings of a particular scientific study, experiment or piece of research. Such failures can damage the integrity, reputation, and economic value of research activity as well as threaten confidence in innovations, for example in relation to clinical drug development.

2. To date, Government policy in this sphere has focused more closely on research integrity, for example through the 2012 national Concordat to Support Research Integrity (updated in 2019). Steps taken include asking UKRI to establish a national research integrity committee, as initially recommended by our predecessor Committee in 2018.3 Meanwhile, the specific issue of reproducibility in research appears to have received less direct attention. For this reason, we have focused our attention on the constraints to, and merits of, replicating research results; and the extent to which the relevant data can and should be open to scrutiny and challenge on a self-regulatory or voluntary basis.4

Our inquiry

3. This inquiry was launched in July 2021, with a call for written submissions covering the points below:5

a) whether there was a ‘reproducibility crisis’ in science and social science research, its scale and causes, and the most vulnerable research areas;

b) the roles of different stakeholders in addressing such problems, relevant policies and mechanisms needed, and the potential contribution of the UKRI’s national committee on research integrity; and

c) the measures required for an open, contestable, and rigorous research environment.

4. Since the outset of our inquiry, we have published over 100 written evidence submissions.6 We have also heard oral evidence from 16 witnesses from across the research system over the course of four evidence sessions. We are grateful for all these contributions to our work.

The research system

5. As Dr Jessica Butler, Analytical Lead and Research Fellow at the University of Aberdeen, outlined, “barring some commercial enterprise” conducted within industry, typically primary research in the UK “gets done by researchers in the universities”.7 The focus of this Report is on the research conducted in academic settings, though some of our analysis is relevant to industrial settings too.

6. Figure 1 provides a high-level summary of the lifecycle of a piece of academic research in the UK. Over the course of our inquiry, we heard that publication in an academic journal via this cycle was the expected outcome of almost all academic research.

Figure 1: The typical cycle for academic research

7. Reproducibility and research integrity issues are the product of different, often interacting, elements of this process. We therefore considered the research process holistically, assessing the incentives in place at every stage and hearing from the different key actors in the system, including: the Government; research funders; research institutions and groups; individual researchers; and publishers.

Defining reproducibility

8. Usage of the terms reproducibility and replicability varies, especially across disciplinary divides, meaning there is no fixed understanding of their definitions. Though written evidence submitted to our inquiry used slightly diverging interpretations of the terminology, and acknowledged that the terms could be interchangeable, we have defined them thus:

a) Reproducible research is “that for which results can be duplicated using the same materials and procedures as were used by the original investigator”.8 The assessment of research for its reproducibility therefore entails “independent investigators [who] do not try to rerun a whole study, but [instead] [ … ] subject the original data to their own analyses and interpretations”.9

b) On the other hand, replicable research is “that for which results can be duplicated using the same procedures but new data”.10

9. We interpreted reproducibility comprehensively over the course of our inquiry, in recognition of the overlapping meaning and focus of measures of reproducibility, replicability, and transparency. Veering away from a strict definition of reproducibility was recommended in the written evidence, to ensure that our understanding of the term and its associated challenges was both more “inclusive and nuanced”.11 Written evidence from King’s College London explained why this was the case, outlining how a concept of reproducibility which also considered elements of transparency and replicability was more inclusive of a range of disciplinary specialisms:

For many subjects, outside of STEM, particularly within Arts & Humanities, the concept of reproducibility is not applicable to the research methods employed and terms such as transparency are more appropriate to discuss the rigour of the research findings. If we approach reproducibility through broader definitions, we can engage with a broader range of researchers to better understand the scale of the issue in wider disciplinary areas.12

Professor Marcus Munafò, Chair of the UK Reproducibility Network, summarised why neat definitions should not be a focus of our Report:

to some extent we need to step back from that and think about why those things matter. They matter because we are fundamentally interested in whether the knowledge that we generate is robust and will be useful, and whether we are getting the right answer, which is not a trivial thing in the context of science. Those are ways of evaluating the extent to which we are getting towards the truth, the right answer and able to make progress.13

Therefore, in line with the evidence we heard, we use the term reproducibility to refer broadly to the transparency and quality of research—i.e. have the researchers been sufficiently open to allow for robust assessment of their work’s data, methodology, and conclusions, and, through such an assessment, is their research shown to be thorough and accurate?

Why is reproducibility important within research?

10. The previous quotation from Professor Munafò emphasised why reproducibility ultimately matters in research: greater robustness brings us closer to the truth. There are a number of reasons why this matters so critically to those within academia, as well as to those far beyond. It is evident that, given science “is the key driver of human progress, improving the efficiency of scientific investigation and yielding more credible and more useful research results can translate to major benefits”.14

11. Reproducible research is transparent and well-documented, making it far less likely that fraud or unintentional errors enter our knowledge base. This transparency makes reproducible research conducive to supervision, peer review, and effective presentation, engendering trust and confidence in the results. It allows for direct repetition as well as variation on the original theme: researchers can more easily produce corrections and revisions in the light of re-runs, and shared data and methodology support collaborations, new developments, and meta-analyses. For these reasons and more, reproducible research constitutes best practice. It opens research endeavours to the broader research community and beyond in perpetuity. This matters because where reproducibility gaps exist in a research field, high secondary costs are incurred far beyond the academic context in which they originate.

12. Firstly, as research builds gradually upon a base of preceding knowledge, “inaccurate findings in the literature mean [subsequent] studies are built upon tenuous foundations”.15 Errors made in one piece of research could be passed into new thinking and even magnify with time, having knock-on effects for the quality of succeeding work. As Professor Dorothy Bishop, Professor of Developmental Neuropsychology at the University of Oxford, said:

science should be cumulative. If you want it to be cumulative, it is very dangerous just to take a single study and then develop more and more on that without first being absolutely sure that that effect is solid.16

As Professor Bishop indicated, there is a real risk of harm when research which cannot be reproduced is used as the foundation for other work.

13. As summarised in written evidence from a group of University of Oxford academics led by Dr Aksel Sterri, “many important decisions in politics, business, medicine, and people’s private lives are based on scientific studies”.17 The potential costs of these decisions being based on inaccurate, overemphasised, or fraudulent evidence are significant. People and animals could suffer from avoidable morbidity or mortality, and harmful mistruths and even conspiracies about solutions to pressing issues could be spread.

14. There is also a significant risk to the economic power and value of research conducted in the UK. Evidence submitted by a team of researchers led by Dr Stephen Bradley, a GP and Research Fellow at the University of Leeds, highlighted the vast monetary implications of irreproducibility, using the example of medical research:

An approximate, but plausible, estimate suggests that around 85% of all expenditure on medical research is wasted because of the kind of problems that make research non-reproducible. Even if this is a gross over-estimate, because the UK government spends over £2bn on medical research per year, it is certain that colossal sums of public as well as charitable funding are lost to avoidable waste.18

As Dr Bradley et al’s evidence went on to testify, such waste also “causes great harm because of the opportunity cost of the discoveries we forgo or postpone”.19

15. These negative effects are greatly magnified where they transcend the national scale. Reproducibility gaps in research will affect the UK’s global standing, especially given our ambitions to act as a science superpower internationally.20 Researchers and students may be dissuaded from engaging with academia in the UK, to the detriment of our domestic research system. At worst, the UK could lose a vital facet of soft power if our research culture becomes either demonstrably or reputationally worse or less trusted than equivalent competitive research nations.

The impact of scientific misconduct

16. Several witnesses raised the significant impact that scientific misconduct can have on both the research community and the general public.21 Fraud damages research integrity, and cases of misconduct often harm scientific progress and the reputation of science. Such cases have been documented in various publications: Lee Harvey’s article “Research fraud: a long-term problem exacerbated by the clamour for research grants”,22 Adil Shamoo and David Resnik’s book “Responsible Conduct of Research”,23 and Stuart Ritchie’s book “Science Fictions” all detail key cases of scientific misconduct which have had grave consequences for the scientific community and have negatively affected public perception of science.24 A summary of some of these cases is given in Box 1, below.

Box 1: Cases of scientific misconduct

Wakefield and the MMR vaccine

In 1998 Andrew Wakefield and his colleagues published a paper in the Lancet that made false claims that there was a link between autism and the measles, mumps and rubella (MMR) vaccine.25 Although concerns about the paper were raised as early as 2004, the Lancet did not retract the paper until 2010 (Wakefield was struck off the UK medical register at the same time). The case was extremely damaging, eroding public trust in science, and causing a drop in vaccination rates and an increase in measles cases and deaths.26

Schön and organic semiconductors

Jan Hendrik Schön was a German condensed matter physicist who was praised for his work on organic semiconductors. Between 1998 and 2002 he published over 90 articles, including nine in Science and seven in Nature.27 However, in 2002, other labs had difficulty replicating his work and found that some of the graphs for different experiments were identical. Bell Labs, where Schön worked, launched an independent investigation which found that Schön had fabricated data and falsified reports.28

Hwang and stem cell cloning

In 2005, Woo-Suk Hwang published a study in the journal Science that claimed he had found a method to create human stem-cell lines using cloned embryos. In 2006, Hwang admitted to falsifying the data after an anonymous tip-off to Science claimed that some of the data was false, which resulted in an investigation by his institute.29 In 2009, he was given a two-year suspended sentence for embezzlement and bioethical violations, but was cleared of fraud.

Obokata and stem cell production

In January 2014, Haruko Obokata and co-authors published two Nature articles reporting a method that could turn ordinary cells into stem cells by dipping them into acid for 30 minutes.30 Within days of publication, concerns were raised about the articles, with other researchers unable to replicate the results and suggestions that images had been manipulated.31 In April 2014, Obokata’s employer, the RIKEN Centre for Developmental Biology (CDB) in Kobe, Japan, investigated and concluded that scientific misconduct had occurred. Obokata was given the opportunity to reproduce her results, but resigned after her attempts failed.

Macchiarini and trachea surgery

In June 2022, a Swedish court convicted Paolo Macchiarini of causing bodily harm because of an experimental transplant of a synthetic windpipe that he performed.32 Macchiarini performed more than sixteen (exact numbers are unknown) such transplants between 2011 and 2014, and most patients are thought to have died.33 In 2016 he was sacked for breaching medical ethics after being accused of falsifying his work.

17. In some of these cases a significant amount of time passed between the original publication of the results and the retraction of the paper. During this time, researchers may have used the results and built upon them in their own work, relying on false data. As scientific misconduct was not the main subject of our inquiry, we make no observation on the frequency of fraud and scientific misconduct, but note, as demonstrated by the cases listed above, that misconduct can cause serious harm, including loss of life. We make recommendations on tackling scientific misconduct in Chapter 4.

This Report

18. We heard a range of testimony which evidenced that a lack of reproducibility continued to be a critical issue in research. Chapter 2 of this Report analyses the extent of reproducibility issues, whilst Chapters 3, 4, 5 and 6 turn to a discussion of potential solutions for conducting, communicating, and assessing research in a way which promotes reproducibility—with the responsibilities of each key stakeholder discussed.

2 The extent and impact of reproducibility challenges in UK research

19. This Chapter analyses evidence on how extensive irreproducibility is across different disciplines; the potential impacts of this; and whether the extent and the effects, together, constitute a ‘crisis’—with ‘crisis’ defined as an existential threat to the research sector. This discussion focuses on the state of research in the UK, but it is important to note that this is not a problem unique to this country. Rather, written evidence told us that reproducibility challenges are a global problem, and ultimately solving them will require international coordination.34

The reproducibility evidence base

20. Whilst evidence points towards significant reproducibility challenges in the research system, establishing the true extent of these challenges has long been difficult for the research sector.35 Quantitative assessments have not been sufficient—as stated in the journal Nature in May 2016, “data on how much of the scientific literature is reproducible are rare and generally bleak” (we understood this to mean that in what data there are, reproducibility is found to be lacking).36

Proxy measures of reproducibility

21. There is no comprehensive assessment available of the extent to which UK research, public or private sector, is reproducible or not, nor is there empirical information on the degree to which any lack of reproducibility has injured or inhibited the value, reputation or trustworthiness of the UK research sector or its outputs. Despite this, our predecessor Committee identified numerous sources of information, in addition to the direct reproducibility studies that did exist, from which proxy indicators of the scale of the broader research integrity challenge could be derived. These include:

a) academic studies in the field of ‘meta-research’ (explained later in this Chapter);

b) surveys of researchers (i.e. self-reporting or describing the behaviour of others);

c) data on scientific journal article retractions; and

d) annual research integrity statements from research institutions, where available.37

22. Meta-research is the study of research itself, including its “methods, reporting, reproducibility, evaluation, and incentives”.38 It was a 2005 meta-research study of medical studies, conducted by John Ioannidis, that was largely responsible for launching reproducibility as a mainstream concern.39 His article, “Why Most Published Research Findings Are False”, used simulations to claim that “for most study designs and settings, it is more likely for a research claim to be false than true”.40 Although the article went on to receive significant criticism, with other researchers claiming its methodologies were flawed and the claims it made were too strong,41 it did bring the issue of reproducibility into the limelight. Since its publication, surveys of the academic community have provided additional insight into the widespread scale of reproducibility issues in published research. According to a survey published in the journal Nature in May 2016, for example, more than 70% of 1,576 researchers surveyed reported that they had tried and failed to reproduce another scientist’s experiments.42
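The logic behind Ioannidis’s original claim can be illustrated with a simple calculation of the positive predictive value of a “significant” finding. The sketch below is our own illustration rather than a reproduction of his model, and the parameter values (pre-study odds, statistical power, significance threshold) are assumptions chosen purely for demonstration:

    # Illustrative sketch only: the share of "significant" findings that reflect
    # true effects, given assumed pre-study odds that a hypothesis is true,
    # statistical power, and a 5% false positive rate. All values are assumptions.
    def positive_predictive_value(prior_true: float, power: float, alpha: float) -> float:
        true_positives = prior_true * power          # true hypotheses correctly detected
        false_positives = (1 - prior_true) * alpha   # false hypotheses passing by chance
        return true_positives / (true_positives + false_positives)

    # If only 1 in 10 tested hypotheses is true and studies are underpowered (20%),
    # fewer than a third of "positive" findings are actually true.
    print(positive_predictive_value(prior_true=0.10, power=0.20, alpha=0.05))  # ~0.31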

23. A more recent survey, conducted with over 4,000 researchers by the Wellcome Trust in 2020, painted a similar picture. Of particular concern for reproducibility were their findings on research culture, especially at earlier career stages. Of 1,832 junior researchers and students asked:

a) 61% said they had felt pressured by their supervisor to produce a particular result; and

b) 13% said they would not feel comfortable approaching their supervisor if they couldn’t reproduce lab results.43

24. Information submitted by Retraction Watch and the Centre for Scientific Integrity reported that authors with UK affiliations have retracted more than 1,100 papers from publication as of 1 December 2021.44 Their evidence also indicated that a growing group of “sleuths” had identified thousands of additional problematic papers, most of which had yet to be corrected or retracted. For context, a 2022 BEIS Report found that “UK researchers published 225,595 articles in 2020, corresponding to a continuous growth of 1.3% per annum on the 214,082 publications published in 2016”, suggesting retractions were relatively uncommon but not insignificant.45

Building an effective evidence base

25. Whilst these indicators are useful, they are not a substitute for a more holistic assessment of the research integrity issues faced across the research sector. In 2018, our predecessor Committee concluded that further work was needed to determine the scale of research integrity issues, including by establishing a research integrity committee. In their response, the Government tasked UKRI with this consideration and evaluation. As discussed in paragraphs 34–45, this committee has since been brought into existence as the UK Committee on Research Integrity, meeting for the first time in May 2022.46

26. In evidence to this inquiry, UKRI wrote: “It remains the case that efforts to improve research integrity, and reproducibility and replicability, are hampered by a lack of evidence about the scale of the problem, and the nature of the problem across different disciplines”.47 Written evidence from others, for example the Academy of Medical Sciences48 and the UK Research Integrity Office (UKRIO), also suggested that no such analysis, evaluation or map of threats and risks to research integrity had been established for the UK. The UKRIO wrote:

It is vital to have a clearer picture on the overall state of research integrity across all sectors of research in the UK and how the UK compares to other nations, including information on how poor practice and suspected misconduct is being addressed. This would enable more accurate assessment of current problems and of progress made in correcting them, and provide assurance to the public and policy makers. Gathering this information will require collective working from funders, research organisations, publishers and policy makers, supported by specialist bodies such as UKRIO and UKRN [UK Reproducibility Network].49

27. Although qualitative evidence indicates a potentially substantial scale of research integrity issues in the UK, there is a lack of quantitative evidence, including on the relative significance of the different causes of problems. This can only hamper efforts to evaluate damage being caused to the UK research sector in terms of culture, performance, reputation and economic value—now and in the future. This in turn prevents the design of proportionate and effective solutions to any problems.

Reproducibility challenges across academic disciplines

28. Evidence presented to our inquiry was divided on whether irreproducibility was a general issue, applicable to all disciplines, or confined to—or at least more prevalent in—certain disciplines. STM—the global trade association for academic and professional publishers—highlighted the breadth of reproducibility challenges, arguing that no research specialism or methodology could be exempt from these issues, but that the specific challenges faced varied by discipline.50

29. Based on evidence that it described as “anecdotal”, Imperial College London (ICL) identified a lack of transparency and openness, insufficient data-gathering, and increasing reliance on complex software as factors inhibiting reproducibility in virtually “all science”. ICL also provided a list of more specific issues—such as working across disciplines, lack of robust controls, using confidential data, and the influence of vested interests—that affected some fields of study more than others.51 Evidence submitted jointly by the Wellcome Centre for Integrative Neuroimaging (Open Win) and Reproducible Research Oxford was similarly clear that “no field is unaffected” because “all disciplines are vulnerable to the perverse incentives that can undermine research integrity”.52

30. However, much of the written evidence sought to differentiate between academic specialisms, with many emphasising problems in social and medical research. For example, the Coventry University Group wrote that:

The reproducibility of research arguably has greater effects on the medical and social sciences than the physical and mathematical sciences—this is explained by the different nature of “controllable” contextual factors and measurement errors across disciplines.53

Psychology has, for example, received a striking level of attention in relation to reproducibility. Written evidence from the British Psychological Society and from Professor Dorothy Bishop, Professor of Developmental Neuropsychology at the University of Oxford emphasised that Psychology had been at the forefront of self-examination in relation to reproducibility challenges and solutions.54

31. The Royal Academy of Engineering provided a counterbalance to evidence highlighting the prevalence of issues in social and medical research. It hinted that the true scale of reproducibility challenges seen in engineering might have been underacknowledged:

The extent of integrity issues in engineering research, including reproducibility, have not been explored or investigated to the same extent as in some other disciplines, such as medical sciences or psychology.55

The Academy therefore concluded that it would be complacent to take the current lack of evidence of reproducibility issues in engineering research as concrete proof that such issues do not exist. This self-assessment might apply similarly to other quantitative sub-disciplines not often associated with reproducibility challenges. Indeed, University College London argued that the “most severely affected” areas were “experimental, quantitative sciences where there are countless analytical choices that can be made when designing a study or collecting and analysing data”.56

32. Though the specific problems faced differ, all research disciplines are affected by the systemic issues that limit reproducibility. The established view that issues are concentrated in the social and medical sciences is outdated and tackling reproducibility challenges should be a priority in every research field, regardless of discipline and methodology.

Artificial intelligence

33. Sebastian Vollmer, Professor for Application of Machine Learning at TU Kaiserslautern in Germany, pointed to an array of reproducibility issues with the methodologies employed in the artificial intelligence (AI) field, especially around code, and concluded that there was a “big gap” in reproducibility.57 This was in line with published findings on the artificial intelligence research system—the 2021 ‘State of AI report’ found that only 26% of AI papers published that year had made their code available, which has concerning implications for transparency.58 Professor Vollmer stressed that “AI had many facets” and that the “applications of AI to science are as diverse as the methodologies are”, indicating the broad impacts that AI has had on research.59 This is in part exacerbated by especially negative incentives in the AI research field, which are more pronounced than in other disciplines. As Professor Vollmer shared, “there is pressure in the field to publish even more and even faster than in other fields” meaning there is a risk of more “research garbage” because “the field is so fast moving—it may be sliced up so that, rather than making one concise, clear paper, it becomes five”.60

The UK Committee on Research Integrity (CORI)

34. In its 2018 Report on research integrity, our predecessor Committee recommended that the Government ask UKRI to establish a new national committee on research integrity. The original recommendation stipulated that the Committee should be able to independently verify whether a research institution had followed appropriate processes to investigate misconduct and argued that the national committee should also have formal responsibility for promoting research integrity.

35. In July 2021, UKRI announced that it would host the UK Committee on Research Integrity (UK CORI) for three years as a free-standing committee.61 In April 2022, after our evidence sessions had concluded, UK CORI’s Co-Chairs, Professor Andrew George MBE and Professor Rachael Gooberman-Hill, were announced.62 In May 2022, the Committee’s eight further members were announced, and the Committee held its inaugural meeting with the full membership on 20 May 2022. In correspondence shared with us on 27 May 2022, the Co-Chairs shared an updated record of UK CORI’s specific remit:

a) develop a strategy and roadmap to achieve independence from UKRI. To be developed in the first year, the strategy must reflect the need for collective ownership and action on research integrity across the sector. It must include a roadmap to obtain independence from UKRI and a timescale to achieve it;

b) advise across the sector on potential roles in the research integrity landscape, including setting of expectations and requirements through policies, guidance, T&Cs etc;

c) identify opportunities across the research system to improve research integrity; build high quality evidence on the state of research integrity in the UK, and on the effectiveness of interventions to improve integrity;

d) work with UUK to operationalise the Concordat to Support Research Integrity;

e) identify how systemic pressures contribute to research integrity; and

f) create opportunities for discussion across the sector.

36. During our evidence sessions, we heard testimony on what the purpose of UK CORI should be. James Parry, CEO of the UK Research Integrity Office, re-emphasised in oral evidence the need for better data collection on the extent of research integrity issues. He said that a free-standing committee, backed by UKRI, could be well placed to do that.63 UKRI’s written evidence confirmed that evidence-gathering would indeed be a focus for UK CORI:

UK CORI will help to build and communicate that evidence [about research integrity problems] and, from this, identify effective solutions. UK CORI will publish an annual state of the nation report establishing baselines in the first years which will allow for evaluation of progress in future. Identifying what the right measures are and which trends to monitor will be a key output from UK CORI.64

37. Much of the evidence welcomed UK CORI on the basis that it would play a co-ordinating role, providing drive and oversight to efforts seeking to address research integrity issues. Speaking to us, UK Reproducibility Network (UKRN) Chair Professor Marcus Munafò stressed that:

There is space for some oversight to ensure that that co-ordination is happening and to ensure that that collaborative approach is fostered and, to an extent perhaps, mandated by funders such as UKRI. This is where I see a potential role for the Committee on Research Integrity—that oversight, soft influence role in shaping and co-ordinating that activity, looking for areas of overlap that can be brought together, and looking for potential gaps where new activity is needed.65

By co-ordinating efforts, UK CORI could provide sectoral leadership on research integrity. Oxford Professor Dorothy Bishop indicated in her written evidence that:

It certainly would be good to signal that reproducibility issues are being taken seriously at the highest level, and this would help combat the discouragement felt by many early-career researchers who feel there is conflict between doing good science and career progress.66

38. Some thought that UK CORI’s leadership could extend to developing best practice guidance and making provisions to enforce such guidance. Imperial College London recommended that UK CORI:

work across disciplines and sectors (with funders, publishers, institutions, individuals at different career stages, …) to provide unambiguous guidance and resources with respect to good (expected) practice and sharing research outputs (data, software, protocols, constructs, etc.), and to co-ordinate progress and monitoring of compliance.67

The Russell Group reached a similar conclusion, writing that this guidance could include: “a registry of useful resources, the development of baseline universal expectations, and/or guidance on some of the limits to reproducibility such as limits to what data can be made public”.68

39. The evidence included some discussion on UK CORI’s remit. UKRI’s written evidence clarified that: “UK CORI will look at research integrity in the broadest sense”.69 UK CORI will:

incorporate reproducibility and replicability but set this in the context of related issues and pressures which cause problems beyond and not always related to reproducibility and replicability.70

The British Neuroscience Association commented on this intention, noting a “danger [that] reproducibility is largely overlooked within the much broader integrity topic”. They recommended that if a separate committee focused on reproducibility was not deemed feasible, “there should be a sub-committee reporting into UK CORI tasked with reproducibility in research” to ensure these specific issues are adequately addressed.71

40. Others were concerned about the regulatory and punitive powers of UK CORI. UCL argued that UK CORI “should seek to facilitate and support the research community rather than taking a top-down or regulatory role”, whilst academics from the University of Sheffield argued that it was vital that UK CORI focused on “structural causes of error, rather than individual instances of malpractice”.72 Dame Ottoline Leyser, Chief Executive of UKRI, agreed that UK CORI’s role should not take a punitive form because “it would be better, in the context of supporting research integrity in those institutes, if the investigatory and regulatory powers were outside our walls”.73 UKRI was clear that UK CORI would not have the authority to verify whether an institution had followed appropriate processes in investigating cases of alleged misconduct, but would be a valuable forum to discuss how safeguards, standards and expectations for misconduct processes could be set, and what sort of assurance framework would be appropriate.

41. Some witnesses were concerned that UK CORI might duplicate work already being undertaken by established bodies in the field, such as the UK Reproducibility Network (UKRN) and the UK Research Integrity Office (UKRIO).74 The Association of Medical Research Charities recommended that “clarifying the key responsibilities of different groups may be helpful to provide tangible objectives and to prevent each group being overwhelmed with the number of changes”.75 However, UK Research Integrity Office CEO, James Parry, stressed that UKRIO had been consulted throughout the initial scoping of UK CORI’s remit. He warmly welcomed UK CORI as a new actor in the field and the prospect of effective collaboration. He summarised the differences, telling us:

UKRN describes itself as a peer network of research; it is very much within the research community. UKRIO is an independent advisory voice which sits outside. UK CORI will be a free-standing body that I think can connect the dots a bit, but, with the backing of a major funder behind it, it will have a bit more clout than two relatively small not-for-profit organisations, UKRN and UKRIO. I am not underselling what the existing organisations can do. I think that UK CORI will be able to get action at a higher level than we currently can, but we can all work collectively and we are already in discussion about joint working on various issues.76

42. On 23 February 2023, UK CORI published its 2023–25 strategic plan,77 which set out four pillars of work that it would focus on:

  • promote research integrity;
  • support research integrity;
  • define the evidence base; and
  • build new directions.78

Whilst, in general, the strategy suggested that UK CORI would address some of the issues raised by the sector during our inquiry, for example on evidence gathering and exemplifying best practice, it lacked detail on how UK CORI would achieve its ambitions. There was little clarity on what resources, including funding, would be made available to aid its work. The strategy also made no mention of reproducibility, using only the term “integrity” throughout, an omission that stakeholders had feared. In addition, it did not state explicitly that UK CORI would produce an annual report on the “state of the nation”, as suggested by UKRI; it said only that it would produce an annual statement showcasing “evidence and practice”. Furthermore, it did not confirm the timeline for any of the actions that it set out.

43. Despite significant delays, we welcome the establishment of the UK Committee on Research Integrity (UK CORI) as a potential answer to existing research integrity challenges. We ask UK CORI to commit to producing an annual state of the nation report on research integrity issues, including reproducibility. These reports should contain a breakdown of the scale of these issues across different disciplines. They should also contain action plans for addressing identified issues. UK CORI should also publish an action plan detailing how it will carry out each of the steps set out in its strategy, including timelines and what resources will be committed for the activities it plans to carry out.

44. It is disappointing that UK CORI’s recently published strategy did not mention reproducibility, especially since our inquiry highlighted that this is a major research integrity issue. UK CORI should make sure reproducibility challenges are given due attention and not overlooked in deference to other pressing research integrity issues. UK CORI should develop a sub-committee to focus on reproducibility challenges in research. This sub-committee should establish the relative weight of reproducibility within research integrity concerns, and UK CORI should use this evidence to plan its prioritisation.

45. The increased application of artificial intelligence (AI) in scientific research presents a challenge to traditional research methods. The UK Committee on Research Integrity (UK CORI) should specifically: investigate the impact of deploying AI—and other increasingly complex software—on reproducibility in research; consider whether specialised action is needed; and set out these findings in its annual reports.

Impact—a reproducibility crisis?

46. The evidence described so far in this Report collectively suggests a systemic, large-scale challenge to reproducibility in UK research, although opinion is divided on whether the situation amounts to an emergency. The clarion call of the “reproducibility crisis” has been used to try to galvanise action across the research community, and many written evidence submissions did not directly contest its applicability. According to a 2016 Nature survey of researchers themselves, there was broad buy-in to “crisis” terminology—52% of researchers believed that there was “a significant reproducibility crisis”, with a further 38% indicating that there was, at least, a “slight crisis”.79

47. However, use of the phrase “reproducibility crisis” was challenged in a number of written evidence submissions and in oral evidence sessions for being “hyperbolic and counterproductive”.80 The public interest charity, Sense about Science, argued that the ‘crisis’ label was unhelpful:

… its continued use risks becoming a counsel of despair: too big a problem to address research practices or to help the public, reporters and decision makers understand issues of quality and reliability, and too small a lens to tackle the systemic problem of the research reward system.81

48. The gravity of the situation with reproducibility might be usefully broken down into elements (with much depending on the state of reliable knowledge about each):

a) the direction and pace of change (whether the problem is getting measurably better or worse in terms of risk, threat or impact, or changing character with as yet unpredictable effects);

b) the likelihood of remediation becoming prohibitively expensive or impossible;

c) intractability (the extent to which there are any potential solutions, mitigations or other workarounds and the availability, costs and utility of these); and

d) impact (the range, severity, immediacy and acceptability of the injury, damage, loss, costs or other detriment).

49. In recognition of the lack of clarity over the scale of the reproducibility issue, its impacts, and the extent to which poor reproducibility is by itself the real issue, we conclude that use of the term crisis is ill advised until more vital information is uncovered about these unknowns. However, while we eschew crisis terminology, seeing instead problems and challenges across all disciplines in research culture, action is still needed to improve reproducibility in UK research. Marcus R. Munafò et al concluded similarly in their January 2017 article “A manifesto for reproducible science”: whether or not these issues amount to a crisis, there is undoubtedly substantial room for improvement, and finding solutions should be the focus of researchers’ attention.

50. Whilst significant reproducibility challenges are faced in research, to refer to the sum of these issues as a “crisis” risks detracting from the many successes of the UK’s scientific research base. Nonetheless, there is a need for action to address the significant problems caused by the prevalence of reproducibility problems in the scientific community.

3 The Government’s role

51. The remainder of this Report will explore reproducibility challenges and the solutions which, if implemented, would contribute to addressing these challenges. Specific solutions are discussed in the following three Chapters, which split elements of the research process into: conducting research; communicating research; and assessing research. Actions are proposed for key actors in the system to take, including: the Government; research funders; research institutions and groups; individual researchers; and publishers. As Chapter 1 outlined, the process of research and publication of outputs requires the involvement of every body and individual across the research system. Effective solutions to the challenges of reproducibility therefore require coordination between these myriad actors.

The Government’s role

52. The written evidence stressed that the Government holds primary responsibility for providing leadership and policy direction on issues of research integrity. In our final evidence session, the Minister for Science, Research and Innovation, George Freeman MP, outlined that in his view, the role of the Government was:

… to set the framework; it is to set some clear ambitions around what good looks like to help to make sure that the incentives, direct and indirect, through the funding system support best practice, while never forgetting that our universities are free and sovereign institutions.82

It is important to note, however, that these “free and sovereign” institutions receive large amounts of public funding, and therefore should be accountable to guidelines set by the Government.

Evidence from other witnesses stressed the importance of Government coordination with the research sector when implementing relevant policies or solutions. For example, the Russell Group noted that unilateral action from the Government “may generate far greater costs for UK research” than benefits.83 Rather than the Government leading action itself, Biomathematics and Statistics Scotland argued that its role was to:

provide a clear, unambiguous, statement on the need for reproducibility and replicability across the science sector and to steer science funding bodies towards reproducible, replicable, Open Science research.84

53. The provision of adequate research funding was repeatedly referenced as the Government’s primary responsibility to the research system. The British Psychological Society wrote that the Government had “a role to play in recognising the resource implications of reproducibility and integrity, and supporting UKRI funding accordingly”.85 Evidence submitted by academics from the University of Oxford concurred, highlighting that the Government must be aware that any future cuts to higher education could have the unintended consequence of reducing resources to promote open research.86 Their evidence went on to urge the Government to consider ring-fenced funding for research ethics and integrity. Psychology Professor Timothy Bates at the University of Edinburgh echoed this viewpoint—specifying that, in his view, the Government should set aside 15% of the research budget for straight replications.87

54. BEIS’s written evidence noted the range of policies the department had already begun to implement in recognition of the importance of transforming research culture, including:

a) the UK R&D Roadmap (published in July 2020) which committed to driving up the integrity and reproducibility of research, including by mandating open publication and strongly incentivising open data sharing (while cutting red tape);88

b) UKRI’s new Open Access policy (aimed at making the outputs of UKRI-funded research freely and immediately accessible and discussed in detail in Chapter 5); and

c) the R&D People and Culture Strategy, which set out actions to strengthen research culture in the UK, including measures to reduce bullying and harassment (rated in a 2019 survey as the top negative influence on research integrity), as well as improving frameworks and incentives.89

55. However, in his oral testimony, Minister Freeman conceded that the Government was currently not doing enough to promote reproducibility. He acknowledged that, “Undoubtedly, Government can always do more”.90

4 Solving reproducibility challenges which emerge as research is conducted

56. This Chapter discusses solutions for improving the reproducibility of research as it is being conducted, as research has a better chance of being reproducible if a series of best practices are followed by individual researchers as they are working. Whilst reproducible research is therefore in part the responsibility of the individual conducting it, this Chapter will also seek to explore how support from the wider research system is vital.

Action to tackle research misconduct and fraud

57. The problem of research misconduct was raised in several submissions, though differing stances were taken on its severity. Whilst many emphasised that fraudulent practices were relatively rare, and thus should not be the focus of our inquiry, other evidence highlighted ongoing issues with fraud in research. Paper mills, for example—defined in STM’s evidence as “commercial non-legitimate organisations who manufacture falsified manuscripts and submit them to journals on behalf of researchers for a fee, with the aim of providing an easy publication route for the author”—presented an especially difficult problem in medical and life science research, we heard.91

58. Much of the written evidence sought stronger investigative procedures and sanctions for research misconduct where it does occur. Two key solutions were discussed in this regard. Springer Nature suggested that the Government could detect and deter misconduct and violations of scientific integrity policies before they occur by developing a framework which laid out expectations for each actor in the research field.92

59. Others, such as HealthWatch UK, a UK charity that promotes science and integrity in healthcare (not connected with Healthwatch England), recommended that a national body be established to investigate research fraud in instances where research misconduct had occurred.93 Retraction Watch and the Center for Scientific Integrity detailed how the research misconduct architecture operating in the United States, such as investigative bodies like the Office of Research Integrity and the National Science Foundation’s Office of Inspector General, did not exist in the UK.94 Oxford Professor Dorothy Bishop recommended that such an institution employ highly trained investigators “skilled at data sleuthing and capable of evaluating quality of data and analysis in a range of scientific areas”.95 Individuals found to have subverted scientific integrity policies could then face a range of consequences, including sanctions, as set out in the co-governance framework.

60. The UK lacks an established infrastructure for responding to research misconduct cases. The UK Government should lead on a co-produced framework with the UK Reproducibility Network, UKRIO and UK CORI, which sets out the roles and expectations for key actors when cases of misconduct are identified.

61. The UK Government should assess the benefits that an additional body, set up to investigate malpractice, could bring to the UK’s research integrity governance architecture.

The employment of robust methodologies

62. Methodological flaws in research which compromised reproducibility without necessarily constituting a deliberate form of misconduct were raised more frequently. As summarised in Table 1 below, Dr Bradley (a GP and Research Fellow at the University of Leeds) et al listed several poor methodological research practices that limited the reproducibility of research (an illustrative sketch of how one of these, p-hacking, can generate spurious findings follows the table):

Table 1: Poor methodological research practices

HARKing: researchers generate hypotheses to fit results and present these as if formulated prior to obtaining results.

P-hacking: researchers manipulate results until findings are generated which satisfy statistical significance.

Outcome switching: researchers do not report certain outcomes, or switch primary and secondary outcomes, to highlight favoured results.

Source: Stephen Bradley (GP & Clinical Research Fellow at University of Leeds); Nicholas DeVito (Doctoral Research Fellow at University of Oxford); Kelly Lloyd (PhD Researcher at University of Leeds); Jess Butler (Research Scientist at University of Aberdeen); David Mellor (Director of Policy at Center for Open Science (USA)); Patricia Logullo (Postdoctoral Meta-Researcher at University of Oxford) (RRE0061)
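To illustrate how a practice such as p-hacking can generate spurious, irreproducible findings, the following sketch (our own illustration, not drawn from the evidence; the group sizes and number of comparisons are arbitrary assumptions) simulates a researcher who tests many unrelated outcomes on pure noise and reports only those that cross the conventional significance threshold:

    # Illustrative sketch of p-hacking: run many comparisons on data containing
    # no real effect and report only the "significant" ones. With 20 comparisons
    # at alpha = 0.05, roughly one spurious finding is expected by chance alone.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_per_group, n_outcomes = 30, 20

    significant = []
    for outcome in range(n_outcomes):
        group_a = rng.normal(size=n_per_group)   # both groups drawn from the
        group_b = rng.normal(size=n_per_group)   # same distribution: no true effect
        p_value = stats.ttest_ind(group_a, group_b).pvalue
        if p_value < 0.05:
            significant.append((outcome, round(p_value, 3)))

    print(significant)  # any "finding" listed here arises purely by chance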

63. Given the systemic pressures on researchers that drive such practices, multiple submissions recommended that individual researchers should not be tasked with providing solutions to reproducibility challenges that emerged as research was conducted. Professor Dorothy Bishop at the University of Oxford summarised the views of many when she stated that “asking researchers to change their attitudes is futile unless the incentive structure changes”.96

64. However, other written evidence submissions argued that although they could operate within a challenging environment, individual researchers were ultimately responsible for their own methodological approaches and research outputs. As summarised by the National Physical Laboratory in its evidence: “Individual researchers must take responsibility for the quality and reproducibility of their own research outputs and encourage colleagues to do the same”.97

65. The ‘Turing Way’, a community-sourced guide for reproducible, collaborative, and ethical data science, outlined that transparency was achieved when researchers “properly document and share the data and processes associated with their analysis”.98 Transparency is a first-order priority in conducting reproducible work, as thorough scrutiny of research data, methodology, and conclusions is required to assess a work’s reproducibility. Evidence from ELIXIR-UK, an intergovernmental organisation that brings together life science resources from across Europe, stressed that researchers should “focus on transparency in the research process, and the reusability of results as both are key to generating research that will be reproducible”.99 Box 2 details what such transparency might entail; an illustrative sketch of the kind of annotated analysis script it describes follows the box.

Box 2: What does reproducible research look like?

In a 2021 ‘beginner’s guide’ article produced for those working in ecology and evolutionary biology by Dr Alston and Dr Rick, three key stages at which reproducible practices must be factored into research to achieve transparency are outlined. Whilst there are differences between research fields, this guide provides a useful headline summary:

Data storage and organisation: sound data management is required, including data back-ups to minimise risk of data loss; data should be clearly stored and labelled; metadata should be included in data storage to explain how the data has been processed and what each variable means.

Data analysis: data analysis should be performed using a coding script where possible, to ensure each step is documented; code needs to be clean and consistent; and analytical code should be annotated with comments sufficient in detail for an informed stranger to understand their application.

Finalising and sharing results: this final step will be far easier where preceding steps have been followed. Research should be accessible either via open access or on preprint servers; data and code should be published in repositories to allow for the easiest access; and figures and tables should be created directly with code to allow for automatic updates.100

Source: Jesse M. Alston and Jessica A. Rick, “A Beginner’s Guide to Conducting Reproducible Research”, The Bulletin of the Ecological Society of America, vol 102, issue 2 (2021).
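As a purely illustrative sketch of such an annotated, script-based analysis (the file names, paths and variables below are hypothetical examples, not drawn from the guide), a minimal script might look like this:

    # Illustrative sketch of a reproducible analysis script in the spirit of Box 2:
    # every step from raw data to figure is recorded in code so that an independent
    # researcher can rerun it. File names and variables are hypothetical examples.
    import pandas as pd
    import matplotlib.pyplot as plt

    RAW_DATA = "data/raw/field_measurements.csv"    # raw data kept unmodified
    FIGURE_OUT = "outputs/figure_1_body_mass.png"   # figure generated by code

    df = pd.read_csv(RAW_DATA)

    # Document each processing decision in code rather than editing data by hand.
    df = df.dropna(subset=["body_mass_g"])          # drop records with missing mass
    summary = df.groupby("site")["body_mass_g"].mean()

    # Create the figure directly with code so it updates automatically if the data
    # or the analysis changes, as the guide recommends.
    summary.plot(kind="bar", xlabel="Site", ylabel="Mean body mass (g)")
    plt.tight_layout()
    plt.savefig(FIGURE_OUT, dpi=300)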

66. Individuals were also encouraged to contribute to an open research environment by modelling transparency in their own work, including where mistakes were made. Academics from the University of York noted that:

Individual researchers, whether ECRs [early career researchers] or senior staff, can endeavour to provide a supportive and encouraging atmosphere in their lab environments or large-scale collaborations, where they are honest about decisions and mistakes and provide the space for others to be similarly open.101

67. The written evidence also concluded that individual researchers could act as campaigners for systemic change in the research system. Evidence from the Russell Group stressed that individuals have a crucial role in helping the sector understand the scale of research integrity issues and in highlighting good and bad practice.102 As noted by the UK Research Integrity Office, researchers also have the power to effect change through their voluntary commitments to assess others’ work, such as via peer review or grant review, and as mentors of colleagues.103 Evidence also highlighted that individuals could lend their efforts to building and sustaining grassroots initiatives that sought to support positive shifts in practice towards more reproducible outcomes. ‘ReproducibiliTea’, a network initiative that helps early career researchers create local Open Science journal clubs at universities to discuss ideas including how to improve science and reproducibility, was referenced repeatedly as an influential grassroots initiative.104

68. Reproducibility issues are, in the main, not the result of deliberate bad practice. Many of the incentives faced by individuals conducting research act against reproducibility. Whilst individuals must take responsibility for conducting work which prioritises robust analysis and transparency, and for promoting the importance of reproducibility within their research field, we recognise that aspects of the academic process, such as time pressures and the publication model, act against the promotion of research integrity and reproducibility. Therefore, the research community, including research institutions and publishers, should work alongside individuals to create an environment where research integrity and reproducibility are championed.

Promoting a culture of reproducibility

69. The remainder of the solutions discussed in this section focus on how other actors in the research ecosystem can support individuals to conduct more reproducible research.

70. Academic researchers almost always belong to wider research groups or research institutions, such as universities. Academics from the University of Oxford summarised that:

Research institutions and the research groups they host are in a position to implement structural and cultural change to support reproducibility and research integrity at all organisational levels and career stages.105

We were told that institutions should seek to resist the unproductive pressures that the publication system can present, to ensure that the people who work within academic research institutions have the space and the freedom to undertake robust research.106 The Institute for Scientific Information, Clarivate, noted that institutions could support researchers to conduct reproducible research by making sure that the demands on them were reasonable, and effectively balanced between teaching and research time.107

71. Evidence also suggested that to cultivate a positive research culture, research institutions should actively and openly support reproducible and replicable research. Professor Timothy Bates at the University of Edinburgh suggested institutions could do this by supporting researchers who challenged popular claims, published studies which failed to replicate previous work, or whose work was independently replicated.108 Evidence from the UK Reproducibility Network Local Network Leads concurred. They suggested that research institutions should have systems in place to support and protect individuals who adopted open scholarship practices, as this work might sometimes act counter to the approval of a supervisor or the wider community.109 This chimed with the emphasis placed by the Association of Medical Research Charities on the importance of a ‘no-blame’ culture in research communities. They wrote that researchers should consider how to identify factors and practices that lead to any discrepancies in research outcomes and make corrections. Research is an iterative process, and in the light of new evidence, analysis, or criticism, researchers and research institutions should be as open as possible about inevitable mistakes, to normalise them and to disincentivise underhand behaviour.110

72. Research institutions should model a culture of reproducibility by managing inordinate pressures on academics and encouraging the prioritisation of reproducibility in research outputs. This extends to encouraging openness around mistakes and their correction. In collaboration with the Higher Education Sector, Universities UK should implement a coordinated policy on minimum protected research time for research staff.

Collaborating with and supporting statistical and software experts

73. The written evidence clarified that whilst some researchers who employed insufficiently robust methodologies were knowingly subverting best practice, others might have lacked the statistical training to conduct more reproducible work. Dr Paul Marchant, Retired Chartered Statistician, formerly of Leeds Beckett University and Visiting Research Fellow of the University of Leeds, highlighted that there was limited awareness of subtle statistical issues by those doing research, as well as a lack of statistical support available to them to assist their research, and insufficient scrutiny through thorough review, meaning poor methodological practices made it into research outputs.111 Biomathematics and Statistics Scotland similarly emphasised that “a lack of critical statistical thinking throughout the scientific process is a large contributor”. Their written evidence included some examples of misunderstandings that had emerged in research, including:

a) a lack of appreciation of the difference between exploratory and confirmatory science;

b) inappropriate or poor experimental designs that lacked statistical power, or were strongly confounded, or which were fundamentally unable to test the hypothesis in question;

c) inadequate exploration of uncertainty; and

d) an over-interpretation of results.112

To overcome these statistical limitations, the written evidence recommended that research funders and institutions should work together on making adequate career provisions for the range of skilled workers needed to produce strong research outputs, including statistical experts and software engineers. As Professor Bishop emphasised, “most of the really complicated problems that we are trying to solve these days really require teams with different expertise working synergistically”.113 Roles such as data stewards and research software engineers are integral to making research outputs other than manuscripts open, accessible, and reusable.114 As noted by the Academy of Medical Sciences, researchers’ work could benefit considerably from increased collaboration with statisticians during the design, implementation, and analysis stages of a study.115
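By way of illustration of the statistical power point in item (b) above, the sketch below shows the kind of calculation a statistical collaborator would typically run before data collection. It is a minimal example of our own, not drawn from the evidence: the function name, sample sizes and effect size are illustrative assumptions, and the formula is a standard normal approximation for a two-group comparison.

```python
# Minimal sketch (our own illustration) of a pre-study power check for a
# two-group comparison, using a normal approximation to a two-sided test.
from scipy.stats import norm

def approx_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power to detect a standardised effect size with n per group."""
    z_crit = norm.ppf(1 - alpha / 2)
    # Location of the test statistic under the alternative hypothesis
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return float(norm.cdf(noncentrality - z_crit))

if __name__ == "__main__":
    # A 'medium' effect (d = 0.5) with 20 participants per group is badly underpowered...
    print(f"n = 20 per group: power ~ {approx_power(0.5, 20):.2f}")   # roughly 0.35
    # ...whereas around 64 per group reaches the conventional 80% target.
    print(f"n = 64 per group: power ~ {approx_power(0.5, 64):.2f}")   # roughly 0.81
```

Under these assumptions, a study with 20 participants per group has only around a one-in-three chance of detecting a genuine medium-sized effect, which is the kind of design weakness described in item (b).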

74. In order for these roles to provide effective input, ELIXIR-UK suggested that institutions must ensure career paths for data stewards and software engineers were formally recognised and supported.116 A similar recommendation was made to us by Neil Ferguson, Professor of Mathematical Biology at Imperial College London, as he emphasised the importance of recognising and rewarding software developers by designing aspirational career paths. Ben Goldacre, Professor of Evidence-Based Medicine at University of Oxford, suggested how universities might deliver collaborative research with data specialists:

The best way to do it would be through some pilot schemes where people say we are going to have a big research software engineering squad in this electronic health records research group, this NHS data analysis group, this digital archaeology group, or whatever it is, and have them as pioneers showing the value of modern, open, collaborative approaches to computational data science, teaching people how to code and share their code, and how to make efficient code.117

75. Evidence from the Royal Statistical Society argued that all projects should specify which member(s) of the research team was/were responsible for both methodology and analysis and the skillsets that enabled this function to be carried out. Stronger tests for the presence of adequate skills within research teams from funders would then help ensure that higher quality projects were funded.118 Evidence also noted that funders could provide more effective methodological support for researchers. Imperial College London and the Royal Statistical Society recommended that UKRI consider the feasibility and cost of a UKRI-wide service to advise on data-driven research methods, providing the researchers they fund with more guidance on their data collection and analysis, where they perceive it to be needed in a grant application.119

76. Statistical experts and software developers are insufficiently recognised and remunerated within the university research sector. Funders and universities should develop dedicated funding for the presence of statistical experts and software developers in research teams. In tandem, universities should work on developing formalised, aspirational career paths for these professions.

77. Research funders should implement stronger tests for the presence of adequate software and statistical skills within research teams at the outset of a funding application. Where these skills are perceived to be lacking, UKRI should consider the feasibility and cost of offering a dedicated methodological support system to research teams.

The provision of appropriate education and training

78. Education and training provision for researchers was frequently raised as an additional tool for improving reproducibility in research as it is conducted. As noted by Imperial College London, it is the responsibility of researchers to be aware of, and seek to address, gaps in their own skillset and/or understanding.120 The institutions which house researchers are also responsible for ensuring their staff are supported to build the necessary skillset for producing reproducible research.

79. Many submissions, such as from the Royal Academy of Engineering, noted that early-career researchers should be better trained in both reproducible practices and in the intricacies of the publishing process.121 Evidence from the British Pharmacological Society concurred, and suggested that early-career researchers should be taught to fully engage with the issues of reproducibility contained within the publishing process, in order to better navigate difficulties that might emerge later in their careers.122

80. We heard that students and early-career researchers also need guidance on conducting effective replication work within their education and training. The British Psychological Society hailed Dr Katherine Button at the University of Bath as a leading example in teaching reproducibility. Dr Button is known for encouraging third-year Psychology students to collaborate on a replication study for their final year dissertation project, rather than seeking to produce a novel result.123

81. We heard that to sustain a robust research environment, it was important that research institutions set aside funding for researchers to engage in reproducibility training throughout the course of their career, as the problems seen are not unique to those at early career stages. Whilst some universities do offer training on research integrity and reproducibility to researchers, this provision is not consistent across the academic landscape. Evidence from academics at the British Neuroscience Association, for example, explained that institutions should incorporate mandatory training and continuing professional development in open and rigorous research practices at all career stages.124 Evidence from Protocols.io, a secure platform for developing and sharing reproducible methods, also noted the importance of providing the best digital reproducibility tools and infrastructure for staff and ensuring they were trained to use them.125

82. Currently there is insufficient attention placed on reproducibility and research integrity training for university students and research professionals. Greater emphasis should be placed on the importance of reproducibility and research integrity in education and training at undergraduate, postgraduate and early career researcher stages. Part of this training at the undergraduate and postgraduate levels should include the routine production of replications.

83. Institutions should incorporate mandatory reproducibility training and professional development plans for researchers across the course of their career.

‘Slow science’ grants

84. Evidence was clear that any corner-cutting that academics undertake whilst conducting their research was broadly the product of their demanding, competitive, and precarious working environment. The research funding system, which provides relatively short-term grants for research projects, contributed to ‘short-termism’ amongst researchers and created the conditions for increasingly unstable academic careers. As written evidence from Mr Christopher Marc Taylor (Chair at ISRCTN primary clinical trial registry) argued, this current funding model:

results in extremely insecure working conditions for young researchers, creating pressures and moral dilemmas which seem highly likely to put at risk the quality of the science they work on.126

85. Dr Jessica Butler, Analytical Lead and Research Fellow, University of Aberdeen, explained how this process “actively disincentivised [early career researchers] from producing top-quality, reproducible research”:

Almost as soon as they get started on this research, not only does their contract end but they are automatically made redundant. There is no controversy there. They lose their visas. When they start this project, they are looking for their next job [ … ] the only thing they get judged on at the next step is how fancy the magazine is in which they published the summary of their research

Dr Butler’s evidence pointed towards the strong additional disincentives for reproducibility introduced by the academic publication system, which is the focus of the next Chapter, on the communication of research.

86. Funders can support individuals to conduct high quality research by extending the timeframes of their grant awards to relieve some of the pressure placed on researchers. Slow science grants provide researchers with the time to produce more thorough outputs. They are therefore a useful tool for promoting reproducibility but should not entirely replace faster-paced research looking to make cutting edge discoveries. A group of academics from the University of Oxford also argued that funders should consider specific funding mechanisms for reproducibility:

funders should consider extending the length and scope of awards to include specific funding for open research practices including constructing, maintaining, and storing data.127

The British Neuroscience Association justified this approach, noting that “a move towards producing fewer, more reproducible studies may ultimately result in faster progress through a reduction in research waste”.128

87. Evidence from many submitters also argued that an important part of giving academics the time they needed to work effectively was ensuring they had better job security. Evidence from the Alan Turing Institute suggested that imposing a three-year minimum contract for a postdoctoral researcher would give them some time to check the reproducibility of their work, and hopefully to reuse their own code and data for additional research questions.129

88. Short-term research grants place restrictive limitations on researchers, which can be to the detriment of research integrity and reproducibility. UKRI should consult with a representative sample of researchers to understand whether their grants allow them sufficient time and funding to do the work needed for ensuring their research is reproducible. UKRI should also implement a trial funding programme with an emphasis on ‘slower’ science.

89. Uncertainty in the academic job market, especially at earlier career stages, acts as a strong additional disincentive against the prioritisation of reproducibility by researchers. Research funders, including UKRI, should work to impose a three-year minimum contract for post-doctoral researchers in universities.

5 Solving reproducibility challenges which emerge in the communication of research

90. As outlined in evidence from the British Pharmacological Society, journals and publishers have significant influence in the research system: the expectation for researchers to communicate their results in peer-reviewed journals, thereby validating their work, means journal policies are a powerful tool to influence research practice.130

91. Much of the written evidence focussed on the communication of research and the issues for reproducibility that emerge because of the academic publication system. Dr Charlotte Brand, a Postdoctoral Research Associate at the University of Sheffield, was vocal about fundamental issues which existed in the research publishing business model:

Incredible opportunities exist in how we publish and discuss research outputs, but–with a few honourable exceptions - they are not pursued with enough vigour by a scholarly publishing industry which makes private profits from public money. [ … ] Journals follow a traditional publishing model that was designed for physically printed documents, not digitally distributed knowledge [ … ] The current system was designed with far fewer researchers in mind, and is not only inefficient but damaging to the reliability of the research record.131

This Chapter explores how improving reproducibility could be achieved by reforming elements of how research is communicated in journals.

Alternatives to the traditional publication model

92. The largest policy reform for traditional publication has been the shift towards open access, whereby research publications are made freely available online to all at no cost and with limited restrictions with regard to reuse.132 Publisher Wiley summarised the benefits of open access as: “Ensuring articles are free to access, distribute and reuse enables greater (and independent) scrutiny of published research”.133

93. There are two types of open access (OA):

a) Gold open access—Gold OA makes the final version of an article freely and permanently accessible for everyone, immediately after publication.

b) Green open access—Green OA, also referred to as self-archiving, is the practice of placing a version of an author’s manuscript into a repository, making it freely accessible for everyone.134

94. UKRI published its open access policy in August 2021.135 Research publications arising from the research UKRI funds, or that of one of its Councils, must now be freely accessible to the public and under conditions that allow reuse. The policy articulated the requirement for data access statements in research papers and highlighted the need for research data to be managed so it remained “as open as possible and as closed as necessary”.136 As a result of this policy shift, written evidence from the Publishers Association stated that they projected that 87% of the UK’s research articles would be available via Gold Open Access by 2022, a 57% increase since 2016.137 Dame Ottoline Leyser explained that open access “is part of the broader shift in publication culture that absolutely contributes to the move away from the idea that publishing in a particular journal with a focus on just the flashy results is the only thing that is valued in the system”.138

95. UKRI’s updated open access policy has been met with mixed reviews, mostly because the associated costs for publication are still high. For example, to publish under ‘gold’ open access with Nature Neuroscience (and other associated Nature journals), “the cost of publication is covered by an Article Processing Charge (APC) paid at the time of publication. The APC for Nature Neuroscience in 2022 is [ … ] £8,290”.139 In order to circumvent this high cost for researchers, publishers are looking to transform their funding streams. Nature cited the ‘transformative agreement’ that had been reached with the Max Planck Digital Library as an example. This agreement established a single annual fee to cover APCs for affiliated authors as well as journal subscriptions, thus removing the burden of paying APCs from individual investigators. Dame Ottoline contested concerns about high fees. She told us that this new funding system was “a key element of the open access policy”, and that it had “put the entire open access system on to a more financially sustainable footing”.140

96. Since our inquiry finished taking evidence, on 25 August 2022, the White House Office of Science and Technology Policy (OSTP) updated its guidance to federal agencies and departments, requiring them to update their public access policies so that all publications arising from taxpayer-funded research are made publicly accessible, without embargo or cost.141 The announcement received a similarly mixed reception to UKRI’s new policy.142

97. More radical alternatives to publication in a traditional academic journal have also emerged for researchers. These include Peer Community In (PCI), “a non-profit organization of researchers offering peer review, recommendation and publication of scientific articles in open access for free”.143 PCI offers researchers routes for quality control and assessment beyond traditional publication records. Another option is ‘notebooks’, such as those hosted on Jupyter.144 Notebooks are webpages that report on every stage of a research study openly, allowing other researchers to verify analysis and claims and allowing for rapid updates where errors are found.
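To illustrate what such openness can look like in practice, the sketch below shows the kind of self-contained, rerunnable analysis cell a notebook might expose: every step, from data source and random seed to the headline statistic, is visible and can be re-executed. It is a hypothetical example of our own; the dataset URL, column names and analysis are illustrative assumptions, not taken from any submission.

```python
# Minimal sketch (our own, hypothetical) of a fully rerunnable notebook cell.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

DATA_URL = "https://example.org/study-data.csv"   # hypothetical, openly deposited dataset
rng = np.random.default_rng(2023)                 # fixed seed so resampling reproduces exactly

data = pd.read_csv(DATA_URL)
treated = data.loc[data["group"] == "treatment", "outcome"].to_numpy()
control = data.loc[data["group"] == "control", "outcome"].to_numpy()

# Headline comparison, recomputed directly from the deposited raw data
t_stat, p_value = ttest_ind(treated, control, equal_var=False)

# Bootstrap confidence interval for the mean difference, using the fixed seed
diffs = [rng.choice(treated, treated.size).mean() - rng.choice(control, control.size).mean()
         for _ in range(5000)]
low, high = np.percentile(diffs, [2.5, 97.5])

print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}; 95% CI for difference: [{low:.2f}, {high:.2f}]")
```

Because every input and decision is on the page, another researcher can rerun the cell against the same deposited data and check whether the claimed result reappears.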

98. The trend towards blanket open access in the communication of scientific outputs is positive. UKRI and other research funders should continue to implement open access policies until the proportion of UK research articles available via open access reaches 100%, by the end of 2025 at the latest.

Journals should mandate greater transparency

99. As explored in the introduction, transparency is a fundamental precondition for reproducibility. Without access to an open record, researchers cannot analyse others’ data, code, or results. Evidence frequently promoted the ideal of transparency, with different solutions for achieving this discussed. Regardless of the degree of openness desired, the evidence, such as from the British Pharmacological Society, noted that the minimum standard should be that all the materials and methods must be given in sufficient detail to allow another researcher to reproduce the work.145 In order to facilitate openness, written evidence argued that publishers should introduce and uphold guidelines for the academics who publish with them. Evidence from a group of academics led by Thomas Rhys Evans at the University of Greenwich recommended that journals should maintain author guidelines that stipulated how research data and materials should or must be stored and shared, as a condition of publication.146

100. Biomathematics and Statistics Scotland went further, recommending that all primary data (appropriately sanitised) and code be published freely and openly alongside their publications.147 Professor Ben Goldacre made a similar suggestion, outlining his vision for an ideal research ecosystem which:

would probably be that all publicly resourced research, or all research done in British universities, would have a project idea attached to it; all of the relevant digital objects such as links to the dataset or the dataset itself; links to the code repository; links to the protocol; links to the final published paper; and links to any additional appendices, time-stamped, attached to data about the funding and so on.148

101. Currently, research outputs are frequently published without an associated link through to their open-source data and code. This prevents other researchers assessing work for its reproducibility. In all bar the most exceptional ethical and legal situations, researchers should share their research data and code alongside published outputs.

Journals should mandate data deposition

102. Data management is a key enabler for transparent practices because data must be discoverable and available for scrutiny if a study’s reproducibility is to be assessed. Speaking as a representative of the publishing industry, Dr Ritu Dhand, Chief Scientific Officer, Springer Nature, suggested that whilst openness with data was supported by publishers, it was not always mandated.149 Dr Elizabeth Moylan, who works in Research Integrity and Publishing Ethics at publisher Wiley, outlined how the extent of engagement with data deposition differed by journal: “some journals will encourage data sharing, some will expect to say something about where the data is, and some will mandate the sharing of that data”.150 In 2010, our predecessor Committee recommended that the climate science community work to become more transparent by publishing raw data and detailed methodologies.151 We are therefore disappointed to see that, on the whole, the evidence submitted to our inquiry suggested that publishers were not doing enough to encourage open data depositions. A submission jointly presented by Professor Marcus Munafò and Professor Hugh Shanahan noted that the growth in the diversity, size and speed of data being collected:

has not been matched by a corresponding increase in the infrastructure, training and support available to researchers to ensure these various elements of the research process are produced to a high standard, and shared effectively and ethically. What is available varies between institutions, and is not well coordinated across these institutions.152

Their submission went on to recommend a number of solutions for this problem, including more widespread adoption of the FAIR (Findability, Accessibility, Interoperability, and Reuse of digital assets) data principles, which were established in 2016.153 Professors Munafò and Shanahan also emphasised the importance of education, training, and support for key data roles, such as data stewards and software engineers, as discussed earlier in this Chapter.
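As an illustration only, the sketch below shows the sort of metadata record a FAIR-aligned data deposit typically carries, with each group of fields mapping onto one of the four principles. The field names and values are our own assumptions rather than a prescribed FAIR schema or the content of any submission.

```python
# Purely illustrative sketch (field names and values are ours) of FAIR-style
# metadata accompanying a deposited dataset.
dataset_record = {
    # Findable: a persistent identifier and rich descriptive metadata
    "identifier": "https://doi.org/10.xxxx/example-dataset",   # hypothetical DOI
    "title": "Replication dataset for a hypothetical study",
    "keywords": ["reproducibility", "replication", "open data"],
    # Accessible: retrievable by a standard, open protocol with clear conditions
    "access_url": "https://repository.example.org/datasets/1234",
    "access_conditions": "open, no registration required",
    # Interoperable: community formats and a documented variable dictionary
    "format": "text/csv",
    "variable_dictionary": "codebook.csv",
    # Reusable: licence and provenance sufficient for third-party reuse
    "licence": "CC-BY-4.0",
    "provenance": "generated by analysis.py, commit abc1234, 2023-01-15",
}
```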

103. Professor Munafò also drew our attention to the UK’s “excellent” digital infrastructure for depositing data. He argued that data deposits would be more effective, and knowledge more transferable, if the regulations for providing data deposits were coordinated across institutions in the UK—and suggested publishers could help incentivise this by requiring data used in research papers to be included and then published alongside them.154

104. Journals should collectively encourage researchers to employ the FAIR (Findability, Accessibility, Interoperability, and Reuse of digital assets) principles within their research and should mandate the deposition of research data in open-access repositories alongside the publication of research outputs.

Encouraging uptake of data management plans

105. Research funders can also place conditions on research grants which require researchers to adopt more reproducible working practices with data. One example is to place a mandate that researchers provide a data management plan along with grant applications. A data management plan is a written document that:

describes the data you expect to acquire or generate during the course of a research project, how you will manage, describe, analyse, and store those data, and what mechanisms you will use at the end of your project to share and preserve your data.155

ELIXIR-UK, an intergovernmental organisation that brings together life science resources from across Europe, recommended that UKRI require researchers to provide, and be held accountable for, data management plans. They argued that:

UKRI councils should require the inclusion of a costed activity related to research transparency in funding requests, with a research transparency plan extending the current data management plans, and investigators held to account that they deliver on the plan.156

UKRI’s written evidence indicated that it had implemented the necessary data management policies:

UKRI’s research councils have research data management policies that mandate and encourage deposition and curation of research data that underpin research findings and for some time the councils have led practice in requiring data management plans.157

106. We welcome UKRI’s use of data management plans. A continued emphasis on their importance as a condition of research funding is necessary.

Publication of replications and null results

107. Others acknowledged that elements of the publication system were reforming and improving, though evidence mostly concluded that overall, the state of the publishing sector was not conducive to reproducible work. The Royal Academy of Engineering summarised why this was, outlining the incentives that the publication system shaped, and explaining how they worked against a prioritisation of reproducibility:

many reproducibility-related issues relate to the incentives and publishing system of research, as well as to the timescales involved in research. Incentives underpinning aspects of the research process are shaped by incentives in the publishing system. The resulting focus is on novel, publishable results, and do not incentivize careful thinking on reproducibility - there are also limited rewards for considering, conducting and publishing replication studies.158

108. Evidence from Lewis Halsey, Professor of Environmental Physiology at the University of Roehampton, further clarified how the system primarily rewarded novelty and significance, to the detriment of studies which sought to verify existing knowledge.159

109. Similarly, Biomathematics and Statistics Scotland detailed how this system affected scientists’ behaviours and priorities by encouraging them to “propose research that prioritises quantity over quality, to favour currently ‘hot’ research areas, and to provide exaggerated claims (both in terms of significance, impact, and novelty)”.160

110. The pressure to publish has in turn led to a positive publication bias in the academic record, whereby positive research findings are more likely to be reported and published. Different attempts have been made to quantify the scale of this bias. One notable 2016 study reviewed 25 years of biomedical articles and found that 96% of reported findings were ‘statistically significant’, which would be a mathematical impossibility if the published record was representative of all research carried out.161
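A back-of-envelope calculation illustrates the point. The inputs below are our own illustrative assumptions, not figures from the cited study: under them, fewer than half of all findings would be expected to reach statistical significance, and even the most optimistic limiting case falls well short of 96%.

```python
# Our own illustrative calculation of the share of results expected to be
# statistically significant if the published record mirrored all research done.
alpha = 0.05          # false-positive rate at a conventional significance threshold
power = 0.80          # conventional target power for a well-designed study
share_true = 0.50     # assumed share of tested hypotheses that are actually true

expected_significant = share_true * power + (1 - share_true) * alpha
print(f"Expected share of significant findings: {expected_significant:.0%}")   # ~42%

# Even in the limiting case where every tested hypothesis is true and every
# study achieves 80% power, only around 80% of findings would be significant,
# well short of the 96% observed in the published record.
```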

111. Research funders were also repeatedly called upon in the written evidence to provide more dedicated funding for studies “that aim to reproduce and/or replicate findings”.162 Under the current funding system, replication studies often fail to attract funding as their proposals are competing against novel cutting edge pitches. However, far from representing a waste of funds by not progressing novel ideas, the University of Edinburgh stressed in its evidence that replication studies provided exceptional value for money, in that they had a high probability (40–80%, depending on the field) of overturning existing knowledge.163 Academics at the Department for Psychology at the University of York suggested that a UK-based replications funding scheme be set up to mirror the Dutch equivalent of UKRI (NWO), which launched a €3 million pilot programme, “Replication studies”, in 2016 to encourage researchers to carry out replication research. NWO also used the experience to gain insight into how replication research can be effectively included in its more mainstream research programmes.164 The pilot closed in 2022 after successfully funding 24 research projects, many of which are still in progress.

112. In written evidence following up Dame Ottoline’s appearance before the Committee, UKRI highlighted that:

Defining the exact proportion of UKRI funding allocated to replication studies, meta-research and long-term grants is challenging given that these factors are often components of larger projects or interventions that vary in definition and may change over time.165

UKRI set out the range of funding commitments already in place to support these aims but did not place a numerical value on their size.

113. Evidence from many, such as the Russell Group, recommended that publishers could also explore and invest in new formats for publishing null results, in collaboration with universities.166 The Academy of Medical Sciences provided an example of this practice, undertaken by the Wellcome Trust, which allowed their grant holders to publish negative or null research findings—including failed replication attempts—in their journal Wellcome Open Research.167 Evidence from submissions, including from the University of Edinburgh, also recommended that publishers should give greater prominence to replication studies.168

114. This position was countered in the evidence session with publishers. In summary, Dr Elizabeth Moylan testified that “we [journals] publish sound science results, but it has to come from the researchers. It is not the publishers saying that we will not publish that”.169 Dr Moylan also argued that journals were as willing to publish null results as positive ones. She later wrote to us to clarify this position, after we expressed our surprise at her comments: “Publishers with a range of journals publish both selective high-interest journals focused on novel results and, increasingly, journals publishing confirmatory and null results”.170

115. Providing adequate funding for replication studies is an important precondition for ensuring researchers have the resources necessary to conduct them. UKRI should learn from its Dutch equivalent, NWO, by developing a pilot programme to fund replication studies.

116. Aside from some notable exceptions, publishing routes for negative and confirmatory findings are not pursued thoroughly enough by the scholarly publishing industry. Publishers should review their journal portfolios to ensure that there are sufficient options for the publication of negative and confirmatory science, in line with the proportion of submissions which demand such routes.

Effective academic record maintenance

117. Written evidence from STM, the International Association of Scientific, Technical and Medical Publishers, acknowledged the importance of publishers’ work with the scholarly record. Publishers play a key role in facilitating reproducibility by ensuring that the scholarly record is verified, curated, disseminated and can be trusted.171 Evidence from Wiley affirmed that publishers had such a responsibility: “we [as publishers] have a duty of care to maintain the integrity of the published literature”.172 The Publishers Association noted this duty of care extended to the management of ‘publication ethics’, including article retractions, errata, corrigenda and expressions of concern.173 Dr Moylan also emphasised that all journals were members of COPE, the Committee on Publication Ethics, which provided standards and guidance for them to adhere to.174

118. Dr Ritu Dhand outlined the process of correction that journals followed when reproducibility issues with published pieces were flagged:

When we get criticism, we go through a very rigorous peer review process. We have to work out whether we are looking for a correction here? Is the person who is complaining at fault? Did they do something wrong? Are we looking at a correction where it is not the case that the whole research is no longer true but a small part of it might need to be corrected?175

She went on to confirm that: “If we are absolutely sure that that paper is not solid or sound and no longer conclusive, we retract it”.176

119. Dr Alina Chan, Postdoctoral Fellow at the Broad Institute of MIT and Harvard, told us that in her experience over the course of the covid-19 pandemic, the correction process was overly lengthy and difficult for “scientists who bring forwards constructive criticism and clarifications of existing publications, [which then] deters efforts to reproduce published work”.177 Dr Chan also explained that journals could improve reproducibility by ensuring that scientists offering corrections and retractions to published work were supported and had a reasonable time schedule to work to.

120. Written evidence submitted by Open WIN (Wellcome Centre for Integrative Neuroimaging), and Reproducible Research Oxford, University of Oxford, was similarly strong in its recommendations. It suggested exploring the feasibility of fining or otherwise sanctioning publishers which failed to correct or publish an expression of concern when a paper was found to contain unambiguous major errors.178

121. There was also strong support in the written evidence for journals ensuring their quality control role encompassed the use of the best technology and infrastructure. Evidence from Wiley noted that:

There is an increasing role for technology in helping publishers and researchers (whether this be as editors, peer reviewers or authors) check and report research as transparently as possible.

Technologies employed include:

a) Checks for text similarity (a minimal illustrative sketch follows this list); and

b) Use of machine learning and artificial intelligence software to identify discrepancies in images or statistical data.179
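As an illustration of the first of these, the sketch below shows a minimal text-similarity screen built on standard open-source tools. It is our own example, not a description of any publisher’s production system; the function name, corpus and threshold value are assumptions chosen for illustration.

```python
# Our own minimal sketch of a text-similarity screen: a new submission is
# compared against previously published abstracts using TF-IDF cosine
# similarity, and high-scoring matches are flagged for an editor to inspect.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar(submission: str, published: list[str], threshold: float = 0.85) -> list[int]:
    """Return indices of published texts whose similarity to the submission exceeds the threshold."""
    vectoriser = TfidfVectorizer(stop_words="english")
    matrix = vectoriser.fit_transform([submission] + published)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return [i for i, score in enumerate(scores) if score >= threshold]

# Hypothetical usage: 'corpus' stands in for a store of previously published abstracts
corpus = ["Abstract of an earlier paper...", "Abstract of another earlier paper..."]
print(flag_similar("Text of the newly submitted abstract...", corpus))
```

Flagged pairs would still require human judgement; the value of such tooling lies in directing scarce editorial attention, not in replacing it.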

122. The Alan Turing Institute similarly acknowledged the importance of quality infrastructure in its written evidence. They recommended establishing an Open Infrastructure Technology Oversight Committee, to provide a foundation for sharing best practices, aligning power, and calling for broader system change including negotiating with vendors on pricing, reciprocity, and values-alignment.180

123. Publishers have a vital role in the maintenance of the scholarly record. Publishers should support academics who report issues with published research in their journals and should commit to timely publication of research error corrections and retractions where necessary—in our view this process should not take longer than two months. Publishers should also commit to timely deployment of technology to support the quality of the published record.

6 Solving reproducibility challenges in the assessment of research

124. Reforming the way in which research is assessed offers a chance for reproducibility to be given greater priority when determining a researcher’s contributions. Over the course of our inquiry, we heard that researchers focus their priorities according to how they are going to be assessed. Evidence from the University of Manchester summarised where these priorities lay, and why this had a negative knock-on effect for reproducibility:

An individual’s research success still tends to be defined in terms of where their research is published, how many research outputs they generate, and how much grant income they are awarded. None of these measures necessarily correlate with research quality and neither are they dependent on the adoption of open and transparent research practices.181

As University of Oxford Professor Dorothy Bishop argued:

If we want to get rid of the fraudsters, we have to get rid of the high reward for the proxy indicators of quality and start rewarding genuine quality on metrics that cannot readily be gamed.182

125. The UK Reproducibility Network Local Network Leads said that to better promote reproducibility, institutions needed to shift reward and incentive structures:

Institutions should reward and encourage open and transparent research practice through their incentive structures for Individuals, for example by including consideration of open and transparent practice into their hiring, induction, probation, promotion, workload, and professional development policies and frameworks.183

126. There has been a drive within the research community to address the lack of incentivisation of reproducibility in research. For example, the British Neuroscience Association recommended the continued uptake of two key sets of principles which seek to reform how research is assessed:

a) The San Francisco Declaration on Research Assessment (DORA): Developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco, DORA “discourages use of journal-based metrics, such as JIF [the Journal Impact Factor], as a surrogate measure of the quality of individual research articles, or to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”.184 UKRI’s written evidence confirmed their support for DORA and stated that the principles had been embedded “in activities, including guidance for those reviewing grant applications”.185

b) The Hong Kong Principles: developed as part of the 6th World Conference on Research Integrity in June 2019, the five principles focus on the need to drive research improvement through ensuring that researchers are explicitly recognised and rewarded for behaviours that strengthen research integrity.186 These principles “offer one route for institutions to explicitly commit to recognising and rewarding researchers for behaviour that leads to trustworthy research by avoiding questionable research practices”.187

Whilst these principles provided the basis for discussions on reforming research assessment practices, the written evidence highlighted that further action was needed in:

c) the evaluation of researcher’s contributions via the Research Excellence Framework (REF);

d) the appraisal of a researcher’s contributions via their CV; and

e) the peer review assessment process.

127. Reward structures in academia disincentivise reproducibility by placing disproportionate value on secured funding and frequent publication in prestigious journals.

Research Excellence Framework (REF) reform

128. The Research Excellence Framework (REF) is the UK’s system for assessing the excellence of research in UK higher education providers. Once conducted, REF outcomes are used to inform the allocation of around £2 billion per year of public funding for universities’ research.188 The most recent exercise was carried out in 2021. Research England manages the REF on behalf of the four UK higher education funding bodies: Research England; the Scottish Funding Council; the Higher Education Funding Council for Wales; and the Department for the Economy, Northern Ireland.189 The REF assessment panels use the following criteria:

a) Outputs—’originality, significance and rigour’;

b) Impact—’reach and significance’; and

c) Environment—’vitality and sustainability’.190

129. The Future Research Assessment Programme is a piece of work led by the four UK higher education funding bodies to evaluate REF 2021 and to investigate other possible research evaluation models and approaches, looking to identify those that could encourage and strengthen the emphasis on delivering excellent research and impact. This programme of work was expected to conclude by late 2022,191 but a recent article in Research Professional News said that an update was now not expected until mid-June.192

130. Evidence from Professor Lewis Halsey at the University of Roehampton reflected on the importance of shifting the straightforward association between novel research and ‘excellence’.193 He argued, as did Dr Jessica Butler, that the REF needed to reform the criteria for the highest quality papers away from novelty in the assessment of originality. Dr Butler told us:

The best score you can get on the Research Excellence Framework that brings your university the most money is summarised by the phrase, “This paper was world leading in its originality, significance and rigour.” Is that what we want? Is that the best possible research that comes out of the UK? That phrase could be tweaked. Everything has to be original and significant all the time.194

131. The Future Research Assessment Programme (FRAP) is consulting on reforms for the assessment of UK higher education research. It should review the Research Excellence Framework assessment criteria to assure that transparency is a prerequisite of top-scoring research. It should also consider the effects of removing ‘originality’ from the top score bracket.

Researcher resumes

132. Professor Dame Ottoline Leyser summarised what a researcher’s CV entailed:

at present the standard academic CV consists of a list of your publications, a list of the funding that you have previously been awarded and, maybe, some indicators of the esteem in which you are held in the community, such as invitations to speak at conferences.195

Dame Ottoline explained that these lists then:

exclude any possibility of talking about elements such as how you mentor and train your students in high-quality, good research practice and support them in their career development more broadly, and how you contribute more widely to the research community in ways that build a community of high-quality, good practice, including integrity.196

UKRI, along with other funders, has acknowledged this by adopting the Royal Society’s Resume for Researchers format, a more narrative CV statement that encompasses a researcher’s wider achievements.197 UKRI’s written evidence summarised that:

The narrative format, with the opportunity to provide a wide range of evidence of research output, along with a requirement to provide evidence of contributions to research beyond the direct outputs of the research itself is one tool funders are using to recognise the full range of activities essential for research excellence.198

133. Researchers should be assessed on their broader contributions to their academic field, including time spent conducting voluntary peer review and promoting reproducibility and research integrity. Funders, led by UKRI, should move towards the exclusive use of the ‘resume for researchers’ format in funding calls by 2025.

Peer review reform

134. We heard that there were also top-down issues with the operation of the peer review system in academia. Because peer review is voluntary, academics increasingly do not have the time to extensively scrutinise others’ work on top of their numerous additional priorities, and it can therefore also be hard for journals to find expert reviewers.199 The Alan Turing Institute concluded that the current system limited how extensive peer review critique could be, particularly with regard to reproducibility, as:

many peer reviewers do not check the reproducibility of published papers. This is a resource (time, money and expertise) intensive activity that most peer reviewers cannot justify given that their work is not reimbursed.200

135. Peer review is a vital tool for ensuring the accuracy and integrity of research that is published. As Elsevier explained, in an effective research system, journals should employ peer review processes to interrogate authors’ use of methods, results and statistical analysis.201 The written evidence noted that the process failed to catch irreproducible work in some cases.202 Dr Jessica Butler, Analytical Lead and Research Fellow at the University of Aberdeen, explained that in practice this scrutiny rarely extends to the data and code underlying a study:

No peer reviewer—no one—will look at the code they wrote. They will not look at the data. In all my years of research, no peer reviewer has ever asked to look at the data behind my experiments and the code.203

136. Evidence explained that in order to produce an environment in which the research output was reproducible, journal publishers should subscribe to, and ensure compliance with, best-practice guidelines. Written evidence from the Publishers Association highlighted that over 11,000 journals were members of the Committee on Publication Ethics (COPE), which published ethical guidelines for peer reviewers. These guidelines set out the basic principles and standards to which all peer reviewers should adhere during the peer review process.204 Written evidence submitted by Open WIN (Wellcome Centre for Integrative Neuroimaging) and Reproducible Research Oxford, University of Oxford, concurred with this viewpoint. They argued that publishers should take an active role in ensuring the quality of peer review by publishing guidelines on the process and insisting that manuscripts adhered to standard reporting checklists.205

137. Evidence from the UK Reproducibility Network Stakeholder Engagement Group also suggested that publishers should assume responsibility for effectively training their reviewers.206 Biomathematics and Statistics Scotland wrote that another critical positive step would be for publishers to encourage transparency by committing to publishing peer reviews alongside publications.207 Wiley provided information about some of its journals which offered transparent peer review. In these cases, it enabled the open publication of an article’s entire peer review process including the peer reviewers’ reports, authors’ responses and editors’ decisions, which some suggested could raise the overall quality of the reviewer’s work.208

138. Written evidence from Professor Bishop pointed to unfairness in the system, whereby publishers generated significant profit from journals, whilst benefitting from free labour from academics. Whilst Professor Bishop did not necessarily endorse paying reviewers, she explained that there were ways of professionalising the service they provided to ensure that experts with the specific remit required were interrogating the paper thoroughly.209 Written evidence submitted by the National Centre for the Replacement, Refinement and Reduction of Animals in Research concurred that systematically involving statisticians in the peer review process meant that they could thoroughly check that the design and analysis of the experiments described in a manuscript were adequate to support the conclusion and inferences of the study—a level of scrutiny which the average peer reviewer did not currently have time to perform.210

139. One further way of improving the outcomes of peer review is to relocate it to an earlier stage of the research process, which allows researchers to incorporate methodological feedback into their work. This can be achieved through pre-registration. As Dr Charlotte Brand, Postdoctoral Research Associate at the University of Sheffield, pointed out in her written evidence, pre-registration involves publicly registering all data collection protocols and analysis plans, particularly hypotheses and predictions, before the data are collected or seen, as is already the default for clinical trials. This, she said, “could be mandated by research institutions, publishers, and research funders”. She continued:

It is of course natural for research plans to change as a project progresses; be that changing questions, adjusting collection protocols, or even altering experimental design. However, preregistration does not preclude change (a common misconception), but instead requires honest, chronological documenting of the changes along with their justifications. This documentation not only aids peer-review, but is also extremely valuable for other researchers who are undertaking similar projects.211
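A minimal sketch of what such a record might contain is set out below, including the chronological amendments log Dr Brand describes. The field names, hypothetical study details and amendments are our own illustration rather than the format of any particular registry.

```python
# Purely illustrative sketch (content and field names are ours) of a
# pre-registration record with a documented, justified amendments log.
preregistration = {
    "registered_on": "2023-01-10",          # time-stamped before any data are collected
    "hypotheses": [
        "H1: the intervention group scores higher on the primary outcome than controls",
    ],
    "data_collection": "two-arm randomised design, 64 participants per group",
    "analysis_plan": "Welch two-sample t-test on the primary outcome, alpha = 0.05",
    "exclusion_criteria": "participants failing the attention check are excluded",
    "amendments": [
        {
            "date": "2023-03-02",
            "change": "recruitment extended by four weeks",
            "justification": "slower than expected uptake; target sample size unchanged",
        },
    ],
}
```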

140. UK Reproducibility Network Chair Professor Munafò built on the theme of registered reports—calling for a “registered reports partnership model” solution, where funding and the principles for a study were agreed concurrently, with a guarantee of publication.212 This model was expanded upon (drawing on Professor Munafò’s research) in written evidence submitted by nine university researchers, led by Dr Thomas Rhys Evans at the University of Greenwich. They said:

Registered Report Funding Partnerships have been proposed as a method of extending the RR model by integrating it with the grant funding process so that researchers receive both funding and in principle acceptance for publication based on the integrity of the theory and methods. Combining grant funding and publication decisions in this way may help to streamline these processes and reduce the burden on reviewers, while also providing the aforementioned benefits of RRs in reducing questionable research practices and publication bias.213

141. Evidence submitted by Dr Thomas Rhys Evans et al called for more “structural support” in order to “implement RR [registered reports] more widely”, including “wider journal adoption”—currently, only 300 journals (out of a globally estimated 30,000)214 offer registered report formats. They argued that registered reports were “valuable amid ongoing concerns of widespread ‘false-positive findings’ in the published literature. Indeed, the rate of supported hypotheses is much lower among RRs than conventional research articles”.215 Written evidence submitted by Dr Stephen Bradley et al argued that publishers could play a key role in encouraging the “uptake of Registered Reports”, promoting the “vetting of study plans prior to commencement” and “reduce publication bias against ‘negative’ or less newsworthy results”. In addition, they could assist in “establish[ing] a central searchable platform to host pre-registration of all publicly funded research, on which documentation such as protocols and where possible study data is also deposited”.216

142. Peer review should not be viewed as a binary measure of quality versus unreliability for published papers. There is a wide range of competency, depth, and rigour in the analyses carried out during peer review, as time-poor academics often cannot conduct detailed scrutiny. Where possible, journals should seek to publish peer review comments to improve transparency in the research publication process and to deter paper mills.

143. The ‘registered report partnership model’ offers a good opportunity for researchers to have their methodologies peer reviewed at an early stage in the research process, allowing researchers to incorporate feedback into their research plans. Within this model, funding and the principles for a study are agreed concurrently, with a guarantee of publication regardless of experiment outcomes. Publishers and funders should work together to offer a ‘registered report partnership model.’ Not only will this benefit researchers who will receive feedback on their research plans and guaranteed publication, but greater transparency in research methodologies will be achieved, which should have a positive impact on research reproducibility.

Concluding remarks

144. We hope that this Report has raised practicable and effective solutions to the challenges presented by a lack of reproducibility in research. As we have argued, maximising the integrity and utility of the research system is a vital element of realising the UK’s science superpower ambitions and, if realised, will deliver great benefits and avert significant risks across the board.

Conclusions and recommendations

The extent and impact of reproducibility challenges within UK research

1. Although qualitative evidence indicates a potentially substantial scale of research integrity issues in the UK, there is a lack of quantitative evidence, including on the relative significance of the different causes of problems. This can only hamper efforts to evaluate damage being caused to the UK research sector in terms of culture, performance, reputation and economic value—now and in the future. This in turn prevents the design of proportionate and effective solutions to any problems. (Paragraph 27)

2. Though the specific problems faced differ, all research disciplines are affected by the systemic issues that limit reproducibility. The established view that issues are concentrated in the social and medical sciences is outdated and tackling reproducibility challenges should be a priority in every research field, regardless of discipline and methodology. (Paragraph 32)

3. Despite significant delays, we welcome the establishment of UK Committee on Research Integrity (UK CORI) as a potential answer to existing research integrity challenges. We ask UK CORI to commit to produce an annual state of the nation report on research integrity, including reproducibility, issues. These reports should contain a breakdown of the scale of these issues across different disciplines. They should also contain action plans for addressing identified issues. UK CORI should also publish an action plan detailing how it will carry out each of the steps set out in its strategy, including timelines and what resources will be committed for the activities it plans to carry out. (Paragraph 43)

4. It is disappointing that UK CORI’s recently published strategy did not mention reproducibility, especially since our inquiry highlighted that this is a major research integrity issue. UK CORI should make sure reproducibility challenges are given due attention and not overlooked in deference to other pressing research integrity issues. UK CORI should develop a sub-committee to focus on reproducibility challenges in research. This sub-committee should establish the relative weight of reproducibility within research integrity concerns, and UK CORI should use this evidence to plan its prioritisation. (Paragraph 44)

5. The increased application of artificial intelligence (AI) in scientific research presents a challenge to traditional research methods. The UK Committee on Research Integrity (UK CORI) should specifically: investigate the impact of deploying AI—and other increasingly complex software—on reproducibility in research; consider whether specialised action is needed; and set out these findings in its annual reports. (Paragraph 45)

6. Whilst significant reproducibility challenges are faced in research, to refer to the sum of these issues as a “crisis” risks detracting from the many successes of the UK’s scientific research base. Nonetheless, there is a need for action to address the significant problems caused by the prevalence of reproducibility problems in the scientific community. (Paragraph 50)

Solving reproducibility challenges which emerge as research is conducted

7. The UK lacks an established infrastructure for responding to research misconduct cases. The UK Government should lead on a co-produced framework with the UK Reproducibility Network, UKRIO and UK CORI, which sets out the roles and expectations for key actors when cases of misconduct are identified. (Paragraph 60)

8. The UK Government should assess the benefits that an additional body, set up to investigate malpractice, could bring to the UK’s research integrity governance architecture. (Paragraph 61)

9. Reproducibility issues are, in the main, not the result of deliberate bad practice. Many of the incentives faced by individuals conducting research act against reproducibility. Whilst individuals must take responsibility for conducting work which prioritises robust analysis and transparency, and for promoting the importance of reproducibility within their research field, we recognise that aspects of the academic process, such as time pressures and the publication model, act against the promotion of research integrity and reproducibility. Therefore, the research community, including research institutions and publishers, should work alongside individuals to create an environment where research integrity and reproducibility are championed. (Paragraph 68)

10. Research institutions should model a culture of reproducibility by managing inordinate pressures on academics and encouraging the prioritisation of reproducibility in research outputs. This extends to encouraging openness around mistakes and their correction. In collaboration with the Higher Education Sector, Universities UK should implement a coordinated policy on minimum protected research time for research staff. (Paragraph 72)

11. Statistical experts and software developers are insufficiently recognised and remunerated within the university research sector. Funders and universities should develop dedicated funding for the inclusion of statistical experts and software developers in research teams. In tandem, universities should work on developing formalised, aspirational career paths for these professions. (Paragraph 76)

12. Research funders should implement stronger tests for the presence of adequate software and statistical skills within research teams at the outset of a funding application. Where these skills are perceived to be lacking, UKRI should consider the feasibility and cost of offering a dedicated methodological support system to research teams. (Paragraph 77)

13. Currently there is insufficient attention placed on reproducibility and research integrity training for university students and research professionals. Greater emphasis should be placed on the importance of reproducibility and research integrity in education and training at undergraduate, postgraduate and early career researcher stages. Part of this training at the undergraduate and postgraduate levels should include the routine production of replications. (Paragraph 82)

14. Institutions should incorporate mandatory reproducibility training and professional development plans for researchers across the course of their career. (Paragraph 83)

15. Short-term research grants place restrictive limitations on researchers, which can be to the detriment of research integrity and reproducibility. UKRI should consult with a representative sample of researchers to understand whether their grants allow them sufficient time and funding to do the work needed for ensuring their research is reproducible. UKRI should also implement a trial funding programme with an emphasis on ‘slower’ science. (Paragraph 88)

16. Uncertainty in the academic job market, especially at earlier career stages, acts as a strong additional disincentive against the prioritisation of reproducibility by researchers. Research funders, including UKRI, should work to impose a three-year minimum contract for post-doctoral researchers in universities. (Paragraph 89)

Solving reproducibility challenges which emerge in the communication of research

17. The trend towards blanket open access in the communication of scientific outputs is positive. UKRI and other research funders should continue to implement open access policies until the proportion of open-access research outputs reaches 100%, by the end of 2025 at the latest. (Paragraph 98)

18. Currently, research outputs are frequently published without an associated link through to their open-source data and code. This prevents other researchers from assessing work for its reproducibility. In all bar the most exceptional ethical and legal situations, researchers should share their research data and code alongside published outputs. (Paragraph 101)

19. Journals should collectively encourage researchers to employ the FAIR (Findability, Accessibility, Interoperability, and Reuse of digital assets) principles within their research and should mandate the deposition of research data in open-access repositories alongside the publication of research outputs. (Paragraph 104)

20. We welcome UKRI’s use of data management plans. A continued emphasis on their importance as a condition of research funding is necessary. (Paragraph 106)

21. Providing adequate funding for replication studies is an important precondition for ensuring researchers have the resources necessary to conduct them. UKRI should learn from its Dutch equivalent, NWO, by developing a pilot programme to fund replication studies. (Paragraph 115)

22. Aside from some notable exceptions, publishing routes for negative and confirmatory findings are not pursued thoroughly enough by the scholarly publishing industry. Publishers should review their journal portfolios to ensure that there are sufficient options for the publication of negative and confirmatory science, in line with the proportion of submissions which demand such routes. (Paragraph 116)

23. Publishers have a vital role in the maintenance of the scholarly record. Publishers should support academics who report issues with published research in their journals and should commit to timely publication of research error corrections and retractions where necessary—in our view this process should not take longer than two months. Publishers should also commit to timely deployment of technology to support the quality of the published record. (Paragraph 123)

Solving reproducibility challenges in the assessment of research

24. Reward structures in academia disincentivise reproducibility by placing disproportionate value on secured funding and frequent publication in prestigious journals. (Paragraph 127)

25. The Future Research Assessment Programme (FRAP) is consulting on reforms for the assessment of UK higher education research. It should review the Research Excellence Framework assessment criteria to ensure that transparency is a prerequisite of top-scoring research. It should also consider the effects of removing ‘originality’ from the top score bracket. (Paragraph 131)

26. Researchers should be assessed on their broader contributions to their academic field, including time spent conducting voluntary peer review and promoting reproducibility and research integrity. Funders, led by UKRI, should move towards the exclusive use of the ‘resume for researchers’ format in funding calls by 2025. (Paragraph 133)

27. Peer review should not be viewed as a binary measure of quality versus unreliability for published papers. There is a wide range of competency, depth, and rigour in the analyses carried out during peer review, as time-poor academics often cannot conduct detailed scrutiny. Where possible, journals should seek to publish peer review comments to improve transparency in the research publication process and to deter paper mills. (Paragraph 142)

28. The ‘registered report partnership model’ offers a good opportunity for researchers to have their methodologies peer reviewed at an early stage in the research process, allowing researchers to incorporate feedback into their research plans. Within this model, funding and the principles for a study are agreed concurrently, with a guarantee of publication regardless of experiment outcomes. Publishers and funders should work together to offer a ‘registered report partnership model.’ Not only will this benefit researchers who will receive feedback on their research plans and guaranteed publication, but greater transparency in research methodologies will be achieved, which should have a positive impact on research reproducibility. (Paragraph 143)

Formal minutes

Wednesday 26 April 2023

Greg Clark, in the Chair

Aaron Bell

Tracey Crouch

Rebecca Long-Bailey

Stephen Metcalfe

Carol Monaghan

Draft Report (Reproducibility and Research Integrity), proposed by the Chair, brought up and read.

Ordered, That the draft Report be read a second time, paragraph by paragraph.

Paragraphs 1 to 144 read and agreed to.

Summary agreed to.

Resolved, That the Report be the Sixth Report of the Committee to the House.

Ordered, That the Chair make the Report to the House.

Ordered, That embargoed copies of the Report be made available, in accordance with the provisions of Standing Order No. 134.

Adjournment

Adjourned till Wednesday 3 May 2023 at 9.20am.


Witnesses

The following witnesses gave evidence. Transcripts can be viewed on the inquiry publications page of the Committee’s website.

Wednesday 01 December 2021

Professor Dorothy Bishop, Professor of Developmental Neuropsychology, University of Oxford; Professor Marcus Munafò, Chair, UK Reproducibility Network Steering Group (Q1–26)

Dr Ivan Oransky, Co-Founder, Retraction Watch, Editor-in-Chief, Spectrum, Distinguished Writer in Residence, New York University’s Arthur Carter Journalism Institute; Dr Janine Austin Clayton, Associate Director for Research on Women’s Health and Director, Office of Research on Women’s Health at the United States National Institutes of Health (Q27–54)

Professor Neil Ferguson OBE, Professor of Mathematical Biology, Imperial College London (Q55–71)

Wednesday 15 December 2021

Dr Ben Goldacre, Director, Nuffield Department of Primary Care Health Sciences, University of Oxford; Dr Jessica Butler, Analytical Lead and Research Fellow, University of Aberdeen (Q72–177)

Dr Ritu Dhand, Chief Scientific Officer, Springer Nature; Dr Elizabeth Moylan, Publisher, Research Integrity and Publishing Ethics, Wiley (Q98–125)

Richard Horton, Editor in Chief, The Lancet; The Viscount Ridley DL, Co-author, Viral: The Search for the Origin of Covid-19; Dr Alina Chan, Co-author, Viral: The Search for the Origin of Covid-19 (Q125–177)

Wednesday 19 January 2022

Dr Adrian Weller, Principal Research Fellow in Machine Learning, University of Cambridge; Professor Sebastian Vollmer, Professor for Application of Machine Learning, TU Kaiserslautern (Q178–206)

Wednesday 02 February 2022

Professor Dame Ottoline Leyser, Chief Executive, UK Research and Innovation (UKRI); James Parry, Chief Executive, UK Research Integrity Office (Q207–257)

George Freeman MP, Minister for Science, Research and Innovation, Department for Business, Energy & Industrial Strategy (Q258–291)


Published written evidence

The following written evidence was received and can be viewed on the inquiry publications page of the Committee’s website.

RRE numbers are generated by the evidence processing system and so may not be complete.

1 Anonymised (RRE0011)

2 Anonymised (RRE0025)

3 Arnold, Dr Becky (Postdoctoral researcher, University of Keele) (RRE0005)

4 Association of Medical Research Charities (RRE0074)

5 Association of Research Managers and Administrators - ARMA UK (RRE0079)

6 Bates, Timothy (Professor, University of Edinburgh) (RRE0006)

7 Biochemical Society (RRE0075)

8 Biomathematics & Statistics Scotland (RRE0053)

9 Bishop, Professor Dorothy (Professor of Developmental Neuropsychology, University of Oxford) (RRE0013); (RRE0095)

10 Bradley, Stephen (GP & Clinical Research Fellow, University of Leeds); DeVito, Nicholas (Doctoral Research Fellow, University of Oxford); Lloyd, Kelly (PhD Researcher, University of Leeds); Butler, Jess (Research Scientist, University of Aberdeen); Mellor, David (Director of Policy, Center for Open Science (USA)); and Logullo, Patricia (Postdoctoral Meta-Researcher, University of Oxford) (RRE0061)

11 Brand, Dr Charlotte (Postdoctoral Research Associate, University of Sheffield) (RRE0026)

12 Brauer, Dr Rene (visiting scholar, University of Eastern Finland) (RRE0020)

13 British Neuroscience Association (RRE0051)

14 British Pharmacological Society (RRE0054)

15 British Psychological Society (RRE0052)

16 CLOSER, the home of longitudinal research (RRE0078)

17 Catt, Brian (Energy Consultant, Eurochannel) (RRE0085)

18 Chan, Dr Alina (RRE0098)

19 Coventry University Group (RRE0071)

20 Department for Business, Energy and Industrial Strategy (RRE0072); (RRE0102)

21 Drury, Professor John (Professor of Social Psychology, University of Sussex); Bauld, Professor Linda (Bruce and John Usher Professor of Public Health, University of Edinburgh); Munafò, Professor Marcus (Professor of Biological Psychology, University of Bristol); Brown, Professor Jamie (Professor of Behavioural Science, University College London); Dienes, Professor Zoltan (Professor in Experimental Psychology, University of Sussex); Michie, Professor Susan (Professor of Health Psychology, University College London); Stokoe, Professor Elizabeth (Professor of Social Interaction, Loughborough University); Templeton, Dr Anne (Lecturer in Social Psychology, University of Edinburgh); and West, Professor Robert (Emeritus Professor of Health Psychology, University College London) (RRE0003)

22 ELIXIR-UK (RRE0069)

23 EMBL-EBI (RRE0058)

24 Eastern Arc (RRE0046)

25 Elsevier (RRE0049)

26 Evans, Thomas Rhys; Pownall, Madeleine; Collins, Elizabeth; Henderson, Emma L; Pickering, Jade; O’Mahony, Aoife; Zaneva, Mirela; Jaquiery, Matthew; and Dumbalska, Tsvetomira (RRE0007)

27 FORRT - A Framework for Open and Reproducible Research Training (RRE0080)

28 Grigg, Professor Jonathan (Professor of Paediatric Respiratory Medicine, Deputy Dean for Research Integrity, Queen Mary University of London) (RRE0008)

29 Guy’s and St Thomas’ NHS Foundation Trust (RRE0040)

30 Halsey, P Lewis (Professor of Environmental Physiology, University of Roehampton) (RRE0009)

31 Harrington, (Scientist, Keele University) (RRE0001)

32 Hatton, Professor Les (Emeritus Professor of Forensic Software Engineering, Kingston University); and Warr, Professor Greg (Emeritus Professor of Biochemistry and Molecular Biology, Medical University of South Carolina) (RRE0031)

33 Health Research Authority (RRE0070)

34 HealthWatch UK (RRE0037)

35 Hobson, Dr Hannah (Lecturer in Psychology, University of York); de Bruin, Dr Angela (Lecturer in Psychology, University of York); and Horner, Dr Aidan (Senior Lecturer in Psychology, University of York) (RRE0010)

36 Imperial College London (RRE0064)

37 Institute for Scientific Information, Clarivate (RRE0077)

38 International Institute for Conservation of Historic and Artistic Works (RRE0060)

39 Iphofen, Dr Ron (Independent Consultant, Independent) (RRE0012)

40 Keirnan, Mr Richard (CEO, Pelory Limited) (RRE0084)

41 King’s College London (RRE0091)

42 Kolstoe, Dr Simon (Reader in Bioethics, University of Portsmouth) (RRE0014)

43 Loryman, Chris (RRE0004)

44 Marsden, Prof. Emma (Professor of Applied Linguistics (Education Dept), University of York); and Bolibaugh, Dr. Cylcia (Lecturer in Applied Linguistics (Education dept), University of York) (RRE0017)

45 Marston, Dr Hannah R (Research Fellow, The Open University); Morgan, Dr Deborah J. (Senior Research Fellow, Swansea University); Wilson-Menzfeld, Dr Gemma (Senior Lecturer, Northumbria University); and Turner, Mr Robbie S. (Partner and Senior Consultant, Spektrum Consulting) (RRE0021)

46 MQ: Mental Health Research (RRE0063)

47 Marchant, Dr Paul (Retired Chartered Statistician, Formerly of Leeds Beckett University and Visiting Research Fellow of the University of Leeds) (RRE0018)

48 Munafo, Professor Marcus; and Shanahan, Hugh (RRE0101)

49 McKay, Professor Stephen (Distinguished Professor in Social Research, University of Lincoln) (RRE0066)

50 Moylan, Dr Elizabeth (Publisher, Research Integrity and Publishing Ethics, Wiley) (RRE0100)

51 Munafò, Professor Marcus (Professor of Biological Psychology and MRC Investigator, University of Bristol) (RRE0094); (RRE0096)

52 National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) (RRE0033)

53 National Physical Laboratory (RRE0057)

54 Oliveira, Catia M. (PhD student, University of York); Guttesen, Anna (PhD student, University of York); Olivier, Juliana (PhD student, University of York); and Sullivan, Emma (PhD student, University of York) (RRE0039)

55 People for the Ethical Treatment of Animals Foundation (RRE0082)

56 Reproducible Research Oxford, University of Oxford; and Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

57 Responsible Research in Practice (RRE0081)

58 Retraction Watch and The Center for Scientific Integrity (RRE0097)

59 Royal Statistical Society (RSS) (RRE0065)

60 STM (RRE0059)

61 Scottish Research Integrity Network (RRE0030)

62 Sense about Science (RRE0089)

63 Seymour, Michael (RRE0093)

64 Smith, Dr James Andrew (Senior Research Associate, Botnar Research Institute and Centre for Statistics in Medicine, University of Oxford); and Sandbrick, Jonas (Researcher, Future of Humanity Institute, University of Oxford) (RRE0034)

65 Smith, Professor Adrian (Secretary, Norecopa (Norway’s National Consensus Platform for the Replacement, Reduction and Refinement of Animal Experiments)) (RRE0027)

66 Software Sustainability Institute (RRE0038)

67 Springer Nature (RRE0047)

68 Stafford, Dr Tom (Research Practice Lead, University of Sheffield); and Brand, Dr Charlotte (Postdoctoral research associate, University of Sheffield) (RRE0032)

69 Sterri, Dr Aksel (Hosted Researcher, Oxford Uehiro Centre for Practical Ethics, University of Oxford); Brown, Dr Rebecca (Senior Research Fellow, Oxford Uehiro Centre for Practical Ethics, University of Oxford); Earp, Brian (Research Fellow, Oxford Uehiro Centre for Practical Ethics, University of Oxford); and Savulescu, Professor Julian (Uehiro Chair in Practical Ethics; Director, Oxford Uehiro Centre for Practical Ethics; Co-Director, Wellcome Centre for Ethics and Humanities University of Oxford, Oxford Uehiro Centre for Practical Ethics, University of Oxford) (RRE0076)

70 Taylor & Francis (RRE0023)

71 Taylor, Mr Christopher Marc (Chair, ISRCTN registry) (RRE0050)

72 The Academy of Medical Sciences (RRE0055)

73 The Alan Turing Institute (RRE0088)

74 The Centre for Education & Youth (RRE0048)

75 The Global Warming Policy Foundation (RRE0015)

76 The Publishers Association (RRE0056)

77 The Royal Academy of Engineering (RRE0029)

78 The Royal Society (RRE0087)

79 The Russell Group (RRE0086)

80 TranspariMED (RRE0092)

81 TranspariMED; Cochrane; and Transparency International Global Health Programme (RRE0024)

82 UCL (RRE0019)

83 UK Committee on Research Integrity (RRE0104); (RRE0105)

84 UK Reproducibility Network (institutional leads) (RRE0043)

85 UK Reproducibility Network (local network leads) (RRE0042)

86 UK Reproducibility Network (stakeholder engagement group) (RRE0044)

87 UK Reproducibility Network (steering group) (RRE0041)

88 UK Research Integrity Office (RRE0083)

89 UK Research and Innovation (UKRI) (RRE0090); (RRE0103)

90 Universities Allied for Essential Medicines UK (RRE0036)

91 University of Edinburgh (RRE0022)

92 University of Manchester (RRE0028)

93 Wiley (RRE0068)

94 Wilmshurst, Dr Peter (RRE0099)

95 protocols.io (RRE0045)


List of Reports from the Committee during the current Parliament

All publications from the Committee are available on the publications page of the Committee’s website.

Session 2022–23

1st: Pre-appointment hearing for the Executive Chair of Research England (HC 636)

2nd: UK space strategy and UK satellite infrastructure (HC 100)

3rd: My Science Inquiry (HC 618)

4th: The role of Hydrogen in achieving Net Zero (HC 99)

5th: Diversity and Inclusion in STEM (HC 95)

Session 2021–22

1st: Direct-to-consumer genomic testing (HC 94)

2nd: Pre-appointment hearing for the Chair of UK Research and Innovation (HC 358)

3rd: Coronavirus: lessons learned to date (HC 92)

Session 2019–21

1st: The UK response to covid-19: use of scientific advice (HC 136)

2nd: 5G market diversification and wider lessons for critical and emerging technologies (HC 450)

3rd: A new UK research funding agency (HC 778)


Footnotes

1 UK Research and Innovation, The UK’s research and innovation infrastructure: opportunities to grow our capability (October 2020), p2

2 Department for Business, Energy & Industrial Strategy Press Release, Government announces plans for largest ever R&D budget, (14 March 2022)

3 Universities UK, The Concordat to Support Research Integrity, (accessed 9 June 2022); Science & Technology Committee, Research Integrity, (HC 350; July 2018); and the Government response, (HC 1563; September 2018)

4 However, elements of research integrity, such as fraud or misconduct, that affect the reproducibility of research are considered in this report.

5 Science & Technology Committee, Reproducibility of Research inquiry launched, (accessed 9 June 2022)

6 Science & Technology Committee, Written Evidence - Reproducibility & research integrity, (accessed 14 March 2022)

7 Q72

8 Eastern Arc (RRE0046)

9 BMJ (RIN0081)

10 Eastern Arc (RRE0046)

11 The Russell Group (RRE0086)

12 King’s College London (RRE0091)

13 Q29

14 John P. A. Ioannidis, Meta-research: Why research on research matters, PLOS Biology, 2018

15 Elsevier (RRE0049)

16 Q38

17 Dr Aksel Sterri (Hosted Researcher at Oxford Uehiro Centre for Practical Ethics, University of Oxford); Dr Rebecca Brown (Senior Research Fellow at Oxford Uehiro Centre for Practical Ethics, University of Oxford); Brian Earp (Research Fellow at Oxford Uehiro Centre for Practical Ethics, University of Oxford); Professor Julian Savulescu (Uehiro Chair in Practical Ethics; Director, Oxford Uehiro Centre for Practical Ethics; Co-Director, Wellcome Centre for Ethics and Humanities University of Oxford at Oxford Uehiro Centre for Practical Ethics, University of Oxford) (RRE0076)

18 Stephen Bradley (GP & Clinical Research Fellow at University of Leeds); Nicholas DeVito (Doctoral Research Fellow at University of Oxford); Kelly Lloyd (PhD Researcher at University of Leeds); Jess Butler (Research Scientist at University of Aberdeen); David Mellor (Director of Policy at Center for Open Science (USA)); Patricia Logullo (Postdoctoral Meta-Researcher at University of Oxford) (RRE0061)

19 Stephen Bradley (GP & Clinical Research Fellow at University of Leeds); Nicholas DeVito (Doctoral Research Fellow at University of Oxford); Kelly Lloyd (PhD Researcher at University of Leeds); Jess Butler (Research Scientist at University of Aberdeen); David Mellor (Director of Policy at Center for Open Science (USA)); Patricia Logullo (Postdoctoral Meta-Researcher at University of Oxford) (RRE0061)

20 Gov.uk, We’re restoring Britain’s place as a scientific superpower, (accessed 9 June 2022)

21 HealthWatch UK (RRE0037), Springer Nature (RRE0047), Retraction Watch and The Center for Scientific Integrity (RRE0097), Dr Peter Wilmshurst (RRE0099)

22 Lee Harvey, Research fraud: a long-term problem exacerbated by the clamour for research grants, Quality in Higher Education, 2020

23 Shamoo, Adil E., and David B. Resnik, Misconduct in Research, Responsible Conduct of Research, 4th edition, Oxford University Press, 2022

24 Stuart Ritchie, Science Fictions, Penguin Random House UK, 2020

25 BMJ, Wakefield’s article linking MMR vaccine and autism was fraudulent, 2011.

26 BMJ, Measles cases in England and Wales rise sharply in 2008, 2009

27 The Guardian, Big trouble in the world of ‘Big Physics’, (18 September 2002)

28 Science, Physicist Fired for Falsified Data, (25 September 2002)

29 Nature, Woo Suk Hwang convicted, but not of fraud, (26 October 2009)

30 Nature, Stem-cell scientist found guilty of misconduct, (01 April 2014)

31 Science, STAP cells succumb to pressure, (13 June 2014)

32 BMJ, Disgraced surgeon Paolo Macchiarini convicted over experimental trachea surgery, (2022)

33 For Better Science, Macchiarini’s trachea transplant patients: the full list, (16 June 2017)

34 The Academy of Medical Sciences (RRE0055)

35 The Academy of Medical Sciences (RRE0055)

36 Monya Baker, 1,500 scientists lift the lid on reproducibility, Nature, 533 (2016), pp 452–454

37 Science & Technology Committee, Research Integrity, (HC 350; July 2018)

38 John P. A. Ioannidis, Meta-research: Why research on research matters, PLOS Biology, 2018

39 John P. A. Ioannidis, Why Most Published Research Findings Are False, PLOS Medicine, 2005

40 John P. A. Ioannidis, Why Most Published Research Findings Are False, PLOS Medicine, 2005

41 Steven Goodman and Sander Greenland, Why Most Published Research Findings Are False: Problems in the Analysis, PLOS Medicine, 2005

42 Monya Baker, 1,500 scientists lift the lid on reproducibility, Nature, 533 (2016), pp 452–454

43 Wellcome, What researchers think about the culture they work in, (accessed 19 April 2022)

44 Retraction Watch and The Center for Scientific Integrity (RRE0097)

45 Department for Business, Energy, and Industrial Strategy, International comparison of the UK research base, 2022, (accessed 9 June 2022)

46 Science & Technology Committee, Sixth Report of Session 2018–19, Research Integrity, (HC 350; July 2018), paragraph 128; and the Government response, (Seventh Special Report from the Committee, (HC 1563; September 2018), Appendix 1, paragraphs 12-14)

47 UK Research and Innovation (UKRI) (RRE0090)

48 The Academy of Medical Sciences (RRE0055)

49 UK Research Integrity Office (RRE0083)

50 STM (International Association of Scientific, Technical and Medical Publishers) (RRE0059)

51 Imperial College London (RRE0064)

52 Reproducible Research Oxford, University of Oxford, Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

53 Coventry University Group (RRE0071)

54 British Psychological Society (RRE0052); Professor Dorothy Bishop (Professor of Developmental Neuropsychology at University of Oxford) (RRE0013)

55 The Royal Academy of Engineering (RRE0029)

56 UCL (RRE0019)

57 Q181

58 State of AI Report 2021, State of AI Report 2021, (accessed 19 April 2022)

59 Q178

60 Q179

61 UKRI, Promoting research integrity across the UK, (15 July 2021)

62 UK Research and Innovation, Inaugural co-chairs of UK Committee on Research Integrity announced, (accessed 19 April 2022)

63 Q208

64 UK Research and Innovation (UKRI) (RRE0090)

65 Q43

66 Professor Dorothy Bishop (Professor of Developmental Neuropsychology at University of Oxford) (RRE0013)

67 Imperial College London (RRE0064)

68 The Russell Group (RRE0086)

69 British Neuroscience Association (RRE0051); UK Research and Innovation (UKRI) (RRE0090)

70 UK Research and Innovation (UKRI) (RRE0090)

71 British Neuroscience Association (RRE0051)

72 UCL (RRE0019); Dr Tom Stafford (Research Practice Lead at University of Sheffield); Dr Charlotte Brand (Postdoctoral research associate at University of Sheffield) (RRE0032)

73 Q209

74 British Psychological Society (RRE0052); Eastern Arc (RRE0046)

75 Association of Medical Research Charities (RRE0074)

76 Q251

77 UK Committee on Research Integrity, Strategic plan 2023 to 2025, (23 February 2023)

78 UK Committee on Research Integrity, Strategic plan 2023 to 2025, (23 February 2023), p5

79 Monya Baker, 1,500 scientists lift the lid on reproducibility, Nature, 533 (2016), pp 452–454; a definition for a “slight crisis” was not provided by the authors.

80 Springer Nature (RRE0047)

81 Sense about Science (RRE0089)

82 Q275

83 The Russell Group (RRE0086)

84 Biomathematics and Statistics Scotland (RRE0053) and see Open Science

85 British Psychological Society (RRE0052)

86 Reproducible Research Oxford, University of Oxford, Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

87 Timothy Bates (Professor at University of Edinburgh) (RRE0006)

88 Department for Business, Energy & Industrial Strategy, UK Research and Development Roadmap, (accessed 19 April 2022), p51

89 Department for Business, Energy & Industrial Strategy, Research and development (R&D) people and culture strategy, (accessed 19 April 2022), p22 and p30

90 Q276

91 STM (RRE0059)

92 Springer Nature (RRE0047)

93 HealthWatch UK (RRE0037)

94 Retraction Watch and The Center for Scientific Integrity (RRE0097)

95 Professor Dorothy Bishop (Professor of Developmental Neuropsychology at University of Oxford) (RRE0013)

96 Professor Dorothy Bishop (Professor of Developmental Neuropsychology at University of Oxford) (RRE0013)

97 National Physical Laboratory (RRE0057)

98 The Alan Turing Institute, The Turing Way, (accessed 31 March 2022)

99 ELIXIR-UK (RRE0069)

100 Jesse M. Alston and Jessica A. Rick, A Beginner’s Guide to Conducting Reproducible Research, The Bulletin of the Ecological Society of America, vol 102, issue 2 (2021)

101 Catia M. Oliveira (PhD student at University of York); Anna Guttesen (PhD student at University of York); Juliana Olivier (PhD student at University of York); Emma Sullivan (PhD student at University of York) (RRE0039)

102 The Russell Group (RRE0086)

103 UK Research Integrity Office (RRE0083)

104 British Pharmacological Society (RRE0054)

105 Reproducible Research Oxford, University of Oxford, Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

106 Eastern Arc (RRE0046)

107 Institute for Scientific Information, Clarivate (RRE0077)

108 Timothy Bates (Professor at University of Edinburgh) (RRE0006)

109 UK Reproducibility Network (local network leads) (RRE0042)

110 Association of Medical Research Charities (RRE0074)

111 Dr Paul Marchant (Retired Chartered Statistician at Formerly of Leeds Beckett University and Visiting Research Fellow of the University of Leeds) (RRE0018)

112 Biomathematics & Statistics Scotland (RRE0053)

113 Q44

114 Professor Marcus Munafò; Hugh Shanahan (RRE0101)

115 The Academy of Medical Sciences (RRE0055)

116 ELIXIR-UK (RRE0069)

117 Q81

118 Royal Statistical Society (RSS) (RRE0065)

119 Imperial College London (RRE0064); Royal Statistical Society (RSS) (RRE0065)

120 Imperial College London (RRE0064)

121 The Royal Academy of Engineering (RRE0029)

122 British Pharmacological Society (RRE0054)

123 British Psychological Society (RRE0052)

124 British Neuroscience Association (RRE0051)

125 protocols.io (RRE0045)

126 Mr Christopher Marc Taylor (Chair at ISRCTN registry) (RRE0050)

127 Reproducible Research Oxford, University of Oxford, Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

128 British Neuroscience Association (RRE0051)

129 The Alan Turing Institute (RRE0088)

130 British Pharmacological Society (RRE0054)

131 Dr Charlotte Brand (Postdoctoral Research Associate at University of Sheffield) (RRE0026)

132 Springer, What is Open Access?, (accessed 19 April 2022)

133 Wiley (RRE0068)

134 Springer, What is Open Access?, (accessed 19 April 2022)

135 UK Research and Innovation, UKRI open access policy, (6 August 2021)

136 UK Research and Innovation (UKRI) (RRE0090)

137 The Publishers Association (RRE0056)

138 Q236

139 Editorial, Nature Neuroscience offers open access publishing, Nature Neuroscience, 25, 1 (2022)

140 Q239

141 The White House, OSTP Issues Guidance to Make Federally Funded Research Freely Available Without Delay, (25 August 2022); Nature, US government reveals big changes to open-access policy, (30 August 2022)

142 Chemistry World, White House says federally-funded research papers must be open access from 2026, (30 August 2022)

143 Peer Community In, (accessed 22 June 2022)

144 Jupyter, Jupyter Notebook: The Classic Notebook Interface, (accessed 4 May 2022)

145 British Pharmacological Society (RRE0054)

146 Thomas Rhys Evans; Madeleine Pownall; Elizabeth Collins; Emma L Henderson; Jade Pickering; Aoife O’Mahony; Mirela Zaneva; Matthew Jaquiery; Tsvetomira Dumbalska (RRE0007)

147 Biomathematics & Statistics Scotland (RRE0053)

148 Q76

149 Q98

150 Q107

151 Science and Technology Committee, The disclosure of climate data from the Climatic Research Unit at the University of East Anglia, (HC 387-I; March 2010), para 136

152 Professor Marcus Munafò; Hugh Shanahan (RRE0101)

153 Mark D. Wilkinson et al, The FAIR Guiding Principles for scientific data management and stewardship, Scientific data, 3, 160018 (2016)

154 Q48

155 Stanford Libraries, Data management plans, (accessed 19 April 2022)

156 ELIXIR-UK (RRE0069)

157 UK Research and Innovation (UKRI) (RRE0090)

158 The Royal Academy of Engineering (RRE0029)

159 P Lewis Halsey (Professor of Environmental Physiology at University of Roehampton) (RRE0009)

160 Biomathematics & Statistics Scotland (RRE0053)

161 David Chavalarias et al, Evolution of Reporting P Values in the Biomedical Literature, 1990–2015, JAMA, (March 2016), vol 315 no 11, pp 1141-8

162 Dr Hannah Hobson (Lecturer in Psychology at University of York); Dr Angela de Bruin (Lecturer in Psychology at University of York); Dr Aidan Horner (Senior Lecturer in Psychology at University of York) (RRE0010)

163 University of Edinburgh (RRE0022)

164 NWO, Replication Studies, (accessed 19 April 2022)

165 UK Research and Innovation (UKRI) (RRE0103)

166 The Russell Group (RRE0086)

167 The Academy of Medical Sciences (RRE0055)

168 University of Edinburgh (RRE0022)

169 Q105

170 Dr Elizabeth Moylan (Publisher, Research Integrity and Publishing Ethics at Wiley) (RRE0100)

171 STM (RRE0059)

172 Wiley (RRE0068)

173 The Publishers Association (RRE0056)

174 Q120

175 Q115

176 Q117

177 Dr Alina Chan (RRE0098)

178 Reproducible Research Oxford, University of Oxford, Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

179 Wiley (RRE0068)

180 The Alan Turing Institute (RRE0088)

181 University of Manchester (RRE0028)

182 Q51

183 UK Reproducibility Network (local network leads) (RRE0042)

184 The Declaration on Research Assessment (DORA) ‘What is DORA?’, (accessed 19 April 2022); British Neuroscience Association (RRE0051)

185 UK Research and Innovation (UKRI) (RRE0090)

186 World Conferences on Research Integrity, Hong Kong Principles, (accessed 19 April 2022)

187 British Neuroscience Association (RRE0051)

188 UK Research & Innovation, How Research England supports research excellence, (accessed 19 April 2022)

189 UK Research & Innovation, How Research England supports research excellence, (accessed 19 April 2022)

190 Research Excellence Framework 2021, Index of revisions to the ‘Panel criteria and working methods’ (2019/02), (accessed 19 April 2022)

191 Department for Business, Energy and Industrial Strategy (RRE0072)

192 Research Professional News, Next REF could ‘liberate creativity’, hints Jessica Corner, (2 March 2023)

193 P Lewis Halsey (Professor of Environmental Physiology at University of Roehampton) (RRE0009)

194 Q89

195 Q226

196 Q226

197 The Royal Society, Résumé for Researchers, (accessed 19 April 2022)

198 UK Research and Innovation (UKRI) (RRE0090)

199 Dr Tom Stafford (Research Practice Lead at University of Sheffield); Dr Charlotte Brand (Postdoctoral research associate at University of Sheffield) (RRE0032); The Alan Turing Institute (RRE0088)

200 The Alan Turing Institute (RRE0088)

201 Elsevier (RRE0049)

202 Chris Loryman (RRE0004)

203 Q83

204 The Publishers Association (RRE0056)

205 Reproducible Research Oxford, University of Oxford, Open WIN (Wellcome Centre for Integrative Neuroimaging), University of Oxford (RRE0067)

206 UK Reproducibility Network (stakeholder engagement group) (RRE0044)

207 Biomathematics & Statistics Scotland (RRE0053)

208 Wiley (RRE0068)

209 Professor Dorothy Bishop (Professor of Developmental Neuropsychology at University of Oxford) (RRE0095)

210 National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) (RRE0033)

211 Dr Charlotte Brand (Postdoctoral Research Associate at University of Sheffield) (RRE0026)

212 Q40

213 Thomas Rhys Evans; Madeleine Pownall; Elizabeth Collins; Emma L Henderson; Jade Pickering; Aoife O’Mahony; Mirela Zaneva; Matthew Jaquiery; Tsvetomira Dumbalska (RRE0007)

214 PublishingState.com, How Many Academic Journals are There in the World?, (accessed 4 May 2022)

215 Thomas Rhys Evans; Madeleine Pownall; Elizabeth Collins; Emma L Henderson; Jade Pickering; Aoife O’Mahony; Mirela Zaneva; Matthew Jaquiery; Tsvetomira Dumbalska (RRE0007)

216 Stephen Bradley (GP & Clinical Research Fellow at University of Leeds); Nicholas DeVito (Doctoral Research Fellow at University of Oxford); Kelly Lloyd (PhD Researcher at University of Leeds); Jess Butler (Research Scientist at University of Aberdeen); David Mellor (Director of Policy at Center for Open Science (USA)); Patricia Logullo (Postdoctoral Meta-Researcher at University of Oxford) (RRE0061)