Research integrity

4 Supporting and promoting the integrity of research

UKRIO’s role in supporting research integrity

47. The UK Research Integrity Office (UKRIO, established in 2006) is an advisory body for the research sector on matters relating to research integrity. It does not investigate misconduct, but instead offers “support to the public, researchers and organisations to further good practice in academic, scientific and medical research”, and welcomes enquiries on “any issues relating to the conduct of research, whether promoting good research practice, seeking help with a particular research project or investigating cases of alleged fraud and misconduct”.87 UKRIO’s aim is to “provide practical and proportionate advice, which the public and the research community may find useful”.88

48. UKRIO is funded by institutional subscriptions of £2,600 per year.89 The UKRIO website lists 80 such subscribers, of which 65 are Universities UK members. It therefore appears that at least 71 universities—i.e. the majority of the 136 UUK members—are not UKRIO subscribers.90 Other subscribers listed include universities outside the UK (including in Belgium) and various UK academies and institutes such as the Royal Society and the British Academy.

49. James Parry (CEO, UKRIO) argued that the organisation’s reliance on funding from universities themselves (the majority of subscribers) did not represent a conflict of interest, since “pretty much everyone pays a flat-rate subscription, we are not beholden to any one institution. If a university says, ‘We don’t like the way you are handling this particular case,’ we can simply say, ‘That is all very well and good. We are happy for you to unsubscribe’”.91

50. It is surprising that most UK universities are not subscribers to the UK Research Integrity Office. The result is that the profile and impact of UKRIO might be highest with the institutions which already choose to participate, rather than the ones that might need the most help. The default assumption for all universities should be that they are subscribers to UKRIO, unless they can explain why they do not need to use UKRIO’s advisory services. We recommend that the Government and Universities UK write jointly to all universities to encourage them to engage with UKRIO and consider subscribing to its services.

Creating a suitable ‘research culture’

51. Many of the submissions we received argued that there was a need to consider the incentives and pressures acting on researchers, and to investigate how ‘research culture’ could be changed to better support research integrity. Professor Marcus Munafò argued that:

Research integrity is the wrong focus, or at least only represents a small part of a much larger problem. That problem is the wider incentive structures that exist across science (or at least some scientific disciplines). […] Rather than focus on the rare cases of outright scientific fraud, we should address the more insidious systemic problems that impact on the wider scientific endeavour.

52. A 2014 report on the ‘culture’ of research produced by the Nuffield Council on Bioethics92 pointed to a broad set of factors as contributors to problems with research integrity.

Professor Dame Ottoline Leyser, representing the Nuffield Council on Bioethics, complained that research is “hyper-competitive and the rules for winning the competition are the wrong rules”.93 She argued that this meant that “at some level we have lost sight of what science actually is”:

Science is a method. It is a way of building models of the world that have both explanatory and predictive power. It is not about the ultimate quest for “Truth”. It is not about correct and incorrect; it is a progressive method for proposing, testing and rejecting or refining models of the world […] It moves forward extensively by being wrong […] The way things have gone in the research system, we have developed a culture where people are rewarded for being “right” and being “exciting” in some way. Those things have nothing to do with science.94

53. We were also told that the competitive research environment is off-putting for some people. Professor Bishop suggested that “sometimes the people we most want to keep in science, and who want to do the careful stuff, get so demoralised that they leave, and we are left with the people who think, ‘Oh well, I’ll tweak it because my boss says I should tweak it’”.95 Professor Lewandowsky and Professor Bishop argued that “most scientists start out motivated by pursuit of knowledge, but they can become demoralized if they see rewards going to those who adopt dubious practices to get ahead”.96 They called for institutions and funders to adopt “criteria that reward researchers for reproducible research rather than showy research, with a focus on quality rather than quantity”.97

54. The Royal Society suggested that a range of incentives within the system can work against research integrity:

labs that take extra time to verify their data might have fewer publications to show their funders, those scientists who decide against splitting their data to get multiple publications have fewer publications to enter into the [Research Excellence Framework]. There is no single aspect of the culture of research that could be changed to fix this, it is the cumulative effect of lots of different mechanisms and pressures.98

They argued that “systems of publishing, assessment and dissemination of work should be adjusted in order to incentivise ‘good behaviour’”.99 The Society is currently holding a series of workshops on research culture, creating what they describe as a “national and self-sustaining conversation” about how to embed a culture of research that maintains research excellence.100

55. We received some evidence that the pressures on researchers vary according to the way in which the institution is funded. The Director of the MRC Laboratory of Molecular Biology—a research council-funded institute rather than a university—told us that a healthy research culture “may be easier to achieve” in a research institute than in some university environments:

Freed from other responsibilities, the emphasis of our scientists is on their research output, and this determines their reputation and reward. With relatively secure funding, we take a long-term view, and a track record of persistently getting discoveries right is highly valued. Emphasis on the short term and on numbers of publications per se is not helpful. Core funding (of the entire institute) is decided by reviews every five years, and collective success is important as well as individual achievement. This creates peer pressure for everyone to do well, and colleagues do not hesitate to be critical of work perceived as shoddy. Funding of larger entities, in addition to grants to single groups, can help to create this culture. The ability to provide core resources to bridge gaps (e.g. between externally-funded grants) relieves some of the short-term pressure to publish, and allows the emphasis to be on quality even if it takes longer. Universities, and their funding mechanisms, should ideally do the same.101

56. One factor that is frequently cited as a determinant of institutional and researcher behaviour is the Research Excellence Framework (REF)—the periodic review of the quality of research undertaken which is used to determine the ‘quality-related’ funding that institutions receive from higher education funding councils.102 Professors Lewandowsky and Bishop argued that the influence of the REF presented an opportunity for tackling perverse incentives on researchers.103 They suggested that the ‘environment’ section of the REF should “include information on responsible public communication of science, to discourage press offices from overhyping research”. HEFCE told us that the 2014 REF exercise did include some assessment of research integrity as part of the assessment of the research environment, albeit only for medical and life sciences and social sciences.104 We were encouraged to hear that in the 2021 REF process, all assessment panels will be required to consider how research integrity and misconduct issues can be covered in the research environment section.105

57. Creating a healthy ‘research culture’ is just as important as tackling lapses in research integrity, and would help ensure that a career in research is attractive to those who value rigour, accuracy, honesty and transparency. We endorse Research England’s plans to require the REF 2021 assessors to consider how research integrity issues can be taken into account. We hope that this will underline the importance of research integrity to a healthy research environment, and counterbalance some of the pressures to compromise on integrity. For this to be successful it must be implemented in a way that encourages universities to be more transparent about research integrity and investigations, rather than acting as an additional incentive to avoid drawing attention to lapses in integrity.

58. There is a need to understand more fully the effects of the current funding system on researcher and institutional behaviour, and to consider how unwanted effects can be minimised. We recommend that UKRI commission research to understand the effects of incentives in the research system on researcher behaviour and assess where adjustments or counterbalances may be needed to support research integrity.

Training

59. Sheffield Hallam University highlighted the importance of training to its approach to research integrity:

Educational initiatives are key both for academics, researchers, doctoral students, relevant support staff and students on taught courses. Ensuring that research ethics and integrity are mandatory core elements of all the taught research methods curricula ensures that future generations of researchers are properly informed and that due attention to these issues is given in all research undertaken on taught courses.106

They told us that training on ethics and integrity is mandatory for doctoral students at the university and is included in the research supervisor training programme. They explained that:

The aim is to assure research integrity by making staff aware that there is a collective responsibility to report any apparent breaches as negative publicity related to research misconduct by any research in the university will impact on all researchers’ work. Thus it becomes research misconduct to collude with and/or fail to report any apparent breaches of policy and procedures.107

60. The risks of failing to ensure that training in research design was provided to research supervisors were made clear by Dr Alyson Fox from the Wellcome Trust:

The basic fundamentals of designing an experiment and research programme are really hard. Traditionally, but not exclusively, a young researcher coming into a lab to start a PhD, or their first postdoc, is taught by the person above them, so it is generally not even the [Principal Investigator]—the lab head; it is the postdoc or senior postdoc. In bad cases, essentially it is the blind leading the blind.108

A similar point was made by James Parry, UKRIO’s Chief Executive:

The question is, are we teaching researchers what they need to know, or are they picking things up as they go along? If it is the latter, are they picking up good habits or bad?109

61. Provision of training is referred to in the Concordat to Support Research Integrity, albeit only in general terms. Signatories commit to “supporting a research environment that is underpinned by a culture of integrity” by providing “suitable learning, training and mentoring opportunities to support the development of researchers”.110 Funders provide some further specification of what this should entail; the formal RCUK requirement in its ‘Statement of Expectations for Postgraduate Training’ is that:

Students should receive training in the principles of good research conduct in their discipline, and understand how to comply with relevant ethical, legal and professional frameworks. Students should be provided with training to identify and challenge unintentional bias as appropriate to their studies.

Students should receive training in experimental design and statistics appropriate to their disciplines and in the importance of ensuring research results are robust and reproducible.111

As with other aspects of the Concordat, compliance with this is in principle monitored by the research and funding councils. Again, Dr Tony Peatfield from RCUK told us that there is “a dipstick monitoring process”, but “obviously we cannot have comprehensive policing of what actually goes on. One of the things we have done is to try to focus our doctoral training in fewer places, and that makes for better training but also makes it easier to keep an eye on what is going on”.112

62. Rather than leave this training for the institutions themselves to design, the Association of Medical Research Charities suggested that there was a need for centralised training on matters relating to research integrity:

We would encourage Government to consider supporting a centralised training and education resource for researchers across the breadth of research disciplines. This could be led by UK Research and Innovation and the National Institute for Health Research to support excellent science more broadly. It would be significantly easier to achieve from a ‘top down’ approach, rather than fragmented efforts from multiple smaller bodies. It could, for instance, form a significant part of a researcher’s training throughout their PhD thereby equipping future scientists with the skills required to create cultural change.113

63. Sir Bernard Silverman, representing UKRIO, told us that “some formal work on research integrity would be very worthwhile, just as university lecturers nowadays get formal training in teaching”.114 Dr Steven Hill, representing HEFCE, agreed that “there is probably a core of activity where consistent training would be helpful”,115 but both HEFCE and RCUK witnesses suggested that a range of providers could offer contextualisation for different disciplines.116

Statistical training

64. We explored with witnesses whether training was particularly needed in relation to statistics, given the need to understand the problems associated with ‘p-hacking’ and ‘HARKing’ (see Chapter 1, Box 1). Professor Bishop told us that “people are using statistics without fully understanding what they are doing. That is extremely dangerous. We need much better statistical training, and more statisticians to deal with this issue”.117 She explained that “in some disciplines there are statisticians available for consultancy, particularly in medicine. In most disciplines, there are not”.118 Professor Leyser was concerned that “the statistical training people get is not in the principles of statistics; it is ‘Here is a list of statistical tests and here is a programme that does it for you’. The training has to be about the principles and not the details”.119 She argued that “what they think they are doing is testing whether or not they are right; if their p-number is small, it means they are right, and if their p-number is big, it means they are wrong. That entire approach is deeply flawed and needs to be shifted right from the beginning in education”.120
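Professor Leyser’s point about principles can be made concrete with a brief simulation (our illustration, not part of the evidence we received): even when there is no real effect at all, a significance test will still return p < 0.05 in roughly five per cent of experiments, so a small p-value is not proof of being “right”.

```python
# Minimal sketch: repeatedly compare two samples drawn from the SAME
# distribution (so the null hypothesis is true by construction) and
# count how often a t-test nonetheless reports "significance".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, 30)   # group A: no real difference exists
    b = rng.normal(0, 1, 30)   # group B: drawn from the same population
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

rate = false_positives / n_experiments
print(f"False-positive rate under a true null: {rate:.3f}")  # ~0.05 by design
```

The threshold of 0.05 guarantees only that about one in twenty true-null experiments will look “significant”; the sample sizes and seed here are arbitrary choices for the sketch.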

65. Rather than training all researchers to the same level, Dr Arnaud Vaganay, a meta-researcher, argued that research teams could include individuals with the relevant skills; he told us that “usually research teams are too homogenous. Economists work with economists; sociologists work with sociologists; and usually statisticians work with statisticians. Perhaps a solution to the problem would be to bring in people with different backgrounds”.121 Similarly, the Royal Statistical Society argued that “the UK’s system for research and science funding needs to support more skilled statistical instructors who work across disciplines. Mechanisms to address statistical integrity are most advanced for medicine and clinical trials, but models developed there should be applied more widely to other fields of research”.122

66. We are encouraged to hear that some universities make training in research integrity a mandatory part of doctoral studies and include it in their research supervisor training programme. It is important that the attitudes to research integrity transmitted to the next generation of researchers are the right ones, and that those supervising them are also suitably trained. We recommend that UKRIO provide guidance to universities on best practice in delivering training to doctoral supervisors.

67. The research councils do not have reliable information on what training is currently being delivered. The increased concentration of training, through ‘Centres for Doctoral Training’, presents an opportunity for monitoring whether suitable training on research integrity is being provided as part of a PhD. We recommend that UKRI assess whether suitable training is being provided in line with current requirements and report back to us on its findings. UKRI should also consider further the case for centralised provision of training on research integrity, or standards that could be set.

68. We recommend that UKRI consider how best to encourage research teams to engage with statisticians as part of their research, and how best to improve the statistical competencies of researchers in general.

‘Open science’

69. The BMJ told us that “increasingly, journals have policies to enhance the reproducibility of published research by making the underlying ‘raw’ data accessible to other researchers”.123 We heard that journals including Science, Nature, and the PLoS (Public Library of Science) require authors to make study protocols, datasets, and code available on publication,124 and that the BMJ “requires authors of clinical trials to make anonymised individual patient data available on reasonable request”.125 Moreover, some journals are integrated with data repositories to facilitate data sharing.126 Universities UK agreed that open access to research outputs and data “will undoubtedly create opportunities for enhancing the integrity of research” and that this approach “may help to address the challenges associated with reproducibility and publication bias more effectively than a regulator could”.127 In particular, publication of datasets could assist with identifying errors.128

70. However, we also heard that open data can present risks to integrity through secondary misuse, and ‘p-hacking’ in particular (see Chapter 1, Box 1). As Professor Lewandowsky and Professor Bishop explained, “if one subdivides a large multivariate dataset in every possible way, some associations will be found by chance, but they cannot be regarded as meaningful unless adequate correction is made for the number of statistical tests”.129 Professor Bishop provided us with an example of this from the USA:

A group of people who thought that vaccines caused autism—still, after all these years—found a big dataset from some American survey. They dived into it and found that if you looked at the children who were boys, who were black, who were of a particular age range and went to a particular nursery, lo and behold there was an association between vaccination and autism. If you looked at the whole dataset, of course, there was nothing. That was classic p-hacking. The paper was published. It was subsequently retracted, but the damage was done. It is still thought to be a cover-up by the original researchers who did not publicise that amazing fact.130

The fact that the paper based on p-hacking was published in the first instance—even if subsequently retracted—demonstrates some of the limitations of peer review as a means of protecting against this practice.
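The mechanism Professors Lewandowsky and Bishop describe can be sketched in a few lines (our illustration, with entirely synthetic data; the “exposure”, “outcome” and 40 subgroups are hypothetical): slicing null data into enough subgroups makes chance “significant” associations likely, and a standard multiple-comparisons correction such as Bonferroni removes them.

```python
# Sketch of the multiple-comparisons problem: an exposure and an outcome
# generated independently (no true association), tested in 40 arbitrary
# subgroups, with and without correcting for the number of tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000
exposed = rng.integers(0, 2, n).astype(bool)   # hypothetical exposure
outcome = rng.integers(0, 2, n).astype(bool)   # outcome, independent of exposure
subgroup = rng.integers(0, 40, n)              # 40 arbitrary subgroup labels

p_values = []
for g in range(40):
    mask = subgroup == g
    # 2x2 contingency table for this subgroup only
    table = [
        [np.sum(exposed[mask] & outcome[mask]), np.sum(exposed[mask] & ~outcome[mask])],
        [np.sum(~exposed[mask] & outcome[mask]), np.sum(~exposed[mask] & ~outcome[mask])],
    ]
    _, p = stats.fisher_exact(table)
    p_values.append(p)

uncorrected_hits = sum(p < 0.05 for p in p_values)
bonferroni_hits = sum(p < 0.05 / len(p_values) for p in p_values)
print(f"'Significant' subgroups, uncorrected: {uncorrected_hits}")
print(f"'Significant' subgroups after Bonferroni correction: {bonferroni_hits}")
```

With 40 tests at a 0.05 threshold, about two spurious “hits” are expected by chance alone; dividing the threshold by the number of tests (Bonferroni) is the simplest of the corrections the witnesses referred to.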

71. Professor Bishop argued that there was a need for those wishing to re-analyse data to submit a protocol setting out the analysis that will be done, rather than “a free-for-all where you can just poke around and pull out the bit that happens to support your views”.131 Dr Vallance also highlighted potential privacy issues relating to individual patient data, which underlines the need for a process for applying for access to data. He highlighted the Clinical Studies Data Request132 as a case study of how this process could be managed, with applicants required to specify their research question and methods before accessing the data in order to minimise the risk that secondary researchers will “data-trawl large datasets and come up with very bad post-hoc analysis”.133

72. The Royal Statistical Society effectively summarised the competing arguments for us:

In an ideal world, open access to data would allow external validation of any claim. But the downside of open access is its abrogation of the protections that prior approval and registration of study protocols affords, and could lead to ill-founded disputes in areas of contested science.134

Our predecessor Committee recommended in its report on Big Data that a ‘Council on Data Ethics’ should be established.135 More recently, in our report on algorithms in decision-making, we highlighted the newly-established “Centre for Data Ethics and Innovation”.136 There is a role for such a body in addressing the issues we have explored here in the context of open data and research integrity.

Better reporting of methods

73. The BMJ noted that one of the drivers of problems with reproducibility (see Chapter 1) was research methods being reported “too cursorily or without clarity”, and that “this may mean that the methods themselves were inadequate, or simply that they were badly written up, or both”.137

74. Catriona Fennell, representing the Publishers Association, argued that online publishing meant that there was less of an excuse not to provide full methodological details to ensure reproducibility:

In the past, with print, if the author had a word limit, there was a risk that they would reduce the method section, and you basically ended up with somebody writing a recipe: “Throw some flour in a bowl. Add some butter and throw it in the oven for a while.” Actually, what you need is what type of flour and how many grams, what temperature should the oven be, what type of butter, and so on.138

We were directed to various initiatives to improve the reporting of research methods, including reporting guidelines and checklists such as those collected for health research by the EQUATOR network.139

75. We are encouraged to see moves towards open publishing of datasets, and steps being taken to improve reporting of research methods through reporting checklists. However, we also recognise the need for protocols for accessing research data to ensure that secondary analysis is conducted appropriately. The Centre for Data Ethics and Innovation should consider further how best to balance the need for data to be openly shared with the need to ensure that data is used responsibly in secondary analysis.


90 UKRIO, ‘List of UKRIO Subscribers’ (accessed 6 June 2018)

92 Nuffield Council on Bioethics, The Culture of Scientific Research in the UK (December 2014)

94 Q6

96 Professor Stephan Lewandowsky and Professor Dorothy Bishop (RIN0046) para 19

97 Professor Stephan Lewandowsky and Professor Dorothy Bishop (RIN0046) para 19

98 The Royal Society (RIN0049) para 5

99 The Royal Society (RIN0049) para 7

100 The Royal Society (RES0014)

101 Director, MRC Laboratory of Molecular Biology (RIN0095) paras 3.1–3.4

102 For further information on QR funding and the dual support system see Higher Education Funding in England, Commons Briefing Paper CBP 7973, January 2018

103 Professor Stephan Lewandowsky and Professor Dorothy Bishop (RIN0046) para 19

104 Q471 [Dr Hill]

105 Q472 [Dr Hill]

106 Sheffield Hallam University (RIN0036) para 3

107 Sheffield Hallam University (RIN0036) para 3

110 Universities UK, The Concordat to Support Research Integrity (July 2012), p15

113 Association of Medical Research Charities (RIN0033)

122 The Royal Statistical Society (RIN0085) para 2.1

123 BMJ (RIN0081) para 3.a.v

124 BMJ (RIN0081) para 3.a.v

125 BMJ (RIN0081) para 3.a.v

126 BMJ (RIN0081) para 3.a.v

127 Universities UK (RIN0057) para 29

129 Professor Stephan Lewandowsky and Professor Dorothy Bishop (RIN0046) para 26

134 The Royal Statistical Society (RIN0085) para 6.1

135 Science and Technology Committee, Fourth Report of Session 2015–16, The big data dilemma, HC 468, para 102

136 Science and Technology Committee, Fourth Report of Session 2017–19, Algorithms in decision-making, HC 351, paras 6–7

137 BMJ (RIN0081) para 1.b.vii




Published: 11 July 2018