The future of news

Chapter 7: Mis/disinformation

Box 2: Definitions

Disinformation: “the deliberate creation and spreading of false and/or manipulated information that is intended to deceive and mislead people, either for the purposes of causing harm, or for political, personal or financial gain”.

Misinformation: “the inadvertent spread of false information”. People may unwittingly share false content (misinformation) that has been deliberately planted by malicious actors (disinformation).

Co-ordinated inauthentic behaviour: the use of a mixture of authentic, fake and duplicated social media accounts across multiple platforms to spread and amplify misleading content.

Source: Cabinet Office, ‘Fact Sheet on the CDU and RRU’ (9 June 2023): https://www.gov.uk/government/news/fact-sheet-on-the-cdu-and-rru [accessed 1 September 2024]; Parliamentary Office of Science and Technology, Disinformation: sources, spread and impact, POSTnote 719 (25 April 2024)

Box 3: Government structures

The Government’s policy approach to disinformation is largely led by the Department for Science, Innovation and Technology. Delivery includes work from the Foreign, Commonwealth and Development Office; Ministry of Defence; Home Office; Cabinet Office; and the security and intelligence agencies, among others.

Activities in recent years include new foreign interference offences under the National Security Act 2023 and some limited offences under the Online Safety Act 2023; national security communications campaigns; monitoring and co-ordination units; research programmes; tech platform engagements; media literacy programmes; and international initiatives through Five Eyes, the G7, NATO, the EU and an extensive range of other groupings.

The cross-government Defending Democracy programme, launched in 2019, had an objective to co-ordinate efforts to tackle disinformation. In late 2022 the previous Government announced it had launched a Defending Democracy Taskforce overseen by the Minister for Security. A Joint Election Security Preparedness unit is stood up ahead of elections to provide cross-government co-ordination.

Sources: Written Statement, HCWS1772, Session 2019–21; Home Office, ‘Ministerial Taskforce meets to tackle state threats to democracy’ (28 November 2022): https://www.gov.uk/government/news/ministerial-taskforce-meets-to-tackle-state-threats-to-uk-democracy [accessed 1 September 2024]; National Cyber Security Centre, ‘Annual review’ (2023): https://www.ncsc.gov.uk/collection/annual-review-2023/resilience/case-study-defending-democracy [accessed 1 September 2024]; Department for Science, Innovation and Technology, ‘G7 ministerial declaration’ (15 March 2024): https://www.gov.uk/government/publications/g7-ministerial-declaration-deployment-of-ai-and-innovation/g7-ministerial-declaration [accessed 1 September 2024]; FCDO, ‘Foreign information manipulation’ (16 February 2024): https://www.gov.uk/government/news/us-uk-canada-joint-statement-foreign-information-manipulation [accessed 1 September 2024]; NATO, ‘Setting the record straight’ (12 January 2024): https://www.nato.int/cps/en/natohq/115204.htm [accessed 1 September 2024]; Civil Service World, ‘Cabinet Office signs up fake news detection firm’ (3 April 2024): https://www.civilserviceworld.com/professions/article/cabinet-office-signs-up-fake-news-detection-firm-to-track-disinformation [accessed 1 September 2024]

164.This chapter examines mis/disinformation and how it relates to news. The topic is vast and our review was confined to four issues. First is the evolving nature of the challenge. Second is the unease about counter-disinformation measures undermining free speech. Third is the risk of over-reliance on technical solutions, and the need for better long-term strategic responses. Fourth is the role of the media in reducing hype. There is a substantial literature on the principles-based arguments and operational issues which we draw on but do not attempt to summarise here.331

Changing characteristics

165.Professor Ciaran Martin CB, former CEO of the National Cyber Security Centre, told us that the characteristics, narratives, technologies and response options associated with mis/disinformation have evolved considerably over the past decade.332 Rising international competition, political realignments and the proliferation of new technologies all provide increasing motives, opportunities and means for adversaries to manipulate the information environment, and for false content to spread organically.333 Katie Harbath, formerly Facebook’s Public Policy Director for Global Elections, noted that these changes are driven by supply and demand forces that go beyond any single platform, state or type of technology.334

166.Generative AI has caused much concern, though, as our recent report on large language models found, the implications appear substantial rather than catastrophic.335 Many of the changes extend existing challenges rather than create qualitatively new ones.

167.Future concerns might centre on ‘autonomous agents’, which are expected to mature over the next few years.336 These are AI tools capable of navigating their environment unaided to complete complex tasks—perhaps running marketing campaigns and making payments.337 In time these may be applied to run self-sustaining disinformation campaigns. Automated AI-run news sites (which earn money through advertising) provide an early indicator of future trends.338

Implications

168.The growing scale of misleading content does not mean a linear increase in reach, however. Meta told us that they were improving techniques to disrupt co-ordinated inauthentic activity, which reduces exposure; some other tech firms are doing likewise.339 Some studies suggest “trustworthy” information sites are far more visited than “untrustworthy” ones (though the methodology behind such findings has been disputed).340 Search and news-related services may nevertheless find it increasingly difficult to favour reliable sources.341

169.Nor does the growth in misleading content mean linear increases in impact. Various studies suggest that the effects of mis/disinformation range from insignificant to extensive, with much uncertainty.342 Dire predictions for the UK’s 2024 general election did not materialise, and there is likely a limit to the overall demand for unreliable content.343 Professor Martin believed the 2023 audio deepfake in Slovakia’s election, where the pro-Russian party subsequently won, was the “closest European experience we have seen to a fake intervention actually shifting the dial”.344 Moldova’s recent election is another test case, though the parallels with the UK remain limited.345

170.We heard that news organisations therefore had a responsibility to ensure proportionate reporting about disinformation, and avoid unnecessarily undermining confidence in the integrity of the information environment. As Professor Martin put it, “Don’t do the Russians’ job for them by bigging up the threat”.346

Counter-disinformation response options

171.In recent years governments and industry figures across the world have explored a range of potential responses to mis/disinformation. A recent analysis by the Carnegie Endowment indicates varying degrees of success and political sensitivity.347 Some of these responses, and the concerns they raise, are discussed below.

Mission creep?

172.Our evidence suggested some unease about mission creep. As the Carnegie Endowment noted, the term “disinformation” is “often invoked quite loosely to denigrate any viewpoint seen as wrong, baseless, disingenuous, or harmful”. This, in turn, risks deepening mistrust and undermining the legitimacy of tackling mis/disinformation across the board.

173.Tom Slater, Editor of the online magazine Spiked, believed for example that a “new anti-disinformation industry” had emerged, which he described as “a kind of anti-dissent industry”. He called for more focus on “not funding organisations that are censoring journalists”.348 Freddie Sayers, Editor in Chief and CEO of UnHerd, similarly believed that a “disinformation movement” had “exacerbated losses in public trust and fast-forwarded the collapse in trust in the media and in government”.349

174.We also heard mixed views on the use of fact-checkers. David Dinsmore, Chief Operating Officer of News UK, said it was the role of journalists and editors to verify information, adding “I certainly do not want a third party coming in”.350 Peter Wright, Editor Emeritus of DMG Media, however, said he found fact-checkers to be “helpful sometimes”.351

175.James Harding, Co-founder of Tortoise Media, worried that some arguments were “made under the banner of freedom of speech that actually promotes freedom from fact”. He raised concerns about lack of action from tech firms and believed they should do more to “deal with the deliberate dissemination of information that is untrue, divisive and sometimes dangerous”. He said that this differed from the idea of dissent outlined by Mr Slater: “Dissent is something else”.352

Money

176.Second, and relatedly, is the growing tension between publishers’ free speech and online advertisers’ desire to ensure their adverts do not appear alongside problematic content. The UK Stop Advertising Funded Crime group has outlined how the complexity of the programmatic advertising system353 makes it almost impossible for brands to know where adverts will end up. Many ‘ghost’ AI-powered news sites earn advertising revenues, sometimes backed by organised crime.354 To protect their reputations, brands use third-party agencies to screen websites.355

177.Mr Sayers said that his news website UnHerd had faced a fall in advertising revenue after receiving a poor rating from the Global Disinformation Index (GDI), which describes itself as a non-profit service “enabling advertisers to reduce the unintended monetisation of deceptive and highly adversarial online content”.356 Mr Sayers argued this was an example of “detached and unaccountable actors” taking “very politicised views on things”.357

178.The GDI argued in response that “publishers have no automatic right to expect advertisers’ money” and that advertisers “aren’t compelled to buy ads alongside content they feel might harm their brand”. The GDI contended that the provision of services to inform advertising transactions was “a key tenet of a free market”.358 Concerns about advertising blacklists and the role of brand safety organisations have become more widespread.359 Stephanie Peacock MP confirmed that public funding for the GDI from the Foreign, Commonwealth and Development Office to “promote initiatives that tackled disinformation” ended in 2023. She added that it is “not for the Government” to dictate where brands should advertise.360

179.The rise of brand safety organisations has raised complex questions about the extent and implications of their work. The Government’s online advertising taskforce should review the work and impact of brand safety organisations on news publisher revenue.

Technical solutions

180.A third theme concerned the limits of technical solutions and the risks of over-reliance. Calls to watermark generative AI content are a good example.361 Meta, Google and OpenAI are among those working on watermarks.362 Many stakeholders have welcomed these moves. Yet although they are worth pursuing, they are no panacea. Watermarks can be limited or bypassed; an effective system requires widespread collaboration and mutual recognition across developers and platforms; malicious actors can use less scrupulous model providers (or simply fake watermarks directly); and users need to know what they are looking at.363 None of this seems likely to be solved in the short term.

181.Other common suggestions like content and source labelling are progressing, but remain complex. Much depends on the specific way they are implemented.364 Practices can also change quickly, as X (formerly Twitter) shows. The European Commission has said that the platform’s new blue tick verifier could “deceive” users into believing that the accounts are actually verified.365

182.Making tech platforms change their algorithms is another common proposal. Options exist, as voluntary initiatives366 and the EU’s Digital Services Act show.367 Stakeholder appetite seems mixed. The Media Reform Coalition advocated “imposing responsibilities on platforms to flag, label and deprioritise misleading or factually inaccurate content”.368 Professor William Dutton, Emeritus Professor, University of Southern California, argued in contrast that existing measures under the Online Safety Act would already “lead to over-regulation, over-surveillance and over-censorship”.369 One counter-disinformation company, Logically, said that new legislation was unnecessary and that Ofcom should instead focus on getting its guidance right—for example stating explicitly that tech firms should track and address common tactics used by foreign adversaries. Clarifying what the new “false communications” offence means in practice might also help.370

Strategic responses

183.We were more persuaded by long-term strategic responses. These would likely require time, money and sustained commitment—but offer greater self-reliance and raise fewer free speech sensitivities.

Deterrence posture

184.A more muscular posture could help deter egregious foreign interference efforts. Professor Martin noted that the Russians were “unembarrassable”, indicating the limited impact of diplomatic responses. Yet harder options, including offensive cyber operations, are available. US Cyber Command is thought to have degraded the Internet Research Agency’s technical infrastructure, for example.371 The UK’s National Cyber Force, a partnership between the Ministry of Defence and intelligence agencies, cites the possibility of using cyber power to “make it harder for states to use the internet to spread disinformation”.372 Professor Martin argued that:

“the more we can use interventions to take down the disinformation infrastructure of these groups, and the costlier we make it for them to operate, the better”.373

Media literacy

185.Media literacy initiatives remain a key way to improve societal resilience, and tend to be less intrusive than regulation.374 The Government’s Online Media Literacy Strategy funded initiatives throughout 2021–2025 and Ofcom has new duties under the Online Safety Act.375

186.A 2023 report for the Government by the London School of Economics summarised the various difficulties, including “short-term, small-scale funding” which creates fragmentation, duplication and administrative burdens; “limited coordination [which] leads to duplication and a lack of oversight”; and a “lack of clear benchmarks or specified outcomes” which affects scaling, evaluation and best practice sharing.376

187.Baroness Jones of Whitchurch and Stephanie Peacock MP noted the importance of cross-government media literacy efforts. They also suggested that the Government’s media literacy plans would be replaced by Ofcom’s new strategy.377 We did not find this wholly reassuring. Ofcom’s three-year media literacy strategy argues that:

“while Ofcom has an important part to play, media literacy must be everyone’s business—online platforms, parents, educators, third-sector organisations, providers of health and social care, professionals working with children, broadcasters (including the public service broadcasters) and others”.378

188.We remain uncertain about how well a regulator (as opposed to Government) is placed to drive progress in areas critical to media literacy—such as setting up real-world interventions, administering grant funding and influencing the plethora of Government departments central to this work. Baroness Jones of Whitchurch said her department was “looking now at the independent curriculum and assessment review into education in schools” and talking with the Department for Education.379

The role of news media

189.A robust media sector provides good options for responding to foreign interference efforts. Professor Martin suggested that agreements for structured dialogue between the Government and media could help media organisations balance their duties to report without becoming “unwitting agents” of foreign states by amplifying false narratives (and perhaps, more controversially, hack-and-leak materials).380 He cited successful examples from Australia as a model.381

190.Finally, we heard that a healthy media sector, staffed by professionals producing stories that engage the public, remains one of the most enduring solutions to fears about mis/disinformation eroding a shared understanding of fact.382 The controversies surrounding the recent US election are illustrative. Commentators have highlighted the difficult choices in deciding whether highly contested assertions (such as the integrity of the US 2020 election) should be treated as demonstrably false claims or protected as legitimate political talking points.383 Ensuring such challenges are dealt with appropriately may become increasingly difficult as the information environment fragments.384 But as stakeholders throughout our inquiry argued, well-funded and professional news organisations remain the best-placed actors to navigate such sensitivities.385

191.We welcome efforts to improve trust in the information environment, but we caution against a counter-mis/disinformation strategy that relies too heavily on measures in the Online Safety Act, or technical fixes like watermarks, labelling and algorithmic tweaking. Some of these measures are doubtless useful, but they are unlikely to tackle the root causes of supply and demand. They raise questions about potential overreach and free speech sensitivities. And they risk creating strategic dependencies on overseas tech firms to address highly sensitive societal challenges.

192.The Government should focus more on strengthening long-term resilience. We suggest four priorities.

(1)First is recognising more explicitly the value of a financially sustainable news sector: this is the best way to maintain a shared understanding of facts.

(2)Second, the Government could engage further with media organisations about protocols for responding to major foreign interference efforts, particularly around elections.

(3)Third, the Government should adopt a more muscular deterrence posture to impose greater costs on adversaries, for example using responsible cyber power to degrade adversary infrastructure. This could feature in the Strategic Defence Review currently underway.

(4)Fourth is media literacy. We are not yet convinced that the Government has a good plan. More resources and effort are needed to scale ‘what works’ in media literacy, and to avoid a tangle of short-term fragmented projects. Ofcom is already taking on major burdens: we hope it is not left as the main lead for such a complex policy issue. The Government needs its own strategy. DSIT should set out its future plans for media literacy and a timeline for evaluating its current activities in response to this report. The Department for Education should use the opportunity of the Curriculum and Assessment Review to ensure that media literacy is given more time and prominence in schools.


331 See for example Parliamentary Office of Science and Technology, Disinformation: sources, spread and impact, POSTnote 719, 25 April 2024; US Department of State, Counter-Disinformation Literature Review (July 2023): https://www.state.gov/wp-content/uploads/2024/05/Learning-Brief-Counter-Disinformation-Literature-Review.pdf [accessed 15 November 2024]; Carnegie Endowment for International Peace, Countering Disinformation Effectively (2024): https://carnegie-production-assets.s3.amazonaws.com/static/files/Carnegie_Countering_Disinformation_Effectively.pdf [accessed 1 September 2024]

333 See for example selection of reports from the Hybrid Centre of Excellence, ‘Research and Analysis’: https://www.hybridcoe.fi/research-and-analysis/ [accessed 1 September 2024]; RUSI, ‘The Need for a Strategic Approach to Disinformation and AI-Driven Threats’ (25 July 2024): https://www.rusi.org/explore-our-research/publications/commentary/need-strategic-approach-disinformation-and-ai-driven-threats [accessed 1 September 2024]

334 Q 110. See also reports from NATO Strategic Communications Centre of Excellence, ‘Publications’: https://stratcomcoe.org/publications?tid[]=30 [accessed 1 September 2024]

335 See also written evidence from Getty Images (FON0043), Sense about Science (FON0042), News Media Association (FON0056), AGENCY (FON0017)

336 Boston Consulting Group, ‘GPT was just the beginning’ (28 November 2023): https://www.bcg.com/publications/2023/gpt-was-only-the-beginning-autonomous-agents-are-coming [accessed 1 September 2024]; ‘Microsoft to let clients build AI agents for routine tasks from November’, Reuters (21 October 2024): https://www.reuters.com/technology/artificial-intelligence/microsoft-allow-autonomous-ai-agent-development-starting-next-month-2024-10-21/ [accessed 22 October 2024]

337 AutoGPT, ‘Autonomous agents are the new future’ (20 March 2024): https://autogpt.net/autonomous-agents-are-the-new-future-complete-guide/ [accessed 1 September 2024]

338 Written evidence from Fenimore Harper (FON0016); NewsGuard, ‘Tracking AI-enabled misinformation’: https://www.newsguardtech.com/special-reports/ai-tracking-center/ [accessed 1 September 2024]

339 Appendix on Committee visit to San Francisco

340 Parliamentary Office of Science and Technology, Disinformation: sources, spread and impact, POSTnote 719, 25 April 2024

341 Appendix on Committee visit to San Francisco

342 For a brief summary see Parliamentary Office of Science and Technology, Disinformation: sources, spread and impact, POSTnote 719, 25 April 2024

343 Felix M Simon et al, ‘Misinformation reloaded?’ (18 October 2023): https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/ [accessed 1 September 2024]

345 ‘Moldova’s Sandu decries “unprecedented” meddling as EU referendum goes to wire’, Reuters (21 October 2024): https://www.reuters.com/world/europe/moldova-votes-election-eu-referendum-shadow-alleged-russian-meddling-2024-10-20/ [accessed 22 October 2024]

347 For an analysis on the evidence underpinning these types of responses see Carnegie Endowment for International Peace, Countering Disinformation Effectively (2024): https://carnegie-production-assets.s3.amazonaws.com/static/files/Carnegie_Countering_Disinformation_Effectively.pdf [accessed 1 September 2024]. See also details in Box 3.

348 QQ 125, 131

350 Q 42 (David Dinsmore)

351 Q 42 (Peter Wright)

353 Programmatic advertising is an automated process for buying and selling digital advertising, which involves a real-time bidding process through which ads can be bought in seconds with little human interaction.

354 UK Stop Advertising Funded Crime, ‘Stop Advertising Funded Crime’: https://uksafc.org/ [accessed 22 October 2024]

355 WARC, ‘The Future of Programmatic’ (2024): https://www.warc.com/content/paywall/article/warc-exclusive/the-future-of-programmatic-2024/en-gb/156796? [accessed 22 October 2024]

356 Written evidence from the Global Disinformation Index (FON0071)

358 Written evidence from the Global Disinformation Index (FON0071)

359 ‘News publishers and broadcasters warn over advertising blacklists’, Financial Times (26 September 2024): https://www.ft.com/content/51216ef9-b4b7-43ca-86b6-8da1ff6f65cd [accessed 15 November 2024]

360 Q 194. The Committee wrote to the (then) Minister for Europe, Nusrat Ghani MP, regarding action on mis/disinformation and the role of brand safety organisations. See Letter from The Rt Hon the Baroness Stowell of Beeston MBE, Chair of the Select Committee on Communications and Digital to Nusrat Ghani MP, Minister for Europe (9 May 2024): https://committees.parliament.uk/publications/44707/documents/222021/default/

361 Written evidence from Logically (FON0068), Professor Rafael A. Calvo (FON0047). See also Adobe, ‘Generative AI Content’ (2 August 2024): https://helpx.adobe.com/uk/stock/contributor/help/generative-ai-content.html [accessed 1 September 2024]; and our previous report on generative AI: Large language models and generative AI, para 223

362 See for example The Verge, ‘Meta says you better disclose your AI fakes or it might just pull them’ (6 February 2024): https://www.theverge.com/2024/2/6/24062388/meta-ai-photo-watermark-facebook-instagram-threads [accessed 1 September 2024]; The Verge, ‘OpenAI is adding new watermarks to DALL-E 3’ (6 February 2024): https://www.theverge.com/2024/2/6/24063954/ai-watermarks-dalle3-openai-content-credentials [accessed 1 September 2024]; The Verge, ‘Google is embedding inaudible watermarks right into its AI generated music’ (16 November 2023): https://www.theverge.com/2023/11/16/23963607/google-deepmind-synthid-audio-watermarks [accessed 1 September 2024]; AutoGPT, ‘OpenAI divided over launch of AI watermarking tool’ (6 August 2024): https://autogpt.net/openai-divided-over-launch-of-ai-watermarking-tool/ [accessed 1 September 2024]

363 MIT Technology Review, ‘Why Big Tech’s watermarking plans are some welcome good news’ (13 February 2024): https://www.technologyreview.com/2024/02/13/1088103/why-big-techs-watermarking-plans-are-some-welcome-good-news/ [accessed 1 September 2024]; AutoGPT, ‘OpenAI divided over launch of AI watermarking tool’ (6 August 2024): https://autogpt.net/openai-divided-over-launch-of-ai-watermarking-tool/ [accessed 1 September 2024]; Brookings, ‘Detecting AI fingerprints’ (January 2024): https://www.brookings.edu/articles/detecting-ai-fingerprints-a-guide-to-watermarking-and-beyond/ [accessed 1 September 2024]; MIT Technology Review, ‘It’s easy to tamper with watermarks from AI generated text’ (29 March 2024): https://www.technologyreview.com/2024/03/29/1090310/its-easy-to-tamper-with-watermarks-from-ai-generated-text/ [accessed 1 September 2024]

364 Carnegie Endowment for International Peace, Countering Disinformation Effectively (2024): https://carnegie-production-assets.s3.amazonaws.com/static/files/Carnegie_Countering_Disinformation_Effectively.pdf [accessed 1 September 2024]

365 BBC, ‘EU says X’s blue tick accounts deceive users’ (12 July 2024): https://www.bbc.co.uk/news/articles/cw0y1ezpv5xo [accessed 18 October 2024]

366 National Technology News, ‘Meta introduces AI system to tackle harmful content’ (9 December 2021): https://nationaltechnology.co.uk/Meta_Introduces_AI_System_To_Tackle_Harmful_Content.php [accessed 1 September 2024]

367 European Commission, ‘The Digital Services Act package’: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package [accessed 1 September 2024]. This includes provisions around profiting from disinformation, tackling co-ordinated inauthentic activity and boosting transparency requirements.

368 Written evidence from the Media Reform Coalition (FON0029). See also written evidence from AGENCY (FON0017).

370 Written evidence from Logically (FON0068)

372 National Cyber Force, ‘Responsible Cyber Power in Practice’ (4 April 2023): https://www.gov.uk/government/publications/responsible-cyber-power-in-practice/responsible-cyber-power-in-practice-html [accessed 1 September 2024]

374 Q 111 (Professor Dutton), written evidence from Dr Madrid-Morales (FON0049)

375 Department for Science, Innovation and Technology, ‘Year 3 online media literacy action plan’ (23 October 2023): https://www.gov.uk/government/publications/year-3-media-literacy-action-plan-202324/year-3-online-media-literacy-action-plan-202324 [accessed 1 September 2024]

376 Lee Edwards et al, Cross-sectoral challenges to media literacy (August 2023), p 23: https://assets.publishing.service.gov.uk/media/651167fabf7c1a0011bb4660/cross-sectoral_challenges_to_media_literacy.pdf [accessed 1 September 2024]

379 Q 181 (Baroness Jones of Whitchurch)

380 This refers to material illicitly obtained and then published. It is typically associated with foreign interference campaigns.

381 Q 107 (Professor Ciaran Martin)

382 Written evidence from the News Media Association (FON0056)

383 ‘News Organizations Cut Away From Trump’s Misleading Speech’, The New York Times (31 May 2024): https://www.nytimes.com/2024/05/31/business/media/cnn-nbc-trump-speech.html [accessed 29 October 2024]; The Verge, ‘J.D. Vance is anti-Big Tech, pro-crypto’ (16 July 2024): https://www.theverge.com/24199314/jd-vance-donald-trump-vp-antitrust-big-tech-ftc-lina-khan-elizabeth-warren-google [accessed 1 September 2024]; ‘The New York Times is facing backlash over its coverage of Donald Trump and the 2024 election’, CNN (5 March 2024): https://edition.cnn.com/2024/03/05/media/new-york-times-trump-coverage-backlash/index.html [accessed 6 November 2024]

384 Written evidence from the News Media Association (FON0056)

385 See for example written evidence from News UK (FON0055)




© Parliamentary copyright 2024