Draft Online Safety Bill

9 Transparency and oversight

397.We concluded in Chapter 2 that many service providers’ current transparency measures are not sufficient for users or researchers, who feel that service providers’ systems and decision-making processes are akin to a black box.675 Service provider transparency is also crucial for Ofcom to effectively fulfil their function as a regulator, as discussed in Chapter 8. At present there is no requirement on providers to produce transparency reports, leading to greatly varying levels of transparency, access and understanding.

Transparency for users

398.Service providers can be inconsistent in enforcing their terms and conditions, handling complaints, and taking enforcement decisions. We heard that they are often not transparent about these decisions:

“You could have a particular phrase or word used in one context and it is reported and deleted, but in another context, or in a slightly different post, tweet or whatever you want to say, it is allowed under the terms of service.” 676

399.In some instances, inconsistency may be intentional. Documents released by the Wall Street Journal showed that Facebook had a category of “whitelisted” users under a program called XCheck (“cross check”). A Facebook internal document explained that this meant: “for a select few members of our community [those whitelisted] we are not enforcing our policies and standards”.677 “Unlike the rest of our community, these people can violate our standards without any consequences.”678 Twitter has a similar policy which exempts some users from being subject to their typical moderation processes.679 Twitter said that this policy is intended to preserve content that is in the public interest,680 whilst Facebook have explained that their intention was “to create an additional step so we can accurately enforce policies on content that could require more understanding”.681

400.Inconsistencies also arise in moderation activity, some of which have been attributed to algorithmic or human biases resulting in over-moderation of legitimate content or obstacles for those seeking redress. Examples include the removal of Palestinian content during conflicts in Israel and Palestine in May 2021,682 the erroneous removal of Arabic-language content from social media sites,683 and the suppression of LGBTQ+ content on YouTube and other websites.684

401.The lack of transparency from service providers also means that people do not have insight into the prevalence and nature of activity that creates a risk of harm on the services that they use. Ms Haugen told us:

“Facebook’s own reports say that it is not just that Instagram is dangerous for teenagers; it is actually more dangerous than other forms of social media.” 685

Until Ms Haugen shared this information with the Securities and Exchange Commission and it was reported in the Wall Street Journal and other media outlets, it was not available to the public. We heard that people should be able to make an informed choice about the services that they are using by having an insight into the prevalence and nature of activity that creates a risk of harm on those services, services’ terms and conditions, and how those terms and conditions are enforced.686

402.We heard from DMG Media that users often do not understand bias in search algorithms, “and imagine that when they search for news on politics, health, business, or any number of other topics, Google’s emphasis on relevance and expertise means the content they are shown has been picked because it gives the most reliable and useful information.”687 They described how these algorithms work as “the company’s most closely-guarded secret”, argued that they are likely influenced by commercial interests or bias, and warned that they have real implications for media plurality.688 Sky told us that it was important that the provisions protecting journalism were clear, “to ensure a plurality of views is upheld under the regime”,689 a point also made by the NUJ.690 Transparency will be vital in ensuring that there is no detriment to media plurality from the application of the safety duties.

Provisions on transparency in the draft Bill

403.The draft Bill primarily aims to improve transparency by requiring service providers to produce annual transparency reports for each of their services, with the information included in those transparency reports to be determined by Ofcom in a notice given to the provider.691 The draft Bill also contains other mechanisms that may enhance transparency, giving Ofcom duties to prepare a report about researchers’ access to information692 and to make arrangements for research about people’s experiences of regulated services.693

Box 4: Summary of information which Ofcom can require from service providers in annual transparency reports

  • the incidences of illegal and harmful content, and how many users have encountered such content
  • how illegal and harmful content is disseminated on the service
  • how terms of service or policies and procedures are applied
  • the systems and processes for users to report illegal content, harmful content, or other content which breaches the terms of service or policies and procedures
  • the systems and processes used to deal with illegal and harmful content, take it down, or prevent it being encountered in or via search results
  • functionalities to help users manage risks relating to harmful content
  • steps which a provider is taking to fulfil their various duties
  • how the provider cooperates with government, regulatory, or other public sector bodies in the UK
  • the systems and processes that the provider uses to assess the risk of harm to individuals from the presence of illegal content or harmful content
  • the systems and processes a provider has in place to direct users to information about how they can protect themselves from harm in relation to illegal and harmful content
  • the steps a provider is taking to provide a higher standard of protection for children than for adults
  • the steps a provider is taking to improve media literacy, and evaluate the effectiveness of these steps
  • any other steps the provider is taking relating to online safety matters

Source: Draft Online Safety Bill, Part 3, Chapter 1, Clause 49(4)

404.Ofcom welcomed the transparency measures in the draft Bill, describing them as a “step change” where service providers could be “truly accountable for the first time”, resulting in “a significant improvement in transparency and accountability for internet users, the public and Parliament”.694

Current problems underlying transparency reporting

405.Many service providers currently produce transparency reports, but we heard that these are often not informative due to some service providers’ choice of metrics.695 For example, we were told by Mr Ahmed about “missing statistics”,696 where some service providers give self-selected statistics that do not transparently answer the question that is being asked.697

406.We asked Facebook how effective their algorithms were in detecting hate speech. Ms Davis told us that she was aware that some Facebook engineers had said that their AI removes only 3–5 per cent of hate speech, but that she was: “also aware that we have put out a transparency report that indicates that the prevalence of hate speech on our platform has been reduced to 0.05 per cent.” She told us that they had “submitted the methodology that we used for that [metric] to an independent audit to verify [it].” 698 Ms Davis was unable to give us the information we requested.
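
The two figures in this exchange are not directly comparable: the 3–5 per cent figure is the share of hate speech that Facebook’s automated systems remove, while the 0.05 per cent figure is a prevalence measure, which we understand to express hate speech as a proportion of all content viewed on the platform. The sketch below, using entirely hypothetical numbers of our own choosing, illustrates how a low automated-removal rate and a very low prevalence figure can both be reported for the same service, because the two metrics use different denominators.

    # Illustrative sketch only: all figures below are hypothetical and are not drawn
    # from Facebook's transparency reports. The point is that the two metrics divide
    # by different totals, so both can be reported at the same time.

    total_content_views = 10_000_000_000   # all views of content on the service in a period
    hate_speech_views = 5_000_000          # views of content classified as hate speech
    removed_by_automation = 200_000        # hate speech views prevented by automated removal

    # Metric 1: share of hate speech caught by automated systems
    # (denominator: hate speech only)
    automated_removal_rate = removed_by_automation / hate_speech_views

    # Metric 2: prevalence of hate speech
    # (denominator: everything viewed on the service)
    prevalence = hate_speech_views / total_content_views

    print(f"Automated removal rate: {automated_removal_rate:.0%}")   # 4%
    print(f"Prevalence of hate speech: {prevalence:.2%}")            # 0.05%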

407.Some providers use absolute values in their transparency reports, such as the number of takedowns per quarter. These metrics have limited value and do not give the contextual information needed to understand them fully:

“ … we know that the volume of reports sent to us is not a useful metric, as many reports are about content which is not harmful …

Another figure that is regularly discussed is the amount of content removed by a platform, but this number too has limited value in isolation: if it goes down, is that because the platform became worse at removing harmful content, or because less harmful content was posted in the first place?” 699

408.Proportional metrics, such as the percentage of all content which violates a service’s policy, also do not necessarily give an accurate picture of the scale of harm on a service. Google, for example, told us that “removed videos represent a fraction of a percent of total views on YouTube” and that they “work continuously to shrink this even further through improved detection and enforcement”.700 Whilst removed videos may represent a very small proportion of the videos on YouTube, this could still mean a large number of hours spent engaging with policy-violating content that creates a risk of harm, with YouTube reporting in 2019 that they had reached 1 billion hours of viewing time a day globally.701
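
To give a sense of the scale involved, the sketch below applies a purely hypothetical “fraction of a percent” figure to the roughly 1 billion daily viewing hours reported by YouTube; the share of viewing relating to policy-violating content is our own assumption, not a figure provided by Google. Even a very small proportion of such a large base can amount to millions of hours of viewing every day.

    # Illustrative sketch only: the 0.2 per cent share is a hypothetical assumption,
    # not a figure reported by Google; the 1 billion hours figure is YouTube's
    # reported global daily viewing time from 2019.

    daily_viewing_hours = 1_000_000_000    # reported global viewing time per day
    violating_share = 0.002                # assumed: 0.2% of viewing involves later-removed content

    violating_hours_per_day = daily_viewing_hours * violating_share
    print(f"{violating_hours_per_day:,.0f} hours per day")   # 2,000,000 hours per day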

409.Where service providers use different metrics to report the nature and prevalence of activity that creates a risk of harm on their services, it is difficult to compare them. This prevents people from making an informed choice about which services they use. If people can compare the risks of harm on different services, this could encourage the development of a competitive marketplace where successfully mitigating the risk of harm attracts users and becomes a competitive advantage.702
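
As an illustration of the kind of comparison that standardised reporting could make possible, the sketch below computes a single hypothetical metric, violating views per 10,000 views, in the same way for two invented services. The services, the figures and the choice of metric are our own assumptions for illustration only, not anything prescribed by the draft Bill.

    # Illustrative sketch only: two invented services reporting the same standardised
    # metric, so that users (or Ofcom) could compare them on a like-for-like basis.

    from dataclasses import dataclass

    @dataclass
    class TransparencyFigures:
        service: str
        total_views: int        # all content views in the reporting period
        violating_views: int    # views of content later found to breach the service's terms

        def violating_views_per_10k(self) -> float:
            return 10_000 * self.violating_views / self.total_views

    reports = [
        TransparencyFigures("Service A", total_views=2_000_000_000, violating_views=900_000),
        TransparencyFigures("Service B", total_views=500_000_000, violating_views=400_000),
    ]

    # Rank services by the comparable metric (lower is better)
    for figures in sorted(reports, key=TransparencyFigures.violating_views_per_10k):
        print(f"{figures.service}: {figures.violating_views_per_10k():.1f} violating views per 10,000 views")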

410.We recommend that Ofcom specify that transparency reports produced by service providers should be published in full in a publicly accessible place. Transparency reports should be written clearly and accessibly so that users and prospective users of the service can understand them, including children (where they are allowed to use the service) and disabled people.

411.We recommend that the Bill require transparency reporting on a regular, proportionate basis, with the aim of working towards standardised reporting as the regulatory regime matures. The Bill should require minimum standards of accuracy and transparency about how the report was arrived at and the methodology used in research. For providers of the highest risk services, the outcome of the annual audits recommended in paragraph 340 should be required to be included in the transparency report.

412.We agree with the list of information that Ofcom can require as part of its transparency reporting powers and recommend that it should have the clear power to request any other information. We recommend that transparency reporting should aim to create a competitive marketplace in respect of safety, where people can reasonably compare, using robust and comparable information, the performance of services as they operate for UK users. We suggest that Ofcom also be able to require information to be published in transparency reports, including (but not limited to):

a)Safety by design features;

b)Most viewed/engaged with content by month;

c)Most recommended content by month by age group and other demographic information (where that information is collected);

d)Their terms and conditions;

e)Proportion of users who are children;

f)Proportion of anonymous users;

g)Proportion of content breaching terms and conditions;

h)Proportion of content breaching terms and conditions removed;

i)Proportion of appeals against removal upheld;

j)Proportion of appeals against removal, by both recognised news publishers and other users on the grounds of public interest, upheld; and

k)Time taken to deal with reports.

413.In addition to transparency reporting, Ofcom should be empowered to conduct its own independent research with the aim of informing the UK public about the comparative performance of services in respect of online safety.

Access for independent researchers

414.The draft Bill requires Ofcom to prepare a report about researchers’ access to information and to publish this within two years of the Bill being enacted into legislation.703 This report must describe “how, and to what extent, persons carrying out independent research into online safety matters are currently able to obtain information from providers of regulated services to inform their research.”704

415.Dr Moore told us that “without … research and external scrutiny, we will be unable to properly assess the extent of problematic content and behaviour on these platforms, or assess the harms committed.”705 This position was supported by a number of other witnesses, who told us that lack of transparency from service providers and limited access to information for independent researchers hinders much-needed scientific progress towards understanding the prevalence, impact, causes, and dynamics of online activity that creates a risk of harm.706 This, in turn, hinders the ability to make policy decisions and dampens innovation, leaving us “working in the dark”.707

416.Witnesses told us that greater transparency could allow for more scrutiny of service providers, and consequently, increased accountability.708 Demos urged that the Bill give greater priority for independent researcher access to service providers’ data about the service:

“We would recommend that greater priority be given than is in the current Bill to facilitating independent researcher access to platform data, with appropriate privacy safeguards, so that platform action can be better scrutinised and [to] improve accountability for any failures to take meaningful measures to reduce risks of harm.”709

417.Many other witnesses, as well as participants in our 3 November roundtable, called for the highest risk services to share data more openly with vetted researchers.710 Reset called for the transparency powers in the Bill to “include a requirement for platforms to share relevant data with accredited researchers studying online harms/safety” as this would “give academia a much clearer picture of how harmful content is generated and promoted online and what impact it has on fundamental rights and the greater public good.”711 Reset pointed out that this would “align the Bill with the Digital Services Act” and redress the current transparency arrangements which operate “at the whim of platforms”.712

418.Ms Edelson told us that service providers are each developing their own technological solutions to activity that creates a risk of harm, without sharing information, data, or knowledge. This lack of sharing hinders scientific progress on understanding and developing algorithms:

“We just do not have enough data. Ideally, we would develop a taxonomy of the variety of harmful content that spreads online and there would be research saying, ‘We have developed a classifier and it is X per cent effective at identifying self-harm content’. Someone else would come out with a better one. That is the normal process of scientific research, but we just do not have the data to do that … I do not want to say that it is useless to take the platforms’ research without seeing the data that backed it, but it does not advance science about what is going on in these platforms … If we just make public data on platforms available to researchers … We can go through the scientific process of understanding various areas of harmful content and how we can avoid promoting them.”713

Ms Edelson is a member of the Ad Observatory project at New York University, which collected volunteers’ Facebook data to study the targeting of users with political advertisements and misinformation on Facebook. Although the project collected data only from consenting volunteers, Facebook shut down the personal accounts and research tools of members of the Ad Observatory in August 2021 for breaching its privacy rules.714

419.We heard from Dr Amy Orben, College Research Fellow at Emmanuel College, University of Cambridge, that lack of access to data is “making it impossible for good quality and independent scientific studies to be completed on topics such as online harms, mental health, or misinformation.”715 We heard that researchers have been misled by data that has been shared with them.716 Where researchers do have access to information, it lacks the “detail and richness” that is important for researchers.717

420.Where data is available, service providers sometimes restrict access to it. In one case, an entire research programme was disrupted because independent researchers’ access to data was revoked by the service provider.718

421.We heard there is evidence that social media usage can cause psychological harm to children, but that platforms prevent research in this area from being conducted or circulated. Professor Jonathan Haidt, Professor of Ethical Leadership at New York University Stern School of Business, told us that in 2013 and 2014: “Something happened that started sending girls in particular to hospitals, and the suicide rate greatly increased.”719 Professor Haidt argued that the rise of social media at this time was the cause. Similarly, we heard about The Wall Street Journal’s reporting, which found that Facebook’s own internal research showed young girls were psychologically harmed as a result of using Instagram.720 Common Sense told us that this example highlighted how important it is that independent researchers have access to data from platforms, as internal research “tends to be biased and, if [platforms] do not like the results, they simply will not share it with the public.”721

422.We heard in other evidence that researchers require further data to explore more fully the link between social media usage and psychological harm in children. Dr Orben and Dr Andrew K. Przybylski, Director of Research at the Oxford Internet Institute, stressed to us that: “Industry are not sharing that data and the resources to support independent work have not been allocated. Without these steps being taken, online harms and thresholds of harm cannot have a scientific basis.”722

423.Facebook told us that it wanted to share data with independent researchers but had unresolved concerns about protecting users’ privacy:

“One of the things that is a particular challenge in the area of research is how we can provide academics who are doing independent research with access to data really to study these things more deeply. We are currently working with some of the leading academic institutions to figure out what the right rules are to allow access to data in a privacy protective way. One thing that we are quite supportive of, in terms of some of the legislation that we are here to talk about today, is working with regulators to set some parameters around that research that would enable that research and would enable people to have trust in the research that is done with access to our data in a privacy-protected way.”723

In written evidence, Facebook said it would welcome legislation that would address this issue, that it has “a long history of seeking to make privacy-protected data available to support research”, and that it was supportive of a solution which accelerates independent researchers’ access to information: “Ofcom and the ICO should begin work on their report into researchers’ access to information immediately, and not wait until two years after the Bill has passed”.724 Twitter and Google were also supportive of independent researchers having access to data.725

424.Independent researchers currently have limited access to the information needed to conduct research. This hinders progress in understanding online activity that creates a risk of harm, the way that services’ systems work, and how services’ systems could be improved to mitigate the risk of harm. It also limits the ability to scrutinise service providers and hold them accountable. This issue must be addressed urgently.

425.The transparency powers in the Bill are an important opportunity to encourage service providers to share relevant data with external researchers studying online safety and allied subjects.

426.The draft Bill requires that Ofcom produce a report on access to data for independent researchers. We recommend work on this report starts as soon as possible. We recommend that Ofcom be given the powers in the Bill to put into practice recommendations from that report.

427.Ofcom should have the power i) to audit, or appoint a third party to audit, how services commission, surface, collate and use their research; ii) to request a) specific internal research from services; and b) research on topics of interest to the Regulator.

428.Ofcom should commission an independent annual assessment, conducted by skilled persons, of what information should be provided by each of the highest risk services to advance academic research.

429.We recommend that the Bill should require service providers to conduct risk assessments of opening up data on online safety to independent researchers, with some pre-defined issues to comment on, including a) privacy; b) risk of harm to users; c) reputational risks (for the service provider); and d) financial cost.

430.We recommend that Ofcom should require service providers to conduct an annual formal review of the use of privacy-protecting technologies that would enable them to share sensitive datasets.

Role and value of a Joint Committee on Digital Regulation

431.A Joint Committee to oversee online safety and digital regulation more broadly has been recommended by numerous parliamentary committees.726 The Secretary of State, Ms Dorries, and Mr Philp told us that they were supportive of the proposal of an ongoing Joint Committee of both Houses.727

432.The call for a Joint Committee on Digital Regulation was recently reiterated by the House of Lords Communications and Digital Committee in their report Digital regulation: joined-up and accountable.728 In that report, they highlighted that regulators are increasingly being given “broad powers to address complex and evolving challenges”, which brings risks, making sustained attention from Parliament imperative “to ensure both that regulators have the powers they need and … that regulators are using those powers appropriately and effectively”.729 They identified seven different permanent parliamentary committees with remits relating to digital regulation, but none with a remit focused specifically on digital regulation. Just as digital regulation “needs to be cross-sectoral, so too must be the process of holding regulators to account.” A Joint Committee on Digital Regulation could “ensure coherence and draw on the full range of expertise in Parliament”.730

433.An ongoing Joint Committee would serve numerous critical functions:

a)Oversight and accountability of digital regulators in respect of the Bill: The draft Bill gives Ofcom a wide range of powers for enforcement,731 with some arguing that they are too great.732 A Joint Committee would provide a greater level of democratic accountability for Ofcom and other digital regulators over how they are using their powers, and this could be beneficial in alleviating the concerns raised by witnesses.733

b)Scrutiny of the Secretary of State in respect of the Bill: We have heard substantial concerns regarding the “exceptional”734 powers of the Secretary of State.735 Professor Wilson told us that: “The question we should always ask of legislation is, ‘Would I like this in the hands of my political opponents?’ because one day they will come to power.”736 A Joint Committee of both Houses with proportional representation from different political parties could serve a valuable function in scrutinising the digital regulation work of the Secretary of State and the way that they use their powers. For example, a Joint Committee could review the priority content that the Secretary of State designates under the Online Safety Bill.

c)Monitoring Ofcom’s independence: Ofcom and numerous other witnesses have said that the powers given to the Secretary of State in the draft Bill may undermine Ofcom’s independence and their ability to show clear and evidence-based decision-making.737 Ms Denham told us that Ofcom’s independence as a regulator is “critically important”.738 A Joint Committee could monitor the independence of Ofcom and make recommendations to safeguard it where necessary.

d)Look across the digital regulation landscape: Digital regulation is a complex and evolving landscape739 and the internet is already regulated by multiple independent regulators.740 Oversight of the digital regulation landscape by a Joint Committee of both Houses could support the ongoing development of regulation and legislation and assess regulatory coherence in this area.741 The Joint Committee could also maintain an overview of international efforts in digital regulation.

e)Horizon scanning: Digital technologies are complex and rapidly evolving. Legislation will face challenges as new technologies and new risks of harm emerge, and we have heard concerns about how the Bill can handle these challenges.742 A Joint Committee of both Houses could look to the future to identify newly emerging risks or technologies that represent a challenge to the regulatory and legislative landscape.

f)Generate solutions to current issues: Numerous issues raised throughout our inquiry are complex and as yet unresolved. A Joint Committee of both Houses could be instrumental in helping to generate solutions to ongoing policy issues such as how to accurately identify disinformation and misinformation online.

434.We agree with other Committees that it is imperative that digital regulation be subject to dedicated parliamentary oversight. To achieve this, we recommend a Joint Committee of both Houses to oversee digital regulation with five primary functions: scrutinising digital regulators and overseeing the regulatory landscape, including the Digital Regulation Cooperation Forum; scrutinising the Secretary of State’s work on digital regulation; reviewing the codes of practice laid by Ofcom and any legislation relevant to digital regulation (including secondary legislation under the Online Safety Act); considering any relevant new developments such as the creation of new technologies and the publication of independent research or whistleblower testimonies; and helping to generate solutions to ongoing issues in digital regulation.

435.We fully support the recommendation of the House of Lords Communications and Digital Committee in their report on Digital Regulation that, as soon as possible, full Digital Regulation Cooperation Forum membership should be extended to statutory regulators with significant interests and expertise in the digital sphere, and that partial membership should be extended to non-statutory regulators and advisory bodies with subject specific knowledge to participate on issues particular to their remits.

436.We recommend that, in addition to any other reports the Committee chooses to make, the Joint Committee produces an annual report with recommendations on what could or should change, looking towards future developments. We anticipate that the Joint Committee will want to look at the definition of disinformation and what more can be done to tackle it at an early stage.

Protections for whistleblowers

437.Whistleblowers like Ms Haugen and Ms Zhang, who both gave evidence to us, have greatly helped to increase understanding of the systems and processes of large service providers and their services. They have set out the challenges that exist in creating and enforcing effective content moderation systems, and in understanding known harms and emerging threats to user safety. Ms Haugen has also set out the kind of research Facebook conducts on the impact of its services on the welfare of its users and the decisions it has made when safety concerns might conflict with overall engagement with the service.

438.The Public Interest Disclosure Act 1998 provides protection to whistleblowers who make disclosures to their employer or another relevant person of potential criminal offences, the endangering of the health and safety of another person or people, or other forms of protected disclosure. Such protection includes protection against detriment at work and against being penalised under non-disclosure agreements. The “relevant persons” to whom such a disclosure can be made are set out in the Public Interest Disclosure (Prescribed Persons) Order 2014. The Order does not currently include a prescribed person relating to online safety.

439.We recommend that whistleblowers’ disclosure of information to Ofcom and/or the Joint Committee on Digital Regulation, where that information provides clear evidence of non-compliance with the Online Safety Bill, is protected under UK law.


675 Q 136; Q 146; Written evidence from: Ada Lovelace Institute (OSB0101); ITV (OSB0204); Q 72

677 This was 5.3 million members according to the Wall Street Journal reports (the Facebook Files). ‘Facebook Says Its Rules Apply to All. Company Documents Reveal a Secret Elite That’s Exempt’ The Wall Street Journal (13 September 2021): https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353 [accessed 8 December 2021]

678 ‘Facebook Says Its Rules Apply to All. Company Documents Reveal a Secret Elite That’s Exempt’ The Wall Street Journal (13 September 2021): https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353 [accessed 1 December 2021]; Written evidence from Glitch (OSB0097)

679 Twitter, ‘Defining public interest on Twitter’: https://blog.twitter.com/en_us/topics/company/2019/publicinterest [accessed 1 December 2021]

680 Twitter, ‘Defining public interest on Twitter’: https://blog.twitter.com/en_us/topics/company/2019/publicinterest [accessed 1 December 2021]

681 ‘Facebook oversight board to review system that exempts elite users’ The Guardian (22 September 2021): https://www.theguardian.com/technology/2021/sep/21/facebook-xcheck-system-oversight-board-review [accessed 1 December 2021]

682 BBC News, ‘Israel-Palestinian Facebook posts needed ‘bias’ review’: https://www.bbc.co.uk/news/technology-58558982 [accessed 18 November 2021]; Wired, ‘Facebook’s censorship-by-algorithm silenced Palestinian voices. Can its biases ever be fixed?’: https://wired.me/business/big-tech/facebook-content-moderation-palestine/ [accessed 18 November 2021]; Oversight Board, ‘Oversight Board overturns original Facebook decision. Case 2021-009-FB-UA’: https://oversightboard.com/news/389395596088473-oversight-board-overturns-original-facebook-decision-case-2021–009-fb-ua/ [accessed 18 November 2021]

683 Project on Middle East Political Science, ‘Digital Orientalism: #SaveSheikhJarrah and Arabic content’: https://pomeps.org/digital-orientalism-savesheikhjarrah-and-arabic-content-moderation [accessed 18 November 2021]

684 Transthetics, ‘YouTube’s moderation process is failing the LGBT community. Can we fix this?’: https://transthetics.com/YouTubes-moderation-process-is-failing-the-lgbt-community/ [accessed 18 November 2021]; Talking Influence, ‘LGBTQ+ Creators’ Law Suit Against YouTube’s Alleged Algorithm Discrimination Sits With Judge’: https://talkinginfluence.com/2020/06/04/lgbtq-creators-lawsuit-YouTube-discrimination/ [accessed 18 November 2021]; Written evidence from LGBT Foundation (OSB0191)

687 Written evidence from DMG Media (OSB0133)

688 Written evidence from DMG Media (OSB0133)

689 Written evidence from Sky (OSB0165)

690 Written evidence from The National Union of Journalists (NUJ) (OSB0166)

691 Draft Online Safety Bill, CP 405, May 2021, Part 3, Chapter 1, Clause 49

692 Draft Online Safety Bill, CP 405, May 2021, Part 4, Chapter 7, Clause 101

693 Draft Online Safety Bill, CP 405, May 2021, Part 4, Chapter 7, Clause 99

694 Written evidence from Ofcom (OSB0021)

697 Written evidence from Ada Lovelace Institute (OSB0101)

699 Written evidence from Facebook (OSB0147)

700 Written evidence from Google (OSB0175)

701 OBERLO, ‘10 YouTube stats every marketer should know in 2021 [Infographic]’: https://www.oberlo.com/blog/YouTube-statistics [accessed 18 November 2021]

703 Draft Online Safety Bill, CP 405, May 2021, Part 4, Chapter 7, Clause 101

704 Draft Online Safety Bill, CP 405, May 2021, Clause 101(1)(a)

705 Written evidence from Dr Martin Moore (Senior Lecturer at King’s College London) (OSB0063)

706 Carnegie UK (OSB0095); Q 99; Q 213; Who Targets Me (OSB0086); Dr Amy Orben (College Research Fellow at Emmanuel College, University of Cambridge) (OSB0131)

707 Written evidence from Reset (OSB0138); Q 62

708 Written evidence from: Ofcom (OSB0021); Q 271; Twitter (OSB0072); 5Rights Foundation (OSB0206); Written evidence from Ada Lovelace Institute (OSB0101); Q 63

709 Written evidence from Demos (OSB0159)

710 Written evidence from: Who Targets Me (OSB0086), point 6; Logically (OSB0094); Carnegie UK (OSB0095); Dr Amy Orben (College Research Fellow at Emmanuel College, University of Cambridge) (OSB0131); Catch 22 (OSB0195).

711 Written evidence from Reset (OSB0138), 20.

712 Written evidence from Reset (OSB0138), 20–21.

714 Center for Cybersecurity, ‘Facebook Disables Ad Observatory; Academicians and Journalists Fire Back’: https://cyber.nyu.edu/2021/08/21/facebook-disables-ad-observatory-academicians-and-journalists-fire-back/ [accessed 1 December 2021]; Q 213

715 Written evidence from Dr Amy Orben (College Research Fellow at Emmanuel College, University of Cambridge) (OSB0131); Q 185

718 Written evidence from: Reset (OSB0138); Ada Lovelace Institute (OSB0101)

719 Q 148 (Professor Jonathan Haidt) 

720 ‘Facebook knows Instagram Is Toxic For Teen Girls, Company Documents Show’, The Wall Street Journal (14 September 2021): https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739  [accessed 2 December 2021]

721 Q 148 (Jim Steyer)

722 Written evidence from Professor Andrew Przybylski (Associate Professor, Senior Research Fellow at University of Oxford) (OSB0193). See also written evidence from Dr Amy Orben (College Research Fellow at Emmanuel College, University of Cambridge) (OSB0131)

723 Q 200; also oral evidence taken before the Democracy and Digital Technologies Committee, 17 March 2020 (Session 2019–20), Q 298-99 (Karim Palant)

724 Written evidence from Facebook (OSB0147)

725 Written evidence from Twitter (OSB0072); Q 229

726 Democracy and Digital Technologies Committee, Digital Technology and the Resurrection of Trust (Report of Session 2019–21, HL Paper 77), Recommendation 15; Communications and Digital Committee, Free for all? Freedom of expression in the digital age (1st Report, Session 2021–22, HL Paper 54), Recommendation 16; Communications Committee, Regulating in a digital world (2nd Report, Session 2017–19, HL Paper 299), Recommendation 33

728 Communications and Digital Committee, Digital regulation: joined-up and accountable (3rd Report, Session 2021–22, HL Paper 126)

729 Communications and Digital Committee, Digital regulation: joined-up and accountable (3rd Report, Session 2021–22, HL Paper 126)

730 Communications and Digital Committee, Digital regulation: joined-up and accountable (3rd Report, Session 2021–22, HL Paper 126)

731 Written evidence from Ofcom (OSB0021)

732 Written evidence from: Dr Martin Moore (Senior Lecturer at King’s College London) (OSB0063); Virgin Media O2 (OSB0127)

733 Written evidence from: The Age Verification Providers Association (OSB0122); Dr Mikolaj Barczentewicz (Senior Lecturer in Law at University of Surrey) (OSB0152)

734 Carnegie UK, ‘The draft Online Safety Bill gives too many powers to the Secretary of State over too many things’: https://www.carnegieuktrust.org.uk/blog-posts/secretary-of-states-powers-and-the-draft-online-safety-bill/ [accessed 18 November 2021]

735 Q 138; Written evidence from Professor Damian Tambini (Distinguished Policy Fellow and Associate Professor at London School of Economics and Political Science) (OSB0066); QQ 70–72; Q 77

737 Written evidence from Ofcom (OSB0021); Q 138; Written evidence from LSE Department of Media and Communications (OSB0001); Q 126; Q 266; Written evidence from: Snap Inc. (OSB0012); Vodafone UK (OSB0015); Global Partners Digital (OSB0194); Confederation of British Industry (CBI) (OSB0186)

739 Department for Digital, Culture, Media and Sport, Digital Regulation: Driving Growth and Unlocking Innovation (July 2021): https://www.gov.uk/government/publications/digital-regulation-driving-growth-and-unlocking-innovation/digital-regulation-driving-growth-and-unlocking-innovation [accessed 18 November 2021]

740 Q 86; The Digital Regulation Cooperation Forum, Information about the Digital Regulation Cooperation Forum (DRCF), established to ensure greater cooperation on online regulatory matters (March 2021): https://www.gov.uk/government/collections/the-digital-regulation-cooperation-forum [accessed 18 November 2021]

741 Communications and Digital Committee, Free for all? Freedom of expression in the digital age (1st Report, Session 2021–22, HL Paper 54)

742 Q 66, Q 77, Q 126, Q 190, Q 244, Q 255; Written evidence from: NSPCC (OSB0109); Sky (OSB0165)



