Session 2022-23
Online Safety Bill
Written evidence submitted by Jeremy Peckham FRSA (OSB25)
To the Online Safety Bill Committee.
Summary
The online harms that have arisen over the last decade or so have more to do with the characteristics of centralised platforms than with the content on them. Centralisation serves a business model that maximises traffic and user engagement to generate revenue. It has created the scale and amplification for communication that is the oxygen of those who seek influence, whether for good or bad. The platform design that achieves this scale is the root cause of the problems that we face today. Large-scale platforms will require automation to predict and proactively remove policy-infringing content, and therein lies the problem.
Flawed automated content removal strategy
The need for automation makes the proposed Bill in its present form fundamentally unworkable, owing to the limitations of automated content filtering technologies. This technology will result in much content that is legal being removed. The prohibition of less well-defined content that is deemed harmful online, although legal offline, will exacerbate the problem. A by-product of this technology will be the legalisation of constant surveillance of users and a complete lack of online privacy. (see sections 1 & 2)
Automation obscures content removal decisions
Automated techniques used to classify content are statistically based, and it is not possible to determine on what basis the percentage probability of a match is produced. Proactive removal of reappearing content using hash-matching methods, despite impressive success rates, produces many false positives and is not transparent. (see section 3)
Ofcom’s role in determining categories will lead to bias
Requiring Ofcom to produce a register of categories of illegal content, let alone content that is harmful online but legal offline (clause 80, Schedule 10), places the process of defining what is harmful outside normal legal due process. This will result in biases in favour of definitions arising from the Secretary of State, Ofcom and Big Technology companies that are unaccountable. This bias will be compounded by the tagging of content and the selection of training data used for automated content classification and removal. (see section 4)
Human Rights laws are infringed through automation
The automated removal of content constitutes what is termed in law "prior censorship". The removal of legal content that results from the limitations of automated techniques, coupled with category overreach, will result in the Bill promoting significant breaches of Human Rights and anti-discrimination law. (see sections 5 & 6).
Duty to protect free expression conflicts with automation
The duty in clause 28 "to have regard to the importance of protecting the rights of users and interested persons to freedom of expression within the law" conflicts with the known characteristics of content removal technologies. Companies will make a trade-off and err on the side of caution, significantly impacting free expression. It is simply not possible to achieve automatic content removal whilst preserving free expression. (see section 6)
Appeals process works in favour of company not user
Major platforms already have processes that would comply with clauses 18 and 28 of the Bill, but they do not provide the due process that is available offline. As a result, content that is legal free expression is already being removed, despite appeal. There is no simple process to hold Big Tech companies to account; they decide! The free expression provisions in the Bill are not given effect by the proposed appeals process and are therefore unworkable. (see section 10).
Safety by Design a better way forward
What is required is safety by design to mitigate the negative impact of platforms designed to maximise advertising revenue. This can be achieved by giving users greater control, expanding clauses 14 and 57 to cover platform functionalities and characteristics. Platforms should be required, through a user dashboard, to provide privacy by default. Recommender and filtering technologies should also be optional and turned off by default. Flagging, ranking and promoting tools should also be optional. Certain techniques, such as "shadow banning", should not be allowed.
The price that the British public will have to pay, if this Bill goes through in its present form, will be a loss of freedom of expression with an accompanying automated curation of an increasingly narrow range of opinion. It lacks democratic accountability in operation and is unfair to minorities as well as being detrimental to society. Far from being the safest place in the world to be online, Britain will become the first western state to codify in law digital authoritarianism similar to China’s internet laws. [1]
Brief Bio: Jeremy Peckham BSc FRSA
Author, technology entrepreneur and leadership mentor.
Jeremy Peckham has spent much of his career in the field of Artificial Intelligence and, latterly, as a businessman and entrepreneur. He worked as a government scientist at the UK Royal Aircraft Establishment and later moved to Logica, an international software and systems integration company. Whilst at Logica he was Project Director of the five-year, pan-European, €20m research project on Speech Understanding and Dialogue (SUNDIAL), which broke new ground in AI.
He founded his first company in 1993 through a management buy-out, based on the AI technology developed at Logica, and launched a successful public offering on the London Stock Exchange in 1996. Jeremy is now a technology entrepreneur, having helped to establish several high-tech companies over the last 25 years. His latest book, Masters or Slaves? AI and the Future of Humanity, was published by IVP: London in 2021 and looks at the ethical implications of AI.
Key aspects of the Bill that are unworkable
1. "Duty of Care" provisions require use of flawed automation
1.1 It is generally accepted that human moderation of content is not possible, as the Cambridge Consultants report on behalf of Ofcom states:
it has become impossible to identify and remove harmful content using traditional human-led moderation approaches at the speed and scale necessary. [2]
1.2. Hash-matching technology has to date been used by the Global Internet Forum to Counter Terrorism (GIFCT) consortium for automatically flagging and removing terrorist propaganda and hate speech. This requires specific images and other content to be labelled, hashed and stored. Checking new content against this database can prevent reappearances of the content, often before it is viewed. Despite the impressive removal statistics quoted by major platforms, a GIFCT working group points out that there is no agreed definition of terrorism or hate speech and that they are interpreted differently in different countries. This results in over-broad removal of content, including legitimate political speech and war-crime reporting, impacting the rights to freedom of expression, association and assembly. [1] Requiring Ofcom to determine these definitions puts it above the normal process for defining legality, and any agreed categories still have to be interpreted, leading to the same overreach.
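By way of illustration, the following minimal sketch shows the general shape of hash-based matching against a shared database. It is not GIFCT's actual implementation, and the database entry is hypothetical: real systems use perceptual hashes so that slightly edited copies still match, whereas an exact digest is used here purely for simplicity.

```python
# Minimal sketch of hash-based matching against a shared database of
# previously removed content. Not GIFCT's actual implementation: real systems
# use perceptual hashes so that slightly edited copies still match, whereas an
# exact SHA-256 digest is used here purely for simplicity.
import hashlib

# Hypothetical shared hash database (this entry is the SHA-256 of b"test").
shared_hash_database = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_block(content: bytes) -> bool:
    # The match is purely mechanical: the same bytes are blocked whether they
    # appear in propaganda or in journalism documenting it.
    return hashlib.sha256(content).hexdigest() in shared_hash_database

print(should_block(b"test"))           # True: hash is in the shared database
print(should_block(b"other content"))  # False: never seen before
```

The point the sketch makes is that the match carries no information about context or intent, which is why definitional disagreements and legitimate uses are swept up with the material the database was built to catch.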
1.3. Whilst hash matching of content to automatically remove its reappearance is relatively understandable technology, AI technologies are opaque "black box" statistical classifiers. They will not solve the problem of predicting and filtering illegal content (algorithmic moderation), still less ill-defined harmful but legal content, because they act as a very crude pattern-matching filter that cannot even approximate human intelligence. Moreover, such technology cannot be developed to provide an audit trail of decisions and is by its nature not transparent.
1.4. Notwithstanding the hype and bold claims of the parties with vested interests in internet and social media platforms, most independent experts in the field accept that AI is nowhere near achieving human intelligence and is unlikely to do so for the foreseeable future, if ever. [2] Interpreting context is challenging for humans, but it is impossible for AI algorithms.
… filters are not good at figuring out hate speech, parody, or news reporting of controversial events because so much of the determination depends on cultural context and other extrinsic information. [3]
1.5. The Cambridge Consultants 2019 report to Ofcom echoes this point. [4] Simply training an algorithm to increase the success rate of flagging and removing content that breaks policy rules has unintended consequences, ones that may not concern the platform owners but can have detrimental effects on free speech and minorities. As one insider has already admitted, he would take down 10,000 pieces of content to catch one that might anger the regulator. [5]
1.6. Softer targets for policing, such as legal but harmful content, will be automatically removed by filters that will be set to err on the side of caution. Their crude pattern matching, based on human-labelled training data and operating over billions of posts, will result in hundreds of millions of false positives, to the detriment of diversity of opinion and free speech (a back-of-the-envelope calculation after the quotation below illustrates the scale). The Transatlantic Working Group on content moderation and freedom of expression recognises this problem, recommending that:
Automation in content moderation should not be mandated in law because the state of the art is neither reliable nor effective. [6]
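To make the scale effect concrete, the calculation below uses purely hypothetical figures: the volume, prevalence, recall and false positive rate are assumptions chosen for illustration, not measurements of any platform. The point is structural: when policy-infringing content is rare, even a seemingly accurate filter wrongly removes far more legitimate content than the infringing content it catches.

```python
# Illustrative (hypothetical) numbers only: a back-of-the-envelope calculation
# of why even an apparently accurate classifier produces false positives at
# enormous scale when the content it targets is rare.
posts_per_year = 10_000_000_000   # assume ten billion posts processed
violating_rate = 0.001            # assume 1 in 1,000 posts actually violates policy
recall = 0.95                     # assume 95% of violating posts are caught
false_positive_rate = 0.01        # assume 1% of legitimate posts are wrongly flagged

violating = posts_per_year * violating_rate
legitimate = posts_per_year - violating

caught = violating * recall
wrongly_removed = legitimate * false_positive_rate

print(f"Violating posts removed: {caught:,.0f}")            # ~9.5 million
print(f"Legitimate posts removed: {wrongly_removed:,.0f}")  # ~99.9 million
```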
1.7. The limitations of the technology used for proactive content removal, now and for the foreseeable future, result in automated "prior censorship".
2. Accredited Proactive Technology also flawed
2.1. Ofcom’s role in enforcing the use of accredited technology (clauses 103 & 116) does not help. For Ofcom to define minimum standards of accuracy is an admission that there will be overreach. Accredited technology will still be subject to the same inherent limitations as all AI-based technology and other techniques.
2.2. A requirement to specify accuracy, bias and effectiveness is unworkable because these parameters are highly subjective and the subject of much debate in the academic and technical community. These parameters further illustrate that legal content will be caught up in removal and that some categories will be marginalised through bias. More importantly, this technology cannot and will not offer the same capabilities as human moderators, who themselves face challenges in moderating content.
2.3. To require a regulator to enforce the use of some as yet undefined piece of opaque technology to police online content is therefore as unworkable as requiring platform providers to do so. Despite the optimism of Mark Zuckerberg at the 2018 congressional hearing, Artificial Intelligence technology does not in fact exhibit any intelligence at all.
3. Automated content filtering decisions are obscure
3.1. The reliance on automated moderation and proactive content removal in the proposed Bill obscures the decision-making process about content: not only what is deemed harmful but how it is dealt with. Content is not the same as a set of published categories with attached descriptions. Automated techniques used to classify content are statistically based, and it is not possible to determine on what basis the percentage probability of a match is produced. The nature of these techniques means that a computer generates probabilities, not certainties.
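A minimal sketch of how such probabilistic scores are typically turned into removal decisions may help to make this concrete. The scoring function below is a deliberately crude stand-in and the threshold values are hypothetical (real platforms use large statistical models trained on labelled data, not keyword counts), but the structure of the decision is representative: the system produces a number, not a reason.

```python
# Minimal, self-contained sketch of threshold-based automated moderation.
# The scoring function is a toy stand-in: real systems use large statistical
# models whose scores cannot be traced back to a human-readable reason.

def toy_score(post_text: str) -> float:
    # Hypothetical scoring; real models output a probability learned from
    # labelled training data, not a keyword count.
    flagged_words = {"attack", "destroy"}
    hits = sum(word in flagged_words for word in post_text.lower().split())
    return min(1.0, hits / 3)

def moderate(post_text: str, threshold: float = 0.3) -> str:
    score = toy_score(post_text)       # a probability-like number, not a reason
    if score >= threshold:
        return "remove"                # automatic removal, no explanation attached
    if score >= threshold / 2:
        return "human_review"          # borderline cases queued for moderators
    return "allow"

# A news report mentioning an attack is treated the same way as a threat:
print(moderate("journalists report an attack on the power grid"))  # remove
print(moderate("lovely weather today"))                            # allow
```

Lowering the threshold to satisfy a regulator removes more of the targeted content, but it also removes more legal content, and nothing in the score itself explains why any particular post was removed.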
3.2. Being transparent about Community Standards or Terms of Service does not mitigate this limitation if automation becomes the norm, as the proposed Bill will require. As Gorwa et al. have observed:
some implementations of algorithmic moderation threaten to (a) decrease decisional transparency (making a famously non-transparent set of practices even more difficult to understand or audit), (b) complicate outstanding issues of justice (how certain viewpoints, groups, or types of speech are privileged), and (c) obscure or de-politicise the complex politics that underlie the practices of contemporary platform moderation. [7]
4. Bias and lack of transparency in Training Data for automated moderation
4.1. AI and Machine Learning algorithms require significant amounts of training data, labelled by humans, to define data sets of illegal and of legal-but-harmful content. This raises the question of who will select such material, and on what criteria. There is currently no transparency over the data used for training, nor over the algorithms used by the major platforms, making independent audit impossible. To date the large players have cited GDPR as a reason why data cannot be shared with academic institutes for independent evaluation. The proposed Bill will leave it to the platform providers to determine the data used to train the automatic content moderators, leaving the filtering of content down to the policies of the platform owners.
4.2. There is no agreement, even amongst academics, about the definitions of toxic or harmful speech or content that would be needed even to begin building training data sets and allow automated classification (the sketch below illustrates how such disagreement is buried in the data). These differences of opinion, coupled with the inevitable bias reflected in any data used to train automated moderation technology, and with the limitations of the automatic classifiers themselves outlined above, make the whole approach unworkable for so-called harmful (but legal offline) speech.
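The following sketch is a hypothetical illustration of how disagreement between human labellers is commonly collapsed into a single "ground truth" label before training. The post, the labels and the majority-vote rule are invented for the example; practices vary between platforms and datasets.

```python
# Hypothetical illustration of how disagreement between human labellers is
# collapsed into a single "ground truth" label for training data. The example
# text and labels are invented; real datasets contain millions of such items.
from collections import Counter

post = "This policy is a betrayal of working people"           # borderline opinion
labels_from_annotators = ["harmful", "not_harmful", "harmful"]  # annotators disagree

# Majority vote silently resolves the disagreement; the minority view, and the
# fact that there was disagreement at all, are discarded before training.
training_label = Counter(labels_from_annotators).most_common(1)[0][0]
print(training_label)  # "harmful" becomes the model's notion of ground truth
```

Once the minority view is discarded at this stage, a classifier trained on the data cannot recover it; the disagreement simply disappears from the system.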
4.3. Even where content is flagged for human review, human moderators will not have the context in which to determine the originator's intent, nor whether it is truly hate speech in the context of the wider set of communications in the thread.
4.4. It is well established that training data is biased because humans are biased and this will be amplified in the case of legal but harmful content.
4.5. Content that is currently legal to express offline is already being removed by Twitter, Facebook, YouTube and others. In order to stay within the law as defined by the proposed Bill, they will err on the side of caution, resulting in significantly more content being removed than is currently the case.
5. Human Rights infringed
5.1. The technology limitations that I have outlined will result in significant infringements of Human Rights and discrimination law, putting the proposed Bill at variance with existing laws and thus making it unworkable in its present form.
5.2. It will enshrine in law policies that amount to prior censorship. In 2011 a Joint Declaration on Freedom of Expression and the Internet was issued by the four international mechanisms for promoting freedom of expression. It stated that:
Content filtering systems which are imposed by a government or commercial service provider and which are not end-user controlled are a form of prior censorship and are not justifiable as a restriction on freedom of expression. [8]
5.3. It can also be argued that the user's acceptance of the Terms of Service does not mitigate prior censorship, because it amounts to an "unfair contract", undermining the notion that the user is "in control" of the handling of their content by virtue of having accepted the Terms of Service. An unfair contract can be deemed to be one where, "contrary to the requirement of good faith, it causes a significant imbalance in the parties' rights and obligations arising under the contract, to the detriment of the consumer".
5.4. The Bill will also encourage the processing of user data and profiling in order to automate the policing required. Automated moderation technologies already in use effectively monitor every piece of content added to a platform, resulting in the constant surveillance of users, as occurs in China and other authoritarian states. This is a dangerous state of affairs for governments to enshrine in law. Legal scholar Jonathan Zittrain suggests that "the norms of individual freedom of liberal societies depend on the discretion of law enforcement officials and the friction that makes enforcement difficult and expensive, yet automation offers complete, simultaneous enforcement of the law and constant surveillance of behavior. Rather than simply extending a pre-existing law enforcement regime to more cases and doing it many times, the perfect enforcement of automation at scale transforms the basic premises of the relationship between the public and the law." [9]
5.5. The proposed Bill provides no transparency over how decisions about free expression will be made and, worryingly, provides no mechanism for civil and legal accountability. Once the Bill is passed, the details of such decisions remain with the Secretary of State and will not come under the scrutiny of Parliament. As the UN Special Rapporteur concluded in the report to the General Assembly on AI and its impact on freedom of opinion and expression:
Human rights law imposes on States both negative obligations to refrain from implementing measures that interfere with the exercise of freedom of opinion and expression and positive obligations to promote the rights to freedom of opinion and expression and to protect their exercise. [10]
5.6. The proposed Bill will expose platform owners to legal liability for unlawful information delivered on their platforms where they fail to take due care. This will induce providers to over-remove content in order to avoid fines, to the detriment of free speech.
5.7. Principle 3 of the United Nations Guiding Principles on Business and Human Rights states that governments should ensure that laws and policies "do not constrain but enable business respect for human rights" and "provide effective guidance to business enterprises on how to respect human rights throughout their operations". The impact that the requirements of the Bill will have on free expression goes against that principle.
6. Discrimination Law infringed
6.1. The proposed Bill is unworkable because it also sets one law against another. The freedom of speech required at universities to combat the cancel and no-platforming culture is contradicted by a Bill that will promote the cancelling of views (legal offline) that, in the opinion of the accuser, are deemed harmful. The proposed "easy reporting of content" (clause 17), whilst laudable, will unfortunately promote cancel culture to the detriment of diversity of views in the digital information space.
6.2. The discrimination and marginalisation already occurring on platforms will increase as a direct result of the Bill, affecting those protected under discrimination law and thus putting the Bill at odds with existing anti-discrimination legislation.
6.3. The Bill will promote an ecosystem of information curated by algorithms (computer programs) rather than society at large, thus destroying the diversity and inclusiveness that the digital world has sought to encourage. Various minority groups will as a result be disenfranchised.
Statistical accuracy often lays the burden of error on underserved, disenfranchised, and minority groups. The margin of error typically lands on the marginal: who these tools over-identify, or fail to protect, is rarely random [11]
6.4. Platforms are not legally treated as media companies so there is not the same editorial obligation over impartiality. Apart from moderating the content to satisfy the regulators, they will continue to increase user engagement even if free speech is curtailed.
6.5. The duty to have regard to free expression (clauses 19 & 29) will have no impact, because the defence of the platform providers will be that the duty of care over harmful content had to take precedence. The duty of care over ill-defined harmful content, especially that which is legal offline, is pitted against common law free speech, but without the normal due process and transparency. This will result in platform providers erring on the side of caution in order to avoid fines. Since there is no requirement other than to "consider" free speech, how would anyone demonstrate that they did not? The opaque and statistical nature of the algorithms used in current and future moderation will make it impossible to demonstrate that free speech has been curtailed or impacted, because the defence will lie in the Terms of Service that require users to consent to their input being filtered through such automated means. Free speech considerations will, in the main, be reduced to decisions about what training data is used for automated moderation systems and what content is hashed in the GIFCT database.
6.6. This behaviour goes against the principles of a liberal democracy, where ideas are shaped by debate and disagreement. It is already established that the online infrastructure and its technology characteristics (used in the sense of the Bill) have provided both the State and platform owners with significant political propaganda opportunities, amplifying curated views.
Algorithmic filtering may create filter bubbles that isolates people from ideological challenges (Bozdag, 2013). State actors have devoted substantial resources to creating and promoting propaganda online (Woolley & Guilbeault, 2017; Woolley & Howard, 2017). Owners of digital platforms derive political power from control over content moderation and curation (Wallace, 2018). [12]
6.7. Algorithms will be tweaked in the light of bad publicity (i.e. the view of the loudest or the majority) to filter content more aggressively, thus marginalising more people with other views. Views are often shaped by well-funded activists who have a vastly disproportionate voice relative to their numbers, owing to platform policy biases. Such users exploit the amplification effect of platform design and their content is automatically curated, thus limiting freedom of expression for others, alternative views and access to information.
7. Bill complicit in traumatic human moderation
7.1. Automated moderation is currently used by Tier 1 companies such as Facebook and Instagram to take down content and also to flag the most severe content for human moderation. It is well reported that moderating such content is traumatic for the humans involved and is carried out in poor working conditions. The Bill pays no attention to the human cost of policing even a small fraction of content. In addition, human labelling of data, which can also cause significant trauma, is required to maintain hashed databases and to train Machine Learning systems. It could be argued that the Bill will enshrine in law a requirement for such human moderation, thus being complicit in creating the trauma. Humans will always have to be in the loop, because systems have to be trained and the technology will not reach human intelligence.
8. Bad actors will continue to game the system
8.1. The reliance on algorithms to carry out content moderation will result in bad actors gaming the system, as they already do. This is possible because of the characteristics of centralised platforms with open user bases. There is empirical evidence that demonstrates how bad actors use the characteristics of platforms to spread false information and harmful content. These approaches are already adapted to algorithmic moderation and capitalise on the characteristics of the platform infrastructure, such as the decontextualization afforded by open and decentralised platforms that allow editing and reposting of content.
Previously, decentralization of the web was heralded as a value of web architecture. In recent years, however, this characteristic has been exploited as a tool by disinformation agents and media manipulators who use it to filter dissent and exploit computational propaganda techniques as they push content across platforms. [13]
8.2. Trying to automate the labelling of content as true or false (harmful or not) will not work, because disinformation, or what some might regard as harmful content, cannot simply be labelled true or false.
Disinformation often layers true information with false - an accurate fact set in misleading context, a real photograph purposely mislabelled. The key is not to determine the truth of a specific post or tweet, but to understand how it fits into a larger disinformation campaign. [14]
9. Safety duty on platform functionalities and characteristics too vague
9.1. The safety duty to consider how platform functionalities and characteristics might harm users is too vague to be enforceable and is unworkable. Platform providers will automatically moderate and remove content they do not like whilst amplifying that which they deem acceptable in order to drive revenue. Children and adults will therefore remain exploited as a product to sell. The functionalities and characteristics of platforms are fundamental to supporting the business model. The non-content-related harms that these functionalities cause are not adequately addressed by the Bill.
10. Right of appeal works in favour of Platform providers
10.1. The complaints procedures proposed in the Bill (clauses 18 & 28) are, in practice, unworkable. In 2021, Facebook removed, mostly automatically, over 585m pieces of content (excluding spam); over 11m of these removals were appealed, and just over 7m pieces were restored. This takes no account of content removed but not appealed, leaving millions of pieces of potentially legal free expression removed automatically (the illustrative calculation below makes the implication explicit).
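The extrapolation below is illustrative only: it assumes, purely for the sake of argument, that removals which were appealed are representative of removals generally, which is not established. Even so, it shows why the 7m restored pieces are likely to understate the amount of legitimate content removed.

```python
# Figures cited above; the extrapolation is illustrative only and assumes
# appealed removals are representative of removals generally.
removed  = 585_000_000   # pieces of content removed (excluding spam)
appealed =  11_000_000   # removals appealed
restored =   7_000_000   # appeals upheld (content restored)

appeal_rate   = appealed / removed    # ~1.9% of removals were appealed
reversal_rate = restored / appealed   # ~64% of appeals were upheld

# If the same reversal rate held across unappealed removals, the number of
# wrongly removed pieces would be far larger than the 7m actually restored.
implied_wrongly_removed = removed * reversal_rate   # ~372 million

print(f"{appeal_rate:.1%}, {reversal_rate:.1%}, {implied_wrongly_removed:,.0f}")
```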
10.2. Existing rights of appeal do not currently work in favour of minority groups or opinions. There is little transparency about the reasons for removal, nor an opportunity for reasoned argument in the appeal process, as there would be in a court of law prior to any enforcement. The appeal also takes place only after removal, so the removal itself amounts to prior censorship, and the process bears no correspondence to legal due process regarding what may lawfully be said. In short, there is no external accountability independent of the platform providers and Ofcom.
10.3. YouTube’s automatic copyright protection technology (Content ID), for example, is biased towards copyright holders, who can decide whether flagged content is taken down or whether they will instead take a share of the advertising revenue from the supposedly infringing use. Whilst there is a right to challenge a copyright infringement decision, proving that there was no infringement and reversing the decision is extremely difficult, costly and time-consuming. [15] This automated approach, rather than an investigation prior to take-down, has been followed by Instagram and Facebook and has led to an imbalance between copyright holders and those deemed by the automated system to have infringed.
Alternative Solutions
The main weakness of the Bill that renders it unworkable is the reliance on automated content moderation to solve the problem of users being exposed to illegal content. The inclusion of what is legal offline but deemed harmful online simply exacerbates the problem, as I have outlined above. Can the circle be squared?
11. Platform infrastructure is the problem
11.1. The problems that have arisen over the last decade or so in the online world lie to a large degree with the characteristics of centralised platforms. Centralisation serves the business model that maximises traffic and user engagement to generate revenue. It has created the scale for communication that is the oxygen of those who seek influence, whether for good or bad. The platform design that achieves this scale is the root cause of the problems that we face today.
11.2. Platforms have moved away from community-based, human-centric content moderation, and from community development of the rules of user engagement. This is partly a result of scale, but mostly because it serves the business model of the major platforms to increase traffic, bridge connections and increase user engagement. Automation has been used for at least three decades, even in human-moderated forums, to reduce or remove undesirable content such as spam and vandalism. The design of current platforms intentionally amplifies the spread of information, or disinformation and illegal content, giving society the worst of all worlds. We have never been more connected, yet never more alone; never more abused, nor more under the control of the major platform owners.
11.3. In essence the problem is one of our own making: we have nurtured a monster that controls our lives, all for the sake of free access. We are paying the price in addiction and abuse, with the harms that can result. Attempts to control this have already created a Technosphere that is damaging the rich flow of ideas and opinions that free speech encourages, threatening democracy itself.
11.4. The debate on disinformation, whether in connection with political campaigns or the pandemic, has focused primarily on content and less on the underlying media infrastructure that allows circulation of disinformation to happen in a hitherto unprecedented, uncontrolled and speedy manner. [16]
12. Safety by design
12.1. I believe, however, that there are positive measures in the Bill, such as user control, that could be extended to afford the protection the Bill seeks whilst preserving free speech and avoiding discrimination and human rights violations. These extended measures would also remove much of the harm, not covered in the Bill, that results from platform characteristics and functionalities. They would put users in control of their online experience.
12.2. The Bill proposes that platforms should provide a means of user identity and age verification and allow users to reject non-verified users (clauses 57, 66-69). These are simple but highly effective means of protecting users from trolls and from comments that they find offensive or disturbing. Those who choose to post as verified users can be traced and prosecuted through the courts under existing laws if their content is deemed illegal. Age verification for children is also an effective measure, although care in the design and implementation of this feature is needed to prevent it being circumvented.
12.3. Additional requirements placed on platform providers that would increase safety online without compromising free speech include the following (a sketch of what such defaults might look like follows the list):
a) Removal of bots and accounts that use bots to amplify content. Bots and filtering algorithms designed to increase engagement allow disinformation to travel 6x faster than verified stories.
b) User data and content should be private by default, changeable only by the user.
c) Recommender, filtering and ranking features should be optional and disabled by default.
d) Advertising should be banned for children.
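As an illustration of what such safe-by-default user controls might look like in practice, the sketch below sets out hypothetical defaults; the setting names are invented for the example and are not taken from the Bill or from any platform.

```python
# Hypothetical sketch of safe-by-default user settings under a "safety by
# design" approach; the names and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class UserSafetySettings:
    profile_private: bool = True                   # b) private by default, user-changeable
    personalised_recommendations: bool = False     # c) recommender off by default
    algorithmic_ranking: bool = False              # c) chronological feed unless opted in
    content_filtering: bool = False                # c) filtering optional
    interact_with_unverified_users: bool = False   # reject non-verified users by default

settings = UserSafetySettings()
# The user, not the platform, relaxes these defaults explicitly:
settings.personalised_recommendations = True
print(settings)
```

The design choice the sketch illustrates is that the platform starts from the most protective configuration and any relaxation is an explicit, reversible decision by the user.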
12.4. These measures would put people, rather than opaque algorithms, back in control of content. Whilst there are no perfect solutions, prioritising free expression is in the best long-term interests of a flourishing society.
12.5. This Bill, in its present form, whilst laudably acknowledging the problem of online harm, will unfortunately produce other harms. It will be a final nail in the coffin, at least for UK citizens, of the free flow of ideas, allowing platform providers to offer their own version of reality through the automated curation of information. It will marginalise minority groups and disenfranchise many people. This curated information will shape opinion, just as it does in China, Russia and numerous other autocratic regimes. At the same time, bad actors will continue to exploit and manipulate platform characteristics and moderation algorithms. The result will be a society that has become a slave to, rather than a master of, digital technology. [17]
[1] Peckham, J., The Rise of Digital Authoritarianism, Digital Persecution Conference, London, 2022 (in publication).
[2] Cambridge Consultants, Use of AI in Content Moderation, 2019. Report produced on behalf of Ofcom.
[1] BSR, Human Rights Assessment: Global Internet Forum to Counter Terrorism, 2021.
[2] Larson, E., The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, The Belknap Press of Harvard University Press: Cambridge, Massachusetts, 2021.
[3] Eric Goldman, professor of law at Santa Clara University, cited in The Verge, 27/2/2019.
[4] Cambridge Consultants, Use of AI in Content Moderation.
[5] Cited by James Forsyth in The Times.
[6] Llanso, E. and Leerssen, P., Artificial Intelligence, Content Moderation, and Freedom of Expression, Transatlantic Working Group on Content Moderation Online and Freedom of Expression, 2020.
[7] Gorwa, R., Binns, R. and Katzenbach, C., Algorithmic content moderation: Technical and political challenges in the automation of platform governance, Big Data & Society, January 2020. doi: 10.1177/2053951719897945
[8] International Mechanisms for Promoting Freedom of Expression (2011), Joint declaration on freedom of expression and the internet. https://www.osce.org/fom/78309?download=true
[9] Cited in: Wright, L., Automated Platform Governance Through Visibility and Scale: On the Transformational Power of AutoModerator, Social Media + Society, January 2022. doi: 10.1177/20563051221077020
[10] Report of the Special Rapporteur to the General Assembly on AI and its impact on freedom of opinion and expression. https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/ReportGA73.aspx
[11] Buolamwini, J. and Gebru, T. (2018), Gender shades: Intersectional accuracy disparities in commercial gender classification, Proceedings of Machine Learning Research 81: 1–15. Cited in Gillespie, T., Content moderation, AI, and the question of scale, Big Data & Society, July 2020. doi: 10.1177/2053951720943234
[12] Krafft, P.M. and Donovan, J. (2020), Disinformation by Design: The Use of Evidence Collages and Platform Filtering in a Media Manipulation Campaign, Political Communication, 37:2, 194–214. DOI: 10.1080/10584609.2019.1686094
[13] Ibid.
[14] Starbird, K. (2019), Disinformation’s spread: Bots, trolls and all of us, Nature, 571(7766), 449. doi: 10.1038/d41586-019-02235-x
[15] Gorwa, R., Algorithmic content moderation: Technical and political challenges in the automation of platform governance.
[16] Bechmann, A., Tackling Disinformation and Infodemics Demands Media Policy Changes, Digital Journalism (2020), 8:6, 855–863. DOI: 10.1080/21670811.2020.1773887
[17] Peckham, J., Masters or Slaves? AI and the Future of Humanity, IVP: London, 2021, p. 187.