The defining feature of platforms, as the term is used in this Report in the context of democracy and digital technology, is that they intermediate between their customers and content that they do not create (and that they do not usually pay for). They do this either by indexing content that exists elsewhere on the internet, as Google does, or by hosting user-submitted content, as Facebook, Twitter and YouTube do. As a result, they often surface harmful content that can have a detrimental effect on individuals and society. This can be compounded by the business models of these platforms. The largest platforms in this space are all funded by advertising and are therefore incentivised to maximise user attention. Many of these platforms harvest users’ personal data in order to algorithmically rank and recommend content that keeps users engaged. This can incentivise the increased spread of harmful content, as we discuss throughout this Report. In ranking and recommending content algorithmically, these platforms are making de facto editorial decisions, and we treat them as such in this Report.
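To make the engagement-driven ranking described above concrete, the sketch below orders a hypothetical feed by predicted engagement. It is a minimal illustration under stated assumptions only: the weights, field names and `rank_feed` function are invented for this example and do not describe any platform’s actual system.

```python
# Minimal sketch of engagement-based ranking of the kind described above.
# All field names, weights and scoring logic are hypothetical illustrations,
# not any platform's actual algorithm.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    likes: int
    shares: int
    comments: int


def predict_engagement(post: Post, user_interests: dict[str, float]) -> float:
    """Estimate how likely this user is to engage with the post.

    Combines past engagement signals with the user's inferred interest in
    the post's topic -- the role played by harvested personal data.
    """
    popularity = post.likes + 2 * post.shares + 3 * post.comments
    interest = user_interests.get(post.topic, 0.1)
    return popularity * interest


def rank_feed(posts: list[Post], user_interests: dict[str, float]) -> list[Post]:
    """Order a feed so the posts most likely to hold attention come first."""
    return sorted(
        posts,
        key=lambda p: predict_engagement(p, user_interests),
        reverse=True,
    )


if __name__ == "__main__":
    feed = [
        Post("a", "sport", likes=120, shares=5, comments=10),
        Post("b", "politics", likes=40, shares=30, comments=50),
    ]
    # A user profiled as highly interested in politics sees post "b" first,
    # even though post "a" has more raw likes.
    interests = {"politics": 0.9, "sport": 0.2}
    for post in rank_feed(feed, interests):
        print(post.post_id)
```

What the sketch makes explicit is that accuracy plays no part in the score: content that provokes engagement rises regardless of whether it is true, which is the incentive problem discussed in this Report.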
Neither advertising nor algorithmic recommendation is a necessary condition for spreading harmful content. WhatsApp, for example, features neither but has still been used to spread concerning content. Throughout this Report we use ‘platforms’, ‘online platforms’ or ‘technology platforms’ to describe these intermediary services. These intermediaries are not necessarily bad. If platforms were to abide effectively by the norms of a democratic society, tackling harmful content rather than spreading it, they could play a powerful, constructive role in supporting democracy.
Baroness O’Neill of Bengarve neatly explained the difference between misinformation and disinformation:
“… if I make a mistake and tell you that the moon is made of blue cheese, but I honestly believe it, that is misinformation. If I know perfectly well that it is not made of blue cheese but tell you so, that is disinformation.”
Whether or not information is purposefully false does not change whether it is harmful. In this Report we use ‘misinformation’ where it is unclear whether there was purposeful intent to misinform, and we label something ‘disinformation’ only where that intent is clear.
There is no single definition of campaigning in law from which to derive what a campaigner is. The Political Parties, Elections and Referendums Act 2000 (PPERA) contains separate definitions of campaign spending for registered parties, third parties and referendum campaigners. The Representation of the People Act 1983 contains another definition covering election spending by candidates.
Advances in digital technology also bring the question of what a campaigner is into focus. PPERA, drafted 20 years ago, does not account for the present situation.
In their 2018 ‘Democracy Disrupted’ report, the Electoral Commission used the term ‘campaigner’ loosely as an umbrella term for political parties, third parties, permitted participants, unregistered referendum campaigners and candidates.
We define a ‘campaign’ as coordinated activity that promotes electoral success or a particular referendum outcome. This applies to political parties, registered and unregistered third parties, and candidates in local and national elections.
We use the term ‘digital media literacy’ because our purposes go beyond, but do include, the functional skills required to use technology.
We define digital media literacy as the ability to distinguish fact from fiction (including identifying misinformation), to understand how digital platforms work, and to exercise one’s voice and influence decision makers in a digital context.