Age assurance. Age assurance refers to any system of age checking and estimation. The Age Verification Providers Association (AVPA) makes the distinction between “age assurance” and “age verification”: age assurance is a broad term for different methods of discerning the age or age-range of an online user; age verification is a subset of that with more stringent methods and a higher level of accuracy and confidence in the age or age-range of that user.
Age verification. Age verification is a subset of age assurance, with more stringent methods and a higher level of accuracy and confidence in the age or age-range of that user.
Artificial Intelligence (AI). AI is technology which aims to replicate the problem-solving and decision-making capabilities of the human mind.
Algorithm. An algorithm is a list of rules that must be followed in a particular sequence in order to answer a question, solve a problem or perform a computation.
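As an illustration only (the function and the numbers below are invented, not drawn from the Report), a short Python sketch of an algorithm in this sense: a fixed sequence of rules followed to answer a question.

```python
def largest(numbers):
    """Follow a fixed sequence of rules to find the largest number in a list."""
    result = numbers[0]          # Rule 1: start with the first number
    for n in numbers[1:]:        # Rule 2: examine each remaining number in turn
        if n > result:           # Rule 3: keep whichever number is larger
            result = n
    return result                # Rule 4: report the answer

print(largest([3, 17, 8]))       # prints 17
```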
ASA. Advertising Standards Authority.
BBFC. British Board of Film Classification.
Codes of Practice cover a range of authoritative guidance on best practice in different sectors, often issued by regulators. A statutory Code of Practice has the backing of an Act of Parliament and therefore carries greater weight than, for example, a self-regulatory Code of Practice.
Content refers to a range of media, including text, images, memes, audio and videos.
Content moderation is the process, policy and technology used to check and curate content and activity on a service.
CSEA. Child sexual exploitation and abuse.
CMA. Competition and Markets Authority.
DCMS. The Government Department for Digital, Culture, Media and Sport.
Digital services and products. The publication of material or provision of a service through a digital medium, either free of charge or for a price.
Disinformation is factually incorrect content that is created and/or shared with the deliberate intention of misleading or deceiving audiences (in contrast to misinformation).
DRCF. Digital Regulation Cooperation Forum.
Duty of care. A legal “duty of care” is a term derived from the common law of negligence. It is a duty on one party not to inflict damage on another carelessly. The duty of care proposed by the 2019 White Paper and the draft Bill is a statutory duty (or series of duties) on service providers to people using their platforms.
End-to-end encryption (E2EE) is a method of secure communication between a sender and recipient which stops third parties from accessing the content of the communication. ‘Third parties’ includes service providers and Internet service providers. In practice, this means that encrypted services such as WhatsApp cannot view messages sent on their services.
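A minimal sketch of the underlying principle, assuming the third-party Python `cryptography` package: content encrypted with a key held only by the sender and recipient cannot be read by anyone else, including the service relaying the message. Real end-to-end encrypted services such as WhatsApp use more elaborate key-exchange protocols; this illustrates the principle only.

```python
from cryptography.fernet import Fernet

# Key known only to sender and recipient (in practice agreed via a key-exchange protocol)
shared_key = Fernet.generate_key()

# Sender encrypts; a service relaying this ciphertext cannot read it without the key
ciphertext = Fernet(shared_key).encrypt(b"Meet at noon")

# Recipient, holding the same key, recovers the original message
print(Fernet(shared_key).decrypt(ciphertext))   # b'Meet at noon'
```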
FCA. Financial Conduct Authority.
FOS. Financial Ombudsman Service.
FSA. Financial Services Authority.
Freedom of expression. Article 10 of the European Convention on Human Rights (ECHR) states that ‘everyone has the right to freedom of expression’. It includes the ‘freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers’.
Freedom of speech. During the course of its work, the Committee heard witnesses refer to freedom of expression and freedom of speech as interchangeable terms. Both free expression and free speech can be subject to restrictions.
Friction is the degree of resistance that users encounter while posting, sharing, viewing and interacting with online content or engaging in online activity. Generally, increasing friction entails adding additional steps that users must undertake before they can act online. For instance, users face very little friction if they can post content on a platform by clicking a single button. They face more friction if they need to tick a consent box and are given a warning before they can post.
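A hypothetical sketch of the contrast drawn above (the functions and messages are invented, not taken from any platform): a one-click posting flow against a higher-friction flow that adds a warning and a consent step.

```python
def publish(content):
    print(f"Posted: {content}")

def post_low_friction(content):
    # One click: the content is published immediately
    publish(content)

def post_high_friction(content):
    # Extra steps before publication increase friction
    print("Warning: this post may breach the community guidelines.")
    consent = input("Type 'yes' to confirm you still want to post: ")
    if consent.strip().lower() == "yes":
        publish(content)

post_low_friction("hello")    # very little friction: one action, one post
post_high_friction("hello")   # more friction: a warning and a consent step first
```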
Harvesting. In the context of this Report, harvesting is data harvesting, which is the automatic collection of information from online sources, such as websites or databases. Often the purpose of data harvesting is to extract information about individual users.
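A minimal sketch of automatic collection from an online source, assuming the third-party `requests` and `beautifulsoup4` packages; the URL is a placeholder, not a real source.

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page automatically rather than by hand (placeholder URL)
html = requests.get("https://example.com/profiles").text

# Parse the page and extract every link: information harvested from the source
soup = BeautifulSoup(html, "html.parser")
links = [a.get("href") for a in soup.find_all("a")]
print(links)
```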
Harmful content is content—whether legal or illegal—with the potential to cause physical or psychological harm to a group of users (see definition of Content above).
Harmful activity is activity—whether legal or illegal—with the potential to cause people physical or psychological harm. Activity includes, but is not limited to, any behaviour which disseminates or promotes harmful content (see above).
Inferred data is data about an individual’s personal attributes, such as their gender or their age, which can be inferred from their online activity. It is distinct from personal data that is explicitly provided by the user.
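An illustrative, hypothetical sketch of that distinction (the rule and the data are invented): the attribute is never supplied by the user, but is guessed from records of their activity.

```python
# Activity data actually observed by the service (invented for illustration)
activity = {"pages_followed": ["homework help", "GCSE revision", "school football"]}

# Inferred data: an attribute the user never provided, guessed from that activity
def infer_age_range(activity):
    school_terms = {"homework", "gcse", "school"}
    hits = sum(any(term in page.lower() for term in school_terms)
               for page in activity["pages_followed"])
    return "13-17" if hits >= 2 else "unknown"

print(infer_age_range(activity))   # prints '13-17', an attribute inferred rather than provided
```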
Microtargeted advertising is the process by which data is used to segment a set of users into smaller groups (typically based on their demographics, interests, outlooks or psychology) in order to send them tailored messages which promote something such as a product, political candidate or organisation.
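A hypothetical sketch of the segmentation step (the users, attributes and messages are invented): the data is split into smaller groups, each of which is then sent a tailored message promoting the same product.

```python
# Data held about users (invented for illustration)
users = [
    {"name": "A", "age": 19, "interest": "cycling"},
    {"name": "B", "age": 45, "interest": "gardening"},
    {"name": "C", "age": 22, "interest": "cycling"},
]

# Segment users into smaller groups based on their attributes
segments = {}
for user in users:
    segments.setdefault((user["interest"], user["age"] < 30), []).append(user["name"])

# Send each segment a tailored message promoting the same product
for (interest, young), names in segments.items():
    message = f"New {interest} gear for {'students' if young else 'enthusiasts'}!"
    print(names, "->", message)
```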
Misinformation is factually incorrect content that is created and/or shared without the deliberate intention of misleading or deceiving audiences (in contrast to disinformation).
News publisher is any organisation that publishes news-related material which has been produced by different people. The material is subject to editorial control, as well as a standards code. The term “recognised news publisher” is defined in clause 40 of the draft Bill.
Ofcom. Office of Communications.
Pre-legislative scrutiny is the Parliamentary process by which a draft Bill yet to be introduced into the Houses of Parliament is subject to the scrutiny of a Committee of one or both Houses, who produce a Report such as this one, containing a series of recommendations for amendment.
Priority illegal content. Refers to illegal content specified by the Secretary of State for DCMS through regulations. Service providers have to proactively minimise the presence of this content on their platforms.
Regulated activity is the posting and sharing of content, as well as ways of disseminating content or interacting with other users which are regulated by the draft Bill.
Regulated entities are services which fall into one or more of the categories regulated by the draft Bill.
Risk assessment is an assessment by the service provider of which individuals might be harmed by different categories of the regulated activity, and how. It also covers what steps the service has taken to control those risks, what further steps need to be taken, by whom and by when. It is an important element of the material submitted to the regulator as part of the audit process.
Risk profile. A description of the specific risks resulting from the features of a service or group of services used to ensure regulatory requirements are proportionate and robust.
A risk register is a register of the risks of harm that might be encountered by users as a result of certain types of content, activity or design features. Service providers will each be required to produce a risk register. The regulator might also create a register of risks likely to be encountered by different groups of users online as part of its standard-setting guidance to service providers.
Search services. A website that provides a search engine which gathers and reports on the information available on the internet in response to a query from a user.
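A toy, hypothetical sketch of the two halves of that definition, with a hard-coded index standing in for information gathered from the internet and placeholder URLs: gathering information and reporting it in response to a user's query.

```python
# A stand-in for information gathered from the internet (invented pages)
index = {
    "https://example.org/a": "online safety bill draft",
    "https://example.org/b": "recipes for bread",
}

# Report the pages that match a user's query
def search(query):
    return [url for url, text in index.items() if query.lower() in text]

print(search("safety"))   # ['https://example.org/a']
```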
Service provider. Throughout this Report, the term ‘service provider’ has been used to describe those entities that fall under the scope of the Bill. In evidence received by the Committee, other terms were used on occasion, including online providers, online services and platforms.
Social media is the collective term for a range of digital applications that allow people to interact with each other online, as well as with businesses and organisations. Prominent examples are Facebook, TikTok, Instagram and Twitter. Different social media applications are constantly being developed.
System design. In the context of this Report, ‘system design’ refers to the different ways in which regulated services design their platform in order to enable the creation and dissemination of content, including their use of algorithms.
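As a hypothetical illustration of what the use of algorithms can mean in this context (the scoring rule and figures are invented, not taken from any service), a ranking algorithm that decides which content is disseminated most widely:

```python
# Invented engagement data for three posts
posts = [
    {"id": 1, "likes": 12, "shares": 2},
    {"id": 2, "likes": 90, "shares": 40},
    {"id": 3, "likes": 5,  "shares": 0},
]

# A design choice: rank content by predicted engagement, so high-scoring posts spread further
def engagement_score(post):
    return post["likes"] + 3 * post["shares"]   # shares weighted more heavily than likes

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])   # the order in which content is shown to users
```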
Transparency reports are the mechanism through which regulated services are required to provide data to the regulator on areas such as the categories and quantity of harmful content and activity (see definitions above), including illegal content. Transparency reports also include data on the number of requests from users for material to be taken down and the speed of the platforms’ responses. The precise information to be included in such reports and the regularity of the reports will be determined by the regulator.
A user is a person who posts, shares, views or otherwise interacts with content published or hosted by service providers.
A user-to-user service provider is a business that hosts or publishes content produced by at least one person in order to be viewed, and engaged with, by at least one other person. Such services differ from commercial websites whose users are businesses.
Virality describes the rapid speed at which content can be spread online to large audiences. Content can spread in a range of ways, including recommendations, sharing and algorithmic amplification.
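A hypothetical sketch of why that spread can be so rapid (the figures are invented): if, on average, each person who sees a piece of content passes it on to more than one other person, the audience grows geometrically.

```python
# Invented assumptions: 10 people see the content first and each viewer shares it with 2 others
audience = 10
sharing_rate = 2

for step in range(1, 6):
    audience += audience * sharing_rate   # each existing viewer reaches new viewers
    print(f"after {step} rounds of sharing: {audience} viewers")
# The audience triples each round (30, 90, 270, 810, 2430): the rapid growth described as virality
```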