The Right to Privacy (Article 8) and the Digital Revolution

Summary

Most of us use the internet every day. We use it for work, to learn, to shop, to socialise, to watch films and listen to music, and to access vital services like banking and welfare benefits. The internet has the potential to enhance our human rights. It can support freedom of expression, the right to education, freedom of association and participation in elections.

While we recognise the benefits we get from the internet, we are all too aware of the potential for harm. We recently published the report of our ‘Democracy, freedom of expression and freedom of association’ inquiry which looked, among other things, at the threats and abuse directed at MPs on social media.1 The death of Molly Russell in 2017 highlighted the danger posed by the graphic content relating to suicide and self-harm that is available online. Parents are ‘worried sick’ over the relatively easy access their children have to online pornography. Online misinformation campaigns aimed at influencing elections are the subject of inquiries across the globe. We recognise all of these concerns, but for this inquiry have focused on one specific aspect of online harm that has received less attention: the risk to our right to privacy, and the risk of discrimination, which arises from how companies collect and use our data online.

Much of what we are able to use on the internet is free because, from social media platforms to search engines, a business model has evolved in which companies make money from selling advertising opportunities to other companies rather than charging individuals to use the service. This makes access to our data an extremely valuable commodity. Because, unlike advertising in a newspaper or on a bus stop, internet content can be personalised (meaning different people using the same website can be shown different advertisements), companies want as much information about us as possible, so that they can target their advertising effectively and maximise their revenue.

Companies collect this information from the forms we fill in when we sign up to a website or buy something online, and even when we agree to cookies while browsing. But they also use our photos, our social media ‘likes’, our browsing history and a wide range of other sources to build up a profile of us, which they may then sell on to other companies. The legal basis companies use for doing this is, in most cases, ‘consent’: we click a box when we sign up for a service, to say we accept how our data will be used.

The evidence we heard during this inquiry, however, has convinced us that the consent model is broken. The information setting out what we are consenting to is too complicated for the vast majority of people to understand. Far too often, the use of a service or website is conditional on consent being given: the choice is between giving full consent or not being able to use the website or service at all. This raises questions over how meaningful that consent can ever really be.

Whilst most of us are probably unaware of who we have consented to share our information with and what we have agreed that they can do with it, this is doubly true for children. The law allows children aged 13 and over to give their own consent. If adults struggle to understand complex consent agreements, how can we expect our children to give informed consent? Parents have no say over, and no knowledge of, what data their children are sharing and with whom. Nor is there any effective mechanism for a company to determine the age of the person providing consent: in reality, a child of any age can click a ‘consent’ button.

The bogus reliance on ‘consent’ is in clear conflict with our right to privacy. The consent model relies on us, as individuals, to understand, take decisions, and be responsible for how our data is used. But we heard that it is difficult, if not impossible, for people to find out who their data has been shared with, to stop it being shared, or to delete inaccurate information about themselves. Even when consent is given, all too often the limits of that consent are not respected. We believe companies must make it much easier for us to understand how our data is used and shared. They must make it easier for us to ‘opt out’ of some or all of our data being used. More fundamentally, however, the onus should not be on us to ensure our data is used appropriately - the system should be designed so that we are protected without being required to understand, and to police, whether our freedoms are being respected.

As one witness to our inquiry said, when we enter a building we expect it to be safe. We are not expected to examine and understand all the paperwork and then tick a box that lets the companies involved ‘off the hook’. It is the job of the law, the regulatory system and of regulators to ensure that the appropriate standards have been met to keep us from harm and ensure our safe passage. We do not believe the internet should be any different. The Government must ensure that there is robust regulation over how our data can be collected and used, and that regulation must be stringently enforced.

Internet companies argue that we benefit from our data being collected and shared. It means the content we see online - from recommended TV shows to product advertisements - is more likely to be relevant to us. But there is a darker side to ‘personalisation’. The ability to target advertisements and other content at specific groups of people makes it possible to ensure that only people of a certain age or race, for example, see a particular job opportunity or housing advertisement. Unlike traditional print advertising, where such blatant discrimination would be obvious, personalisation of content means people have no way of knowing how what they see online compares with what anyone else sees. Short of a whistle-blower within the company or work by an investigative journalist, there does not currently seem to be any mechanism for uncovering these cases and protecting people from discrimination.

We also heard how the ‘data’ being used (often by computer programs rather than people) to make potentially life-changing decisions about the services and information available to us is not necessarily even accurate, but may instead be based on inferences drawn from the data companies do hold. We were told of one case, for example, where eye-tracking software was being used to make assumptions about people’s sexual orientation, whether they have a mental illness, or whether they are drunk or have taken drugs. These inferences may be entirely untrue, but the individual has no way of finding out what judgements have been made about them.

We were left with the impression that, when it comes to the lack of effective regulation and enforcement, the internet can at times resemble the ‘Wild West’.

That is why we are deeply frustrated that the Government’s recently published Online Harms White Paper explicitly excludes the protection of people’s personal data. The Government is intending to create a new statutory duty of care to make internet companies take more responsibility for the safety of their users, and an independent regulator to enforce it. This could be an ideal vehicle for requiring companies to take people’s right to privacy, and freedom from discrimination, more seriously and we would strongly urge the Government to reconsider its decision to exclude data protection from the scope of their new regulatory framework. In particular, we consider that the enforcement of data protection rules - including the risks of discrimination through the use of algorithms - should be within scope of this work.

The internet is increasingly prevalent in all of our lives. More and more of us use ‘virtual assistants’ like Siri and Alexa, and ‘wearable tech’ that collects our health data as we exercise and monitors our sleep. More and more services are inaccessible other than through the internet. The Government should be regulating to keep us safe online in the same way as it does in the real world - not by expecting us to become technical experts who can judge whether our data is being used appropriately, but by having strictly enforced standards that protect our right to privacy and freedom from discrimination.

The internet has great potential to bring people together, give marginalised people a voice and enable access to learning at a scale that would be impossible offline. But we have heard how it has also led to vast swathes of, sometimes very personal, data being held and shared without our knowledge, used to make assumptions about us and discriminate against us. In the latest of Sir Tim Berners-Lee’s annual letters on the ‘birthday’ of the World Wide Web he invented, he wrote:

“Against the backdrop of news stories about how the web is misused, it’s understandable that many people feel afraid and unsure if the web really is a force for good. But given how much the web has changed in the past 30 years, it would be defeatist and unimaginative to assume that the web as we know it can’t be changed for the better in the next 30. If we give up on building a better web now, then the web will not have failed us. We will have failed the web.”2

We cannot afford to wait 30 years; internet companies, regulators and the Government must step up now.


1 Joint Committee on Human Rights, First Report Session 2019–20, Democracy, freedom of expression and freedom of association: Threats to MPs, HC 37 / HL Paper 5

2 ‘30 years on, what’s next #ForTheWeb’, World Wide Web Foundation, 12 March 2019




Published: 3 November 2019