Session 2013-14
Home Affairs Committee - Minutes of Evidence (HC 70)
Oral Evidence
Taken before the Home Affairs Committee
on Tuesday 26 February 2013
Members present:
Keith Vaz (Chair)
Mr James Clappison
Michael Ellis
Steve McCabe
Bridget Phillipson
Mark Reckless
Chris Ruane
Mr David Winnick
________________
Examination of Witnesses
Witnesses: Sarah Hunter, Head of UK Public Policy, Google, Simon Milner, Director of Policy, UK and Ireland, Facebook, and Sinéad McSweeney, Director of Public Policy EMEA, Twitter, gave evidence.
Q146 Chair: The Committee is now in session. I refer all those present to the Register of Members’ Interests, where the interests of Members of this Committee are noted. This is a session of the Committee’s inquiry into e-crime, and at the end of this session we will go into private session to consider other business. Could I welcome Simon Milner, Sarah Hunter and Sinéad McSweeney? Thank you very much for coming to give evidence.
It has been a long, hard struggle for this Committee to try to get your companies to appear before us. As you know, we were very keen to hear evidence from the people responsible for security, because of course this inquiry is into e-crime, but that I understand was not possible. Is that correct, Ms Hunter?
Sarah Hunter: We are happy to provide further evidence in private with our security leads, and I am happy to talk about the broader policy issues today.
Q147 Chair: Yes, we did want to do this, but I think there was a problem with getting them because they are all in America. Are your security people in America, Mr Milner?
Simon Milner: Yes, the people who lead for us on law enforcement liaison are in the US.
Q148 Chair: Yours, Ms Hunter, also in America?
Sarah Hunter: That is right, yes.
Q149 Chair: Yours, Ms McSweeney, also in America?
Sinéad McSweeney: Yes, that is correct.
Q150 Chair: Therefore, the operation of Google, Facebook and Twitter in the UK is quite limited, is that correct? How many people do you have working here, Mr Milner?
Simon Milner: We have around 130 people working in the UK, predominantly in our sales operation, although we also established an engineering group in October last year, so that is a growing part of our operation in London.
Q151 Chair: I am going to start with a question that was raised by the judge in the Birmingham case last week concerning the use of the internet by organisations and individuals who are perpetrating attacks and verbal incitement of attacks against individuals in the state. I went on YouTube this morning, which of course is owned by Google, and I noted the fact that the preachings of Anwar al-Aulaqi, who was head of al-Qaeda in the South Arabian Peninsula, are still on YouTube, and those addresses, some of which could be seen to be inciting religious and racial hatred, are still on YouTube. Why is it that they are retained on there, given the record of that individual?
Sarah Hunter: It is a good chance to talk about this. Thank you for bringing it up. It is worth saying from the outset that we in no way condone the use of YouTube for terrorist content, and to that end we have very, very strict community guidelines on YouTube that go way beyond the law. For example, it is not allowed on YouTube to post content that is inciting violence; it is not allowed to post content that is hate speech. When a user flags to us that there is content up on there that is breaking those guidelines, we review that content and we take it down, and these flags get reviewed within an hour, so it is a very quick process.
Q152 Chair: How many people do you have doing that?
Sarah Hunter: We have many people. They are spread across the globe so that we can make sure that whatever time zone you are in, when you flag something, it is immediately looked at. They are spread across a number of different locations.
Q153 Chair: But bearing in mind that this particular individual has been described as, when he was alive, number three to Osama bin Laden, and that he headed the organisation in the South Arabian Peninsula that was responsible for many deaths, why is his content still on YouTube?
Sarah Hunter: When content gets flagged to us as having broken our guidelines, these people review it, and they look at every single one and they look at it very carefully, and they look at the context. They look at what is actually being said, and they look at whether it is indeed inciting violence.
Q154 Chair: So you have looked at it? Somebody has looked at all these references to Anwar al-Aulaqi?
Sarah Hunter: If someone has flagged it to us, yes.
Q155 Chair: "Flagging" means what? Can you tell us?
Sarah Hunter: When you go on to YouTube and you look at a video, in the bottom right-hand side there is a little flag sign, and you can click on that and it says, "Do you want to report this content?" and you have to click on the reason why, and that is what flagging means.
Q156 Chair: Sure, and this has been done in this particular case?
Sarah Hunter: I don’t know if it has been done in every single video. Anyone could do it. I could do it; you could do it.
Q157 Chair: No, I know, but I am referring to it specifically, and if you do not know the content-I am very happy to accept that you do not know the content, but this is the content of speeches by Anwar al-Aulaqi, who was wanted for a number of criminal activities and whose preachings were noted by the judge in the bombing trial of last week in Birmingham. Are you familiar with what I am talking about?
Sarah Hunter: I have seen some of this content on YouTube.
Chair: You have?
Sarah Hunter: I have seen some of his content on YouTube, yes.
Q158 Chair: You are satisfied that this content is not content that YouTube is concerned about and that ought to be taken down? Somebody has looked at this content, they are very happy that it comes within your guidelines and it therefore remains on the internet? You are happy with that, are you?
Sarah Hunter: I haven’t personally looked at all of this content, and it is just worth remembering the scale of content on YouTube. There are 72 hours of content uploaded on to YouTube every single minute of the day, so it is just physically not possible for us to look at every single video that gets uploaded. We rely on our users, and there are hundreds of millions of people across the world looking at YouTube all the time. When they tell us there is content that breaks the guidelines, that is when our team kicks in, reviews it and removes it.
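To make the mechanism concrete: the flag-and-review pipeline Ms Hunter describes-users flag a video with a reason, and flags are reviewed against the community guidelines within an hour-could be sketched roughly as below. This is a minimal illustrative sketch only; the names, structure and one-hour deadline queue are assumptions, not Google's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from queue import PriorityQueue

REVIEW_TARGET = timedelta(hours=1)  # "these flags get reviewed within an hour"

@dataclass(order=True)
class Flag:
    due: datetime                       # review deadline; orders the queue
    video_id: str = field(compare=False)
    reason: str = field(compare=False)  # e.g. "hate speech", "incites violence"

flags: PriorityQueue = PriorityQueue()

def submit_flag(video_id: str, reason: str) -> None:
    """Called when a user clicks the flag icon and selects a reason."""
    flags.put(Flag(datetime.now() + REVIEW_TARGET, video_id, reason))

def review_next(violates_guidelines) -> str | None:
    """A reviewer takes the most urgent flag; returns the id of any
    video that breaks the guidelines and would be taken down."""
    flag = flags.get()
    if violates_guidelines(flag.video_id, flag.reason):
        return flag.video_id
    return None

submit_flag("abc123", "incites violence")
print(review_next(lambda vid, reason: True))  # -> abc123
```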
Q159 Chair: You will then look at it. As a matter of policy, can you just tell me how many of these videos you have taken down as a result of somebody alleging a criminal act is being incited and therefore you have had to remove those videos?
Sarah Hunter: I don’t think we have specific numbers of how many broadly are flagged and then removed. We have numbers of how many-
Q160 Chair: You have no indication of how many people have complained or flagged? An internet company like yours, with so many databases, so many experts, will not know how many people have flagged a particular video?
Sarah Hunter: We probably would internally within the YouTube removals team, but I don’t personally have that number here, no.
Q161 Chair: No. So, you would know? You do have that information?
Sarah Hunter: When a video gets flagged, yes.
Q162 Chair: Yes, and you would also have the information of how many of these videos have been taken down?
Sarah Hunter: Of the ones that have been flagged, yes.
Q163 Chair: Yes, and how many are there?
Sarah Hunter: I couldn’t tell you now. I can probably ask.
Q164 Chair: Would you write to the Committee?
Sarah Hunter: Absolutely. I will ask.
Chair: We are very happy if this needs to be in private. I do not see why it should be, because this is just a matter of fact. If you could write to us and tell us the figures as to how many of these videos have been flagged and how many have been taken down-
Sarah Hunter: Absolutely.
Q165 Mr Winnick: Following what the Chair has just said, recognising again the extent of the communications involved-some totally unknown, obviously, up to 10 to 15 years ago-it is not simply the rantings of the cleric mentioned by the Chair, but other incitements to hate crimes, certainly against Muslims, anti-Semitism and the rest. You say matters are flagged up when complaints are made. My question is, before complaints are made, what sort of control is there to try to ensure that hate crimes-incitement against people because of their racial origin, religion or sexuality-do not go on?
Sarah Hunter: Because of the scale of the amount of content that gets uploaded on to YouTube, we do not have a way of reviewing it in advance of its being posted, but it is an amazingly effective system, this flagging system. Because we have hundreds of millions of people looking at YouTube all the time, things get flagged very, very quickly, so if there is content that the users, the real people using YouTube, believe is breaking our hate speech rules, for example, it gets reviewed and taken down within an hour.
Q166 Mr Winnick: There is all the difference, obviously, between that and a newspaper, which would be very anxious, if it was responsible, regardless of its political stand, not to include any item that incited hatred. In this form of technology, that is not possible, or does that-
Sarah Hunter: Yes. YouTube is a very different platform to a website that a newspaper publishes.
Mr Winnick: I understand.
Sarah Hunter: We do not choose what content gets put up on there, and that is one of the great things about this platform-that anyone can put content up on YouTube and express a view or launch a band or put a film up. It is an incredibly open system, and it means that people have an opportunity to express themselves in a way they never had before. If you look at the use of YouTube, for example, in the Middle East, it has been an amazingly powerful force for good in terms of improving democracy. Yes, our role is very, very different from a newspaper publisher’s.
Q167 Mr Winnick: It is open, really, to any hate merchant, until fortunately, hopefully, someone flags it up pretty quickly?
Sarah Hunter: As I said earlier, we really don’t want the platform to be used for those ends-
Mr Winnick: Obviously not.
Sarah Hunter: -and we do have these strict guidelines. I think what we are talking about here is the means for making sure the platform is kept open and the means by which it is kept clean. As I said earlier, this is a rather effective way of making sure bad content is taken down as effectively and efficiently as we can.
Q168 Chair: Yes; thank you. Let us move on to the issue of criminal targeting. Mr Milner, those of us who are using Twitter declare our interest. I am a very bad Twitter user, but you have, I understand, 6 million, 6.2 million-
Simon Milner: I am Facebook, so-
Chair: Ms McSweeney is Twitter; all right.
Simon Milner: I am happy to answer any questions about Facebook.
Chair: There are 6.2 million people on Twitter at the moment in the UK, or is it more?
Sinéad McSweeney: Sir, no. Our worldwide users, 200 million-
Chair: In the UK?
Sinéad McSweeney: -and 10 million in the UK.
Q169 Chair: Facebook?
Simon Milner: In the UK, 33 million, and globally, a billion.
Q170 Chair: As far as Google is concerned, how many users do you have?
Sarah Hunter: I have to say I don’t know. I apologise. I could find out for you. Obviously, we are different from these two companies, and we have about 53 different services. We have Gmail, we have YouTube, and we have Google Search.
Q171 Chair: Indeed. We have received very powerful evidence from the police and others about the way in which criminals are hacking into the internet-in particular Twitter, Facebook and other internet service providers. Ms McSweeney, is this on the increase, or is it being contained?
Sinéad McSweeney: I think that our own view would concur with some of the evidence that you have heard: that there is an increase in sophisticated, well-resourced attacks on platforms. I draw a distinction between the incident that we spoke publicly about recently, which was an attack on the platform, and the individual account compromises that people see occasionally, which generally arise from a compromise of the individual’s account or their password or their email because they clicked on a link. So there are two different things.
But in terms of advanced persistent threats from sophisticated and well-resourced individuals, there has been an increase in those, and we are currently working with law enforcement in the United States on the recent incident. In its broadest sense, though, it is important that there is a sharing of information between companies and law enforcement, alongside the kind of work that you are doing here to highlight it, because it is a threat to the internet rather than to individual companies.
Q172 Chair: Indeed. Mr Milner, do you share that concern of Ms McSweeney? Is it on the increase as far as Facebook is concerned? Are people hacking into Facebook?
Simon Milner: Yes. That is something certainly that my security colleagues would concur with: that we see consistent evidence in the UK, as you have heard from a number of senior people from law enforcement over the past several months, and in the US.
For instance, the FBI’s Internet Crime Complaint Center, which you may have come across, reports every year on the statistics on the complaints it receives about internet crime, and they have consistently reported year-on-year increases. The main areas of crime that they say are on the increase are financial scams-including criminals posing as the FBI and saying, "Your computer has been compromised. Give us your details, and we can help you out", when actually they are scamming them-and identity theft. Those are the two areas they report as on the increase, and that concurs with our own view about attacks on our own users.
Q173 Chair: There has been evidence put forward that this is coming from countries like China. Would you have a list of countries or individuals where these attacks are coming from? Obviously you have read about what is being alleged concerning the launching of attacks in China. Would you concur with that?
Simon Milner: We are certainly aware of that suggestion from the authorities, and it is very much something we leave to the authorities, to law enforcement and to international authorities to offer a view on where those attacks are coming from. It is not something that we have been public about in terms of our views on that.
Q174 Chair: For you, Ms McSweeney, I think you were referring to the recent events when 250,000 emails and other information of your users were, in effect, stolen.
Sinéad McSweeney: Yes. Our security people noted some unusual activity and quickly took action to close that down, but in the course of doing so they believed there was a possibility that some password information of users had been compromised, so they reset the passwords of those users and immediately notified them by email, and we also made a public statement. We don’t hold a lot of personal information on our users, so it would generally be email addresses and passwords.
Q175 Chair: Presumably, your organisations employ very clever and sophisticated people. Are they able to tell you where these attacks are coming from-which country or which individuals?
Sinéad McSweeney: In the context of the recent incident, our security people, those clever people that you talk about, are working very closely with law enforcement, and in that context, given my own background in policing for 10 years, I would be anxious not to say anything here that might compromise or prejudice those investigations.
Q176 Chair: No, we understand that, but we have had evidence from the City of London police that most of the attacks are coming from gangs in countries like Russia and Eastern Europe, and China was also raised, but you do not have any information for this Committee on particular countries? They have it.
Sinéad McSweeney: I think they, given their expertise in the area, are best placed to make those public pronouncements.
Q177 Chris Ruane: Earlier you mentioned that if somebody flags up a hate crime, you will be on top of it within an hour, but what action do you take when users report that their accounts have been hijacked or that they have been victims of online scams or abuse, and how long does that process take? Also, when you are deciding to take an item down, whose standards do you use? Whose laws do you use? Is it the US, China, EU, or an amalgamation? Are there regional differences around the world?
Sarah Hunter: Shall I start on the hijacking issue? Hijacking of accounts is a significant problem. There has been some evidence that phishing emails-emails sent to people in an attempt to get their passwords out of them-are increasingly coming from accounts of people they think they know. Of course, they are not from people they know; they are from accounts that have been hijacked. We spend a lot of money and a lot of time trying to prevent accounts from being hijacked in the first place. We spend hundreds of millions of pounds on keeping our users’ data safe. We employ 350 security engineers dedicated to this task.
In the last two years, we have seen the number of accounts hijacked-Google accounts hijacked; that is, across all the Google products-decrease by 99.7%. We have done that by developing a technology that scans account activity and looks at suspicious activity. For example, if you have a Gmail account and you signed in from London, and then an hour later signed in from Australia, we would see that as a signal of suspicious activity, and we would ask you a few questions, some security questions; "Are you really you?" That is an amazingly effective way to stop hijacking, and as a result we have significantly reduced the number of hijacked accounts.
In the few cases where the accounts do unfortunately get hijacked, you as a user can go to the sign-in page, so the YouTube sign-in page or the Gmail sign-in page, and click, "I don’t have my password. Someone has stolen my account", or whatever, and you automatically are taken through some security questions to identify that you are indeed yourself, something like, "What mobile phone number did you give us to associate with the account?" or, "What was the back-up email address you gave us when you set up the account?" Those pieces of information you would know but the criminal would not know. Once you provide us that information, we restore your settings and block the person who has hijacked the account, so it is a very, very quick and automatic service that we have put in place to prevent people being locked out of their accounts for long.
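The sign-in check Ms Hunter describes (a London sign-in followed an hour later by one from Australia) amounts to a travel-speed plausibility test. Below is a minimal Python sketch of that idea; the threshold, helper names and use of raw coordinates are assumptions for illustration, not Google's actual signals.

```python
import math
from datetime import datetime

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

MAX_PLAUSIBLE_KMH = 1000  # assumed threshold: faster than airline travel

def is_suspicious(prev, new):
    """Each argument is ((lat, lon), datetime); flags physically
    implausible travel between consecutive sign-ins."""
    (prev_loc, prev_time), (new_loc, new_time) = prev, new
    hours = max((new_time - prev_time).total_seconds() / 3600, 1e-6)
    return distance_km(prev_loc, new_loc) / hours > MAX_PLAUSIBLE_KMH

# A London sign-in followed one hour later by one from Sydney
# would trigger the extra security questions Ms Hunter mentions.
london = ((51.5, -0.13), datetime(2013, 2, 26, 9, 0))
sydney = ((-33.9, 151.2), datetime(2013, 2, 26, 10, 0))
assert is_suspicious(london, sydney)
```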
Q178 Chris Ruane: Whose standards do you-
Sarah Hunter: That is a Google set of standards.
Chris Ruane: Right, but the other aspect of abuse and slander and hate crimes, that monitoring: whose standards do you use for that?
Sarah Hunter: On YouTube, going back to the conversation earlier, our community guidelines are set by Google, and they are our own internal standards. They have evolved over the years.
We introduced a flag for terrorist activity just a couple of years ago, and that is a relatively new innovation. Those standards are things that we ourselves have set up. YouTube is a platform where we host all the content and set the rules of the road, and we want to strike the right balance between people feeling that it is a platform they can enjoy and feel safe on, and one that is open for free expression. That is almost the premise of the guidelines that we have set.
Q179 Steve McCabe: I just wanted to ask you if you were familiar with a Trojan or a virus called Ukash, which I believe is spelled U-K-A-S-H, which masquerades as an official police document. I wondered if any of you had encountered that or had any complaints from your users about being victims of it.
Simon Milner: It is not something I have come across, but I am happy to ask my security colleagues, and if they have heard of it, I will write to the Committee to explain, but it is not something that I have come across from any of my colleagues involved in the security side of our platform.
Sarah Hunter: Me too, I am afraid. I can find out.
Q180 Steve McCabe: Would that be because you do not necessarily have the level of technical detail that would identify that? I am asking because I understood this was quite a common occurrence and that it is actually quite a nasty piece of work because it demands money by untraceable vouchers that are designed to permeate the system. There may be other versions of the same thing; I see you nodding. I understood it was quite common, and I was just surprised. What I wanted to ask was: what do you do about something like that?
Sinéad McSweeney: If it is of assistance, it is not something that we would be experiencing within the platform, and I think that is what my colleagues are saying. It is a common scam within email systems, rather than within a platform like Twitter or like Facebook.
My familiarity with it comes from work around crime prevention in communication in my previous job with the Irish police, where we had to highlight the fact that people may be receiving this email that purported to come from law enforcement, as my colleague from Facebook mentioned earlier, and was looking for either money or information in order to get somebody out of a perceived difficulty that the email suggested they had got themselves into. But in terms of it being within the platforms, no, that is not our experience.
Simon Milner: It is the same for us, in the sense that when people are receiving messages on Facebook, it is from people they know. We are a platform on which you have to use your real identity. You make friendships, typically with people you know in the real world, and you can only receive messages from those friends. Therefore, we do not have the same kind of email functionality, where if somebody can find your email address on a list somewhere, they can send you an email.
Something like 90% of email is spam, whereas significantly less than 5% of all the traffic on Facebook might involve some kind of spam. It is a very different order of magnitude, and I suspect Ukash-although I will ask my colleagues about this-is an email-based Trojan, rather than something that affects our platform.
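Mr Milner's point is that delivery on Facebook is gated by the friend graph, unlike email, where knowing an address is enough to reach someone. A toy sketch of that gating rule follows; the names and data structure are hypothetical, not Facebook code.

```python
# Hypothetical friend graph: messages are only delivered between
# connected users, so a stranger who harvests an address list
# cannot reach you the way an email spammer can.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
}

def can_message(sender: str, recipient: str) -> bool:
    return sender in friends.get(recipient, set())

assert can_message("alice", "bob")
assert not can_message("mallory", "bob")  # unknown sender is blocked
```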
Steve McCabe: Maybe I can come back later, Chairman, to the question about whether you always get Facebook messages from people you know, but we can return to that.
Q181 Chair: Yes, of course, Mr McCabe. The Norton 2012 Cybercrime Survey reported that 40% of users of social networks have said that they were the victims of e-crime, which is a very large figure. Are you surprised at that figure, Mr Milner?
Simon Milner: I am surprised at that figure, in that, as Ms Hunter was explaining earlier, we similarly use very sophisticated technology to block attempts to attack our users at the source in invisible ways that our users would never see. We are constantly updating that.
Security is an arms race and you have to be very vigilant to see what is coming around the corner and to make sure you are prepared for it, and the great majority of our users never experience a problem. That is certainly a number I don’t recognise in respect to Facebook, and I will happily look at the Norton survey to understand whether or not they break down their data into particular social media platforms, but it is certainly not something that we see on a regular basis in the UK, or anywhere else around the world, in terms of those kinds of numbers.
Q182 Chair: Ms McSweeney, what does e-crime cost your organisation? Can you put a cost on e-crime as far as Twitter is concerned?
Sinéad McSweeney: No, I couldn’t put a cost on it. From our point of view, we want users to enjoy the platform that is provided to them to discuss any range of issues, so it is in our interests to ensure that that experience is not being disrupted by e-crime. Twitter is a slightly different platform in that it is very public, so most of what people communicate on Twitter is visible to anybody who wants to look at it, so it is less attractive, even for spam activity. It is detected more easily, because it is visible if one particular account is "@-replying" lots of accounts at the same time, so from our point of view it is not-
Chair: Yes, thank you. Ms Hunter?
Sarah Hunter: We haven’t made an assessment across the board. As I said earlier, we have spent hundreds of millions of dollars to date on protecting our users’ data, so it is not cheap, but it is incredibly important. I think user trust is really at the heart of-
Q183 Chair: Hundreds of millions of dollars?
Sarah Hunter: Yes, and I think user trust is at the heart of our business model. If you think about all of our businesses, they are free, and there is lots of competition. There are lots of alternatives. If users do not believe we are keeping their data safe, they will go somewhere else, so it really is in our commercial interests to make sure the platform is kept as safe as possible.
Q184 Chair: Mr Milner, could you put a cost on it?
Simon Milner: No, it is not a number that we have ever made public, nor is it one that I am aware of.
Chair: But presumably you do spend money on it.
Simon Milner: Of course. I am sure we spend quite a lot of money on it, but it is not something where, as I said, we have released a public figure on how much we spend on that.
Q185 Mr Clappison: Perhaps I could ask Ms Hunter this question. How do private companies navigate the patchwork of different national laws when it comes to online security and data protection, and do you think there should be a more international approach?
Sarah Hunter: It is complex. The internet is a global platform and people across the world use it, and Governments across the world want to keep their users safe online. I suppose from a law enforcement perspective it is no different to international crime offline. If you are pursuing an international investigation, you have to deal with lots of different colleagues in other countries. Google Inc is a US-based company, so I think one of the key tools in ensuring that law enforcement can address online crime is the MLAT process-the mutual legal assistance treaties, which are the agreements between the US and other countries that allow cross-border investigations to take place.
I looked at some of the previous evidence you had talking about MLAT and how it was a slow process, and I think that is definitely something we should be looking at speeding up. The UK and the US have a famously close relationship, and I think if we can make that process work any better, that is surely going to help law enforcement.
Q186 Mr Clappison: Perhaps I can put the same point to Ms McSweeney, because she obviously has a different perspective on this, coming from her background.
Sinéad McSweeney: I think, again, because of the nature of the platform, an awful lot of the material that law enforcement would be interested in obtaining is available and public to them. Similar to my Google colleague, our emphasis, other than for emergency requests, would be on the MLAT procedure, but that is something on which we also gave evidence at the previous hearing; we felt it could be improved to make the process better for all of the parties in terms of acquiring information.
In terms of the patchwork of laws, aside from things like law enforcement requests and data privacy, we also have country-withheld content, where if there is a tweet or an account that is illegal in one country but not in others, it can be withheld in that country. For example, an account advocating Nazi messages in Germany was withheld in Germany, and anti-Semitic content was withheld in France.
Q187 Mr Clappison: Would it be withheld here as well, then? Would it be withheld in this country because it was withheld in Germany?
Sinéad McSweeney: If the content was illegal within the jurisdiction-
Mr Clappison: It might not be technically illegal here, but we probably do not want to see it.
Sinéad McSweeney: The standards by which we judge that content are the Twitter rules and, where illegality is alleged, the laws of the particular country from which the report is coming. If the content is illegal in a country, it can be withheld there on request from law enforcement or Government.
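The country-withheld mechanism Ms McSweeney describes-content withheld only in jurisdictions where it is illegal, while remaining visible elsewhere-can be sketched as a per-country visibility check. This is an illustrative sketch under assumed names; Twitter's real implementation is not public.

```python
# content id -> ISO country codes where it is withheld, populated only
# on a valid request from law enforcement or government.
withheld_in: dict[str, set[str]] = {}

def withhold(content_id: str, country: str) -> None:
    withheld_in.setdefault(content_id, set()).add(country)

def visible_to(content_id: str, viewer_country: str) -> bool:
    return viewer_country not in withheld_in.get(content_id, set())

withhold("tweet:123", "DE")           # e.g. Nazi content, illegal in Germany
assert not visible_to("tweet:123", "DE")
assert visible_to("tweet:123", "GB")  # still visible where it is not illegal
```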
Q188 Mr Clappison: Could I ask the witnesses what they think of the new European draft data protection regulation?
Simon Milner: I am happy to help you with that. It might be worth, just by way of preface, explaining that the way Facebook operates in Europe is that all users in Europe, including all 33 million account-holders in the UK, have a contract with Facebook Ireland, and therefore they are regulated under EU data protection law by the Office of the Data Protection Commissioner in Ireland. We are regulated here, and therefore of course we are very interested in changes in the EU framework that will impact on Irish national law and therefore the rules that we have to face.
We think the law does need modernising. It has been a long time since it was last updated and it certainly needs modernising for the internet age. There is a very vigorous debate going on in Brussels, as you may know, involving national Governments. We think there are some good proposals that have come out of the Commission, including the idea of a one-stop shop, so companies that are operating across Europe and are handling citizens’ data should be able to be regulated in one place under a regulation that applies right across Europe, and not be subject to the oversight of regulators in each one of those countries. We think that is a very good proposal, and indeed that is effectively how we operate with the Irish Data Protection Commissioner.
There are some things that are more worrying, and-
Q189 Mr Clappison: I was going to ask you if there was anything that was more-
Simon Milner: Yes, there are some things that are more worrying, but there is a lively debate and an openness we see in the Commission and some Members of Parliament for reflecting on these. Take the proposal to require explicit consent every time your data is used for something new: we think that should be context-specific. In a service like Facebook, which people join to share their data-you do not join Facebook to keep things to yourself; you join it to share-it shouldn’t be the case that every time we introduce a new feature, you have to provide explicit consent to it.
Q190 Mr Clappison: Could you explain how exactly that would work? What is being proposed by the European Commission?
Simon Milner: Remember that there are a number of different proposals. There are the Commission’s original proposals. We have had two reports from the Parliament that have contained different sets of amendments, so there are now a range of different proposals out there, but one of them is certainly requiring a much more granular form of explicit consent almost at every turn.
One of the things I hope Members will be familiar with is the e-privacy directive. The way it has played out is that every time you go to a new website, you see a similar kind of banner saying, "This site uses cookies. Click here to make sure you are all right with them". If you started seeing that on more and more websites all the time, the whole experience of using those sites would become much less attractive, frankly, and much more fragmented, and it would also stymie innovation. There are lots of policy-makers who agree that we have to get the balance right between allowing companies to innovate-and that is not just the likes of us on this panel, but also lots of small companies that are using our platforms and creating new data-driven businesses, including in the UK-while also allowing users of the internet to protect and control their data.
Q191 Steve McCabe: I was struck by that point about having to keep telling people about cookies. Doesn’t that really mean that if you can do away with that, you are entitled to give people forced advertising whether they want to view it or not?
Simon Milner: No, not necessarily. One of the things that we should recognise is that you can offer different kinds of control. For instance, on our platform we provide very granular control that enables you, if you see an ad from a company you do not want to see, to click on that ad and tell us, "I don’t want to see ads from this company again", and we will ask you why, so you can control it.
Q192 Steve McCabe: After you are forced to view it; that is my point. You are giving people something that they did not ask to see, aren’t you?
Simon Milner: I am not sure that is entirely right. I think people recognise that they are getting some fantastic services for free, but those services have to be paid for, a bit like watching ITV. You expect that you are going to see adverts when you watch ITV. You don’t decide what ads you are going to see; ITV does. With the kind of platform we operate, we can provide you with advertising that is much more likely to be of interest to you because we know more about you, and we can use that data to help you have a better experience.
Chair: That is enough advertising from Facebook.
Q193 Mark Reckless: Mr Milner, you said that you had 33 million Facebook users in the UK and I think around 1 billion globally. What is the number for the EU?
Simon Milner: I would have to check that and come back to you. I don’t have an EU number in my head, but it will be obviously substantial. The UK I am pretty sure is our biggest market in Europe, but there are lots of other markets where we do quite well as well.
Q194 Mark Reckless: Assuming we are in the hundreds of millions potentially for the EU, isn’t that rather a lot of users for the Office of the Irish Data Protection Commissioner to oversee?
Simon Milner: No, because the Facebook platform is the same wherever you are in the world. We have a single platform. There is no such thing as facebook.co.uk or facebook.ie for Ireland. It is a single platform that operates on the same basis throughout Europe, and indeed throughout the world, and therefore when it comes to the Irish Data Protection Commissioner he is able-and indeed he has over the last few years conducted a major audit of all of our data use policies and dived deep into everything we do in terms of how we handle data. He has produced a public report. I am happy to share the link to that report with you. No, in fact, he is absolutely able to handle that volume of users, because the service is the same and the way we handle people’s data is the same wherever you are.
Q195 Mark Reckless: I am glad that Facebook has such confidence in him, but we were asked to have a similar measure of trust in the Icelandic authorities in respect to financial regulation. Are you able to clarify to the Committee how many staff there are in the Office of the Irish Data Protection Commissioner?
Simon Milner: I think that is really a matter for Mr Hawkes and his team, and-
Q196 Mark Reckless: You were relying on him and telling us how deep he had gone and what fantastic work he had done.
Simon Milner: I think the best thing to do would be to look at the report. They have produced two substantial reports, one in December of 2011, which runs to several pages, lots of recommendations, a highly detailed, technical report, and they brought in technical experts from outside of the Commission to help do that. They then did a follow-up report, which was published in September of last year. I am happy to share those reports, and we certainly have not had any other authority come to us and say they have not done a decent job. Certainly, Chris Graham, of the Information Commissioner’s Office in the UK, recognised that as a high-quality audit of our business.
Q197 Mark Reckless: Yes. Mr Milner, you relied on the Irish Data Commissioner, and you ask the Committee to have assurance in the work that it does on the basis of that Commissioner. It is not an unreasonable question for me to ask you how many people are in the Office of the Irish Data Commissioner. I do not particularly want our Clerks to find out. I understand if you do not know immediately, but could I ask you to write to the Committee with that information?
Simon Milner: I am happy to write to the Committee, and I will also provide a copy of his reports.
Q198 Mark Reckless: Thank you. Ms McSweeney, clearly I understand the Twitter position is that the liability for all content posted through Twitter lies with the user who has posted it, but can I ask what responsibility you feel to remove hate speech or threatening content from Twitter?
Sinéad McSweeney: The fundamental basis for Twitter’s existence as a platform is to facilitate the sharing of ideas and discussion on a range of issues, and we put a premium on the fact that we don’t mediate or monitor that content. However, we do take some responsibility for the content, in that we have an objective set of standards-the Twitter rules-by which content can be judged if it is reported to us by another user, and also, as I mentioned earlier, the laws of the individual countries. We feel that, as a platform founded on the ideals of free speech, the only way in which we can do that is to measure content against those objective standards, because we don’t want a situation where people would feel that content was not available on Twitter because of Twitter’s view-some kind of corporate or subjective view-on an issue.
Going back to the old John Stuart Mill argument that anybody who studied jurisprudence in college would recognise, the best counter to bad speech is good speech, and in some ways community self-regulation-the process by which users are educated as to what is good speech and bad speech-is better achieved when people are called out on bad speech than when it just disappears and nobody is sure why it has disappeared. There was an example in recent days where an account tweeted something that many, many people considered to be offensive about a young actress who was attending the Oscars. Rather than somebody external stepping in and removing that content, the people who tweeted it removed it themselves and apologised for it because of the outrage they received from the community. That is not always an easy place in which to be.
It does not mean that Twitter condones the content of some of the speech that appears on our platform. However, where speech falls short of being illegal, we have seen that approach work-for example with homophobic speech, where an offensive and homophobic discussion was taken over by others and ended up taking a more affirming turn-so that is the approach we take.
Q199 Chair: Thank you, Ms McSweeney. In respect to what Mr Reckless has just asked you, pictures purportedly of James Bulger’s killer, Jon Venables, were posted on Twitter on 14 February, and the Attorney General has said that he is taking contempt proceedings against those who posted the photographs.
I understand that this is a huge network and there is a lot of information going up on the internet, but here is an example where somebody is acting unlawfully, where the Law Officer-you have worked for the Attorney General in Ireland, I understand-has said that he is going to take the people who have posted these photographs on Twitter to court. Why are you not taking down those photographs when you know that it is unlawful for them to put them up?
Sinéad McSweeney: There are a number of aspects to this. I am conscious of the sensitivity of this particular case, and I don’t want to be drawn into issues around any individual accounts. We work with law enforcement here in the UK. We have established points-
Chair: Just on the principle, as opposed to the detail.
Sinéad McSweeney: We have established points of contact with law enforcement in the UK. Where they communicate with us about content and bring content to our attention that is illegal, the appropriate steps and actions are taken by the company, and you may read into those words what you wish in the context of the-
Q200 Chair: You would expect an approach from a law officer, not necessarily on this particular case? If something is on the internet, on Twitter unlawfully, you would expect somebody to come to you and say, "We are going to launch contempt proceedings. Take it down", and it clearly has not happened?
Sinéad McSweeney: No, I didn’t say that. As I say, we have ongoing contact with law enforcement in the UK, and we have established points of contact with law enforcement in the UK.
Q201 Chair: They would come to you?
Sinéad McSweeney: When they come to us, we take the appropriate action. Just to be clear, there are a number of reasons why it has to be reported to us. The first is a very practical one: it is the scale of the material. We have 400 million tweets a day, so we cannot proactively monitor and mediate that content. Also, we need to be sure. There are straightforward cases like the one you have mentioned, but there are others where we need to be clear that the report is coming to us from an authorised legal entity who is acting in good faith.
Q202 Chair: That is the only bit that concerns me about your evidence so far. I think the whole Committee accepts that the internet is a power for good. With the evidence that we have received in terms of criminality, I would just have expected more proactive activity on the part of yourselves as providers. In answer to what Mr Winnick said earlier and my previous questions and what Mr Reckless has put to you all so far, you all seem to be waiting for someone to come to you before you act. Is that unfair?
Sarah Hunter: I don’t think that is fair-I think it is a little unfair, if I may. For example, we run a service called Safe Browsing. When we developed Google Search, we had to scan the trillions of web pages out there to create our search index. We have developed technology that scans those sites and identifies where sites are hosting malware-that is, code that can infect your computer. That scanning technology, this Safe Browsing technology, identifies about 10,000 websites every single day that we think are suspect.
We compile that information into a list that feeds into Google Search results. You may have noticed in Google Search that sometimes beneath a website it says, "This site may have been compromised". That tells you that there is probably malware or something bad on that site. We developed this technology, we spent a lot of money developing it, and it is now free for other browsers to use.
We developed the list, but it is then used by Safari, by Firefox, by our competitors, to make their own search results and browsing safer. The idea that we are not taking responsibility I think is a little unfair. This is a significant investment. As I think Sinéad said earlier, we do depend on people to trust the internet for the good of our businesses, because if they don’t, they are not going to use it.
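The Safe Browsing flow Ms Hunter describes-a crawl-time malware scanner feeding a blocklist that annotates search results and is shared with other browsers-might be consumed roughly as follows. This is a toy sketch only: the real Safe Browsing API works on hashed URL prefixes rather than a plain host set, and the names here are assumptions.

```python
# A daily-updated set of hosts flagged by the crawl-time malware scanner
# ("about 10,000 websites every single day", per the evidence).
suspect_hosts = {"example-compromised.com", "malware-host.example"}

def annotate_result(url: str) -> str:
    """Append the warning shown beneath a result when its host is listed."""
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    if host in suspect_hosts:
        return url + "\n  This site may have been compromised."
    return url

print(annotate_result("http://example-compromised.com/page"))
```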
Chair: Sure. We will come back to you, because other colleagues want to come in.
Q203 Bridget Phillipson: Certainly in terms of Twitter, there have been a number of prosecutions recently that have resulted from comments that people have tweeted. Do you think your users fully understand how the law works online, and how would you respond to the recent guidance offered by the Director of Public Prosecutions in this area?
Sinéad McSweeney: I think it is important that people increasingly understand that online is no different from offline, that what is illegal offline is illegal online, and in that context, when people sign up to Twitter, they agree in very simple language that they will abide by the laws of the country in which they themselves are when they are using Twitter.
There is an extent to which you can over-complicate it and talk about how "People should not have to understand the law", but an awful lot of that which becomes law is just common sense or human decency, or it is good interpersonal behaviour. The law is a way of ensuring that there is a method by which society can enforce those standards, so to that extent I think users have-as regards keeping safe as well as not breaking the law-their own obligations to educate themselves about how to stay safe and secure online, but equally they need to deploy their own judgment about how they use the platforms in the context of the laws of the country in which they are.
Simon Milner: Perhaps I can help on the Director of Public Prosecutions point. That is an area where the law is different online than it is offline, in that you can say some things in this space, in a spoken way-and indeed I have heard Keir Starmer talking about this. He could say things in a public forum that would be perfectly legal. If he put them in an email or in a Facebook message or a tweet, they would be illegal, and it is one of the ways in which the online and offline are not properly aligned, and something hopefully the Government will look at as it looks at the Communications Act in the coming years.
Therefore, the approach that Mr Starmer is proposing, and indeed is already asking public prosecutors to adopt, we think is the right one. He has it spot on: the focus should be on the context and the harm that might result from a communication-which might, as it were, accentuate its impact-rather than just exactly what was said in the communication itself. We are very impressed by the analysis. I guess the proof will be in the pudding as it plays out over the next several months.
Q204 Bridget Phillipson: Just one follow-up to Ms McSweeney. There was a case just last year where a rape victim was named repeatedly via Twitter. Clearly, the responsibility for doing that is the responsibility of those who choose to post that content, but is that something that you have learned from? How would you respond to that then and now? Is there any difference?
Sinéad McSweeney: Again, we would have to be made aware that it was happening, because we won’t necessarily know that it is happening on the platform. Where issues like that are brought to our attention, we can take action. With the naming of rape victims, it should be obvious to most people that it is not acceptable-you do not have to know that it is illegal or that it is contempt, because in personal, offline conversations we tend to talk about rape or the victims of rape in whispered tones. We know that it is a sensitive issue, so it is hard to see why people would change their behaviour when they are online. But again, if the issue is flagged, if it is reported to us, yes, we can take action.
Q205 Bridget Phillipson: It is just the number of people that you can reach with such a message is far greater than a conversation you might have with one or two people. You could potentially reach millions of people, as opposed to a conversation one-to-one or in a small group. That is the damage, isn’t it?
Sinéad McSweeney: Yes, it is. It is, potentially, and that is why, for example, the Law Commission here is currently doing a substantial consultation on contempt, and we have attended the symposium and are taking an interest in that, because the world has become more complicated.
Chair: Thank you. I should say to colleagues and witnesses that we are expecting a vote shortly.
Q206 Mr Winnick: Ms McSweeney, arising from the replies you gave to Mr Reckless and to the Chair, I am slightly concerned because, without putting words into your mouth, I think you said, in effect, rather like Ms Hunter previously, that there is a debate and that people can put their views-obviously they can-and then if there are complaints, the matter will be looked into. You see, if someone on Twitter said, "Hitler was right", or, "The Holocaust never occurred"-which is not a criminal offence in this country, and there is no reason why it should be-or said that a rape victim very much in the media "asked for it"-such a crude sort of description, and absolutely disgusting-presumably that simply stays on Twitter and, until someone complains, it remains on Twitter. Am I right?
Sinéad McSweeney: But those events, those instances that you talk about, don’t just happen on Twitter. People stand up in football stadiums and hurl racial abuse at players on the field. Those-
Q207 Mr Winnick: Does that justify going on Twitter?
Sinéad McSweeney: Those around them will call them out on that, and similarly on Twitter, rather than Twitter deciding as a corporation or as a bunch of individuals whether that is good or bad. Our approach is that the other users of the platform decide what is good speech and what is bad speech. Also, we do give users the ability to control their own experience. If I have a particular set of interests, that is what I will get from Twitter. I may never see a tweet about football or golf or sport in general. I will see lots of tweets about politics, about policing, about things in which I am interested, so people can define their own experience.
The problem is that taking the bad speech away doesn’t remove the thoughts from somebody’s mind, doesn’t remove those sentiments from society, and sometimes it is better to see those thoughts and see them challenged than just to remove them from the public mind and public view.
Q208 Mr Winnick: So, anything should really go on Twitter until someone complains?
Sinéad McSweeney: No, we don’t say that anything should go on Twitter. We have a set of rules, we have rules by which we believe our users should behave, and we also ask that our users obey the laws of the countries in which they live.
Q209 Mr Winnick: The examples I gave of someone, sick in mind, obviously-"Hitler was right. The Holocaust never occurred", or, "The rape victim asked for it"; that could and would go on Twitter?
Sinéad McSweeney: There are individuals who stand in universities and make those statements.
Q210 Steve McCabe: It does sound as if you are coming dangerously close to describing yourself as the innocent arch-facilitator-that Twitter trolls are the responsibility of everybody else and that cyber-bullying is entirely the responsibility of those who do it. I do not deny their responsibility, but it does seem to me that they are able to do it with enormous reach because of the service you provide. If that results in a youngster deciding to take his own life or some other tragedy-certainly the parents of a child who killed himself in my constituency have met with Facebook staff-surely you have to go back and examine what you do and decide what more you can do to control this thing that you have unleashed.
Sinéad McSweeney: I do not think we are standing back from our responsibility. I know that within Twitter we have-
Q211 Steve McCabe: What is it that you have done that you have not told us about so far that shows you taking more control and responsibility for it? Because what I have heard so far is how you react when somebody else takes control and reports it to you.
Sinéad McSweeney: I think there are two sides to that. On the safety side, not only do we have a set of rules by which users’ behaviour is measured, and a requirement to observe the laws of the country, we also have a safety centre that is densely populated with advice. There are safety tips for parents, teens and teachers, and, like all of the other companies here, we participate in Safer Internet Day. We have relationships with all of the key organisations in this space, and just as the bad speech, as you would term it, reaches millions of people, those safety messages-that advice, those resources that are there to help people who are experiencing bullying, depression or mental health difficulties-are also there on all our platforms and accessed by the individuals who are vulnerable and who are helped by them.
On the security side, again, yes, we talked about the instances that are flagged to us, but, as I know only too well from 10 years in policing, the best antidote to crime is prevention. It is all very well to react to a crime or to detect a crime, but the best activity that any law enforcement body, or corporations like ourselves, can engage in is to educate people.
Q212 Steve McCabe: But I am asking you how you are preventing it. I hear a lot about how you react when something has happened and somebody reports it, and you say you have some warning material on display, but how do you prevent it? It happens persistently.
Sarah Hunter: Shall I give an example of something we have done at Google, because obviously we have been around a bit longer than Twitter? There were some suicide cases a couple of years ago where the victims were reported at the inquest to have used Google Search to identify ways to harm themselves and eventually, sadly, kill themselves. There was quite a public outcry about this, and a sense of "What can be done?"
We met with the Samaritans to talk about this, because obviously no one wants to see these sorts of cases, and we are companies run by human beings who feel responsible, so we wanted to talk to them about how to prevent this sort of thing from happening. Some people were saying, "You should just remove all sites that mention how to kill yourself from the internet. You should just block them from the search". The Samaritans said, "No, that is not how we think you should react, because a lot of these sites are sites where people go and talk and find people with common interests to help them not kill themselves. They are support groups as much as they are information sites". Their preferred response, and what we ended up doing, was that when someone searches for "How to kill yourself" on Google-
Q213 Chair: Sorry, could you just clarify? You are telling us that a website that says, "How to kill yourself" is actually a support group to help to keep people alive?
Sarah Hunter: In some cases, the sites they were referring to were actually self-help forums for people who are feeling depressed-someone says, "I want to kill myself", and others say, "Well, no, don’t kill yourself"-and they were as much forums for preventing suicide as they were anything else. But the Samaritans’ solution, and this is going back to the original question, was that when someone searches for "suicide", an advert should come to the top of the Google search results, saying, "Are you feeling depressed? Do you want to talk to someone? Call the Samaritans". So the searcher was prompted to go and seek help, rather than going to one of the more invidious sites, so I think there are ways-
Q214 Chair: So, you do that now?
Sarah Hunter: We do that now, yes.
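The intervention Ms Hunter describes-surfacing a Samaritans advert above the results for self-harm queries-is essentially a keyword-triggered banner. Here is a minimal sketch; the trigger list and function names are assumed for illustration, and the advert copy is taken from the evidence.

```python
HELPLINE_TRIGGERS = ("suicide", "kill myself", "how to kill yourself")

# Advert copy from the evidence; the trigger phrases are assumptions.
SAMARITANS_AD = ("Are you feeling depressed? "
                 "Do you want to talk to someone? Call the Samaritans.")

def results_for(query: str, organic: list[str]) -> list[str]:
    """Prepend the helpline advert when the query matches a trigger."""
    q = query.lower()
    if any(t in q for t in HELPLINE_TRIGGERS):
        return [SAMARITANS_AD] + organic
    return organic

print(results_for("How to kill yourself", ["self-harm-forum.example", "..."]))
```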
Q215 Chair: Would you do that for other areas, for example, somebody who was following the site of Anwar al-Aulaqi? Would you have a little thing going up saying, "If you want to blow people up, come to this site instead"?
Sarah Hunter: We do offer a service for all charities: they can get free advertising on Google, up to $10,000 a month’s worth, and a lot of charities do take up that offer. I can’t think of another example. Someone like the NSPCC is-
Q216 Chair: Would you give us some of those examples? It would be very helpful if we had examples, if you could write to us-
Sarah Hunter: Absolutely.
Chair: -of where you have now put up a banner when people search against a particular site, and there-
Sarah Hunter: It is the charities who do that, not us, but yes-
Chair: No, but if you could give us examples-because you must have it, because obviously you do not give it away free.
Sarah Hunter: We do. It is free for any charity.
Chair: All right, so you would have a list of all these? If I could have it-
Sarah Hunter: Yes, I do. I will happily do that.
Q217 Steve McCabe: Chairman, if we are going to get those examples, could we get a little bit of background on how the charity was selected? The example you quote is very-
Sarah Hunter: Any charity can do it.
Steve McCabe: Yes, but what I am saying is that the example you quoted, where the Samaritans came to you, is rather obvious. I would be interested to know how other charities have been-
Sarah Hunter: I am happy to do that.
Chair: Along with the statistics of how many complaints were made and how many sites were taken down?
Sarah Hunter: Of course.
Q218 Chris Ruane: This is to Simon Milner. We understand that some models of HTC mobile phones have a Facebook app in the root directory which cannot be removed or reliably turned off, which therefore transmits information about the owner’s internet use back to Facebook. Are you aware of this, and do you think this respects users’ privacy and their right to choose whether or not they wish to share their data with Facebook?
Simon Milner: That is clearly a highly specific question and one that warrants a highly specific answer that I do not have, but I am happy to write to you afterwards. I will investigate it and come back to you.
Q219 Chris Ruane: This one is to Sarah Hunter. Google has previously been criticised by 10 Information Commissioners for not taking adequate account of users’ privacy. It has since been fined $22.5 million by the Federal Trade Commission for side-stepping security settings on the Safari web browser so that it could track users’ internet use. What impression do you think this gives of Google’s respect for users’ privacy?
Sarah Hunter: We deeply regret both of those incidents. As we said separately at the time, they were mistakes. We did not intend for that to happen, and as soon as we identified it, we owned up, we were very public about it and we tried to rectify the coding mistakes and make amends. Users trusting us and keeping their data safe is incredibly important for us. We take it incredibly seriously, and I think it is our responsibility to try to earn that trust back when things like that happen.
Q220 Chris Ruane: How have you done that? How have you earned that trust back-or have you?
Sarah Hunter: In the UK, the ICO did investigate us, and they audit us annually now; we have made a number of changes to our processes internally as a result of that audit. In fact, they are due to come back again very soon, so it is an ongoing process. We always want to improve, but the ICO audit is part of that process.
Chair: Thank you. I am afraid we are going to have to stop; not quite saved by the bell, because you have been here for an hour and a quarter, and we really are very grateful to you for giving evidence. It has been most enlightening.
Sarah Hunter: Thank you for having us.
Chair: We will write to you with further questions. There are a number of issues that we wanted to take up with you before we complete our inquiry, but we are very grateful. Thank you.