Culture, Media and Sport Committee
Written evidence submitted by Twitter
Introduction
1. Twitter is an open, public platform built around small bursts of information, which we call Tweets. Each Tweet can be no more than 140 characters long. In addition to text, Tweets can include photographs, videos, or links. Users can control their experience on the platform by choosing who to follow and what they wish to see.
The Twitter Rules
2. Twitter provides a global communication service which encompasses a variety of users with different voices, ideas and perspectives. With 200 million active users across the world, 15 million of them in the UK alone, the platform now serves 500 million tweets a day. Like most technology companies, we are clear that there is no single silver bullet for online safety; rather, it requires a combined approach from technology companies, educators, governments and parents to ensure that we equip people with the digital skills they need to navigate the web and the wider world.
3. As a general policy, we do not mediate content. However, there are some limitations on the type of content that can be published on Twitter. These limitations comply with legal requirements and make Twitter a better experience for all. They include prohibitions on the posting of other people’s private or confidential information, impersonation of others in a manner that misleads, confuses or deceives others (or is intended to), the posting of direct, specific threats of violence against others, and trademark and copyright infringement.
4. Our rules and terms of service clearly state that the Twitter service may not be used for any unlawful purposes or in furtherance of illegal activities. International users agree to comply with all local laws regarding online conduct and acceptable content.
5. Full details of Twitter’s rules and terms of service can be found in our support centre: https://support.twitter.com/articles/18311-the-twitter-rules
Illegal Content—Child Sexual Exploitation Policy
6. We do not tolerate child sexual exploitation on Twitter. When we are made aware of links to images of, or content promoting, child sexual exploitation, they are removed from the site without further notice and reported to the National Center for Missing & Exploited Children (“NCMEC”); we permanently suspend accounts promoting or containing updates with links to child sexual exploitation.
7. We have established a line of communication and maintain an ongoing dialogue with the Child Exploitation and Online Protection Centre, both in relation to its investigative work and also in its important work around education and awareness raising. Twitter is also a member of the Internet Watch Foundation.
8. We are in the process of implementing PhotoDNA into our back-end technologies and, like other technology companies, are engaged with law enforcement and non-governmental organisations on global efforts to track and eliminate child abuse images online.
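By way of illustration only, the simplified sketch below shows the general idea behind hash-based image matching: each uploaded image is hashed and compared against a list of hashes of known child abuse images supplied by bodies such as NCMEC. PhotoDNA itself is proprietary Microsoft technology based on robust perceptual hashing; the names and the plain cryptographic hash used here are purely hypothetical and do not describe Twitter’s actual systems.

```python
import hashlib

# Illustrative sketch only. PhotoDNA uses proprietary perceptual hashing;
# this example substitutes a plain cryptographic hash to show the general
# matching workflow. The hash list and function names are hypothetical.

KNOWN_ILLEGAL_HASHES: set[str] = set()  # hashes supplied by bodies such as NCMEC or the IWF


def image_hash(image_bytes: bytes) -> str:
    """Return a hash for an uploaded image (real systems use perceptual hashes)."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_image(image_bytes: bytes) -> bool:
    """Return True if the uploaded image matches a hash of a known illegal image."""
    return image_hash(image_bytes) in KNOWN_ILLEGAL_HASHES
```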
9. Twitter is part of the technology task force for Thorn (http://www.wearethorn.org), a cross industry foundation which aims to disrupt the predatory behavior of those who abuse and traffic children, solicit sex with children or create and share images of child sexual exploitation. Thorn exists to continue the work started by the Demi and Ashton (DNA) Foundation in 2009.
10. In the rare instance that a user finds a Twitter account which they believe to be distributing or promoting child sexual exploitation, they are asked to notify us by sending an email to cp@twitter.com. They are also asked not to Tweet, retweet or repost such content for any reason, but rather to report it to us immediately so we can take steps to remove it.
Online Safety
11. Twitter provides a global communication platform which encompasses a variety of users with different voices, ideas and perspectives. As stated above, the platform now serves 500 million tweets a day.
12. Our policy is that we do not mediate content or intervene in disputes between users, but we do have a clear set of rules which govern how people can behave on our platform. These rules are designed to balance offering our 200 million global users a service that allows open dialogue and discussion around the world with protecting the rights of others. As such, users may not make direct, specific threats of violence against others; targeted abuse or harassment is also a violation of the Twitter Rules1 and Terms of Service.2
13. To help users navigate issues they may be confronted with online, we offer a number of articles in our Safety and Security Centre3 that guide users to make better decisions when communicating with others.
14. Over the coming months, we will be publishing new content with more information, including local resources provided by our partners in the safety community across the world. Some of the new content will be aimed at bystanders—those who may witness abusive behavior but are not sure what action to take. Teaching users to help each other, and to know when and how to contact Twitter for help, is vital to Twitter being a safe space for our users.
15. We also work with a number of safety organizations around the world to provide content to our users that may be useful for navigating problems online, and plan to host more content from them and other safety experts in future iterations of the site.
16. We often use the analogy of Twitter as a public town square where users are free to speak with and interact with others; that said, just as in a public town square, there are behaviors that are not allowed.
17. In addition to reporting violations of our Terms of Service and Rules, users can control what content they see through features like blocking and sensitive media warnings. When users see or receive an @reply they don’t like, they can unfollow the author of the Tweet (if they are following that user), which will remove that user from their home timeline. They can also block the user,4 which will mute the communication. When a user blocks another account, they will not be notified when that account mentions them, retweets them, favourites their content, adds them to a list or subscribes to one of their lists, and they will not see any of these interactions in their timeline. This prevents them from receiving unwanted, targeted and continuous @replies on Twitter. Abusive users often lose interest once they realize that their target will not respond.
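As a simplified illustration of the blocking behaviour described above, interactions from a blocked account are simply filtered out of the blocking user’s notifications. The data structures and names below are hypothetical and do not represent Twitter’s implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of blocking: interactions (mentions, retweets, favourites,
# list additions) from blocked accounts are dropped from the recipient's
# notifications. Names are illustrative only.


@dataclass
class Interaction:
    actor: str   # username of the account performing the action
    kind: str    # e.g. "mention", "retweet", "favourite", "list_add"
    text: str = ""


@dataclass
class Account:
    username: str
    blocked: set = field(default_factory=set)

    def block(self, username: str) -> None:
        self.blocked.add(username)

    def visible_notifications(self, interactions: list) -> list:
        """Drop any interaction whose actor this account has blocked."""
        return [i for i in interactions if i.actor not in self.blocked]


# Example: once @target blocks @abuser, the abusive mentions no longer appear.
target = Account("target")
target.block("abuser")
stream = [Interaction("friend", "mention", "hello"), Interaction("abuser", "mention", "!!!")]
assert [i.actor for i in target.visible_notifications(stream)] == ["friend"]
```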
18. Another option available to our users is to protect their accounts.5 Users with protected accounts can approve requests from other users to follow their accounts on a case-by-case basis. Additionally, their tweets are only viewable and searchable by themselves and their approved followers. As such, they can prevent any unwanted followers from viewing their content.
19. Sometimes users see content they don’t like in the form of images or video, and for that we have settings that allow users to label their media for the appropriate viewers, and select whose media will display on their own Twitter homepage. We ask users to mark their Tweets as sensitive if they contain media that might be considered sensitive content such as nudity, violence, or medical procedures.
20. For the viewer, the default setting is that if a Tweet is marked as containing media that might be sensitive, they will be required to click through a warning message before that media is displayed to them.
21. If another user notices that Tweets have not been marked appropriately, that user may flag the image or video for review. The content will then be reviewed and a determination made as to whether that media requires an interstitial in order to comply with Twitter’s Media Policies.6
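The behaviour described in paragraphs 19 to 21 can be summarised in the following simplified sketch: media marked as potentially sensitive sits behind a warning until the viewer clicks through, and a flag from another user queues the media for review. The names and structures are purely hypothetical and are not Twitter’s implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the sensitive-media behaviour described above.


@dataclass
class Media:
    url: str
    marked_sensitive: bool = False    # set by the author when posting
    flagged_for_review: bool = False  # set when another user flags the media


def display(media: Media, viewer_clicked_through: bool = False) -> str:
    """Default setting: sensitive media is shown only after the viewer clicks through a warning."""
    if media.marked_sensitive and not viewer_clicked_through:
        return "warning: this media may contain sensitive content"
    return media.url


def flag_for_review(media: Media) -> None:
    """Another user flags media they believe has not been marked appropriately;
    a reviewer then decides whether an interstitial is required."""
    media.flagged_for_review = True
```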
22. We are continuing to expand our user support and safety teams to ensure we are supporting our growing user base appropriately. We also continue to invest heavily in our reporting system from a technological perspective, and recently rolled out a significant update which simplified our system, allowing people to report violations7 to us via an in-Tweet reporting button as well as via the forms in our support centre.8
23. Once a user files a report to Twitter, they are sent a ticket number and the form is routed to a team of trained reviewers. If accounts are found to be acting in a way which breaks our rules, for example by posting specific threats, we take action, up to and including suspending accounts. At the end of the process the specialist reviewer follows up with the person who filed the ticket to let them know that their report has been addressed.
24. Reports that are flagged for threats, harassment or self-harm are reviewed manually. We also use automated tools to help triage reports filed for other types of complaints, such as spam.
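The following minimal sketch illustrates this triage rule: reports about threats, harassment or self-harm are routed to a human review queue, while other categories such as spam pass first through automated tools. Category and queue names are hypothetical; this is not Twitter’s actual system.

```python
# Illustrative sketch of the triage rule described above. Names are hypothetical.

MANUAL_REVIEW_CATEGORIES = {"threat", "harassment", "self_harm"}


def route_report(report: dict) -> str:
    """Return the queue a newly filed report should be placed on."""
    if report.get("category") in MANUAL_REVIEW_CATEGORIES:
        return "manual_review_queue"
    return "automated_triage"


# Example
assert route_report({"category": "harassment"}) == "manual_review_queue"
assert route_report({"category": "spam"}) == "automated_triage"
```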
25. It is important to stress that where a user believes the content or behaviour they are reporting is prohibited by law, or if they are concerned for their physical safety, we advise them to contact local law enforcement so they can accurately assess the content or behaviour. If Twitter is then contacted directly by the police, we can work with them and provide assistance for their investigation.
26. Twitter works closely with UK law enforcement. We have updated our existing policies around handling non-emergency requests from UK law enforcement to more closely mirror our process for US-based law enforcement requests (including providing notice to users). We have always responded to emergency disclosure requests from law enforcement in situations involving danger of death or serious physical injury to any person.
27. Twitter invokes these emergency disclosure procedures in the United Kingdom if it appears that there is an exigent emergency involving the danger of death or serious physical injury to an individual. Twitter may provide UK law enforcement with user information necessary to prevent that harm.
28. Twitter has published Guidelines for Law Enforcement (“Guidelines”; https://t.co/leguide) as well as a webform (https://t.co/leform) through which UK law enforcement can file a request or make an inquiry.
29. Since opening our UK office, Twitter has worked to build relationships with stakeholders in the online safety sector. Twitter is a member of UKCCIS. The company has also established a single point of contact for the UK Safer Internet Centre and has worked with the South West Grid for Learning.
30. We participated in Safer Internet Day for the first time in 2012, implementing our “Tweets for Good” programme, whereby we promoted safety messaging on the platform. We will continue this involvement and our work with our safety partners and wider voluntary organisations in the future.
September 2013
1 http://support.twitter.com/entries/18311#
2 https://twitter.com/tos
3 https://support.twitter.com/groups/57-safety-security#
4 http://support.twitter.com/articles/117063#
5 https://support.twitter.com/articles/14016-about-public-and-protected-tweets#
6 https://support.twitter.com/articles/20169199-twitter-media-policy#
7 https://support.twitter.com/articles/15789-how-to-report-violations#
8 https://support.twitter.com/forms