About the report

What have social media companies done to combat the malicious use of their platforms? What do the leading players’ initiatives tell us about their coping strategies? How are their actions supported by their current terms and policies? And have there been any substantial policy changes as a result of the proliferation of malicious use of social media? We examined the company announcements and terms of service (ToS) agreements of Facebook, Google, and Twitter between November 2016 and September 2018 and found: 

  • In 2016–18 the platforms made 125 announcements about initiatives aimed, to varying degrees, at addressing disinformation, including: 
    • Changes to the algorithms underlying newsfeeds or ad targeting 
    • New partnerships with third-party fact-checkers 
    • Investment and support for professional journalism 
    • ‘Ads centres’ and greater transparency about electoral advertising, including reporting, labelling, and enforcement 
    • Greater transparency in internal content moderation practices, as well as additional investments in both automated and human moderation 
    • Heightened security and education for ‘high risk’ targets 
    • Changes to third-party access to user data 
  • The companies’ official blogs indicate that ‘enforcement of current terms’ is the most prominent response currently being undertaken, often through a combination of automation/AI, ads centres, and human content moderation.
  • The platforms’ responses seem heavily influenced by news events. Official announcements often reference current reporting, and the companies’ actions suggest that their coping strategies are emergent at best, reactive at worst.
  • The initiatives taken show differences between the strategies of the three largest platforms as they search for effective self-regulatory responses amid a firestorm of public and political opprobrium.
  • Overall, we observed no major changes to terms and policies directly related to disinformation, suggesting that existing terms and policies already provide the platforms with the levers they need to address these issues.
  • It is apparent that new and impending regulation is shaping company policies. Over the course of this study, all three platforms updated their terms and policies in May 2018, largely to reflect the General Data Protection Regulation (GDPR).

Methodology

This paper provides an inventory of the self-regulatory initiatives taken by three Internet platforms between November 2016 and September 2018 in response to disinformation activities. Internet companies have a variety of terms and policies, ranging from high-level user-oriented community standards to detailed legal terms. We limited our analysis to three sources of primary and secondary documentation: (1) official announcements and company blogs; (2) ToS agreements, Community Guidelines, and Privacy Policies; and (3) selected news reports relating to company self-regulatory responses.

In total, 125 company announcements, policies, and news articles were reviewed. From these we identified 10 categories of intervention, described in detail below. We then analysed the ToS agreements, Community Guidelines, and Privacy Policies of the three companies to determine whether company announcements resulted in changes to the rules that govern the platforms and their use. Table 1 summarises the terms and policies analysed for this paper. Oral and written evidence from official inquiries, the FBI’s indictment of the Internet Research Agency, and the majority of news reports published during the research period fell outside the scope of this study, but they provide a rich source of contemporaneous information for future inquiry. As comparing terms and policies across jurisdictions was not a focal point of this study, only terms relevant to Europe and the UK are included here. Some terms apply universally, but the companies maintain additional or different terms for users inside and outside of the USA. 

Conclusion


2016 was a defining moment for social media platforms. Ongoing shocks relating to election interference, computational propaganda, and the Cambridge Analytica scandal, combined with deeper concerns about the viability of the business model for established news media, conspired to undermine the confidence of citizens and public authorities in social media platforms. Initially, the major social media companies fell back on traditional postures, minimising the impact by quoting statistics about the number of accounts involved, but our inventory of industry responses identifies and tracks changing attitudes. 

Since November 2016, all three of the platforms examined in this paper have issued a raft of self-regulatory responses. A key area of intervention has been the enforcement of existing terms and policies, alongside steps towards increased collaboration with other actors, including news media, election committees and campaigns, fact-checkers, and civil society organisations. However, we found little evidence of major changes to the underlying user policy documents. This may change as pressure to regulate the platforms continues to mount following formal government inquiries into Cambridge Analytica, the spread of ‘fake news’, and evidence of foreign interference. 

There may be trouble ahead, as Google, Twitter, and Facebook appear to be taking conflicting stances on their responsibility for content. With government regulation appearing increasingly likely, the platforms have formulated numerous solutions to combat the malicious use of social media. Yet, despite more than 20 months of inquiries and bad press, there is little evidence of significant changes to the companies’ terms and policies, which already grant them extensive powers over users’ content, data, and behaviour. Thus far, most of the self-regulatory responses have been reactive, responding to news-cycle concerns around Cambridge Analytica or foreign interference in elections. The platforms themselves have not taken meaningful steps to get ahead of the problem and address the underlying structures that incentivise the malicious use of social media, whether for economic gain or political influence. For meaningful progress to be made, and trust to be restored, the relationship between platforms and people needs to be rebalanced, and platforms need to work proactively alongside government and citizenry as responsible actors.