Antagonists, from foreign governments to terror groups, anti-democratic groups, and commercial companies, continually seek to manipulate public debate through coordinated social media manipulation campaigns. These actors rely on fake accounts and inauthentic behaviour to undermine online conversations, causing online and offline harm to society and individuals alike.

As a testament to the continued interest of antagonists and opportunists alike in manipulating social media, social media companies, researchers, intelligence services, and interest groups have all detailed attempts to manipulate social media conversations during the past year. It therefore remains essential to evaluate whether social media companies are living up to their commitments to counter misuse of their platforms.

To contribute to the evaluation of social media platforms, we re-ran our ground-breaking experiment assessing their ability to counter the malicious use of their services. This year we invested significant effort in further improving our methodology, and we added a fifth social media platform, TikTok, to the experiment.

The Experiment

To test the ability of social media companies to identify and remove manipulation, we bought engagement on thirty-nine Facebook, Instagram, Twitter, YouTube, and TikTok posts, using three high-quality Russian social media manipulation service providers. For 300 € we received inauthentic engagement in the form of 1 150 comments, 9 690 likes, 323 202 views, and 3 726 shares across the five platforms, enabling us to identify 8 036 accounts used for social media manipulation.
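As a sanity check, the itemized purchases sum to the 337 768 total engagements cited below, and they imply a strikingly low average price. A minimal Python sketch (the figures are from our experiment; the variable names are illustrative):

```python
# Fake engagement delivered across the five platforms (experiment figures).
purchased = {
    "comments": 1_150,
    "likes": 9_690,
    "views": 323_202,
    "shares": 3_726,
}

total = sum(purchased.values())
print(f"Total fake engagements: {total}")      # 337768, as cited below
print(f"Average cost: {300 / total:.4f} EUR")  # ~0.0009 EUR per engagement
```

The average is dominated by cheap views; in practice, prices varied sharply by engagement type and platform.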

While measuring the ability of social media platforms to block fake account creation, to identify and remove fake activity, and to respond to user reports of inauthentic accounts, we noted that some of the platforms studied had made important improvements. Other platforms, however, exhibited a continued inability to combat manipulation. Of the 337 768 fake engagements purchased, more than 98 per cent remained online and active after four weeks; even discounting fake views, more than 80 per cent of the 14 566 other fake engagements delivered remained active after a month.
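The non-view figure follows directly from the totals above, and the retention percentages translate into concrete lower bounds. An illustrative calculation (experiment figures; variable names ours):

```python
# Retention of purchased fake engagement (experiment figures).
total_purchased = 337_768
views = 323_202

non_view = total_purchased - views  # comments + likes + shares
print(non_view)  # 14566, matching the figure above

# Reported lower bounds on engagement still active after about a month.
print(f"> {0.98 * total_purchased:,.0f} of all engagements still active")
print(f"> {0.80 * non_view:,.0f} non-view engagements still active")
```

In other words, well over 330 000 fake engagements, including more than 11 600 non-view engagements, survived the platforms' defences for at least a month.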

Conclusions

The most important insight from this experiment is that platforms continue to vary in their ability to counter manipulation of their services. Facebook-owned Instagram shows how this variation exists even within companies. Instagram remains much easier to manipulate than Facebook and appears to lack serious safeguards. Tellingly, the cost of manipulating Instagram is roughly one tenth of the cost of targeting Facebook.

In 2020, Twitter is still the industry leader in combating manipulation, but Facebook is rapidly closing the gap with impressive improvements. Instagram and YouTube still lag behind: while Instagram is slowly moving in the right direction, YouTube seems to have given up. TikTok is the defenceless newcomer with much to learn.

Despite significant improvements by some, none of the five platforms is doing enough to prevent the manipulation of their services. Manipulation service providers are still winning.

This ongoing and evolving threat underscores the need for a whole-of-society approach to defining acceptable online behaviour, and to developing the frameworks necessary to impose economic, diplomatic, or legal penalties potent enough to deter governments, organisations, and companies from breaking the norms of online behaviour.

Based on our experiment, we recommend that governments introduce measures to:

  1. Increase transparency and develop new safety standards for social media platforms
  2. Establish independent and well-resourced oversight of social media platforms
  3. Increase efforts to deter social media manipulation
  4. Continue to pressure social media platforms to do more to counter the abuse of their services