From the 2014 invasion of Ukraine to more recent attempts to interfere in democratic elections, antagonists seeking to influence their adversaries have turned to social media manipulation.
At the heart of this practice is a flourishing market dominated by Manipulation Service Providers (MSPs) based in Russia. Buyers range from individuals to companies to state-level actors. Typically, these service providers sell social media engagement in the form of comments, clicks, likes, and shares.
Since its foundation, the NATO Strategic Communications Centre of Excellence in Riga has studied social media manipulation as an important and integral part of the influence campaigns malicious state and non-state actors direct against the Alliance and its partners.
To test the ability of social media companies to identify and remove manipulation, we bought engagement on 105 different posts on Facebook, Instagram, Twitter, and YouTube using 11 Russian and 5 European (1 Polish, 2 German, 1 French, 1 Italian) social media manipulation service providers.
At a cost of just 300 EUR, we bought 3 530 comments, 25 750 likes, 20 000 views, and 5 100 followers. By studying the accounts that delivered the purchased manipulation, we were able to identify 18 739 accounts used to manipulate social media platforms.
In a test of the platforms’ ability to independently detect misuse, we found that four weeks after purchase, 4 in 5 of the bought inauthentic engagements were still online. We further tested the platforms’ ability to respond to user feedback by reporting a sample of the fake accounts. Three weeks after reporting, more than 95% of the reported accounts were still active online.
Most of the inauthentic accounts we monitored remained active throughout the experiment. This means that malicious activity conducted by other actors using the same services and the same accounts also went unnoticed.
While we did identify political manipulation—as many as four out of five accounts used for manipulation on Facebook had been used to engage with political content to some extent—we assess that more than 90% of purchased engagements on social media are used for commercial purposes.
We identified fake engagement purchased for 721 political pages and 52 official government pages, including the official accounts of two presidents, the official page of a European political party, and a number of junior and local politicians in Europe and the United States. The vast majority of the political manipulation, however, was aimed at non-Western pages.
We further assessed the performance of the four social media companies according to seven criteria designed to measure their ability to counter the malicious use of their services. Overall, our results show that the social media companies are experiencing significant challenges in countering the malicious use of their platforms. While they are better at blocking inauthentic account creation and removing inauthentic followers, they are not doing nearly as well at combating inauthentic comments and views.
Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms.
Self-regulation is not working. The manipulation industry is growing year by year. We see no sign that it is becoming substantially more expensive or more difficult to conduct widespread social media manipulation.
In contrast with the reports presented by the social media companies themselves, we were easily able to buy more than 54 000 inauthentic social media interactions with little or no resistance.
Although the fight against online disinformation and coordinated inauthentic behaviour is far from over, an important finding of our experiment is that the different platforms aren’t equally bad—in fact, some are significantly better at identifying and removing manipulative accounts and activities than others. Investment, resources, and determination make a difference.
Based on our experiment, we recommend:
- Setting new standards and requiring reporting based on more meaningful criteria
- Establishing independent and well-resourced oversight of the social media platforms
- Increasing the transparency of the social media platforms
- Regulating the market for social media manipulation