Robotic activity is highly dynamic. The online discussion about the NATO presence in Poland and the Baltics shows sharp changes in focus and intensity. The current reporting period, August–October, has been comparatively free of large-scale, politically motivated robotic interventions. In contrast, the period March–July stands out as one in which such content was heavily promoted online.
Political actors use bot accounts on social media to manipulate public opinion about regional geopolitics. According to our estimate, such accounts produced 5–15% of the activity about the NATO presence in Latvia and Estonia in the period March–July 2017. Bot-generated messages differ depending on the target audience: messages aimed at the West suggested that Russian exercises pale in comparison with NATO operations, whereas messages targeted at the domestic audience rarely mentioned the Russian exercises.
Russian-language bots create roughly 70% of all Russian-language messages about NATO in the Baltic States and Poland. Overall, 60% of active Russian-language accounts appear to be automated. In comparison, 39% of accounts tweeting in English are bots; they created 52% of all English-language messages in the period August–October. Our data suggest Twitter is less effective at removing automatically generated Russian-language content than English-language content. Nonetheless, we have seen improvement in the platform's policing of social media. A ‘cleaner’ social media environment benefits not only individual users but also businesses. Pressure should be maintained to ensure further improvement.