Executive Summary

Online social networks are part of everyday life for almost everyone, including malicious actors and organisations. Previous work has characterised the specific online behaviour of Middle East-based terror groups. However, this behaviour is constantly evolving, both in response to events such as the Battle of Mosul and because platforms have strengthened their moderation rules. Terror groups target social media platforms such as Twitter, Telegram, and Discord, and while their past behavioural patterns and narrative strategies have been well documented, the adaptive nature of these groups requires continuous analysis of their online presence.

A single social platform can host up to two billion accounts (in the case of Facebook), and such platforms are a central space where virtual propaganda, recruitment, and discussion happen. During the rise of Daesh, Twitter served as the online backbone of the organisation’s propaganda; more than 100,000 accounts were actively promoting Daesh ideology in 2014. The combined action of anonymous hackers, improved enforcement of the platform’s terms of use, and kinetic military action has greatly reduced this number.

The findings of our study are consistent with those of other research on this topic. In particular, we observe that extremism is no longer tied to monolithic entities: cohesive groups are no longer the standard. We are witnessing a qualitative change, in which supporters of extremist ideologies are not necessarily active members of an organisation. Extremist individuals do make use of private platforms, but they remain active on mainstream social media platforms. This is a feature deeply ingrained in the nature of online propaganda.

In the domain of computer science, recent years have witnessed steady improvement in social network analysis at scale. One of the most challenging aspects of social network analysis is community detection; analysts use a variety of tools to visualise the spontaneous group structure emerging from interactions and friendship relations in multi-million-user networks. This visualisation, combined with influencer detection and automated text analysis tools such as topic detection, enables the analyst to grasp most of the complexity of a social network.
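
To make the community detection step concrete, the following is a minimal sketch using networkx's greedy modularity maximisation on a synthetic interaction graph; the toy graph and node names are illustrative assumptions, not data from the study.

```python
# A minimal sketch of community detection on a small interaction graph,
# using networkx's greedy modularity maximisation. The graph is synthetic:
# two dense clusters of users joined by a single bridge edge.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1 (a triangle)
    ("x", "y"), ("y", "z"), ("x", "z"),   # cluster 2 (a triangle)
    ("c", "x"),                           # bridge between the clusters
])

# The algorithm merges groups greedily while modularity improves,
# recovering the two triangles as separate communities.
communities = [set(c) for c in greedy_modularity_communities(G)]
print(communities)
```

On real multi-million-user networks the same call applies, although scalable variants (e.g. Louvain-style methods) are typically preferred at that size.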

This computer-science-oriented study explores three lines of research concerning online extremism. The first concerns the emerging narratives and the topics that can be found on open platforms. We show that many actors actively use terror-group-related terms; most cannot be directly tied to any specific organisation. The second axis concerns the connections between platforms: the information space has no central point, as content is shared across platforms. However, the links reveal clusters of locations: we observe a group of Pakistan-India conflict mentions, and a cluster of US alt-right websites that recast terrorism as a migration problem. The third axis relates to the structure of the social media landscape. We rely on a combination of document-level topic modelling and graph analysis to detect and explore the social data, visualising the types of groups that are active on the topic. Among the results, we found a small botnet circulating a pro-Daesh pamphlet and a set of grassroots reactions that managed to moderate a controversial pro-jihadi post on Reddit.

Methodology

Research questions

This article discusses three lines of inquiry into online social network analysis:

  • Emerging narratives: How easy is it to find obvious terrorist messages among today’s online social network noise? The first part of our study focused on events that triggered a high usage of jihad-related terms on Twitter.
  • Connections between platforms: While analysts once had an ‘all-Twitter’ focus, today we suspect that radical groups hide and share information across various platforms. The second part of our study asks: how many external links can we find, and how does this new information compare with previous reports?
  • Social media landscape structure: In the past, online social networks have been used by structured terrorist organisations for propaganda, coordination, and recruitment. Are they still used the same way today? How can we identify the discussions that involve radical accounts? The current trend is for radicals to ‘hide’, suggesting that open public discussion would be more general and respect the moderation rules, while the real recruitment and indoctrination happen in private channels such as Telegram.

This study poses three questions concerning the collected datasets. The first axis of analysis offers insights to characterise the current narratives among radicalisation-related themes on social media. Then, we investigate the links between the collected social platforms and the rest of the Web. Finally, we sketch a cartography of social media through the detection of its constituent communities and their characterisation.

Research scope

We limit the scope of our analysis to three datasets, collected from three very different social media platforms: Discord, Reddit, and Twitter. All data used in this study was publicly accessible at the time of collection; some messages, accounts, or pages have since been banned or have disappeared.

To conduct our study, we used keywords already mentioned in the literature. The keywords are presented in this Table, divided into three types: a) terror-related terms, referring to well-known Daesh narratives; b) more general terms related to Islam, included to expand the scope of collection and to investigate the presence of radical groups within this non-radical topic; and c) faction-specific terms. We do not publish user profiles unless they belong to impersonal accounts: news outlets are acceptable, individual people are not.
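
The keyword-based collection step can be sketched as a simple filter that keeps any message mentioning at least one term; the terms below are neutral placeholders standing in for the actual keyword list in the Table, which is not reproduced here.

```python
# Minimal sketch of keyword-based message filtering: a message is collected
# if it mentions at least one keyword, case-insensitively. The keywords are
# generic placeholders, not the actual terms used in the study.
import re

KEYWORDS = ["keyword_a", "keyword_b", "keyword_c"]  # placeholder terms
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def matches(message: str) -> bool:
    """Return True if the message mentions at least one collection keyword."""
    return PATTERN.search(message) is not None

messages = ["noise text", "contains Keyword_A inside", "more noise"]
collected = [m for m in messages if matches(m)]
print(collected)  # only the message containing a keyword is kept
```

Escaping each keyword with `re.escape` keeps terms containing punctuation from being misread as regex syntax.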

Conclusions

We observe that extremism is no longer tied to monolithic entities (parties, movements, organisations): cohesive groups are no longer the standard. We are witnessing a qualitative change, in which supporters of extremist ideologies are not necessarily active members of an organisation. Extremist individuals do make use of private platforms, but they remain active on mainstream social media platforms.

Many actors actively use terror-group-related terms; most cannot be directly tied to any specific organisation. However, the links reveal clusters of locations: we observe a group of Pakistan-India conflict mentions, and a cluster of US alt-right websites that recast terrorism as a migration problem. We also found a small botnet circulating a pro-Daesh pamphlet and a set of grassroots reactions that effectively moderated a controversial pro-jihadi post on Reddit.

The complexity of online social media constitutes a challenge: the presence of malicious entities is difficult to detect and to qualify. In this article, we focused on terror groups based in the Middle East and their main narratives; keeping our perception of their online presence up to date is therefore essential.

During the case study, we underlined the presence of an open ‘philosophical’ discussion that may serve as an entry point to less acceptable private channels hosted on non-public platforms such as Telegram or Discord. We identified a strong alt-right and fake-news website presence in the themes we investigated, which boosts the jihadi-related footprint in the public debate, perhaps with the goal of triggering a reaction from their supporters. Finally, on Reddit, an open media platform with a strong comment structure, opponents of Islamism appear with their views and arguments and seem eager to discuss and debate extensively.

To tackle the challenges coming from both data quantity and data complexity, social network analysts must adopt and adapt AI-powered tools to increase the speed and scale of their work, and to reduce the natural noise among the collected data. We presented a method for exploring a social media platform through its communities, to qualify their social and topical cohesion, and to illustrate their dispersion around their main topics of discussion.
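
One simple way to qualify a community's social cohesion, as described above, is the share of its members' edges that stay inside the community; the sketch below uses a synthetic graph and an illustrative metric choice, not necessarily the exact measure applied in the study.

```python
# Minimal sketch of a social-cohesion score for one community: the ratio of
# internal edges to all edges touching the community. Graph and grouping are
# synthetic; the metric is one illustrative choice among several.
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "x"), ("x", "y")])
community = {"a", "b", "c"}

# Edges fully inside the community vs. edges crossing its boundary.
internal = sum(1 for u, v in G.edges() if u in community and v in community)
boundary = sum(1 for u, v in G.edges() if (u in community) != (v in community))

cohesion = internal / (internal + boundary)
print(cohesion)  # 3 internal edges, 1 boundary edge: 0.75
```

A score near 1 indicates a tightly knit group; low scores flag loose collections of accounts whose interactions mostly point outward.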