The aim of this contribution is to outline the types of legal frameworks that sovereigns have established to counter the malicious use of social media networks, to comment on the challenges faced, and to identify policy trajectories. Focus is placed on the German Network Enforcement Act (NetzDG) as the archetype of a comprehensive and binding regime for social media intermediaries. Through a transatlantic comparison with other jurisdictions and courts, the legal tendencies surrounding the malicious use of digital space are outlined, and recommendations are provided for the path forward.

Analysis and Recommendations

Legal initiatives in the West cover a range of issues, from fake news to illegal content to information manipulation. The majority of new initiatives focus on the traditional understanding of social media – Facebook, Twitter, Google – in light of their observed ability to enable disinformation via fake profiles and groups, online advertising and clickbait, and micro-targeting and the manipulative use of third-party data analysis. However, the novel legal frameworks do not proceed far beyond the modalities of moderating specific pieces of content. The discussion remains framed around how users, social networks, and governments can work together to achieve a fair and democratic outcome regarding singular units of information. Creating procedural frameworks for handling individual cases with high levels of certainty and legitimacy has been a logical priority. Such an approach leaves many gaps through which disinformation-fostering behavior can proceed, but it is an important first step toward addressing at least the ends of disinformation campaigns before tackling the means.

The means involve a wide range of tools: the purposeful dissemination of inciteful opinions, the abuse of social media algorithms to advance a narrative, and even the malicious and fraudulent use of technological tools for hacking or impersonation. The vast range of such tools and their differing degrees of legality currently prevent a comprehensive framework. Purchasing followers or likes to promote certain content is at present regulated only by the Terms of Service of social media platforms. Preventing parties from using such a tool without the voluntary participation of platforms would also require cross-checking jurisdictions against the origin of the service provider; the test for legality would have to be made against the relevant system's framework. Troll farms are an even more evasive tool, as attempts to shut down their dissemination of certain narratives would directly challenge the freedom of expression. It is likewise difficult to account for the innate partialities that make individuals the targets or amplifiers of such campaigns, or for the motivations behind producing and distributing them. Studies often find that the online consumption of information and its further proliferation indicate membership in certain communities more than a search for, or even belief in, its objectivity.

At the moment, a comprehensive legal framework is unlikely. Important steps have been taken to build a capacity to act on particular content, but major European nations have taken divergent approaches to legislating online behavior, focusing their efforts on different platforms, sources, and chronologies. The US still has not chosen a path for updating its online regime. Further legislative progress is highly likely to vary across jurisdictions, and legal trajectories will concretize only at the national level. This resembles the multiple rounds of legislative development in other nascent legal regimes, such as anti-money-laundering or virtual-currency regulation, where many initiatives proceed by trial and error. The iterative legislative process will be critical for creating definitions, praxis, and data for further examination.

While investigation into halting these means of disinformation continues, the law has multiple access points through which to expand so as to better halt the spread of illegal content, including the strengthening of existing legislation on misleading advertising, election silence periods, political spending, consumer rights, and data protection rules. The examined legal documents highlight that updating existing norms and bringing them directly into the online sphere via connector legislation is a path that faces little resistance. Many options for regulating automated content recognition technology in the area of disinformation have been proposed, ranging from continuation of the status quo, to forms of self-regulation with differing extents of audits, to co-regulation between governments and industry, to statutory regulation ordering a regulator to combat disinformation directly through licensing or other moderation mechanisms. In the European Union alone, a litany of policy proposals and discussions have deliberated on the use of new technology in such initiatives, ranging from voluntary compliance programs, codes, and principles to varying degrees of recommendations for algorithmic and AI-based content moderation, particularly calling for transparency in their use. The creation of an Internet Ombudsman has been proposed at the Council of Europe to assess whether content is legal or illegal; it could accept questions from Internet intermediaries.

However, the development of legal rules in the online environment requires a clear understanding of their purpose. The current regime was developed in reaction to fears of interference in political campaigns, terrorist threats, and other high-profile incidents. The resulting mechanisms function as quick-response firefighting teams capable of putting out content-based incidents. The approach has been conservative, and various stakeholders have urged continued restraint to ensure that the legal instruments cannot be used to stifle fundamental freedoms. Governments espousing different values have shown how the mechanisms can be abused, and the worries are warranted.

Countries should continue conducting gap reviews to better understand which needs the law cannot fulfil beyond the aforementioned emergency instrument. In the meantime, there are three important steps that states should take:

  • First, states should confidently increase transparency requirements for Internet entities in order to amass the data necessary to understand these gaps and their origin. This will allow for more tangible analysis of the impact of malevolent activity online, which will necessarily be context-specific. States can formulate their priorities and engage with the relevant stakeholders on the basis of these findings.
  • Second, states should expand their focus on bringing traditional non-discrimination rules into the digital domain, so as to develop a more plural digital environment and, ultimately, to determine which information typologies are purposefully malicious, and thus require counter-activity, and which are authentic expressions of people's opinions. This information can further be shared across states to develop more comprehensive approaches.
  • Third, countries should begin capacity-building for the digital environment, as widening gaps in capabilities between states may become a factor for exploitation. An advanced legal framework will be ineffective without appropriate resources for its enforcement. The weakest links in the Transatlantic space can become hotbeds for disinformation propagation, or even offshore data havens, impeding the defensive capacity of all states.

While legal drafting and enforcement against malicious Internet use will likely remain a domestic responsibility in the short term, international standards should be calibrated concurrently. The varied approaches nations have taken to counter the malicious use of the Internet provide ample learning opportunities. Standards should be centered on collective, democratic, and liberal principles, in contrast to the practices of some regimes beyond the Transatlantic space. These foundations will ease cooperation and normalize the still-early fight against Internet exploiters. Once a unified trajectory and groundwork are secured, a more comprehensive multilateral legal framework can be developed.