L'era dell'interlegalità
DOI: 10.1401/9788815370334/c13
Inter-legality and online states
Author information
Sümeyye Elif Biber is a PhD candidate in Law at the Scuola Superiore
Sant'Anna.
Author information
Nedim Hogic is a PhD candidate in Law at the Scuola Superiore
Sant'Anna.
Abstract
Starting from the political controversies generated by the disputes between the
former president of the United States, Donald Trump, and the social network Twitter
following the 2020 presidential elections, this chapter analyzes the involvement and
role of new media platforms in the development of disinformation and in the ongoing
political crisis. First, the development of these new technologies is summarized in
two distinct regulatory phases: the first governed by national or international
regulation, and the second, beginning in 2016, characterized by autonomous regulation
based on the protection of the right to freedom of expression. The inter-legality
approach is considered here as a potentially useful tool for mediating between
the various regulatory sources involved.
1. Introduction
In the wake of the widespread unrest
following the announcement of the 2020 US Presidential election results and the storming
of the US Congress by a mob of his supporters, Mr. Trump, the defeated Presidential
candidate, had his account on the social platform Twitter suspended
[1]
. Explaining this decision, Twitter cited the high risk of incitement to
violence. This, however, was just a continuation of the clash between Trump and the
platform. The clash first erupted when Mr. Trump was sued for blocking other Twitter
users from accessing his account after receiving unfavorable reactions to his posts from
them. The US District Court in New York found in this case that the comments made on Mr.
Trump's Twitter account constituted political and, therefore, protected speech
[2]
. Moreover, the Court found that limiting the plaintiffs' access not only to the
tweets but also to the "interactive space" of the replies and retweets that constitute
a Twitter thread infringed their First Amendment rights
[3]
. Thus, in January 2021, when Twitter decided that President Trump's tweets amounted to
incitement to violence, it not only shut down a communication channel for the US
President, but it also closed a designated public forum for the exchange of views and
opinions that it operated.
This episode represents what has so
far been the highest point of a global legal and political crisis surrounding
private social media platforms such as Twitter, Facebook and YouTube. That crisis began
in 2016, following political outcomes such as the Brexit referendum in the UK and
the US Presidential elections, which were to a certain extent shaped by organized
campaigns of public disinformation conducted on the platforms
[4]
. In this chapter, we deal with the relationship between this legal crisis
and the inter-legality approach to the legal issues arising out of it.
Firstly, we explain the development
of the social platforms in the domain of content moderation, observing two distinct
phases in their development: a period of light regulation from the end of the
20th century up to 2016, and a post-2016 period of heavy
regulation. It was in this second period, we argue, that the content moderation of the
platforms evolved into a separate legal order with its own standards of freedom of
speech and its own bodies to revise them. Resembling legal or governance systems
[5]
, these private self-regulating entities have had a profound normative impact
on freedom of speech, privacy, data protection and economic development, which is why
we believe the term "online states" is appropriate to describe them.
Secondly, we describe the first phase
of their evolution, the phase of light regulation, when the platforms operated
largely without interference from national or international regulators. That
description is followed by an account of the regulatory developments and cases that
led the platforms to create their own internal regulatory standards, case law and
judicial bodies governing the policies concerning freedom of expression for their
users. We then examine the regulatory backlash from national regulators after the
events of 2016, seeing it as an attempt both to limit the platforms' power to manage
user-created content and to make them more accountable.
Thirdly, we see this interwovenness
of national law, international law and the "legal orders" created by the social
platforms as an interactional legal order that should not only be explained through inter-legality
[6]
but for which inter-legality is a useful tool in disputes involving the
different normativities stemming from the three legal orders. Therefore,
we developed a three-step analysis consisting of (i) taking the vantage point of the
affair – the case at hand – under scrutiny, (ii) understanding the relevant
normativities controlling the case, and (iii) looking at the demands of justice
stemming from the case, in order to apply this concept
[7]
. Following Palombella and Klabbers, we argue that taking the vantage point of
the case
[8]
is crucial for resolution of such disputes
[9]
. In this context, we examine the decisions of the Oversight Board and find
that although attention was paid to international law, this came at the expense
of national legislation and other relevant legalities. Inter-legality, by contrast,
offers a clear judicial path for resolving the issues at hand, providing more just
solutions based on inclusive reasoning rather than the exclusion of legal
orders.
2. Public Role of the Social Platforms
The rise of the Internet in the late
1990s gave its users great autonomy, allowing them to reach more speech and more
people and to access more information than ever before in human history. In that
period, we witnessed the change "from atoms to bits"
[10]
as most information previously available in the form of books, magazines,
newspapers, and videocassettes became universally accessible in the form of inexpensive
electronic data
[11]
. This shift from analog to digital technologies also transformed the basis of
society from an industrial to an informational one
[12]
, as the increase in digitalization created new market opportunities and new
jobs for individuals. The online platforms, acting as disruptive forces
[13]
, dramatically accelerated these developments by providing billions of users
with unprecedented access to different types of content and to innovation
opportunities in the digital market. Much of this was possible because, during this
period, States took a general orientation to "ignore online activities or to regulate them very lightly"
[14]
.
This meant a liberal constitutional
approach towards the regulation of freedom of speech standards on the platforms
[15]
. The EU and the US have both provided liability
exceptions for platforms, considering them "online intermediaries"
[16]
. In practice, this meant that the platforms were treated not as public
forums but merely as content providers enabling access to content produced by users.
This limited their legal responsibility to actively moderate the content that
users share. Left to self-regulation, content moderation on the platforms was greatly
influenced by the tradition of US lawyers in interpreting the regulation of speech
set forth by the First Amendment to the US Constitution
[17]
.
In the US, § 230 of the Communications
Decency Act (CDA) shielded platforms with broad immunity from liability for user-generated content
[18]
. The first relevant case interpreting the immunity granted
under § 230 (also considered the most important Internet law case to this date)
[19]
was Zeran v. America Online
[20]
Plaintiff Zeran claimed that
America Online Inc. (AOL) was liable for defamatory messages posted by an
unidentified third party. He argued that AOL had a duty to remove the defamatory post,
notify its users of the post's falsity, and screen future defamatory material
[21]
. However, the Court found that AOL was not liable, as § 230
granted it federal immunity
[22]
. According to the Court, a purposive reading of the provision demonstrated
that "the imposition of tort liability on service providers for the communications of
others represented, for Congress, simply another form of intrusive government
regulation of speech. Section 230 was enacted, in part, to maintain the robust nature
of Internet communication and, accordingly, to keep government interference in the
medium to a minimum"
[23]
. Thus, according to the Court, the "specter of tort liability" is precluded
in order to avoid a chilling effect
[24]
. Furthermore, the holding encouraged “service providers to self-regulate the
dissemination of offensive material over their services”
[25]
.
Notes
[1] See J. Schultz, Twitter Puts an End to Trump’s Rhetorical Presidency, 2021, available at https://www.lawfareblog.com/twitter-puts-end-trumps-rhetorical-presidency.
[2] United States District Court Memorandum and Order, Knight First Amendment Institute v. Trump, No. 1:17-cv-05205 (S.D.N.Y. 2017).
[3] Ibidem.
[4] Generally, see Y. Gorodnichenko, T. Pham and O. Talavera, Social Media, Sentiment and Public Opinions: Evidence from #Brexit and #US Election, in National Bureau of Economic Research, No. w24631, 2018 (explaining the effects that the social media platforms had in shaping the public opinion).
[5] K. Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, in «Harvard Law Review», 131, 2017, pp. 1599 ff., 1603 (arguing that these platforms operate as the new governors of online speech and are part of a new triadic model of speech, situated between the State and speakers and publishers).
[6] In this sense following S. Taekema, Between and Beyond Legal Orders Questioning the Concept of Legal Orders, in J. Klabbers and G. Palombella (eds.), The Challenge of Inter-legality, Cambridge, Cambridge University Press, 2019, pp. 69 ff., 74.
[7] See the chapter Interlegality e tecnologie per la sorveglianza by Sümeyye Elif Biber in this volume.
[8] J. Klabbers and G. Palombella, Introduction, in Iid. (eds.), The Challenge of Inter-legality, cit., pp. 1 ff.
[9] Ibidem.
[10] See O. Pollicino, Judicial Protection of Fundamental Rights in the Transition from the World of Atoms to the Word of Bits: The Case of Freedom of Speech, in «European Law Journal», 25(2), 2019, pp. 155 ff.
[11] See ibidem.
[12] See Y. Benkler, The Wealth of Networks, Yale, Yale University Press, 2006.
[13] According to European documents on online platforms, social networks provide a "hosting service", meaning "an information society service consisting of the storage of information provided by a recipient of the service". Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ("Directive on electronic commerce").
[14] J.G. Palfrey, Four Phases of Internet Regulation. Social Research, in Berkman Center Research Publication No. 2010-9, Harvard Public Law Working Paper No. 10-42, Vol. 77, 3, 2010. Available at SSRN: https://ssrn.com/abstract=1658191 (referring to this era as an era of the “open internet”).
[15] See G.D. Gregorio, From Constitutional Freedoms to Power of the Platforms: Protecting Fundamental Rights Online in the Algorithmic Society, in «European Journal of Legal Studies», 11(2), 2019, pp. 65 ff.
[16] According to the Council of Europe, the term Internet intermediaries “commonly refers to a wide, diverse and rapidly evolving range of service providers that facilitate interactions on the Internet between natural and legal persons. Some connect users to the Internet, enable processing of data and host web-based services, including for user-generated comments. Others gather information, assist searches, facilitate the sale of goods and services, or enable other commercial transactions. Internet intermediaries also moderate and rank content, mainly through algorithmic processing, and they may perform other functions that resemble those of publishers”. See Council of Europe, Internet Intermediaries at https://www.coe.int/en/web/freedom-expression/internet-intermediaries.
[17] See Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, cit., p. 1598.
[18] Communications Decency Act (1996) 47 USC § 230 (c) (1) states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".
[19] E. Goldman and J. Kosseff, Commemorating the 20th Anniversary of Internet Law’s Most Important Judicial Decision, in E. Goldman and J. Kosseff (eds.), Zeran v. America Online E-book at https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=3286&context=historical, 2020, p. 6.
[20] United States Court of Appeals, Fourth Circuit, Zeran v. America Online Inc. 129 F.3d 327, 1997.
[21] Ibidem.
[22] Ibidem.
[23] Ibidem.
[24] Ibidem. The judgment also held that Internet service providers were not liable even after receiving notice of a potentially defamatory post.
[25] Ibidem. The conceptualization of online platforms within the First Amendment was another critical issue for the courts in the US. The reasonings analyzed analogies to State, company towns, broadcasters and editors. See this debate in Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, cit.