By Ashutosh Bhagwat
In this piece, the author argues that state intervention in digital platforms to regulate speech is undesirable because of the state's access to the threat of violence, its monopoly of control, and the motivations of states (both democratic and autocratic) to suppress free speech.
In recent years, a consensus appears to be emerging among commentators on free speech issues that government, in the form of restrictive regulation, no longer poses the greatest threat to freedom of expression. The real threat, rather, comes from the handful of private firms (often controlled by individuals) who host and manage expression on the internet. These include the search engine Google and the major social media platforms Facebook, Instagram, Twitter, YouTube, and TikTok. These platforms account for the vast majority of modern expressive activity, and the incredible concentration of ownership of these platforms (Mark Zuckerberg personally controls Facebook and Instagram, Elon Musk controls Twitter, and Larry Page and Sergey Brin jointly control Google, which also owns YouTube) means that today a handful of individuals effectively manage free speech across the world. The solution, it is argued, is for governments to regulate these platforms to ensure that they provide fair access to users while not spreading harmful or dangerous speech.
And regulate governments have. In the United States, the states of Florida and Texas have adopted far-reaching legislation restricting platform content moderation practices (constitutional challenges to both laws are currently being litigated). In the European Union, some countries have enacted strict restrictions on what they consider hate speech on platforms, notably Germany's NetzDG legislation; and the EU itself has adopted EU-wide legislation, notably the "right to be forgotten" element of the EU's GDPR data privacy regulation. Finally, in India, the 2021 Information Technology Rules, implementing the Information Technology Act of 2000, impose significant restrictions on online content. In particular, Rules 14 and 15, acting on the authority of Section 69A of the 2000 Act, create a mechanism permitting executive branch officials to order the blocking of online content deemed to be illegal or otherwise unprotected. Governments worldwide, it would seem, are stepping in to address the problem of platform power.
That concentration of power in internet platforms and their owners is troubling cannot be doubted. But I aim to argue against the second proposition, that government is the answer. To the contrary, I believe and will argue that governments remain today the greatest extant threats to expressive freedoms, platform power notwithstanding. The major point here is that when governments exercise regulatory authority to reduce or control platform power as Germany’s NetzDG law and India’s IT rules do, such actions inevitably increase state power at the expense of platforms and of the public, both by permitting the state to block content that platforms wish to carry and members of the public presumably wish to post and/or view, and by forcing platforms to carry content that they deem harmful. The conclusion I will seek to defend is that if our ultimate goal is to protect a free and energetic public discourse, this is a bad tradeoff.
The most important reason that governments remain the greatest threat to free speech (and all other aspects of human liberty) is quite simple: governments continue to enjoy a legal monopoly on violence. For all of Mark Zuckerberg's wealth and power, he cannot arrest you, lock you up, physically harm you, or even take your property against your will—but the local constable can do so perfectly legally (albeit, in jurisdictions with functioning legal systems, having to later justify these actions in court). And unlike violations of platform content moderation rules, violations of legal regimes such as NetzDG, as well as India's IT Act and its implementing rules, subject platforms to sometimes whopping fines, and even to the imprisonment of platform employees. NetzDG, for example, authorizes fines of up to 50 million euros, and Section 69A(3) of the Indian IT Act authorizes imprisonment for up to seven years for violations of that section. This point may be obvious, but its significance should not be minimized. The threat of force and violence has a deterrent effect unlike any other. If an individual violates Facebook's Community Standards, the worst that will happen is that their post will be removed; and if that person violates Facebook's rules consistently, their account may be blocked (i.e., they may be de-platformed). Such consequences, while plainly inconvenient and irritating, can hardly compare with the threats of fines or imprisonment that governments can credibly make. And that is why individuals are far more willing to violate, or come to the edge of violating, platform rules than laws adopted by governments.
Aside from deterrence, another consequence of the government's access to the threat or reality of violence is that violence silences in a uniquely effective manner. This is most obviously true of imprisonment (and, in the extreme, execution). And judicial injunctions, backed by the threat of criminal enforcement, can similarly utterly silence speakers. But even less intrusive steps such as fines (civil or criminal) can in practice, by denying individuals financial security, significantly interfere with their ability to devote the time and energy needed for effective communications campaigns.
In addition to its monopoly of violence, the State also enjoys a monopoly of control, unlike any private firm, even a dominant platform owner such as Meta (which owns Facebook and Instagram). Within its own jurisdiction, a government can simply ban any and all speech on a particular, disfavored topic or with a particular, disfavored viewpoint. And it can then apply that ban to all platforms, as well as other media, operating within its jurisdiction—consider as examples the Modi government in India's recent efforts under its 2021 Rules to ban the BBC documentary "India: The Modi Question," or the Putin government in Russia's ban on the use of the words "war" or "invasion" to describe its, well, invasion of Ukraine. Of course, the ultimate weakness of both of these efforts demonstrates the difficulty of enforcing speech bans in an age of cross-border, digital communications; but it remains true that a determined and technologically competent state, such as say the People's Republic of China with its "Great Firewall," can be infinitely more effective than any private actor, no matter how powerful, in suppressing speech.
Moreover, in the actual world we live in today, no platform has remotely the sort of monopoly of control that governments enjoy, because of the simple fact of competition and alternatives. Twitter, for example, restricts far less content than Facebook, and under Elon Musk's control has loosened its restrictions even further. An example of this is Musk's decision in November of 2022 to reactivate Donald Trump's account. Facebook then did the same in January 2023, and while it gave neutral reasons for doing so, one does wonder whether competitive pressures (in particular, the fear of losing Trump supporters as users) were a factor in that decision. And even during the period when Trump was banned from the major social media platforms, he still had alternative means to communicate, including the traditional media, alternative platforms such as Telegram, and eventually his own social media platform Truth Social. All of which is to say that even the most powerful social media figures such as Mark Zuckerberg cannot exercise the kind of universal control over speech that the government of even a small political jurisdiction can.
It should also be noted that insofar as domestic law imposes an obligation on platforms to respond to complaints from the public, as for example Rules 3(2) and 4(c) of India's 2021 IT Rules do, members of the public have some ability to influence and control platform content moderation. This is in sharp contrast to state censorship, over which the public has little control (except, in democracies, ultimately at the ballot box). Put differently, while in the first instance platforms establish their own content moderation policies, ultimately they must also comply with state rules and, in some jurisdictions, public complaints. What the government chooses to censor, however, is strictly up to the government alone, subject only to often-loose constitutional constraints.
This takes us to the last, and most fundamental, difference between governments and private internet firms regarding speech: the question of motivation. The motivations of political leaders regarding free speech stem from a basic fact, which is that political leaders of all stripes want to stay in power. In democracies, that means winning elections. In autocracies, it means suppressing dissidents. But the goal remains the same.
In light of this fact, let us first consider the motivations of democratically elected leaders regarding speech. In democracies, free speech is foundational and essential. Without free speech, citizens cannot meaningfully discuss public policy or the achievements and failings of elected leaders, and so cannot cast their vote intelligently. And more broadly, freedom of expression and related liberties such as assembly and association are the crucial, necessary tools with which citizens engage and communicate with their leaders. But from the point of view of elected officials, free speech is of course a threat, because it can be used to reveal their errors and weaknesses. Over time, this in turn can undermine support for them, and so their ability to prevail in elections. Hence the motivation to censor unfavorable speech. Of course, elected leaders must be careful in how they censor, or the censorship itself becomes a political problem, but so long as leaders target minority or unpopular viewpoints, they can often get away with suppression. After all, democratic leaders do not need universal support, just that of a majority of citizens. That is why constitutional protections for freedom of speech, ideally enforced against elected leaders by an independent judiciary, are an essential element of a successful democracy.
Now consider autocracies. Here, the motivation to suppress speech is even more obvious—speech is the primary and essential means to organize opposition to autocratic leaders. It is no coincidence that the largest and most successful autocracy in the world—China—also has the most elaborate and successful censorship systems. And unlike democratic leaders, autocratic leaders do not face democratic checks on their desire to censor.
Finally, consider the motivations of private social media platforms. Unlike government officials, at heart the goal of such firms is to maximize speech, because speech is, in some sense, the product they are providing. To be more precise, platforms host speech to attract users, and then make money by selling access to those users to advertisers. Platforms cannot adopt aggressive rules restricting content because their financial goal is to maximize users, and to maximize users they need to maximize the speech that attracts them. And from the point of view of the platform, it is entirely irrelevant whether the speech it hosts is favorable to the government, unfavorable to the government, or has nothing to do with government—the more the merrier. Furthermore, even content that is unpopular with the majority of users is typically of interest to some elements of the population, and so, to maximize users, platforms are incentivized to permit that speech. It is only when speech is so unpopular with users that it is likely to chase them off the platform that platforms will want to suppress it. That, then, is the role of content moderation policies: not to suppress speech broadly, or to tilt the political dialogue, but rather to suppress the worst of the worst—e.g., terrorist propaganda, hate speech, threats, and (for some platforms such as Facebook) pornography—that is likely to repel significant numbers of users, while otherwise maximizing speech in order to maximize engagement and profits. Governments are different: even in democracies, they have no incentive to permit unpopular speech, since their interest lies in pleasing the majority, not niche minorities.
These points may seem obvious but they have an important implication, which is that contrary to current orthodoxy, we should be far more trusting of platforms restricting speech than politicians restricting either speech or platforms. The reason is that platforms have no systematic anti-speech bias, but government officials most certainly do, at least as to speech critical of them. As a result, it is as predictable as the sun rising in the east that anytime a government regulates in the expressive sphere, including regulating platforms, one of the core purposes of the regulation will be to maximize speech favorable to the government, and to minimize speech unfavorable to it. This is obviously true in autocratic states like China; but it is also true of democratically enacted legislation such as the laws recently enacted in the U.S. states of Florida and Texas, both of whose governors publicly admitted (indeed, emphasized) that the purpose of the laws was to enhance conservative voices (both Governors are leading conservatives, and members of the Republican Party).
In short, government remains a much greater threat to free speech than internet platforms, not only because of the former's monopolies on violence and control, but also because of its perverse incentives. The primary motivation of internet companies, on the other hand, is to make money, which in the free speech sphere is actually quite innocuous—after all, that is the motivation that drives all privately owned media. Of course, the motivation to maximize engagement, and so profits, can have other perverse consequences, such as encouraging platforms to emphasize incendiary content or to foster addictive behavior. But it is not a threat to freedom of speech as such.
Notes

1. NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1205 (11th Cir. 2022); NetChoice, L.L.C. v. Paxton, 49 F.4th 439 (5th Cir. 2022).
2. Germany Starts Enforcing Hate Speech Law, BBC (Jan. 1, 2018), https://www.bbc.com/news/technology-42510868.
3. General Data Protection Regulation 2016/679, 2016 O.J. (L 119), art. 17, https://gdpr-info.eu/.
4. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
5. Information Technology Act, 2000.
6. It should be noted, moreover, that governments can exercise control over platforms in many different ways. The primary text focuses on formal, legal regulation, but the mere threat of regulation or enforcement action can also cause platforms to comply with the wishes of officials. Governments have also at times used financial incentives such as grants or advertising purchases to effectively bribe platforms to support government policies. Indeed, there may be occasions when platforms voluntarily cooperate with governments to obtain mutually beneficial outcomes. The key is that in all of these situations, free and independent platform decisionmaking has been compromised to serve state interests.
7. Germany Starts Enforcing Hate Speech Law, BBC (Jan. 1, 2018), https://www.bbc.com/news/technology-42510868.
8. Facebook Community Standards, https://transparency.fb.com/policies/community-standards/.
9. Sameer Yasir, As India Tries to Block a Modi Documentary, Students Fight to See It, New York Times (Jan. 25, 2023), https://www.nytimes.com/2023/01/25/world/asia/india-bbc-modi-documentary.html.
10. Anton Troianovski and Valeriya Safronova, Russia Takes Censorship to New Extremes, Stifling War Coverage, New York Times (Mar. 4, 2022), https://www.nytimes.com/2022/03/04/world/europe/russia-censorship-media-crackdown.html.
11. Meta, https://about.fb.com/news/2023/01/trump-facebook-instagram-account-suspension/.
12. While the extent of constitutional protection for online content is beyond the scope of this essay, it should be noted that unlike in the United States, whose First Amendment provides robust protection for such speech, other jurisdictions are less protective. The European Court of Human Rights, for example, has interpreted Articles 10 and 17 of the European Convention on Human Rights to permit many limitations on free speech; and Article 19 of the Indian Constitution similarly recognizes a number of permissible grounds to limit speech.
13. The one caveat here is that if the government and/or political parties affiliated with the government are themselves a major source of platform profits, say from purchasing political advertising, then there might be occasions when platforms find it profitable to block anti-government speech in order to retain government business, to the detriment of other users. Such situations seem likely to be relatively uncommon, however, because political advertising constitutes a tiny fraction of overall advertising revenues for platforms—Facebook, for example, is the single largest conduit for online political ads, yet political advertising constituted less than 1% of company revenues. https://www.businessinsider.com/zuckerberg-facebook-political-ad-revenue-2020-10.
14. Id.
15. NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1205 (11th Cir. 2022); NetChoice, LLC v. Paxton, 1:21-CV-840-RP, 2021 WL 5755120, at *1 (W.D. Tex. Dec. 1, 2021).
Ashutosh Bhagwat is Distinguished Professor of Law and Boochever and Bird Endowed Chair for the Study and Teaching of Freedom and Equality, University of California, Davis School of Law (email@example.com).
Sukarm Sharma, an undergraduate student at NLSIU, provided research assistance on the Indian aspects of censorship and regulation.