The crisis resulting from the violent white supremacist riots that breached the U.S. Capitol on Jan. 6 has once again placed in the spotlight questions about freedom of speech and the role of digital platforms in our lives.
Facebook and Twitter’s decisions to suspend President Donald Trump’s accounts; Google and Apple’s suspension of Parler from their app stores; and Amazon’s decision to remove Parler from its cloud have provoked differing reactions that break down broadly along partisan lines. They range from welcoming the decisions as safeguarding democracy and preventing future terrorist attacks, to condemning the moves as forms of private censorship.
The underlying question in this debate is whether private companies should be the ones who decide which content can be made available and shared among the millions of people who use these platforms and which cannot. Do these companies have an obligation to suspend content that constitutes incitement to violence when there is a risk of harm to people or damage to property? And, if so, how should they decide when this is the case, and based on what parameters? Is there a robust system in place for this kind of moderation, or are we simply subject to what are essentially one-man decisions? Does Trump’s speech deserve different treatment from that of an average user? Do platforms apply their policies evenly around the world?
There are many questions to answer, and while none of them is new, civil society, academics, decision-makers, regulators, and the platforms themselves have been struggling to provide answers for years. Now, the debate has reached a crescendo in the wake of the attack on the Capitol.
What seems clear from Big Tech’s moves following Jan. 6 is that they are emergency responses that try to fill a huge gap in law and government policy. For a long time, Trump used Facebook and Twitter as vehicles to drive responses from his followers based on disinformation. Since November’s election, he prepared the ground for the assault on the Capitol through a variety of spokespeople, many of whom found a home on the social network Parler. Parler spent three years carefully crafting its image as a place to share content without censorship, paving the way for hateful and violent views to proliferate, and it continued to stoke hate and distrust in the days leading up to the attack. The tech companies waited until it all ignited into terrible violence, the loss of five people’s lives, and what many now realize is something much more dangerous. In the days following the attack on the Capitol, various companies announced they would no longer host Parler, forcing its users to migrate to new platforms.
While ARTICLE 19, where I work, welcomes the decisions taken by Big Tech in the last few days, the enormity of the challenges posed by the equivocal position of these companies requires a determined effort to prevent rushed, reactive decision-making in the future.
We will not protect free speech or democracy, or counter the toxic polarization and violence we have seen develop in various contexts over the last few years, unless we develop a structural response. We need procedures based on international human rights standards, not the apparent goodwill of a powerful man (or woman, although the companies in question are all currently led by men) under political pressure. The problem is that most of the world’s attention is on the visible part of the problem, not the part that lies beneath the surface. Like an iceberg, its greatest mass lies under the water, and that is why we must look at the layers of internet infrastructure and identify where the problem really lies.
Yes, rules on content moderation are essential. Demanding that social media platforms be guided by human rights standards in their content moderation practices is fundamental if we are to avoid censorship and other violations of individuals’ free expression rights, as well as to guarantee that illegal content, such as clear incitement to violence, is not spread online. But violations, and especially censorship, can occur at different points in the system, as recent events clearly demonstrate.
The infrastructure underpinning social media content means that if people want to share or access content, they have to use a social media application whose terms of service allow that content to circulate. They also need an app store where they can find and download the app. And the app, in order to work, requires a cloud provider to host it. Those are the main, although not the only, intermediary gateways our free speech online passes through. Some call these intermediaries “chokepoints”; others call them “gatekeepers.” These businesses guard, and can impede, access to the market, and they control the flow of content in the digital environment.
As we saw, a number of gatekeepers took action following the Capitol assault. Apple gave Parler 24 hours to mitigate the “planning of illegal and dangerous activities” occurring on its service or face expulsion from its App Store. The App Store is the only means of downloading mobile apps on Apple devices, so Parler’s removal means the app can no longer be downloaded on devices where it isn’t already installed, or updated on devices where it is.
Google also suspended Parler from its Play Store on similar grounds, recently stating that “for us to distribute an app through Google Play, we do require that apps implement robust moderation for egregious content.” However, because Google allows other app marketplaces on Android and its decision applies only to the Play Store, people with Android devices will still be able to get the app if they are willing to take a few extra steps.
Amazon notified Parler that it would be removed from its cloud-hosting service, Amazon Web Services. Amazon argued that calls for violence propagating across the social network violated its terms of service and said it was unconvinced that Parler’s plan to use volunteers to moderate calls for violence and hate speech would be effective. The suspension is extremely problematic for Parler, which is currently offline and unable to operate until it can find another hosting provider.
There are various ways to read these moves. The decisions can be considered part of Apple, Google, and Amazon’s contractual freedom. As economic actors, they are, in principle, free to decide whom they provide their services to, and on what terms and conditions. Nonetheless, the impact of these decisions on users’ free expression should be considered. Do we want the Big Tech giants to “police” how the businesses that depend on their services treat end-users? Do we want them to set the standards not only in the markets where they operate (in the cases at stake, app stores and cloud services), but also in those where their clients operate (in this case, social networks), and to do so based on opaque, profit-oriented logic that they can change at any time for reasons of convenience or other economic considerations?
One way to solve this problem would be to put limits on Big Tech’s contractual freedom. However, this solution looks only at the content or services these companies may or may not carry, rather than at the overall marketplace. Any invasive regulatory intervention with regard to content should be accompanied by a careful balancing of conflicting rights, and by a strict necessity and proportionality test.
Yet, all the challenges above are amplified by the fact that, as explained, we are talking about key intermediaries, or gatekeepers. Indeed, a more attentive look at market dynamics unveils a fundamental factor at stake: The players that are taking action these days all enjoy a position of power in the market and constitute the main channels for people to exercise their freedom of expression and information rights online. The problem, then, lies in the fact that we are not talking about a cloud provider or an app store among many, but about companies that account for a very large percentage of the market, and that, therefore, are able to rule the roost. If an app does not match their standards, whatever they are and however they are set, end-users might not be able to access it at all, or only with certain limits or hurdles. Do we really want Amazon, Apple, and Google to decide which apps we can use and which we cannot?
Given this, a better way to meet the challenge is to counter the power of Big Tech by diminishing it, rather than regulating it: to return to competition, rather than to regulated monopolies or oligopolies. Indeed, the main problem, as we have repeatedly said, is the excessive centralization of power at various layers of the communication infrastructure required to exercise our free expression rights.
To solve the numerous challenges linked to content moderation, the spread of incitement to violence, censorship, and so on, we certainly need standards based on human rights law, but we also need to diminish the control and power that a handful of players are able to exercise over the communication infrastructure. It is not with chokepoints controlled by profit-oriented companies that our rights will be respected and guaranteed and our democracy protected. It is with markets open to new and innovative players, fair competition, and more options for people to choose from. We need decentralized power: a variety of providers with different business models and systems that compete fairly. Users need viable alternatives and the option to switch providers. Any other solution appears short-sighted: a patch that treats the symptoms but leaves the cause unaddressed.
Decision-makers and regulators should impose pro-competitive measures to achieve those objectives, and to implement and protect an internet infrastructure that is free, open, and non-discriminatory. They should call for remedies that lower barriers to market entry and forbid gatekeepers from excluding competitors or leveraging their market power to control access in other markets. They should also put in place measures to empower users, providing them with viable alternatives. Interoperability, fair and non-discriminatory access, and transparency are among the tools that could help, at any layer of the infrastructure.
The protection of free speech and democracy does not require us to look only at how Facebook and Twitter moderate content. It also demands that we take a step back and look at the entire iceberg. We need measures that shape the marketplace not as the home of gigantic power in the hands of a few, but as an open place for innovation, competition, and greater public welfare. That way, our rights will no longer depend on the ad hoc decisions of unaccountable monopolies or gatekeepers.