On August 6, President Donald Trump issued an executive order banning TikTok in the United States within 45 days and directing its parent company, ByteDance, to divest itself of the popular app within the same period. According to Trump, the data the app collects, including location and browsing data, make TikTok a threat to U.S. national security.
It is not the first time U.S. regulators have raised security concerns over Chinese apps used by Americans. Back in March, the LGBTQ dating app Grindr was sold by its Chinese owner, Beijing Kunlun Tech, to the U.S.-based company San Vicente Acquisition, following such concerns.
Since late June, the Indian government has banned a number of Chinese apps. The list includes TikTok, Baidu, WeChat, and, more recently, the popular gaming app PUBG. According to the Indian government, these apps have engaged in activities that are “prejudicial to sovereignty and integrity of India, defence of India, security of state and public order.”
Public authorities are also banning apps on grounds other than national security. For instance, the Pakistan Telecommunication Authority (PTA) has sent notices to the management of Tinder, Grindr, and three other dating apps, claiming that they do not adhere to local laws and that they disseminate “immoral content.” The PTA has asked the apps to remove their dating services and to moderate content in compliance with local laws.
Apps are an important instrument for people to communicate and to access and share content. They also provide opportunities to connect and entertain individuals and communities. As such, the contemporary exercise of individuals’ freedom of opinion and expression owes much of its strength to these apps, which act as a gateway for information and an intermediary for expression. This is all the more true for apps used by already marginalized or criminalized communities.
Notwithstanding this, the narrative around bans focuses on security, privacy, morality, competition, or regulatory issues, failing to take into account the impact such bans can have on individuals’ right to freedom of expression.
The grounds for banning apps are often vague and broad and do not comply with international human rights law
Since apps are a tool for exercising the right to freedom of expression, any restriction on their use must meet international freedom of expression standards. This means that a limitation can only be justified if it meets the three-part test that all restrictions on free expression are required to satisfy under the international framework: the restriction must be provided by law and pursue a legitimate aim, and States must prove that it is appropriate and narrowly tailored and that no less restrictive alternative could achieve the same protective function.
National security has long been considered a legitimate ground for limiting citizens’ fundamental rights. States are allowed to take a variety of measures to protect their citizens from national security threats, even when doing so entails a certain degree of limitation of citizens’ rights and freedoms.
However, some conditions and guarantees have to be respected. For example, the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights highlight that national security cannot be invoked as “a pretext for imposing vague or arbitrary limitations.” In a similar vein, the Johannesburg Principles on National Security, Freedom of Expression and Access to Information recall that a restriction sought to be justified on the ground of national security is not legitimate unless “its genuine purpose and demonstrable effect is to protect a country’s existence or its territorial integrity against the use or threat of force, or its capacity to respond to the use or threat of force.”
The new U.S. bans
Trump’s executive order states that TikTok’s data collection “threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information” — potentially allowing China to track the locations of U.S. federal employees and contractors, build dossiers of personal information for blackmail purposes, and conduct corporate espionage. In addition, the order claims that TikTok
reportedly censors content that the Chinese Communist Party deems politically sensitive, such as content concerning protests in Hong Kong and China’s treatment of Uyghurs and other Muslim minorities. This mobile application may also be used for disinformation campaigns that benefit the Chinese Communist Party, such as when TikTok videos spread debunked conspiracy theories about the origins of the 2019 Novel Coronavirus.
The wording of the order relies on vague and broad concepts and on hypothetical scenarios. The order says TikTok “threatens to allow” or “potentially” allows this access, not that it allows it right now or that there is evidence or reasonable grounds to believe it currently does. On censorship, the order uses the adverb “reportedly,” leaving room for uncertainty about the source of the information.
In addition, the risk of disinformation, including the spread of conspiracy theories, does not amount to a national security threat that must be addressed by banning an app. For one thing, the argument proves too much, as the spread of disinformation and conspiracy theories has been widely reported on U.S.-based platforms as well. For another, this spread could be addressed with less drastic measures, such as adding fact-checking labels or placing notifications on content that violates apps’ policies on disinformation. More generally, governments should fight disinformation not through censorship and bans, but rather by promoting a free, independent, and diverse communications environment, including both diversity of content and variety of sources. They should also work together with companies and civil society to develop and maintain independent, transparent, and impartial oversight mechanisms that ensure the accountability of apps’ content-restriction policies and practices.
All in all, the justifications grounding the U.S. ban do not comply with international standards on free expression. The order does not describe the risks to national security in enough detail to satisfy the necessity test. In addition, it fails to consider less invasive measures to address the problem, thus violating the proportionality test under the international framework.
A similar situation unfolded earlier this year with Grindr. There as well, the Committee on Foreign Investment in the United States (Cfius) put forward broadly defined and generic considerations to compel Kunlun to sell Grindr. Cfius argued that the Chinese government could have used the personal data given to the app by its 3.3 million users, including U.S. officials and military personnel, to blackmail them based on their sexual preferences or HIV status. Cfius’s fears were couched in such vague terms that they could apply to any deal involving a Chinese company that collects users’ personal data. The vague wording used in both the Grindr and TikTok cases might be laying the groundwork for future actions, perhaps looking at other countries besides China. It leads one to wonder: Where would such a list end?
The fast-expanding ban in India
In India, the ban is being imposed because all of the apps on the list are considered to be “prejudicial to sovereignty and integrity of India, defence of India, security of state and public order.” No details are provided about this prejudice. The Ministry of Information Technology has pointed to the misuse of some mobile apps available on Android and iOS platforms for stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers located outside India. The Indian government considers the compilation of these data, and their mining and profiling by elements hostile to the national security and defense of India, a matter of very deep and immediate concern that requires emergency measures.
Once again, the justification is framed with reference to an extremely vague scenario, and the claims are not adequately detailed. Among other issues, the order does not explain under what legal framework the data transmission is considered “unauthorized,” nor why the mere transmission of data to servers outside India is sufficient to endanger the sovereignty and integrity of the country. Here again, the legitimacy of the measure is unclear, and the necessity and proportionality tests are not adequately performed. Risks to privacy and data-protection rights can be minimized or fixed with measures more proportionate than a total ban on use, which would therefore have less of an impact on users’ free expression rights. As repeatedly stated by United Nations Special Rapporteurs, and recently recalled by the U.N. High Commissioner for Human Rights,
Safeguarding everyone’s right to freedom of expression and privacy, and promoting public participation, are paramount goals shared by most stakeholders engaged in discussions about how online spaces should be regulated. Preserving the immense benefits brought by digital technologies to our social and political lives, while addressing the numerous risks, are key challenges for law makers today.
Apart from the wording of the bans, it seems that in a number of States the narrative used to justify them includes a need to maintain technological sovereignty with regard to foreign countries. For example, Trump raised these types of concerns last year, seeking to impose restrictions on U.S. technology that could be exported overseas, in particular to China, so that U.S. industry would dominate the next generation of technologies. Similar concerns seem to have motivated the Chinese government when it added recommendation algorithms, a core function of TikTok, to the list of technologies that require approval for export. In this way, bans are being used more as a tool in trade battles than to protect national security, but trade objectives shouldn’t be pursued at the expense of people’s rights and freedoms.
Recommendation algorithms as export-controlled items
One of the most interesting developments of the TikTok saga in the United States came when the Chinese Commerce Ministry added personalized content-recommendation algorithms to its list of export-controlled items. This move implies that a sale of TikTok to a U.S. buyer would not include TikTok’s recommendation algorithm, which is essentially the core of its business.
Usually, States use export controls to promote technological progress and economic and technological cooperation, and also to protect their own economic security. Export controls are also an important element in reducing the risks posed by the private surveillance industry and the repressive use of its tools. The relevant international framework, the Wassenaar Arrangement, focuses on arms and dual-use goods and technologies, but it is far from perfect. To start with, it does not include enforcement mechanisms but relies on national implementation. In addition, the Wassenaar Arrangement has not meaningfully limited the spread of surveillance technologies and their use for repressive purposes. Furthermore, some States, strongly encouraged by their national businesses, have shown strong resistance to the inclusion of stricter controls, thus diluting human rights safeguards.
The Chinese Commerce Ministry’s move highlights at least two important challenges with regard to export controls. On the one hand, it confirms the need for greater transparency and responsibility in export regimes, which should include a framework under which the licensing of any technology is conditional upon a national human rights review and on companies’ compliance with the U.N. Guiding Principles on Business and Human Rights. On the other hand, it reminds us that the misuse of export controls can harm people’s human rights as much as their absence.
The inclusion of content-recommendation algorithms in the export-control list shows that the Chinese government considers them to be of significant strategic importance. The choice cannot go unnoticed, as this kind of algorithm is at the center of harsh debates worldwide. Among other things, it is accused of fueling the spread of disinformation, hate speech, and illegal content, as well as reducing the diversity of content each user is exposed to. Some argue that the recommendation algorithms used by large platforms can go as far as to manipulate and polarize public discourse and influence electoral results.
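To make the diversity concern concrete, consider the following minimal sketch in Python. It is purely illustrative and does not reflect TikTok’s or any platform’s actual system; the predicted_watch_time function, the topic-affinity scores, and the sample videos are all invented. The point is structural: when ranking optimizes a single engagement signal, content resembling what a user already watches keeps winning.

```python
# Purely illustrative: a naive engagement-maximizing recommender.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topic: str

def predicted_watch_time(topic_affinity: dict[str, float], video: Video) -> float:
    # Hypothetical stand-in for a learned engagement model: score a video
    # by the user's historical affinity for its topic.
    return topic_affinity.get(video.topic, 0.0)

def recommend(topic_affinity: dict[str, float], candidates: list[Video], k: int = 3) -> list[Video]:
    # Rank purely by predicted engagement: no diversity, quality, or
    # public-interest signals, so familiar topics dominate the feed.
    ranked = sorted(candidates, key=lambda v: predicted_watch_time(topic_affinity, v), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    affinity = {"dance": 0.9, "news": 0.1}  # invented engagement history
    feed = [Video("v1", "dance"), Video("v2", "news"),
            Video("v3", "dance"), Video("v4", "cooking")]
    for video in recommend(affinity, feed):
        print(video.video_id, video.topic)  # dance videos crowd out the rest
```

Real systems combine many more signals, but the optimization target, and the trade-offs baked into it, is precisely what the transparency demands discussed below ask platforms to disclose.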
If recommendation algorithms are of such strategic importance, one might wonder whether they should be subject to export rules while remaining completely unregulated domestically. Indeed, governments and regulators around the world are struggling to cope with the challenges these algorithms raise. At ARTICLE 19, a free speech organization, we have contributed to the debate with a series of calls and recommendations for more transparency in decision-making processes, adequate internal complaint mechanisms, and effective remedies in case of violations of users’ freedom of expression.
In addition, ARTICLE 19 argues that these algorithms would not have the same strategic importance if markets were less concentrated and major platforms did not act as gatekeepers. In a more open and competitive scenario, with numerous options easily available to users, the impact of each recommendation algorithm on people’s free expression rights — including, at the societal level, the capacity to influence discourse — would be greatly diminished. Of course, certain features of these algorithms, and therefore certain elements of the business model built around them, that are incompatible with human rights standards or with fundamental policy objectives such as media diversity need to be changed or prohibited regardless of whether the market has one player or dozens. Still, a competitive market with a plurality of players is certainly a better scenario than an oligopoly or quasi-monopoly for spurring innovation, improving service quality, and providing more choices for consumers.
Justifications other than national security appear problematic too
If bans on apps often appear unjustified when based on national security grounds, this is all the more so when the ban or limitation is grounded in the claim that the app’s service contributes to the spread of “immoral content.”
General Comment No. 34 states that the determination of what constitutes “public morals” must not be based on principles derived exclusively from a single tradition. Rather, it should be understood in the light of the universality of human rights and the principle of non-discrimination. A prohibition grounded in the claim that an app is used to share immoral content may violate Article 19 of the International Covenant on Civil and Political Rights (ICCPR) if it is applied to impose the values held by the government or a theocratic elite rather than to reflect the diversity of views held within society. Further, the pluralism that is essential in a democratic society requires that people, even when in the majority, tolerate speech that they deem offensive.
The ICCPR envisages a high threshold for when offensive speech reaches a degree of harm that would warrant a restriction on expression, and, in this case, a ban or limitation on the use of an app. In Pakistan, the PTA’s notices intimate blanket prohibitions on a spectrum of expression without clearly articulating a discernible threshold that distinguishes offensive expression from expression that causes actual harm to society. The notices appear to provide a framework for imposing a singular conception of morality rather than a mechanism for protecting the public from harm. As such, they cannot be said to pursue a legitimate aim.
All in all, governments that plan to ban apps should duly assess the impact on people’s freedom of expression and subject the measure to close scrutiny of its legitimacy, necessity, and proportionality. Restrictive measures must be used only as a last resort, when other options fall short. Moreover, these bans should be written in more precise and narrower terms, to make it easier to assess whether they meet the three-part test under international human rights law or whether they illegitimately restrict people’s rights.
Apps should comply with the international human rights framework anyway
In principle, requiring apps to guarantee users’ rights as a condition of entering or operating in a market might be a welcome development. To a certain extent, this has been the mantra of the European Commission in its work toward a fair, open, and competitive EU digital single market. As Executive Vice-President Margrethe Vestager often reminds us, the European way ensures that citizens are empowered to decide how their data are used, that technology is developed to serve humans, not the other way around, and that it is shaped to fit EU values.
However, ad-hoc bans are not the right way to go. Far from protecting human rights, the exclusionary effect of bans is problematic for users and businesses alike. Users are deprived of choices, while businesses lose incentives to work on new and better products. In addition, ad-hoc bans lead to national fragmentation, which frustrates the idea of the internet as a free, open, and democratic environment for all people around the world.
To avoid that, we need a technology-governance model that commands consensus and trust all over the world. A governance model based on widely accepted security safeguards and human rights guarantees would make it easier for businesses to operate globally, for users to access services, and for governments to protect their citizens and support fair competition in their markets.
In addition, a multi-stakeholder process would be a more efficient and more legitimate way to impose minimum standards on apps to make sure that they guarantee users’ human rights. A reference framework already exists: the U.N. Guiding Principles on Business and Human Rights, a set of principles and recommendations addressed to both States and private actors, which provide the first global standard for preventing and addressing the risk of adverse impacts on human rights linked to business activity. Moreover, the U.N. Special Rapporteur on freedom of expression has also issued recommendations for States, the private sector, international organizations, and multi-stakeholder processes aimed at guaranteeing that information and communication technologies are developed and deployed in a way that promotes and respects freedom of expression.

More can and should be done. Because of the huge information asymmetry in the market, though, decision-makers and regulators need to be in dialogue with companies in order to make informed decisions and establish red lines. And because the challenges at stake have a huge impact on people’s rights and freedoms, civil society needs to be part of the process too, and people need to be empowered so that they are the beneficiaries of technology rather than the product itself.