Editor’s Note: This article is part of Regulating Social Media Platforms: Government, Speech, and the Law, a symposium organized by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.
There is a growing international consensus that governments should take a more active role in overseeing digital platforms. As 2025 began, this was no longer a theoretical discussion: the past few years brought a surge of legislative action across major economies. The European Union’s Digital Services Act (DSA) and Digital Markets Act (DMA) are now in full force, transforming how major tech platforms are allowed to operate in Europe. Meanwhile, the United Kingdom, Ireland, Australia, and many other countries have passed robust online safety laws that are now entering the enforcement stage. The United States is also an active arena of digital regulation, at least at the state level, although the federal government has yet to enact sweeping legislation of its own. This is an opportune moment to analyze ongoing regulatory efforts and shape the future of digital governance.
To help policymakers and the public navigate this complex landscape, the NYU Stern Center for Business and Human Rights (where I work) undertook a global survey and analysis of online safety regulations. Through a systematic review of 26 laws in 19 jurisdictions, what initially seemed like a morass of requirements revealed itself as a set of discernible approaches to platform regulation. We grouped these approaches into four categories based on the requirements they impose:
Content-based regulation: This approach establishes classes of prohibited content that online services are required to remove. There is significant variation among these measures. While some regulations — for example, the EU’s Terrorist Content Online Regulation — only prohibit material that is explicitly illegal, others — for example, New Zealand’s Harmful Digital Communications Act and the United Kingdom’s Online Safety Act (OSA) — also regulate content defined as harmful or undesirable.
The duties of online services with respect to such content may be reactive — that is, triggered by an official takedown order or user report — or proactive, requiring the platforms’ ongoing monitoring and removal of proscribed content. A more recent variant of legislation prevents online services from taking down content. Often called “must-carry” provisions, these types of regulations — enacted in Florida and Texas but now subject to legal challenges — have been motivated by the notion that platforms should not be allowed to suppress certain “viewpoints.”
Design-based regulation: The design-based approach mandates technical and interface-related changes to achieve certain outcomes, such as protecting users’ data privacy, minimizing exposure to harmful material, and reducing compulsive usage. This approach — on the books in the EU, the United Kingdom, Australia, and several U.S. states (for example, California’s Age-Appropriate Design Code, New York’s SAFE for Kids Act, and Utah’s Minor Protection in Social Media Act) — focuses on upstream harm prevention rather than downstream (after-the-fact) mitigation. These laws regulate platforms as products, targeting their architecture and features.
Certain jurisdictions have attempted to regulate platform designs in a variety of ways — from targeting specific design features, such as default settings or algorithmic recommendation systems, to imposing a general “duty of care” on platforms. Another recurrent type of design-based requirement instructs platforms to allow users to customize key aspects of their online experience, such as their public visibility and susceptibility to geolocation tracking. Design-based requirements appear most commonly in legislation aimed at protecting children, but these requirements can be leveraged to protect all users.
Transparency mandates: The transparency-based approach sets forth requirements for online services to disclose information about their operations, revenue streams, algorithms, and moderation processes. Online platforms can be compelled to make a number of disclosures. By far the most common requirement — in effect, for example, in the EU, the United Kingdom, Australia, and Ireland — is for platforms to produce reports disclosing aggregate data on their moderation of third-party content. A small number of jurisdictions also require platforms to release basic information about their user base, such as the total number of users.
Another emerging trend in transparency regulation is to mandate the disclosure of information about the workings of algorithms that recommend content and target advertisements to users. Other types of transparency requirements include expanding independent researchers’ access to platform data and mandating that platforms subject their disclosures to external audits.
Procedural safeguards: The last approach focuses on platform processes aimed at ensuring basic fairness and accountability. The EU and the United Kingdom, for example, have enacted requirements for platforms to lay out their terms of service in clear and accessible language and to institute internal mechanisms for users to appeal platform actions.
Another common procedural requirement is mandating that platforms conduct risk and/or impact assessments that identify how their products might lead to individual or societal harms and describe efforts to mitigate those harms. Some regulations require that platforms then disclose those assessments to a third party, in which case this procedural safeguard serves a transparency function as well.
These approaches are not mutually exclusive. Rather, they reveal an extensive menu of requirements from which policymakers can choose options that best align with their values and objectives. Most regulations contain elements of these various approaches, resulting in considerable diversity but also some overlap in online safety regulations across the world.
The NYU Stern Center advocates for a combination of regulatory measures consistent with widely accepted human rights standards, including the rights to freedom of expression and privacy as enshrined in the International Covenant on Civil and Political Rights (ICCPR).
Our analysis yields the following recommendations:
1. Ensure that content-based requirements pertain only to content that is explicitly illegal. Governments should not require platforms to remove content that could be harmful but is not illegal, unless the harmful content is defined precisely enough to meet the “legality” standard under international human rights law, including the ICCPR.
2. Compel platforms to disclose information about their business operations and subject those disclosures to external audit and analysis by independent researchers. A key target of disclosure requirements should be platforms’ algorithmic recommendation systems — in particular, the parameters and streams of user data that determine the output of those systems. These disclosures, audits, and data access regimes for researchers should be accompanied by robust safeguards to protect user privacy and legitimate trade secrets.
3. Regulate design features to enhance user agency. Regulators should crack down on the use of “dark patterns,” meaning user interfaces that trick people into buying a product or signing up for something, and should incentivize platforms to create design features that allow users to customize aspects of their online experience that affect their rights and well-being. Any highly prescriptive design-based mandates should be grounded in empirical research and proportional to the regulation’s aims.
4. Ensure that procedural requirements are about more than just box-ticking. For procedural safeguards to be meaningful, regulators need to issue concrete implementation guidance that sets out clear expectations. For example, platforms need to know what an adequate risk assessment should contain or what a functional user reporting mechanism should look like. Regulators should also put teeth in any requirements that platforms fulfill the promises they make to users in their terms of service. If a platform claims to prioritize user safety, as some do, regulators should require that companies demonstrate their investments in trust and safety, including in content-moderator workforces and systems.
In addition, we recommend that regulators delegate enforcement to an independent agency with limits on its authority and ensure that this agency is appropriately funded and staffed with expert personnel. Finally, regulators should define the coverage of their online safety regulations broadly but also differentiate among platforms based on their service (for example, live-streaming versus e-commerce) and reach, tailoring some requirements as appropriate to avoid crippling small platforms or reducing healthy competition.
The growing adoption of online safety regulations reflects a spreading consensus on the need to protect people from harm. The challenge now lies in aligning these measures with human rights standards. Heeding the recommendations above would be a step in the direction of meeting that challenge.