Editor’s Note: This article is part of Regulating Social Media Platforms: Government, Speech, and the Law, a symposium organized by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.

For decades, the U.S. Congress has been unable to pass comprehensive online platform regulation. While Congress has stalled, the European Union (EU) and U.S. states have charged ahead. The EU has enacted comprehensive regulatory frameworks such as the Artificial Intelligence Act (AIA), the Digital Services Act (DSA), and the Digital Markets Act (DMA), and U.S. states have introduced issue-specific regulations.

The EU2States pathway proposed here leverages international standards (loosely defined as documents or technical protocols written and maintained by multistakeholder organizations), the EU's market size and propensity to center fundamental rights, and the global nature of the internet to protect consumers in the United States. The path begins with provisions of Europe's laws that are well suited for standardization informing the work of international standards organizations, and it ends with U.S. state laws and regulations referencing those international standards. The specifics for each standard (risk assessments, ad libraries, transparency reporting, photo-sharing portability APIs, researcher APIs, etc.) will vary. The standards will also face a range of challenges, but close monitoring, engagement, and encouragement from civil society, academics, and funders could lead to thoughtful platform regulation in the United States.

Specifically, the EU2States pathway offers ways for U.S. states to move beyond today's narrow regulations, which often center on child safety, non-consensual sexual imagery, and political deepfakes. State-level efforts to implement comprehensive mandates are constrained by the absence of well-resourced agencies to regulate the technology sector. States lack the capacity to continually conduct audits, review risk assessments, and update requirements and guidance as technology changes; and as they try, their processes will continue to be overrun by companies, because civil society and consumer advocates do not have the capacity to contribute to 50-plus separate stakeholder engagement processes. However, U.S. states can delegate the maintenance of requirements to international standards bodies, which are continuously evolving to align with Europe's broader regulations and human rights considerations.

Path Well-Trodden: Web Accessibility and Clinical Trials

Even before the recent package of EU laws, the United States, the EU, and other nations referenced international standards to harmonize regulatory compliance. For example, both the EU and U.S. states require that government websites be accessible and mandate the adoption of the World Wide Web Consortium's Web Content Accessibility Guidelines (WCAG). Outside of technology, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) maintains global pharmaceutical guidelines. The ICH emerged from efforts in the 1980s, as regulators around the world, starting with the EU, then Japan and the United States, had different laws governing safe clinical trials, which made it challenging to develop drugs with a global sample of patients. ICH guidance is frequently cited in U.S. state privacy laws under the permissible-purpose clauses relevant to research (e.g., Virginia's Consumer Data Protection Act § 59.1-576 C.4).

Path Under Construction: AI Risk Management

The AIA tasks European standards organizations, specifically the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), with defining parameters, frameworks, and standardized approaches to the AIA's risk management requirements under Europe's Standardisation Regulation. While CEN-CENELEC represents companies and organizations headquartered in the EU, it has longstanding agreements stipulating that it should not develop standards in the same areas of application as leading traditional international standards bodies such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). CEN-CENELEC may undertake exclusively European projects in cooperation with ISO/IEC where the same national members are present. The current standards work program under the AIA suggests that existing ISO standards on AI risk management will likely serve as references or compliance tools for the AIA, while CEN-CENELEC concentrates on areas not covered by ISO/IEC, particularly standards on fundamental rights and ethics, more normative concepts that ISO/IEC has generally avoided.

Meanwhile, Colorado's SB24-205, Consumer Protections for Artificial Intelligence, similar to the AIA, requires deployers of high-risk AI systems to implement a risk management policy and program. The law stipulates that risk management policies must consider:

the guidance and standards set forth in the latest version of the “artificial intelligence risk management framework” published by the national institute of standards and technology in the United States department of commerce, standard iso/iec 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this part 17.

Colorado’s text is also a nod to the EU2States pathway, using progress already made in the development of AI risk management standards prior to the passage of the AIA and a rare National Institute of Standards and Technology (NIST) publication that includes socio-technical considerations.

An open question for the future of AI risk management is whether international standards bodies such as ISO/IEC will expand to address more normative issues. If they do not, regulators will need to reference multiple documents, raising the question of which organizations will take the lead in developing normative frameworks. If EU requirements for fundamental rights can infiltrate international standards (either traditional bodies like ISO/IEC or alternative organizations), U.S. states could reference those publications.

Deer Trail That Could Become A Path: Platform Transparency and Data Access

The DSA and DMA do not include specific references to the EU Standardisation Regulation, but both acknowledge the broad potential for standards. This means that the DSA and DMA provisions potentially well suited for standards do not currently require a formal process like the one underway for the AIA. Additionally, ISO/IEC and other traditional standards development organizations (SDOs) are not actively developing the standards the DSA and DMA need. This presents an opportunity to shape the multistakeholder bodies overseeing this work to be more inclusive and open than traditional SDOs. Several organizations are working on codes and frameworks that align with mandates in the DSA and DMA (e.g., Open Terms Archive, the Digital Trust and Safety Partnership, the Global Network Initiative, and the Data Transfer Initiative). If these organizations, or new ones, can secure the resources to effectively convene stakeholders and continuously update documentation that meets regulatory requirements, U.S. states can rely on those references. This could include a hybrid approach, such as Colorado's model, which cites ISO/IEC for technical standards while referencing a separate publication for normative concepts.

Current trends in First Amendment case law suggest that for U.S. states to mandate platform transparency, the mandates will need to be narrowly tailored, containing only a fraction of the requirements in EU laws. States, therefore, could pass court-approved legal requirements while listing specific international standards that would constitute compliance. This approach would encourage companies to follow the more expansive international standard for transparency reporting, while acknowledging that attorneys general can enforce only the narrow set of requirements defined in law.

Looking Forward

Tasking international standards bodies with a large role in protecting consumers online is fraught with challenges, including how to tackle normative concepts, copyright protections on traditional standards, and companies' outsized influence in the standards process. Regardless, the EU2States pathway is taking shape in real time. Here are steps that civil society organizations, think tanks, and tech policy scholars can consider taking to ensure these paths lead to the best protections for consumers:

  • Watch Europe’s progress and use of their Standardisation Regulation.
  • Monitor standards (civil society organizations need to approach standards the way they approach legislation: build coalitions to advocate for improvements, write white papers for policymakers, engage with decision-makers within standards bodies, etc.).
  • Serve as members or observers on standards bodies whenever possible.
  • Create and evolve new multi-stakeholder organizations that can draft and maintain open standards where they are lacking.

Additionally, U.S. state governments that are active in technology regulation may want to consider a more structured approach to engaging with international standards, mirroring some of NIST's capabilities at the federal level.

IMAGE: Visualization of EU-U.S. flags composed of data.