Since the launch of ChatGPT late last year, Congress has been racing to catch up with the great promise and peril presented by the rapid deployment of artificial intelligence (AI). Just this year, seven Senate and House committees have held hearings on AI, and more are planned. Many bills have been announced, although the text of these proposed laws is often unavailable. Some bills target generative AI like ChatGPT specifically, but most also cover other forms of AI that have long raised concerns, such as the recommendation algorithms used by social media platforms and tools used to make decisions about matters such as eligibility for social services or credit.
In contrast, the European Union is well on its way to developing a robust suite of regulations. The European Council and European Parliament have published their own versions of the European Union Artificial Intelligence Act (EU AI Act) based on the European Commission’s original proposal, with the Parliament issuing its version in June. Negotiations on a final regulation are underway. These proposals cover the range of AI tools that have been deployed over the last several years.
A few themes emerge from this flurry of activity, suggesting some conceptual points of consensus between the United States and Europe, as well as points of divergence. First, unlike Europe, U.S. lawmakers have been reluctant to declare any uses of AI as being too dangerous to deploy. Second, while the model of requiring transparency and risk management from AI developers—which is at the heart of the European approach—is reflected in documents such as the January 2023 AI Risk Management Framework, published by the National Institute of Standards and Technology (NIST), the October 2022 White House Blueprint for an AI Bill of Rights, and some legislative proposals, it is unclear whether and how these principles will be incorporated into a binding legal framework. Third, there is broad agreement that a regulatory agency is needed, but little consensus on whether existing agencies should be given additional authorities or whether a new agency should be established. Fourth, given the stronger regulatory base in Europe, it is likely that approaches to liability will differ considerably—at least in the near term.
1. Bans and High-Risk Designations
The European approach would ban AI systems that create an “unacceptable risk,” although there is not yet consensus on which systems meet this threshold. The European Parliament’s version of the EU AI Act, for example, lists predictive policing and the use of real-time remote biometric identification systems (e.g., facial recognition) in public spaces as “unacceptable risk systems.” The other two European bodies have taken a more limited approach as to which systems should be banned outright. But all three bodies broadly agree that certain types of AI should be treated as “high-risk,” including systems used in determining eligibility for immigration and employment, and in deciding who receives public services and benefits. The European Parliament also recently added AI-based tools designed to influence elections and voting behavior to the potential “high-risk” category. Deep fakes and chatbots are classified as limited risk systems subject to less regulation, while no requirements are imposed on minimal or no-risk systems such as spam filters or AI-enabled video games.
U.S. lawmakers have introduced bills that address certain categories of AI use that could be considered “high-risk,” such as the use of AI by minors, in election advertising, in nuclear warfare, in the workplace, and to make certain “critical decisions” (e.g., determinations affecting access to education, employment, financial services, healthcare, and housing), but it is unclear whether any of these will garner broad support. On facial recognition—a key concern for civil society groups—multiple bills have been introduced to little effect.
Just last week, Axios reported that Senator John Thune (R-SD) was circulating a discussion draft of a bill that appears to create a scheme for categorizing AI systems based on risk. The categories are reportedly: “critical high-impact AI” (i.e., biometric identification, management of critical infrastructure, criminal justice, or fundamental or constitutional rights); “high-impact AI” (i.e., systems that impact housing, employment, credit, education, physical places of public accommodation, healthcare, or insurance in a manner that poses a significant risk to fundamental constitutional rights or safety); and generative AI (i.e., a system that generates novel outputs based on a foundation model).
2. Evaluations and Transparency
The European scheme relies heavily on evaluations and transparency, buttressed by the oversight authority of a regulatory body. Purveyors of most “high-risk” AI systems must assess them prior to release to ensure that they conform to a host of specific requirements, including evaluating the risks posed by the system, verifying the accuracy, resiliency, and security of the tool, meeting standards for data collection and recording, and maintaining documentation that allows regulators to check the creation and operation of the system. The European Parliament would also require “high-risk” AI systems to identify and mitigate risks to fundamental rights prior to deployment, including impacts on marginalized persons or vulnerable groups and on the environment.
Companies must monitor the use of “high-risk” systems once they are on the market, allowing for the identification of emerging risks. National regulatory authorities have access to training, validation, and testing datasets used by the AI provider. And authorities charged with protecting fundamental rights under EU law may examine an even broader range of material. For systems that present a risk to health, safety, or fundamental rights, the regulatory authority may require corrective action and even remove the system from the market.
U.S. lawmakers are further from developing such a detailed regulatory framework, although many of these ideas are reflected in proposed legislation such as the Algorithmic Accountability Act of 2022 (requiring self-assessments of AI tools’ risks, intended benefits, privacy practices, and biases) and the American Data Privacy and Protection Act (ADPPA) (requiring impact assessments for “large data holders” when using algorithms in a manner that poses a “consequential risk of harm,” a category that undoubtedly includes at least some types of “high-risk” uses of AI).
However, Senator Thune’s model—which has not been publicly released—would reportedly require the Commerce Department to “develop a five-year plan for companies to test and certify their own critical high-impact AI systems to comply with government safety standards,” with a different (presumably less strict) certification scheme for high-impact systems. Generative AI systems would be required to comply with these schemes if they met the definition of critical high-impact or high-impact AI.
Senate Majority Leader Chuck Schumer has also announced he is leading a bipartisan effort to develop a policy response, based on the SAFE Innovation Framework. While details are scant, the framework emphasizes the need for AI systems to be accountable and responsible and “address concerns around misinformation and bias,” to be transparent, and to align with democratic values by protecting elections and avoiding potential harms.
All versions of the EU AI Act require that AI systems “intended to interact with natural persons” be designed so that individuals are aware they are interacting with an AI system. They also require that AI-generated content—specifically image, audio, and video content that could be construed as an authentic depiction of reality, such as a fake video depicting a person without their consent—be clearly labeled. Congress, too, seems convinced of the need for labels to ensure that the public understands when it is viewing AI-generated content. Several bills propose labeling such content.
Notifying people who are interacting with or exposed to decisions that rely on AI is also a key area of concern. The European Parliament draft requires that, “where appropriate and relevant,” people must be informed about which functions are AI-enabled, whether there is human oversight of the system, who is responsible for the decision-making process, and their right to seek redress for harm. A similar principle is reflected in the White House Blueprint for an AI Bill of Rights, as well as other proposed legislation. The Transparent Automated Governance Act, for example, would require government agencies to provide disclosure and an opportunity for appeal when using automated systems to interact with the public or make “critical decisions,” such as determinations affecting access to education, employment, financial services, healthcare, and housing.
3. Regulator Required
The certification schemes being considered in Europe are buttressed by strong regulatory authority. The EU AI Act requires member states to establish or designate national authorities to implement the Act and gives those authorities the power to oversee AI systems and access to information on “high-risk” systems. The Parliament version gives this authority more teeth, requiring states to designate one “national supervisory authority” to serve as a national leader in implementing the Act and allowing it to conduct unannounced site visits and remote inspections of “high-risk” systems, obtain evidence to identify non-compliance, and acquire samples of AI systems for reverse engineering.
In hearings, Congress has considered creating a federal agency or commission dedicated to regulating and overseeing AI development and use. Recent bills, like the Digital Platform Commission Act, would create an expert federal agency to promulgate comprehensive regulation of digital platforms, which would presumably include AI systems. The Algorithmic Accountability Act of 2022 proposed giving the Federal Trade Commission—the agency responsible for consumer protection—supervisory authority.
Senator Thune’s proposal seems to put the Commerce Department in charge of supervising AI, but the extent of the information companies would have to provide to the regulator and the robustness of its authority are unclear. Axios reports only that “if noncompliance is discovered by either Commerce or the company and not appropriately remedied, Commerce could take civil action against the company.” Obviously, a self-certification scheme without robust regulatory oversight would fall far short of the European model.
4. Liability
The EU AI Act would impose steep administrative fines for violating the Act, with Parliament’s recent version creating a right to judicial remedy against certain decisions by a national supervisory authority. Certain remedies also may be available under other European laws, such as the European Charter of Fundamental Rights, the General Data Protection Regulation (GDPR), the Product Liability Directive, anti-discrimination directives, and consumer law. Violations of the e-Commerce Directive and Digital Services Act (DSA)—which requires reporting on automated content moderation rules, as well as risk assessments and mitigation measures for large platforms and search engines—provide an additional avenue for liability, including fines and compensation for those harmed. And last year, the European Commission released the Proposal for an AI Liability Directive which, if adopted, would dramatically alter civil liability rules by creating a rebuttable presumption of causality regarding claims for damages caused by AI systems. The Commission also has proposed updating the EU’s Product Liability Directive to cover AI software.
While the United States does not have a similar comprehensive data protection framework on which to build, Congress has recently highlighted the need for data privacy laws to restrict AI companies’ use of personal data and provide users the right to delete personal information. The 2022 ADPPA, which passed the House Energy and Commerce Committee by a 53-2 vote but has not been reintroduced this year, proposed prohibiting companies from collecting personal data unless necessary to provide a product or service or to effect certain enumerated purposes—such as authenticating users, protecting against spam, completing transactions, or performing system maintenance and diagnostics—with heightened restrictions on the collection, use, and transfer of sensitive personal data such as biometric, genetic, geolocation, and health information.
Agencies such as the FTC, the Justice Department, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission have made it clear they will enforce civil rights, non-discrimination, fair competition, and consumer protection laws on the use of automated systems, although they may face challenges due to the limits of their authority and because existing laws may not be easily applied to AI. So far, Congress has mainly focused on the applicability of Section 230 of the Communications Decency Act, which generally protects online platforms from civil liability for hosting content created by third parties and which has been the target of Congressional interest for years. A bill recently introduced by Senators Josh Hawley and Richard Blumenthal would remove this immunity for platforms that use or provide generative AI systems, meaning that AI companies like OpenAI could be held liable for content that users generate with their tools. These remain unsettled areas of law, and the ultimate outcome is quite uncertain.
The European Union and the United States have both committed “to a risk-based approach to AI to advance trustworthy and responsible AI technologies,” but what that means for U.S. law and regulation remains to be seen.