Earlier this month and last month, the National Security Commission on Artificial Intelligence, the Reiss Center on Law and Security at NYU School of Law, the Berkman Klein Center for Internet & Society at Harvard University, and Just Security convened a three-part virtual symposium of experts to debate critical legal issues around the growing use and influence of artificial intelligence (AI). Titled “Security, Privacy and Innovation: Reshaping Law for the AI Era,” the symposium comprised three sessions featuring leading scholars, practitioners, and thought leaders on some of the most difficult and urgent facets of the AI era. In case you missed the event, this recap describes highlights from each panel. Further details of the symposium can be accessed here. These descriptions and observations are our own and are not necessarily shared by each of the panelists.
September 17 – Responding to AI-Enabled Surveillance and Digital Authoritarianism
Jonathan Zittrain, Faculty Director of the Berkman Klein Center for Internet & Society, moderated the first panel. The panel featured Olufunmilayo Arewa, Murray H. Shusterman Professor of Transactional and Business Law at Temple University’s Beasley School of Law; Chinmayi Arun, Resident Fellow at Yale Law School; Ronald Deibert, Director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy; and Ambassador Eileen Donahoe, Executive Director of Stanford University’s Global Digital Policy Incubator.
The panelists focused on multiple, interconnected areas of concern, including the rapid development of AI technologies paired with legal safeguards that have not kept pace, the use of AI technologies to perpetuate human rights abuses, and the global nature of these issues. As Deibert put it, “we all live in this new kind of global ether of data that is connected to but separate from us.” Each panelist drew out unique angles of the potential harms of AI. Arewa highlighted the potential for abuse arising from the concentration of power in technology companies like Facebook and Google. Arun spoke about how the datafication of people can lead to the erasure of certain groups, giving the example of how datafying people as male or female erases those who do not identify along the gender binary. She also spoke about how, when dealing with cross-border questions, international law “offers us powerful norm setting,” but largely does not create accountability for powerful technology companies.
Ambassador Donahoe homed in on AI technologies deployed by authoritarian regimes to “shape citizen motivation and behavior,” like China’s social credit system. She noted that such technologies “not only violate privacy and civil liberties, but they really undermine human agency and go to the heart of human dignity.” Her biggest concern was the threat of digital authoritarianism as a governance model, especially as it spreads across the world and competes with democracy.
The panelists also provided guidance on how civil society and governments in democratic states can tackle the harmful effects of AI on multiple levels. In the international arena, Ambassador Donahoe argued that “on the democratic side, we have basically failed to provide a compelling alternative” to the digital authoritarian regime. She laid out a three-part geopolitical framework, which she subsequently elaborated on here. The three components were: develop a democratic governance model for digital society, invest in values-based international leadership, and win the technological innovation battle to keep power in democratic states. Deibert suggested building momentum in countering abusive “despotism as a service” practices by enhancing domestic oversight of surveillance companies and technologies, starting with agencies such as the U.S. National Security Agency (NSA) and Canada’s Communications Security Establishment (CSE).
Arewa articulated a framework for regulating private actors based on both transparency and liability, while acknowledging the obstacle of regulatory capture, even in countries that respect the rule of law. On the transparency side, she gave the example of how Apple’s App Tracking Transparency feature led far fewer users to opt in to tracking. On the liability side, she pointed out that although Mark Zuckerberg relies on the liability limitations embedded in corporate law, those limitations may not be appropriate for someone who, like him, serves as a company’s controlling shareholder and CEO while also sitting on its board.
Arun favored an approach that followed computer scientists’ research to understand how “accountability can be hardwired into the building of these systems.” In addition to anticipating harms, she advocated for monitoring each use of AI and creating mechanisms to walk back any harmful effects.
The panelists concluded by articulating hopes for a future where democratic values are infused into AI technology.
September 24 – Constitutional Values and the Rule of Law in the AI Era: Confronting a Changing Threat Landscape
Julie Owono, Executive Director of Internet Sans Frontières (Internet Without Borders), a member of the Facebook Oversight Board, and an affiliate of the Berkman Klein Center for Internet & Society, moderated the second panel. The panel featured Glenn Gerstell, a senior advisor on international security with the Center for Strategic & International Studies and the former general counsel of the NSA; Aziz Z. Huq, the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago Law School; and Riana Pfefferkorn, a Research Scholar at the Stanford Internet Observatory.
The conversation focused broadly on how the American constitutional system is challenged by many emergent problems with the use, development, and deployment of AI. As a starting point, Gerstell described AI tools as “critical, pervasive, and problematic.”
The panelists discussed how foreign adversaries like China have invested heavily in AI technologies to closely surveil their own populaces, gain a competitive edge in the global marketplace, and quickly sort through intelligence. To keep up with technological innovation and protect important national security interests, they argued, the United States must continue to develop and rely on AI. But the panelists emphasized that existing legal parameters do not sufficiently protect the privacy interests of everyday Americans or provide adequate protections and remedies against abuses by governments or private companies.
As for the existing legal structure, the panelists focused on the limited protections offered by the Fourth and Fourteenth Amendments. They agreed that the Fourth Amendment provides only a limited guardrail around the use of AI technologies by national security institutions and acknowledged that open questions remain about whether AI can give rise to probable cause for a warrant. Gerstell pointed out that Carpenter v. United States, the most relevant case on the limits of government surveillance under the Fourth Amendment, provides little on-point guidance about what data the government can collect.
Huq asserted that the Fourteenth Amendment is unable to address the most pressing concerns about government use of AI, such as disproportionately high false-positive rates that harm racial minorities and women. While the Equal Protection Clause prohibits governmental actions based on racially discriminatory intent, AI technologies, he argued, are rarely designed with the intent to discriminate; instead, they incorporate biases through negligence or inattention.
Pfefferkorn further explained the numerous challenges posed by AI in a criminal justice context, where prosecutors may be unable to fully explain AI technologies used to collect or analyze evidence against defendants. This may be because the technology is opaque even to its inventors, or because contractual or national security obligations prevent the vendors from disclosing how the tools operate.
The panelists further pointed out that the threat posed by the use of AI comes not only from the government but also from companies that are not bound by constitutional limitations. AI technologies derive much of their power and value from vast amounts of data about individuals, and that data often comes from these companies’ own consumers. Accordingly, the panelists contended that a rights-focused framework is inadequate to the threats posed by AI.
The panelists stressed the urgent need for legislation that more clearly delineates privacy rights for Americans, defines who can collect their data in public spaces and what that data can be used for, and bans some AI applications in particularly sensitive areas. Pfefferkorn pointed out that privacy legislation and doctrine from the 1960s and 1970s lag far behind today’s technological capabilities, and that changing technology may require rethinking the definition of a reasonable expectation of privacy. Huq advocated for a federal agency, similar to the FDA or CDC, with administrative authority to regulate the AI industry; however, he cautioned that the political will for such an agency does not exist.
October 1 – Protecting and Promoting AI Innovation: Patent Eligibility Reform as an Imperative for National Security and Innovation (Panel 1)
Ruth Okediji, Jeremiah Smith, Jr. Professor of Law at Harvard Law School, moderated the first panel, which featured Paul Michel, former Chief Judge of the Federal Circuit; Andrei Iancu, former Undersecretary of Commerce for Intellectual Property and former Director of the U.S. Patent and Trademark Office (USPTO); and David Jones, Executive Director of the High Tech Inventors Alliance.
Okediji introduced the topic of patent eligibility reform and noted that the National Security Commission on Artificial Intelligence released its final report in March 2021. That report includes a non-exhaustive list of 10 intellectual property-related considerations for the United States to assess as part of its national security strategy. One of those considerations is patent eligibility reform.
Judge Michel provided critical background on the issue. Patent eligibility is one of the threshold requirements for a patent to be granted — or for an issued patent to be upheld when challenged in litigation. Under Section 101 of the Patent Act, four broad categories of inventions are patent-eligible: processes, machines, manufactures, and compositions of matter. According to Judge Michel, the Supreme Court’s decisions in Mayo v. Prometheus (2012) and Alice Corp. v. CLS Bank International (2014) changed the patent eligibility landscape. These decisions expanded the scope of three judicial exceptions — laws of nature, products or phenomena of nature, and abstract ideas — to the four statutory patent-eligible categories mentioned above. Judge Michel opined that, prior to 2012, the U.S. patent eligibility regime was clear and consistent, and challenges to eligibility were rare, but eligibility challenges have become commonplace since Mayo and Alice. Meanwhile, he noted, 27 European countries and many Asian countries have significantly broadened their patent eligibility criteria, and hundreds of patents deemed ineligible in the United States have been found eligible elsewhere. Judge Michel concluded his remarks by calling for congressional reform of the U.S. patent eligibility regime. Reform efforts in 2019 stalled, but some discussions on Capitol Hill are currently underway again.
Iancu and Jones then engaged in a spirited debate. Iancu generally agreed with Judge Michel that the law of patent eligibility is in a state of unpredictability. He argued that the private sector requires greater clarity and certainty from the patent system if it is to be incentivized to innovate and invest in new, disruptive technologies such as AI. AI-related inventions have often been rejected under the current patent regime, which frequently views them as mathematical formulas and “abstract ideas.” According to Iancu, the procedure for defining “abstract ideas” and determining whether a particular invention should be patent-eligible remains unclear. He highlighted new guidelines that the USPTO issued in 2019 to synthesize court decisions and provide an analytical framework for evaluating patent eligibility. But this alone is not sufficient, he acknowledged, calling on Congress to reform the eligibility statute itself, which was written in 1790, when technologies such as blockchain, AI, and quantum computing could not have been fathomed.
Jones, on the other hand, argued that the current regime is working well and spurring innovation. He cited, for example, an empirical study finding that companies increased their research and development (R&D) investments after Alice because they could no longer simply rely on patents for technologies that were ineligible for protection. Limiting the scope of eligibility is helpful, he suggested; patent applicants should not be able to make an abstract idea patentable merely by adding “the magic words, ‘on a computer.’” Jones also argued that the post-Mayo/Alice regime has been fairly predictable, and that patent applicants have adapted very quickly to changes in the jurisprudence.
The panelists also discussed U.S. patent eligibility specifically in the context of national competitiveness. Jones explained that under the TRIPS agreement — which the WTO has described as the “most comprehensive multilateral agreement on intellectual property” — signatory countries (which include most of the world) are obligated to treat foreign and domestic inventors in the same manner. Thus, Jones argued, companies seeking U.S. patents will not necessarily migrate their R&D efforts away from the United States. On the other hand, Judge Michel warned that “capital is fleeing the United States and fleeing hard technology for less risky investments.” Iancu said that the United States needs to do more to incentivize startups, small- and medium-sized enterprises, and venture capital firms to invest in disruptive technologies here at home in order to match competition and innovation from China. He argued that providing adequate protections for patents is one way of creating those incentives. Jones countered that some studies show an increase, not a decrease, in startups’ access to venture capital investment in the aftermath of Alice.
Okediji concluded the first panel by noting the importance of these issues for national competitiveness and of considering what can be done with patent levers.
October 1 – Protecting and Promoting AI Innovation: Patent Eligibility Reform as an Imperative for National Security and Innovation (Panel 2)
Kristen Jakobsen Osenga, Austin E. Owen Research Scholar & Professor of Law at the University of Richmond School of Law, moderated the second panel. The panel featured Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at UCLA’s David Geffen School of Medicine; Drew Hirshfeld, who is currently performing the functions and duties of the Undersecretary of Commerce for Intellectual Property and Director of the USPTO; Hans Sauer, Deputy General Counsel and Vice President for IP at the Biotechnology Innovation Organization; and Laura Sheridan, Senior Patent Counsel and Head of Patent Policy at Google.
Osenga opened the discussion by noting that the panel would expand on the first panel and discuss some on-the-ground, practical implications of patent eligibility issues.
In their opening remarks, the panelists shared their initial observations on patent eligibility issues. Hirshfeld called for greater predictability in patent eligibility law, a more efficient process for evaluating patents, and a national strategy for protecting AI. Sheridan opined that the current patent eligibility regime is balanced and supportive of AI innovation. “Any disruption of the balance would actually harm innovation and emerging technologies, not help it … patenting in AI is actually flourishing, despite what the [National Security Commission on AI] report says,” she argued. Sauer noted that countries around the world pay close attention to U.S. patent law, including any systematic divergences between outcomes in the United States and elsewhere. “We have lived with a disparate state of affairs,” he said, referring to the biotech industry’s challenges in obtaining patents in the United States compared to other countries. Abbott spoke to how AI’s disruptiveness differs from that of previous generations of technology, particularly emphasizing AI’s unique ability to generate its own art, music, and inventions. How the U.S. patent system treats AI-generated inventions (compared to traditional, human-invented IP), Abbott observed, will have important legal and economic ramifications in the years to come.
Commenting on the current landscape of patent applications, Hirshfeld noted that 18 to 19 percent of applications to the USPTO now involve some form of AI. Recognizing the trend, the USPTO is undertaking a range of AI-related initiatives, he said. He also spoke of the challenges emanating from a lack of clarity in patent eligibility jurisprudence — and raised concerns about what that might mean for the AI innovations of tomorrow.
Sheridan added that Google has encouraged the USPTO to provide robust technical training to its patent examiners so that they can stay up to date on emerging technologies. She also mentioned that Google’s decisions about whether to keep an invention a “trade secret” are not based on patent eligibility law; rather, they are based on business and product-driven considerations, the nature of the technology, and whether Google is comfortable with disclosure.
Sauer suggested that U.S. patent law, as it stands, could invite copyists given the lack of clear protections, and that certain biotech patents might be better protected in China. He also noted that the higher bar for patentability in the United States is leading the biotech industry — particularly diagnostics companies — to focus its investments more on technologies that can be kept confidential (i.e., trade secrets) or on tools used in the R&D process.
Abbott argued that the law, even as it stands today, should allow for patents to be awarded for AI-generated inventions. He acknowledged that there is currently a split on this question in jurisdictions across the globe. “[T]he Patent Act was designed to encourage technological progress and generating socially valuable activities, and that this is exactly the sort of activity that patent law was meant to accommodate, and reading the law with that purpose in mind, there is no principled reason that an AI couldn’t invent something and that someone couldn’t get a patent on that sort of thing,” he said.
In concluding, Sauer mused that we may someday witness “a battle of AIs,” with AI-generated IP being scrutinized by an AI-driven patent agency evaluation process.