In February, the U.S. Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) aimed at shaping the Trump administration’s new AI Action Plan. Stakeholders from across industry, academia, civil society, and the media submitted comments before the March 15 deadline, laying out their visions for AI policy under a second Trump term. Respondents from OpenAI, Anthropic, Google, the Center for Data Innovation, the Center for Democracy & Technology (CDT), the Center for a New American Security (CNAS), Georgetown’s Center for Security and Emerging Technology (CSET), Business Roundtable, News/Media Alliance, MITRE, and other organizations offered perspectives on how the Trump administration can advance U.S. technological leadership without stifling innovation.

While diverse in approach, the submissions converge around several core themes: infrastructure and energy development, federal preemption of state AI laws, export controls to maintain U.S. competitiveness against rivals like China, promoting domestic AI adoption, safeguarding national security, and defining clear copyright and licensing frameworks for AI data. What follows is a thematic roundup of these proposals, culminating in a reference table at the end.

Innovation, Workforce Adoption & Economic Impacts

A dominant concern among commenters is how the federal government can accelerate AI growth and create a robust domestic workforce. Many commenters worry that a patchwork of state laws would create a “fragmented regulatory environment.” Google, OpenAI, and the Business Roundtable — an association representing U.S. CEOs — explicitly support federal preemption of such state-level AI regulations, warning that fragmentation could “undermine America’s innovation lead.”

The News/Media Alliance, CDT, and CSET, meanwhile, call for measures that support “Little Tech” to avoid a market dominated by a few large AI providers. The latter two, in particular, highlight the importance of supporting open-source models, which can enable “greater participation” in the AI domain by lowering barriers to entry for smaller firms with fewer resources.

Additionally, some organizations (Anthropic, Business Roundtable, Google, Center for Data Innovation, MITRE) spotlight the labor market implications of AI, urging the administration to invest in technical education and workforce training. To address labor shortages in high-demand, AI-related jobs, CNAS and Google both propose leveraging immigration authorities, including expediting visa applications. Anthropic also recommends that the White House monitor and report on how AI reshapes the national economy, including its effect on the tax base and labor composition.

Export Controls and Global AI Leadership

Concern over China’s rapid AI progress permeates nearly every submission. OpenAI cites the rise of DeepSeek as evidence of the country’s swiftly narrowing AI gap, warning that the Chinese Communist Party’s (CCP) subsidized efforts could undermine U.S. advantages. Anthropic and CNAS similarly stress the urgency of preventing the smuggling of advanced chips to China, recommending new government-to-government agreements to close supply chain loopholes. Most of the submissions call for strengthening the U.S. Commerce Department’s Bureau of Industry and Security, such as by increasing its funding, instituting scenario-planning assessments before implementing export controls, and consulting with the private sector.

On the other hand, the Center for Data Innovation warns that current, often “reactive” U.S. export controls hamper U.S. firms’ global competitiveness “without meaningfully slowing China’s progress.” The Center suggests a pivot toward enhancing domestic AI capabilities, streamlining export licensing, and collaborating with allies to promote a democratic vision for AI standards.

Google’s position is that any new rules — particularly the AI Diffusion Rule, set forth by the Biden administration — should avoid imposing “disproportionate burdens on U.S. cloud service providers” and factor in potential adverse impacts on American market share. Google, OpenAI, and Anthropic underscore that effective export controls should avoid inadvertently accelerating foreign AI development.

CDT, CNAS, and CSET underscore the strategic importance for the United States to “remain at the frontier” of open-source models in the AI race against China. They argue that without compelling U.S. alternatives, China could embed “authoritarian values” in AI models adopted by developing countries. To counter this, CNAS proposes that the United States rapidly release modified versions of open-source Chinese models to “strip away hidden censorship mechanisms” and promote democratic values abroad.

Infrastructure and Energy

Submissions from OpenAI, Anthropic, Google, CNAS, and Business Roundtable each press for robust infrastructure and energy reforms to meet AI’s skyrocketing computational demands. OpenAI and CNAS propose establishing special zones to attract massive private investment in new data centers and transmission lines, as well as to “minimize barriers” and “eliminate redundancies” through tax incentives, streamlined permitting, and partial exemptions from the National Environmental Policy Act.

Anthropic proposes a national target of “50 additional gigawatts of power dedicated to the AI industry by 2027,” cautioning that if the United States fails to supply reliable, low-cost energy, domestic AI developers might relocate model training to authoritarian countries, exposing U.S. intellectual property to theft or coercion. Google underscores the need for consistent federal and state incentives to promote grid enhancements, data center resilience, and advanced energy-generation projects.

Government Adoption of AI

The majority of submissions highlight lagging AI adoption by federal agencies. OpenAI characterizes government usage as “unacceptably low,” proposing to waive or streamline certain compliance requirements to accelerate pilot programs. Anthropic goes further, calling for a government-wide audit to “systematically identify” every text, image, audio, and video workflow that could be AI-augmented.

Proposals also emphasize removing procurement barriers in both civilian and national security contexts. Anthropic wants to mobilize the U.S. Department of Defense (DoD) and Intelligence Community (IC) to expedite AI adoption, while Google and CSET urge the government to avoid duplicative or siloed AI compliance rules across agencies. The Center for Data Innovation warns against the government’s “risk-only” mindset, imploring the administration to pivot to an “action” framework that proactively integrates AI where it can transform mission delivery. CNAS also advises that the U.S. military take “full advantage” of AI and autonomous systems, provided that the DoD develops “rigorous and streamlined” testing processes that “permit warfighters an early and ongoing role.”

On the other hand, CDT’s proposal cautions against the government rushing forward on AI adoption, claiming that it could lead to “wasted” tax dollars on ineffective, “snake oil” AI tools. CDT instead advocates for stronger guardrails on government AI usage, including the establishment of an independent external oversight mechanism to monitor AI deployment in national security and intelligence contexts. It further recommends that agencies expand existing use case inventories to transparently catalogue how AI systems are being utilized. Notably, CDT urges the Trump administration to “clarify and proactively communicate” how the Department of Government Efficiency (DOGE), an unofficial federal body led by Elon Musk, is reportedly using AI to make high-risk decisions.

AI Security and Safety

Anthropic, Google, and the Center for Data Innovation each underscore the national security implications of frontier AI models. Anthropic’s submission notes that new AI systems are trending toward capabilities that could facilitate the development of biological or cyber weapons, emphasizing the need for “rapidly assessing” advanced models for potential misuse. To mitigate such risks, the Center for AI Policy (CAIP) recommends that the U.S. government establish a clear definition of frontier AI so that national security regulations effectively address the most high-risk models.

Anthropic and others also advocate for keeping the U.S. AI Safety Institute (AISI) intact while bolstering it with statutory authorities and interagency coordination to test AI models for national security risks. The Center for Data Innovation, CNAS, and CSET, meanwhile, propose creating a national AI incident database and an AI vulnerability database — akin to the National Vulnerability Database for cybersecurity — to track AI failures, identify systemic weaknesses, and coordinate risk mitigation. CAIP takes this a step further, urging the Trump administration to create an “AI Emergency Response Program” involving “realistic simulations” of AI-driven threats — including AI-enabled drone and cyber attacks — and requiring AI developers to respond to these scenarios.

Google, CNAS, and CSET call for collaboration with private labs and the IC to evaluate and mitigate potential security threats, including espionage and chemical, biological, radiological, and nuclear (CBRN) vulnerabilities. Google also opposes mandated disclosures that could reveal trade secrets or model architecture details, warning that such transparency “could provide a roadmap” for malicious actors to circumvent AI guardrails.

CDT suggests the U.S. AI Action Plan should incorporate measures to address other AI safety risks, including privacy violations and discrimination. It recommends that the National Institute of Standards and Technology (NIST) “holistically and accurately” assess the efficacy and fairness of AI systems and issue guidance on evaluating the validity of the measurements used. Lastly, CSET proposes that the Trump administration create “standard pathways” to challenge “adverse” AI-enabled decisions and implement whistleblower protections at frontier AI firms to discourage “dangerous” practices.

Obligations for AI Developers, Deployers, and Users

A recurring theme, particularly in Google’s submission, is the need to clearly delineate liability throughout the AI lifecycle. Google argues that developers cannot be held responsible for every downstream deployment — particularly when they lack control over, or visibility into, final uses. Instead, it advocates “role-based” accountability: Developers should provide transparency around model training, but the ultimate deployers should bear liability for misuse in their applications.

At the same time, Google concedes that certain minimal disclosures (for instance, about synthetic media) may be warranted, but it resists broad, mandatory “AI usage” labels that could inadvertently help adversaries “jailbreak” or circumvent AI security features.

Copyright Issues and Development of High-Quality Datasets

OpenAI, Google, the Center for Data Innovation, and MITRE each argue for policies that expand access to robust, high-quality datasets while preserving fair use protections. OpenAI maintains that applying fair use to AI is a “matter of national security” in the face of Chinese competitors who “enjoy unfettered access” to copyrighted data. The company warns that narrowing data access could give Beijing an irreparable advantage in the race to develop state-of-the-art AI models.

The News/Media Alliance, representing over 2,000 media organizations, focuses on publisher rights. It raises concerns that generative AI models are trained on vast quantities of copyrighted material without permission, threatening traditional news revenue streams. The Alliance proposes collaborative licensing agreements and clearer guidelines about disclosing when and how AI-generated content uses news materials.

Finally, the Center for Data Innovation recommends a National Data Foundation, analogous to the National Science Foundation, that would fund the creation, structuring, and curation of large-scale datasets across both public and private sectors. Business Roundtable also highlights the importance of unlocking access to government datasets from the perspective of producing more representative, less biased AI models.

* * *

The table below highlights key aspects of public submissions responding to the White House’s RFI.

Theme: Innovation and Regulation

Business Roundtable
• “The Administration should assess regulatory gaps to ensure that any new regulations, if necessary, are appropriately narrowly scoped to address identified gaps without harming U.S. companies’ ability to innovate. Many AI applications are covered under topic- and sector-specific federal statutes. Where regulatory guardrails are deemed necessary, whether in new or existing rules covering AI systems, policymakers should provide clear guidance to businesses, foster U.S. innovation, and adopt a risk-based approach that carefully considers and recognizes the nuances of different use cases, including those that are low-risk and routine. Reporting requirements should be carefully crafted to avoid unnecessary information collection and onerous compliance burdens that slow innovation.”
• “Companies have experienced the challenges of dealing with a fragmented and increasingly complex regulatory landscape due to the patchwork of state data privacy laws, which hinders innovation and the ability to provide consumer services. Federal AI legislation with strong preemption should provide protection for consumers and certainty for businesses developing and deploying AI.”

Center for Data Innovation
• “Scientific breakthroughs powered by AI—whether in medicine, climate science, or materials—are critical to progress, however, without AI-driven improvements to the systems that apply these discoveries, even the most advanced innovations risk being trapped in inefficient, outdated structures that fail to serve people effectively.”
• “The administration should preserve but refocus the AI Safety Institute (AISI) to ensure the federal government provides the foundational standards that inform AI governance. While AISI, housed at NIST, does not set laws, it plays a critical role in developing safety standards and working with international partners—functions that are essential for maintaining a coherent federal approach. Without this, AI governance will continue to lack a structured federal foundation, leaving states to introduce their own regulations in response to AI risks without clear federal guidance. This risks creating a fragmented regulatory landscape where businesses must comply with conflicting requirements, and policymakers struggle to craft effective, evidence-based laws.”
Center for Democracy & Technology• "The AI Action Plan should set a course that ensures America remains a home for open model development… Restricting open model development now would not improve public safety or further national security — rather, it would sacrifice the considerable benefits associated with open models and cede leadership in the open model ecosystem to foreign adversaries. Rather than restricting open model development, the AI Action Plan should ensure that open models retain their central position in the American AI ecosystem, while promoting the development of voluntary standards to enable their safe and responsible development and use.”

Center for Security and Emerging Technology
• “To promote U.S. AI R&D leadership, the government should incentivize and award projects that take interdisciplinary approaches, encourage research findings to be disseminated openly and widely, and support public sector research in coordination with private sector innovation. Since AI is a general-purpose technology, basic R&D supports downstream model development for commercial use, application, and, eventually, profits.”
• “The U.S. government should support the release of open-source AI models, datasets, and tools that can be used to fuel U.S. AI development, innovation, and economic growth. Open-source models and tools enable greater participation in the AI domain, allowing lower-resource organizations that cannot develop base models themselves to access, experiment, and build upon them.”

Google
• “Long-term, sustained investments in foundational domestic R&D and AI-driven scientific discovery have given the U.S. a crucial advantage in the race for global AI leadership. Policymakers should significantly bolster these efforts—with a focus on speeding funding allocations to early-market R&D and ensuring essential compute, high-quality datasets, and advanced AI models are widely available to scientists and institutions.”
• “The Administration should ensure that the U.S. avoids a fragmented regulatory environment that would slow the development of AI, including by supporting federal preemption of state-level laws that affect frontier AI models. Such action is properly a federal prerogative and would ensure a unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive. Similarly, the Administration should support a national approach to privacy, as state-level fragmentation is creating compliance uncertainties for companies and can slow innovation in AI and other sectors.”

News/Media Alliance
• “The AI Action Plan should support measures to promote competition amongst actors, reduce abusive dominance by Big Tech, and prevent unfair competition in the marketplace. Without transparency and other guardrails to protect the marketplace, AI risks being captured by Big Tech, discouraging competition, reducing investments, undermining innovation and ultimately hurting American consumers.”

OpenAI
• “We propose creating a tightly-scoped framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national security… Overseen by the US Department of Commerce and in coordination with the AI Czar, perhaps by reimagining the US AI Safety Institute, this effort would provide domestic AI companies with a single, efficient ‘front door’ to the federal government that would coordinate expertise across the entire national security and economic competitiveness communities.”

Theme: Workforce Adoption and Economic Impacts of AI

Anthropic
• “We anticipate that 2025 will likely mark the beginning of more visible, large-scale economic effects from AI technologies.”
• “We believe that computing power will become an increasingly significant driver of economic growth. Accordingly, the White House should track the relationship between investments in AI computational resources and economic performance to inform strategic investments in domestic infrastructure and related supply chains.”
• “The White House should engage with Congress on and task relevant agencies with examining how AI adoption might reshape the composition of the national tax base, and ensure the government maintains visibility into potential structural economic shifts.”

Business Roundtable
• “America needs a workforce with the skills and training required for the in-demand jobs of today and tomorrow, including developing AI models, using AI applications and tools, and building and supporting AI infrastructure… Policymakers should complement these private-sector initiatives with reforms to the workforce development system that support employers’ ever-evolving workforce needs and worker advancement in an increasingly technology-based economy.”

Center for Data Innovation
• “The U.S. AI Action Plan should make rapid AI adoption across all sectors of the U.S. economy the cornerstone of its policy. It can take a leaf out of the UK’s AI Opportunities Action Plan and, as the UK rightly puts it, ‘push hard on cross-economy AI adoption.’”

Center for a New American Security
• To “leverage America’s talent advantage once more,” the U.S. government should add “high-demand AI jobs with demonstrated shortages to the Schedule A list… Employers in Schedule A categories can hire foreign talent while bypassing cumbersome recruitment and labor certifications requirements, filling critical roles more expeditiously.”
• The Trump administration should also “expedite appointments, vetting, and processing for visa applicants with job offers in cutting-edge AI research, development, and innovation.”

Center for Security and Emerging Technology
• The U.S. government should “increase funding for the federal National Apprenticeship system, with an emphasis on technical occupations and industry intermediaries,” “fund and reauthorize career and technical education programs,” and “support the creation of an AI scholarship-for-service program.”
• The Trump administration should also “work with Congress to support AI literacy efforts for the American people” and provide them with the “necessary education and information to make informed decisions about their AI use and consumption.”

Google
• “This moment offers an opportunity to ensure that AI can be integrated as a core component of U.S. education and professional development systems. The Administration and agency stakeholders have an opportunity to ensure that access to technical skilling and career support programs (including investments in K-12 STEM education and retraining for workers) are broadly accessible to U.S. communities to ensure a resilient labor force.”
• “Where practicable, U.S. agencies should use existing immigration authorities to facilitate recruiting and retention of experts in occupations requiring AI-related skills, such as AI development, robotics and automation, and quantum computing.”

Theme: Global AI Leadership

Business Roundtable
• “The domestic AI ecosystem can be further strengthened by U.S. efforts to shape international AI policies, ensuring they promote security and prosperity while avoiding conflicting legal obligations. U.S. leadership helps set global AI standards that align with democratic values, including transparency, fairness and privacy. Without American influence, authoritarian regimes could shape AI development and regulatory structures in ways that undermine human rights and increase surveillance.”

Center for Data Innovation
• “AISI should take a leading role in collaborating on open-source AI safety with international partners, industry leaders, and academic experts. While nations may compete aggressively to drive innovation and diffusion of open-source models, they need not compete on developing the foundational safety standards that underpin open-source AI… By aligning on shared protocols for incident reporting, safety benchmarks, and post-deployment evaluations, the United States can support the robust diffusion of open-source AI while mitigating its inherent risks.”
• “The United States is losing ground to China in the race to become Africa’s preferred AI partner. Over the past few years, the U.S. government has only offered vague commitments and diplomatic statements to the continent, while China has taken concrete action… The United States should get proactive about strengthening strategic ties and better positioning itself as the preferred partner for AI innovation in emerging markets… DeepSeek’s open-source approach has already made it a preferred choice for many developers in Africa. If the United States wants to remain competitive, it should ensure its own AI companies stay at the forefront of open-source innovation. That means continuing to resist undue restrictions on open-source AI and open model weights, ensuring American-developed models remain accessible and widely adopted.”

Center for Democracy & Technology
• “If America remains at the frontier of open model development, its models will likely become the basis for AI-based technologies in much of the world. But if the U.S. stifles domestic open model development, the basis for those technologies would likely be models developed by authoritarian governments.”

Center for a New American Security
• “DeepSeek-R1 demonstrates China’s success in projecting cost-effective, open source AI leadership to the world despite embedding authoritarian values in its AI. The United States can counter this strategy by rapidly releasing modified versions of leading open source Chinese models that strip away hidden censorship mechanisms and the ‘core socialist values’ required by Chinese AI regulation. In doing so, the United States can expose the contradiction in China’s approach, erode the appeal of Chinese AI, and position America as the legitimate champion of authentic open source AI.”
• “The Biden administration created the U.S.-China AI Working Group but it only convened twice, with few tangible outcomes. The Trump administration’s new AI Action Plan should reframe this group as a technical expert body to tackle shared AI risks and reduce tensions without undermining America’s AI lead. This reformulated group would serve as a body to discuss shared AI risks, instead of acting as a forum for comprehensive political changes in the U.S.-China relationships. This means avoiding politically contentious or overly broad areas of discussion, such as AI disinformation or its effect on human rights, and focusing instead on narrow, less politically contentious technical problems ripe for scientific collaboration, such as identifying and responding to dangerous behaviors in AI models, including deception, attempted self-replication, or circumventing human control.”
• “Establishing frameworks for international cooperation and discussion channels for emerging AI-accelerated biotech issues remains crucial, despite anticipated Chinese resistance to joining such an initiative. Following the model of the U.S. ‘Political Declaration on Responsible Military Use of AI and Autonomy,’ articulating guiding principles during early development stages can positively influence technological trajectories for both participating and non-participating nations.”

Center for Security and Emerging Technology
• The U.S. government should “prioritize, alongside AI capability advancements, the diffusion of American AI models in the U.S. and global AI ecosystem. Adoption of U.S. open models abroad builds reliance on U.S. technology, thereby endowing the U.S. government with soft power, serving as a foundation for stronger relationships and alliances with partners, and encouraging further paid use of related U.S. AI technologies like enterprise subscription services and cloud platforms. Promotion of U.S. AI technology abroad can also combat the growing influence of Chinese models especially in developing and emerging economies, and prevent China from providing the foundation for large parts of the global digital infrastructure, with implications for the diffusion of Chinese ideologies on the world.”

Google
• “We encourage the Department of Commerce, and the National Institute of Standards and Technology (NIST) in particular, to continue its engagement on standards and critical frontier security work. Aligning policy with existing, globally recognized standards, such as ISO 42001, will help ensure consistency and predictability across industry.”
• “The U.S. government should work with aligned countries to develop the international standards needed for advanced model capabilities and to drive global alignment around risk thresholds and appropriate security protocols for frontier models. This includes promulgating an international norm of “home government” testing—wherein providers of AI with national security-critical capabilities are able to demonstrate collaboration with their home government on narrowly targeted, scientifically rigorous assessments that provide ‘test once, run everywhere’ assurance.”
• “The U.S. government should oppose mandated disclosures that require divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models. Overly broad disclosure requirements (as contemplated in the EU and other jurisdictions) harm both security and innovation while providing little public benefit.”

OpenAI
• “As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm. And because DeepSeek is simultaneously state-subsidized, state-controlled, and freely available, the cost to its users is their privacy and security, as DeepSeek faces requirements under Chinese law to comply with demands for user data and uses it to train more capable systems for the CCP’s use.”
• “While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans.”

Theme: Export Controls

Anthropic
• “We strongly recommend the administration strengthen export controls on computational resources and implement appropriate export restrictions on certain model weights.”
• The U.S. government should require countries “to sign government-to-government agreements outlining measures to prevent smuggling. As a prerequisite for hosting data centers with more than 50,000 chips from U.S. companies, the U.S. should mandate that countries at high-risk for chip smuggling comply with a government-to-government agreement that 1) requires them to align their export control systems with the U.S., 2) takes security measures to address chip smuggling to China, and 3) stops their companies from working with the Chinese military.”
• The U.S. government should also “consider reducing the number of H100s that Tier 2 countries can purchase without review to further mitigate smuggling risks.”

Business Roundtable
• “The Administration should collaborate closely with the business community to ensure that all new controls on emerging and foundational technologies effectively advance U.S. national and economic security objectives. Business Roundtable recommends that the White House National Security and Economic Councils create a standing, private-sector Export Control Advisory Board (ECAB) with security clearance to ensure that private sector members understand the national security reasons for contemplated controls and policymakers are appraised of their potential commercial and economic implications.”
• The U.S. Department of Commerce’s Bureau of Industry and Security should analyze the “potential commercial, economic and competitiveness effects” of export controls and consult with potentially affected industries, as well as “advocate that key allies embrace comparable controls to ensure that U.S. companies are not uniquely disadvantaged.”

Center for AI Policy
• “Even the best-designed export controls will be porous without adequate staff to enforce them. Smuggling of advanced AI chips is rampant, largely because the BIS is severely under-resourced… To solve this problem, the Trump Administration should work with Congress to ensure that BIS receives the $75 million in additional annual funding it requested to hire an adequate staff, along with a one-time additional payment of $100 million to immediately address information technology issues.”

Center for Data Innovation
• “The current reactive, whack-a-mole approach to AI export controls doesn’t meaningfully slow China’s progress, but it does erode the global position of U.S. AI companies. The U.S. government should maintain targeted export restrictions of advanced AI technologies to countries of concern, even if these restrictions act more as hurdles than roadblocks. However, the government’s priority should be to expand the global market share of American AI firms… Export controls are misaligned with the realities of market competition. While intended to weaken China’s AI sector, they are increasingly disadvantaging U.S. firms instead. Chinese companies are adept at circumventing these controls by leveraging stockpiles, utilizing inference-optimized chips, and ramping up domestic semiconductor production.”
• “Rather than focusing narrowly on restricting access, U.S. policy should pivot towards bolstering domestic AI capabilities, enhancing global export competitiveness, and advocating for reciprocal market access. If China continues gaining ground despite restrictions while U.S. firms lose opportunities abroad, the current approach will have done more harm than good.”
• “The Bureau of Industry and Security (BIS) should take a more proactive approach by tightening and enforcing export controls. Current export controls focus on restricting finished AI chips, but gaps in the supply chain undermine their effectiveness… To close these gaps, BIS should expand restrictions to cover upstream components and advanced packaging materials, apply U.S. controls to any technology using American IP regardless of where it is manufactured, and strengthen enforcement on suppliers facilitating these workarounds. Without these measures, China will continue stockpiling essential AI hardware while U.S. firms lose market access without achieving meaningful strategic gains.”

Center for a New American Security
• “The current approach of annual export control updates fails to keep pace with rapid technological change in AI and emerging new evidence. The Bureau of Industry and Security (BIS) should instead adopt a quarterly review process with the authority to make targeted adjustments to controls as new capabilities emerge.”
• To address chip smuggling into China, Congress should “significantly increase BIS's budget to enhance its monitoring and enforcement capabilities, including hiring additional technical specialists and field investigators.”

Center for Security and Emerging Technology
• “The Bureau of Industry and Security (BIS) in the Department of Commerce should institute scenario planning assessments before implementing new export controls and rigorously monitor the effectiveness of current export control policies… BIS should also conduct regular post-implementation assessments that track progress toward stated control objectives, second-order effects, impact on China’s semiconductor manufacturing equipment industry, developments in China’s semiconductor fabrication capabilities, and advancements in China’s AI sector.”
• For the “broader U.S. export strategy to work,” BIS should “clearly articulate and justify the objectives of the export controls to allies.”

Google
• “AI export rules imposed under the previous Administration (including the recent Interim Final Rule on AI Diffusion) may undermine economic competitiveness goals the current Administration has set by imposing disproportionate burdens on U.S. cloud service providers. While we support the national security goals at stake, we are concerned that the impacts may be counterproductive.”
• “The U.S. government should adequately resource and modernize the Bureau of Industry and Security (BIS), including through BIS’s own adoption of cutting-edge AI tools for supply chain monitoring and counter-smuggling efforts, alongside efforts to streamline export licensing processes and consideration of wider ecosystem issues beyond limits on hardware exports.”

OpenAI
• “We propose that the US government consider the Total Addressable Market (TAM), i.e., the entire world less the PRC and its few allies, against the Serviceable Addressable Market (SAM), i.e., those countries who prefer to build AI on democratic rails, and help as many of the latter as possible commit, including by actually committing to deploy AI in line with democratic principles set out by the US government.”
• OpenAI proposes maintaining the three-tiered framework of the AI Diffusion Rule but expanding Tier I to include countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens.
• “This strategy would encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage. Making sure that open-sourced models are readily available to developers in these countries also will strengthen our advantage. We believe the question of whether AI should be open or closed source is a false choice—we need both, and they can work in a complementary way that encourages the building of AI on American rails.”

Theme: Infrastructure and Energy

Anthropic
• “The federal government should consider establishing an ambitious national target: build 50 additional gigawatts of power dedicated to the AI industry by 2027.”
• The U.S. government should “task federal agencies with streamlining permitting processes by accelerating reviews, enforcing timelines, and promoting inter-agency coordination to eliminate bureaucratic bottlenecks.”
• “Some authoritarian regimes who do not share our country’s democratic values and may pose security threats are already actively courting American AI companies with promises of abundant, low-cost energy. If U.S. developers migrate model development or storing of model weights to these countries in order to access these energy sources, this could expose sensitive intellectual property to transfer or theft, enable the creation of AI systems without proper security protocols, and potentially subject valuable AI assets to disruption or coercion by foreign powers.”

Business Roundtable
• “Business Roundtable supports Administration actions to facilitate investment in data centers, including streamlining permitting processes to expedite project approvals for both new data centers and related infrastructure.”
• “The Administration should work to shorten decision timelines on environmental reviews, provide preliminary feedback on application completion and accuracy, and digitize operations to streamline processes, including application submissions, necessary document uploads, feedback for revisions and status updates.”

Center for a New American Security
• “While U.S. energy infrastructure languishes in a quagmire of red tape, China can expeditiously direct large-scale build outs, underscored by its unprecedented speed in nuclear power plant construction. Other nations, such as the United Arab Emirates and Saudi Arabia, also have the capital, energy, and government cut-through to expedite AI and energy infrastructure to meet anticipated demand. Paired with sufficient access to chips, this creates a risk that they could leapfrog U.S. AI leadership with world-leading AI computing infrastructure.”
• The U.S. government should “partner with state and local regulators to create designated special compute zones that aim to—as much as possible—align permitting and regulatory frameworks across jurisdictions and minimize barriers to AI infrastructure development.”

Google
• “The U.S. government should adopt policies that ensure the availability of energy for data centers and other growing business applications that are powering the growth of the American economy. This includes transmission and permitting reform to ensure adequate electricity for data centers coupled with federal and state tools for de-risking investments in advanced energy-generation and grid-enhancing technologies.”

OpenAI
• “Today, hundreds of billions of dollars in global funds are waiting to be invested in AI infrastructure. If the US doesn’t move fast to channel these resources into projects that support democratic AI ecosystems around the world, the funds will flow to projects backed and shaped by the CCP.”
• The U.S. government should adopt a “National Transmission Highway Act” to “expand transmission, fiber connectivity and natural gas pipeline construction” and streamline the processes of planning, permitting and paying to “eliminate redundancies.”
• The U.S. government should also develop a “Compact for AI” among U.S. allies and partners that streamlines access to capital and supply chains to compete with Chinese AI infrastructure alliances, as well as institute “AI Economic Zones” that “speed up permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors.”

Theme: Government Adoption of AI

Anthropic
• “We propose an ambitious initiative: across the whole of government, the Administration should systematically identify every instance where federal employees process text, images, audio, or video data, and augment these workflows with appropriate AI systems.”
• The U.S. government should also “eliminate regulatory and procedural barriers to rapid AI deployment at the federal agencies, for both civilian and national security applications” and “direct the Department of Defense and the Intelligence Community to use the full extent of their existing authorities to accelerate AI research, development, and procurement.”
• “We also encourage the White House to leverage existing frameworks to enhance federal procurement for national security purposes, particularly the directives in the October 2024 National Security Memorandum (NSM) on Artificial Intelligence and the accompanying Framework to Advance AI Governance and Risk Management in National Security.”
• “Additionally, we strongly advocate for the creation of a joint working group between the Department of Defense and the Office of the Director of National Intelligence to develop recommendations for the Federal Acquisition Regulatory Council (FARC) on accelerating procurement processes for AI systems while maintaining rigorous security and reliability standards.”

Center for Data Innovation
• “Former President Biden’s 2023 executive order instructed federal agencies to integrate AI, but it was overwhelmingly focused on risk mitigation—requiring oversight boards, governance guidelines, and guardrails against potential pitfalls. The government needs to do more than just play defense. Many U.S. government officials recognize AI’s transformative potential in fields like education, energy, and disaster response, as highlighted at the recent AI Aspirations conference. What’s missing isn’t vision, it’s action.”
• “Agencies should establish clear visions for how AI will be used in sectors and AI adoption “grand challenges” (i.e., highly ambitious and impactful goals for how AI can transform an industry) to accelerate deployment in critical sectors.”

Center for Democracy & Technology
• The Trump administration should “ensure compliance with these principles in its existing use of AI, most significantly in DOGE efforts which appear to be leveraging AI without transparency or these other necessary guardrails in place.”
• The AI Action Plan can develop public trust in federal government’s use of AI by “building on agencies’ existing use case inventories – a key channel for the public to learn information about how agencies are using and governing AI systems and for industry to understand AI needs within the public sector – and by requiring agencies to provide public notice and appeal when individuals are affected by AI systems in high-risk settings.”
• “The AI Action Plan should recognize that independent external oversight is also critically important to promote safe, trustworthy, and efficient use of AI in the national security/intelligence arena. Many such uses will be classified and exposure of them could put national security at risk. At the same time, because the risk of abuse and misuse is high when such functions are kept secret, an oversight mechanism with expertise, independence and power to access relevant information (even if classified) should be established in the Executive Branch. CDT has recommended that Congress establish such a body, and the AI Action Plan should support such an approach.”

Center for a New American Security
• “The U.S. military can take full advantage of AI and autonomy, but only if DoD develops rigorous and streamlined processes that allow systems to be tested thoroughly and permit warfighters an early and ongoing role. Developing warfighter trust is a complex process and requires their active participation from conception to fielding of an AI-enabled and/or autonomous system.”
• The U.S. military can address the concerns of potential coordination conflicts “by working across services to clarify concepts of employment and identify potential points of conflict between friendly heterogeneous AI and autonomous systems.”

Center for Security and Emerging Technology
• “If all federal agencies agree to abide by a unified set of minimum AI standards for purposes of acquisition and deployment, this would greatly reduce the burden on companies offering AI solutions, accelerate the adoption of standard tools and metrics, and reduce inefficiencies caused by the need to repeatedly draft and respond to similar but different requirements in government contracts.”
• The Office of the Secretary of Defense (OSD) has “not empowered a DOD-wide entity to set AI policies for the services. This results in duplication of efforts across the military services, with multiple memos guiding efforts across the DOD in different ways. For example, within each service, different commands have different network ATO standards, which require substantial rework by the government and AI vendors to satisfy before deployment. Continuous ATOs and ATO reciprocity must be enforced across OSD and an entity should be empowered to synchronize policies, rapidly certify reliable AI solutions, and act to stop emerging security issues.”

Google
• “The U.S. government, including the defense and intelligence communities, should pursue improved interoperability and data portability between cloud solutions; streamline outdated accreditation, authorization, and procurement practices to enable quicker adoption of AI and cloud solutions; and accelerate digital transformation via greater adoption of machine-readable documents and data.”
• “Federal agencies should avoid implementing unique compliance or procurement requirements just because a system includes AI components. To the extent they are needed, any agency-specific guidelines should focus on unique risks or concerns related to the deployment of the AI for the procured purpose.”

OpenAI
• “AI adoption in federal departments and agencies remains unacceptably low, with federal employees, and especially national security sector employees, largely unable to harness the benefits of the technology.”
• The U.S. government should establish a “faster, criteria-based path for approval of AI tools” and “allow federal agencies to test and experiment with real data using commercial-standard practices—such as SOC 2 or International Organization for Standardization (ISO) audit reports—and potentially grant a temporary waiver for FedRAMP. AI vendors would still be required to meet FedRAMP continuous monitoring requirements while awaiting full accreditation.”

Theme: AI Security and Safety

Anthropic
• “Our most recent system, Claude 3.7 Sonnet, demonstrates concerning improvements in its capacity to support aspects of biological weapons development—insights we uncovered through our internal testing protocols and validated through voluntary security exercises conducted in partnership with the U.S. and U.K. AI Safety and Security Institutes. This trajectory suggests, consistent with scaling laws research, that numerous AI systems will increasingly embody significant national security implications in the coming years.”
• The U.S. government should “preserve the AI Safety Institute in the Department of Commerce and build on the MOUs it has signed with U.S. AI companies—including Anthropic—to advance the state of the art in third-party testing of AI systems for national security risks.”
• The White House should also “direct the National Institutes of Standards and Technology (NIST), in consultation with the Intelligence Community, Department of Defense, Department of Homeland Security, and other relevant agencies, to develop comprehensive national security evaluations for powerful AI models, in partnership with frontier AI developers, and develop a protocol for systematically testing powerful AI models for these vulnerabilities.”
• “To mitigate these risks, the federal government should partner with industry leaders to substantially enhance security protocols at frontier AI laboratories to prevent adversarial misuse and abuse of powerful AI technologies.”

Center for AI Policy
• The U.S. government should “develop and apply a practical definition of frontier AI so that national security regulations target only the largest and most dangerous AI models. Most AI systems – especially the smaller systems that are more likely to be developed by startups, academics, and small businesses – are relatively benign and do not pose major national security risks.”
• “AI systems are advancing at an unprecedented pace, and it’s only a matter of time before intentional or inadvertent harm from AI threatens U.S. national security, economic stability, or public safety. The U.S. government must act now to ensure it has insights into the capabilities of frontier AI models before they are deployed and that it has response plans in place for when failures inevitably occur. To fill this critical preparedness gap, President Trump should immediately direct the Department of Homeland Security (DHS) to establish an AI Emergency Response Program as a public-private partnership. Under this program, frontier AI developers like OpenAI, Anthropic, DeepMind, Meta, and xAI would participate in emergency preparedness exercises.”
• “These preparedness exercises would involve realistic simulations of AI-driven threats, explicitly requiring participants to actively demonstrate their responses to unfolding scenarios. Similar to the DHS-led ‘Cyber Storm’ exercises, which rigorously simulate cyberattacks and test real-time interagency and private-sector coordination, these AI-focused simulations should clearly define roles and responsibilities, ensure swift and effective communication between federal agencies and frontier AI companies, and systematically identify critical gaps in existing response protocols… Most frontier AI developers have already made voluntary commitments to share the information needed to create these exercises. To encourage additional companies to participate, this type of cooperation should be treated as a prerequisite for federal contracts, grants, or other agreements involving advanced AI.”
• “In the near future, small autonomous drones will pose a threat to U.S. civilians on par with large strategic missiles. To meet this threat, the Administration should procure and distribute equipment for disabling unauthorized drones, and ensure that there are clear lines of legal authority for civilian law enforcement to deploy this equipment.”

Center for Data Innovation
• “The administration should direct AISI to establish a national AI incident database and an AI vulnerability database, creating essential infrastructure for structured reporting and proactive risk management. AI failures and vulnerabilities are currently tracked inconsistently across different sectors, making it difficult to identify trends, address systemic weaknesses, or prevent recurring issues… Additionally, an AI vulnerability database—similar to the National Vulnerability Database used for cybersecurity—would catalog weaknesses in AI models, helping organizations mitigate risks before they escalate.”

Center for Democracy & Technology
• “The AI Action Plan should direct NIST to continue building on the foundation it set with the AI RMF and subsequent work… The standards-development process should center not only the prospective security risks arising from capabilities related to chemical, biological, and radiological weapons and dual-use foundation models, but also the current, ongoing risks of AI such as privacy harms, ineffectiveness of the system, lack of fitness for purpose, and discrimination. NIST’s standards should also include a multifaceted approach for holistically and accurately measuring different qualities of an AI system, such as safety, efficacy, or fairness, and provide guidance on determining the validity and reliability of the measurements used.”
• “Federal agencies should take steps to align all AI uses with existing privacy and cybersecurity requirements – such as requirements for agencies to conduct privacy impact assessments – and to proactively guard against novel privacy and security risks introduced by AI.”

Center for a New American Security
• “AI datacenters and companies will become increasingly attractive targets for adversarial nations seeking to steal advanced models or sabotage critical systems. The private sector alone is neither equipped nor incentivized to effectively counter sophisticated state actors. The federal government must deploy its security expertise to protect this critical technology and infrastructure. As an immediate priority, the National Security Agency and broader national security community should partner with leading labs and AI datacenters to build resilience against espionage and attacks. The National Institute of Standards and Technology should also play an active role in co-developing best practice security standards for model weights—the sensitive intellectual property that encapsulates the capability of an AI model.”
• “The administration should empower the AISI as a hub of AI expertise for the broader federal government to ensure AI strengthens rather than undermines U.S. national security. The administration could further support this AI hub of expertise with continued implementation of the AI National Security Memorandum, which strengthens engagement with national security agencies to better integrate expertise across classified and non-classified domains.”
• “The federal government needs a systematic way to track and learn from real-world incidents. A central reporting system for AI-related incidents would allow the government to investigate and update its approach to evaluations where appropriate.”

Center for Security and Emerging Technology
• The U.S. government should “significantly expand open-source intelligence (OSINT) gathering and analysis on AI. This work is particularly neglected in the intelligence community, which remains focused on classified sources.”
• “The federal government should significantly ramp up efforts to monitor China's AI ecosystem, including the Chinese government itself (at all relevant levels and organizations), related actors such as state-owned enterprises, state research labs, and state-sponsored technology investment funds, and other actors, such as universities and tech companies.”
• “The U.S. government should partner with AI companies to share suspicious patterns of user behavior and other types of threat intelligence. In particular, the Intelligence Community and the Department of Homeland Security should partner with AI companies to share cyber threat intelligence, and the Department of Homeland Security should partner with AI companies to prepare for potential emergencies caused by malicious use or loss of control over AI systems. In addition, the Department of Commerce should receive, triage, and distribute reports on CBRN and cyber capabilities of frontier AI models to support classified evaluations of novel AI-enabled threats, building on a 2024 Memorandum of Understanding between the Departments of Energy and Commerce.”
• The Trump administration should “implement a mandatory AI incident reporting regime for sensitive applications across federal agencies. Federal agencies deploy AI systems for a wide range of safety- and rights-impacting use cases, such as using AI to deliver government services or predict criminal recidivism. AI failures, malfunctions, and other incidents in these contexts should be tracked and investigated to determine their root cause, inform risk management practices, and reduce the risk of recurrence.”
• “The Trump administration should establish a secure line for employees to report problematic company practices, such as failure to report system capabilities that threaten national security.”
• The U.S. government should “[d]efine capabilities of concern and support the creation of threat profiles for different types of AI models… A coalition of government agencies should develop frameworks that clearly define risky capabilities, including chem-bio capabilities of concern, so evaluators know what risks to test for. These frameworks could draw upon Appendix D of the National Institute of Standards and Technology’s (NIST) draft Managing Misuse Risk for Dual-Use Foundation Models. In addition, government agencies should build threat profiles that consider different combinations of users, AI tools, and intended outcomes, and design targeted policy solutions for these highly variable scenarios.”
• “The Trump administration should empower AISI to develop quantitative benchmarks for AI, including benchmarks that test a model’s resistance to jailbreaks, usefulness for making CBRN weapons, and capacity for deception… AISI should develop standards that cover topics including model training, pre-release internal and external security testing, cybersecurity practices, if-then commitments, AI risk assessments, and processes for testing and re-testing systems as they change over time.”
Google

• “Policymakers should also consider measures to safeguard critical infrastructure and cybersecurity, including by partnering with the private sector. For example, pilots that build on the Defense Advanced Research Projects Agency’s AI Cyber Challenge and joint R&D activities can help develop breakthroughs in areas such as data center security, chip security, confidential computing, and more. Expanded threat sharing with industry will similarly help identify and disrupt both security threats to AI and threat actor use of AI.”
• “It is particularly valuable for the U.S. government to develop and maintain an ability to evaluate the capabilities of frontier models in areas where it has unique expertise, such as national security, CBRN issues, and cybersecurity threats. The Department of Commerce and NIST can lead on: (1) creating voluntary technical evaluations for major AI risks; (2) developing guidelines for responsible scaling and security protocols; (3) researching and developing safety benchmarks and mitigations (like tamper-proofing); and (4) assisting in building a private-sector AI evaluation ecosystem.”
Obligations for AI Developers, Deployers, and Users

Google

• “To the extent a government imposes specific legal obligations around high-risk AI systems, it should clearly delineate the roles and responsibilities of AI developers, deployers, and end users. The actor with the most control over a specific step in the AI lifecycle should bear responsibility (and any associated liability) for that step. In many instances, the original developer of an AI model has little to no visibility or control over how it is being used by a deployer and may not interact with end users. Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging. Nor should developers bear responsibility for misuse by customers or end users. Rather, developers should provide information and documentation to the deployers, such as documentation of how the models were trained or mechanisms for human oversight, as needed to allow deployers to comply with regulatory requirements.”
• “The U.S. government should support the further development and broad uptake of evolving multistakeholder standards and best practices around disclosure of synthetic media—such as the use of C2PA protocols, Google’s industry-leading SynthID watermarking, and other watermarking/provenance technologies, including best practices around when to apply watermarks and when to notify users that they are interacting with AI-generated content.”
Copyright Issues and Development of High-Quality Datasets

Business Roundtable

• “An important technical resource for AI innovation is government datasets, which are typically much larger in size and scope and more representative of diverse populations than non-governmental datasets. This makes them uniquely valuable for conducting research, testing, reducing bias and producing better AI models. But while open data is encouraged and often required in government, federal agencies do not prioritize publishing high-impact unclassified datasets. Increasing access to advanced computing resources and tools empowers more organizations to engage in AI research and development by reducing barriers to entry.”
Center for Data Innovation

• “Unlike other foundational inputs to AI, such as physical infrastructure or scientific research, the United States treats data more as a regulatory challenge than a national asset. The result is an AI ecosystem constrained by gaps, inconsistencies, and bottlenecks, leaving businesses and researchers struggling to find and use the data they need. The AI Action Plan should correct this by establishing a National Data Foundation (NDF), an institution dedicated to funding and facilitating the production, structuring, and responsible sharing of high-quality datasets. An NDF would do for data what the National Science Foundation (NSF) does for research—ensuring the United States isn’t just competing on AI models but on the quality and availability of the data that powers them. It could fund data generation, creating large-scale, machine-readable datasets.”
• “In contrast, an NDF recognizes that in many critical areas, the U.S. lacks the necessary high-quality, AI-ready data not just in the public sector, but also in key private-sector domains. Rather than just improving discoverability, the NDF would fund the creation, structuring, and strategic enhancement of both public and private-sector datasets.”
Google

• “Policymakers should move quickly to further incentivize partnerships with national labs to advance research in science, cybersecurity, and chemical, biological, radiological, and nuclear (CBRN) risks. The U.S. government should make it easier for national security agencies and their partners to use commercial, unclassified storage and compute capabilities, and should take steps to release government datasets, which can be helpful for commercial training.”
• “Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rights holders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.”
OpenAI

• “Applying the fair use doctrine to AI is not only a matter of American competitiveness—it’s a matter of national security. The rapid advances seen with the PRC’s DeepSeek, among other recent developments, show that America’s lead on frontier AI is far from guaranteed. Given concerted state support for critical industries and infrastructure projects, there’s little doubt that the PRC’s AI developers will enjoy unfettered access to data—including copyrighted data—that will improve their models. If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.”
• To ensure the copyright system “continues to support American AI leadership,” the U.S. government should work to “prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress” and encourage “more access to government-held or government-supported data. This would boost AI development in any case, but would be particularly important if shifting copyright rules restrict American companies’ access to training data.”
• The U.S. government should also partner with industry to “develop custom models for national security. The government needs models trained on classified datasets that are fine-tuned to be exceptional at national security tasks for which there is no commercial market—such as geospatial intelligence or classified nuclear tasks. This will likely require on-premises deployment of model weights and access to significant compute, given the security requirements of many national security agencies.”
News/Media Alliance

• “Publishers should not be forced to subsidize the development of AI models and commercial products without a fair return for their own investments, no more than cloud providers would be expected to bear the costs of compute without payment for their input. The future of generative AI requires sustaining the incentives for the continued production of news and other quality content that, in turn, builds and powers generative AI models and products. Without high-quality, reliable materials, these tools will become less useful to consumers, and may jeopardize our country’s leadership in the sector. IP laws also protect AI companies, including when their original creations are misappropriated by foreign companies. We are committed to establishing a symbiotic, mutually beneficial framework between content production and AI development that respects intellectual property, facilitates technological development, and takes a balanced, market-based approach to AI innovation and regulation.”
• “The sufficiency of existing copyright law notwithstanding, we remain concerned that many AI stakeholders have used copyright protected material to build and operationalize their models without consent, in ways damaging to publishers. While the legality of such activities [is] the subject of litigation, there is a danger that it will not be possible to undo the damage before a judicial resolution can occur. The AI Action Plan should therefore encourage AI developers to engage more collaboratively with content industries in a manner that serves the broader national interest and a win-win result for our global aspirations.”
• “The Administration should push back on the flawed text and data mining (TDM) opt-out frameworks being considered or recently adopted in various countries. These opt-out policies do not work, have the potential to harm American creators and businesses through the uncompensated taking of their property, overregulate content licensing, and turn copyright law and free market licensing upside down.”
IMAGE: Visualization of an AI chip (via Getty Images)