The National AI Policy Framework is a set of legislative recommendations and policy guidelines released by the Trump Administration on March 20, 2026. [1] [2] The framework serves as a blueprint for the U.S. Congress to draft and enact a unified national law governing artificial intelligence. Its primary objectives are to foster American innovation, ensure U.S. leadership in the global AI field, and establish safeguards for safety, intellectual property, and individual rights. [3]
A central tenet of the policy is the preemption of state-level AI laws to prevent what the administration calls a "fragmented patchwork" of conflicting regulations. [3] [2] The framework advocates for a "light-touch" and "minimally burdensome" national standard, arguing that consistency is essential for national competitiveness, particularly in the "global AI race" against competitors like China. [4] [1]
History and Background
The push for a national AI standard began prior to the framework's release. In December 2025, President Donald Trump signed an executive order that called for the establishment of a single national regulatory standard for the AI industry and barred states from enacting new laws that would impose limits on AI companies. [2] [4] This executive action set the stage for a federally led approach to AI regulation. The framework was co-created by White House AI czar David Sacks and Office of Science and Technology Policy (OSTP) Director Michael Kratsios. [5]
Previous attempts to pass federal preemption language in Congress, including efforts tied to the National Defense Authorization Act (NDAA), had failed, highlighting the political challenges of the approach. [4] The framework was officially unveiled on March 20, 2026, as the administration's formal legislative wishlist for Congress. Following the release, the administration stated its intention to work with Congress in the coming months to turn the proposals into a formal bill, with the goal of enacting it into law by the end of 2026. [2]
Key parties involved in the framework's creation and intended implementation include:
- The Trump Administration: The executive branch responsible for creating and releasing the framework. [3]
- President Donald J. Trump: The President under whose administration the framework was developed and who initiated the policy with a December 2025 executive order. [2]
- Michael Kratsios: Director of the White House Office of Science and Technology Policy (OSTP) and a co-creator of the framework. He served as a key spokesperson for its release. [2] [5]
- David Sacks: The White House's AI czar, who was directed to co-create the draft framework alongside Kratsios. [5]
- U.S. Congress: The intended recipient of the framework's recommendations, tasked with drafting and enacting federal AI legislation. [3]
- First Lady Melania Trump: Her "Take It Down Act" initiative is cited as a foundation for the framework's child safety provisions. [3]
The framework is structured around seven pillars, each outlining specific policy recommendations for Congress. [3]
The first pillar focuses on ensuring the safety of minors interacting with AI systems. It recommends legislation that would:
- Empower parents with tools to control their children's privacy settings, screen time, and content exposure on AI-driven platforms.
- Establish privacy-protective and commercially reasonable age-assurance requirements for platforms likely to be accessed by minors.
- Mandate that AI platforms implement features to mitigate risks of sexual exploitation and self-harm among young users.
- Reinforce the application of existing child privacy laws to AI systems, such as limits on collecting minors' data for training AI models.
- Build upon pre-existing initiatives like the "Take It Down Act" to combat deepfake abuse and the creation of AI-generated child sexual abuse material (CSAM). [3] [4]
The second pillar addresses broader societal, economic, and security impacts of AI. Key recommendations include:
- Infrastructure & Energy: Streamlining federal permits for AI infrastructure and on-site power generation for data centers. The pillar also aligns with a "Ratepayer Protection Pledge" to prevent residential electricity costs from rising to cover the energy demands of new data center construction. [3]
- Fraud Prevention: Increasing resources for law enforcement to combat AI-enabled fraud, with a particular focus on scams targeting senior citizens. [3]
- National Security: Requiring that national security agencies, in consultation with AI developers, have the capacity to assess and mitigate risks from advanced frontier AI models. [3]
- Small Business Support: Providing grants, tax incentives, and technical assistance to help small businesses adopt and utilize AI tools. [3]
The third pillar outlines the administration's proposed approach to copyright and digital identity, which has generated significant discussion.
- AI Training and Copyright: The framework asserts that training AI models on copyrighted material does not inherently violate copyright law. It recommends that the legal question of whether AI training constitutes "fair use" should be resolved by the courts, not through new legislation from Congress. [3]
- Licensing Frameworks: It suggests that Congress enable the creation of voluntary "collective rights systems," under which creators could negotiate compensation from AI companies for the use of their work without facing antitrust liability; the framework does not, however, mandate such licensing. [3]
- Digital Replicas: The framework proposes a new federal law to protect individuals from the unauthorized commercial use of their AI-generated voice or likeness. This law would include clear exceptions for First Amendment-protected activities such as parody, satire, and news reporting. [3]
The fourth pillar focuses on protecting First Amendment rights in the context of AI-driven platforms and reflects the administration's concerns about content moderation.
- Government Overreach: The framework recommends prohibiting the federal government from coercing or pressuring AI providers to censor, remove, or alter content for partisan or ideological reasons. This has been linked to the administration's previous disputes with tech companies over content policies. [3] [5]
- Legal Redress: It proposes creating a legal pathway for individuals to seek redress if they believe a federal agency has attempted to censor their lawful expression on an AI platform. [3]
The fifth pillar details a pro-innovation, limited-regulation approach to AI development.
- Regulatory Philosophy: The framework explicitly opposes the creation of a new federal rulemaking body for AI. Instead, it advocates for leveraging the authority of existing sector-specific regulators and promoting the development of industry-led standards. [3] [4]
- Innovation Tools: To accelerate development, it recommends establishing regulatory sandboxes for rapid testing and deployment of new AI applications. It also calls for making non-sensitive federal datasets accessible in AI-ready formats to support model training by both industry and academia. [3]
The sixth pillar addresses the labor and education challenges posed by AI's integration into the economy.
- Skills Development: It recommends using non-regulatory methods to integrate AI training into existing education, apprenticeship, and workforce support programs. [3]
- Workforce Research: The framework calls for expanding federal research to understand how AI is changing job tasks, with the goal of better informing policies to support workers. [3]
- Land-Grant Institutions: It proposes enhancing the ability of land-grant institutions to provide AI technical assistance and develop youth programs related to AI. [3]
The seventh pillar defines the proposed balance of power between the federal and state governments in regulating AI and is one of the framework's most significant components.
- National Standard: It advocates for a "minimally burdensome national standard" to preempt a "patchwork" of conflicting state laws, arguing that a single standard is a matter of interstate commerce and national security. [3] [2]
- Areas of Federal Preemption: The framework proposes that a federal law should prevent states from regulating:
- AI Development: The process of designing, training, and building AI models.
- Developer Liability: States would be restricted from holding AI developers liable for the unlawful actions of third-party users, creating a "safe harbor" provision. [4] [5]
- Preserved State Powers: The framework clarifies that states would retain their traditional authority in several areas:
- General Laws: Enforcement of existing laws regarding consumer protection, fraud, and child safety. [5]
- Zoning: Use of local zoning laws to site AI data centers and other infrastructure.
- State Government Use: Regulation of a state's own procurement and use of AI technology in areas like law enforcement and public schools. [3]
Political Context and Reception
The National AI Policy Framework was released into a complex political environment. The administration positioned it as a proposal that could garner bipartisan support in a divided Congress. [2] However, its strong stance on federal preemption has been a point of contention. The framework directly challenges existing or pending AI laws in states like California (SB 53) and New York (RAISE Act), which impose mandates on AI firms for safety reporting and risk disclosure. [5]
Even within the Republican party, the preemption proposal faced resistance. In early March 2026, over 50 Republican members of Congress signed a letter opposing federal efforts to halt or override state-level AI legislation, citing concerns over states' rights. [5]
Following the framework's release, Republican leadership, including Senate Majority Leader John Thune and Commerce Committee Chair Ted Cruz, aimed to produce a draft bill based on the recommendations by late April 2026. One reported legislative strategy for attracting Democratic support involves merging a bill based on the framework with a version of the Kids Online Safety Act (KOSA). [4]