
Trump Admin Approach to Federal AI Policy

May 9, 2026


Last Updated: May 9, 2026

While states have raced to fill the regulatory void on artificial intelligence, the federal government has pursued a very different strategy -- one focused on deregulation, preemption of state laws, and positioning the United States to win what the Trump administration calls the global AI race. The result is a direct collision between state and federal authority that is now playing out in the courts, in Congress, and in federal funding decisions.

This page covers the federal AI policy landscape, including the failed moratorium, the TAKE IT DOWN Act, America's AI Action Plan, the December 2025 executive order, the NIST standards approach, and the international regulatory context that shapes compliance obligations for U.S. businesses operating abroad.

For state-by-state legislation, see our U.S. AI State Laws Tracker.


The Federal Moratorium Debate

A particularly contentious proposal that dominated AI policy discussions in 2025 was an attempt to impose a 10-year moratorium on state AI regulations. Initially introduced as part of President Trump's One Big Beautiful Bill budget reconciliation package, the provision would have prevented states from enforcing their own AI laws for a decade.

The moratorium, championed by Republican Sen. Ted Cruz of Texas, was designed to prevent what supporters called a "regulatory cacophony" of conflicting state policies. Proponents argued that navigating 50 different regulatory frameworks would stifle innovation, create compliance burdens particularly harmful to smaller companies, and potentially hamper America's competitive position against China in AI development. Tech industry leaders, including OpenAI CEO Sam Altman, expressed support for federal preemption, with Altman noting it would be "very difficult to imagine us figuring out how to comply with 50 different sets of regulation."

However, the proposal faced overwhelming bipartisan opposition from state officials. In a remarkable display of unity, a coalition of 17 Republican governors led by Arkansas Gov. Sarah Huckabee Sanders sent a letter to congressional leadership opposing the moratorium, arguing that "AI is already deeply entrenched in American industry and society; people will be at risk until basic rules ensuring safety and fairness can go into effect." A bipartisan group of 40 state attorneys general also sent a letter to Congress objecting that the proposal would violate state sovereignty and undermine their efforts to protect consumers.

After significant pushback, lawmakers attempted to modify the proposal by shortening the timeframe to five years and exempting certain categories of state laws. Despite these concessions, the revised version still faced criticism for containing language that would undermine state laws deemed to place an "undue or disproportionate burden" on AI systems. Ultimately, in a decisive 99-1 Senate vote, the moratorium was stripped from the budget bill, with even Sen. Cruz joining the overwhelming majority. The outcome was a significant victory for states' rights advocates but left unresolved the question of how to balance national interests in AI development with state concerns about protecting citizens.

The TAKE IT DOWN Act: First Federal AI Harm Law

While the moratorium debate played out, Congress managed to pass one significant piece of AI-related legislation with near-unanimous support. The TAKE IT DOWN Act -- Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act -- was signed into law on May 19, 2025, after passing the House 409-2 and clearing the Senate unanimously. It stands as the first federal law directly addressing harm caused by AI-generated content.

The law criminalizes the knowing publication of nonconsensual intimate images, including AI-generated deepfakes, and requires covered platforms to establish notice-and-removal processes allowing victims to request takedowns. The platform compliance deadline is May 19, 2026. Upon receiving a valid request, a platform must remove the content within 48 hours. Enforcement falls to the Federal Trade Commission, which may pursue civil penalties for noncompliance. The law also extends FTC enforcement authority to nonprofit entities, which are ordinarily outside the FTC Act's scope.
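The 48-hour removal window is a hard operational deadline for covered platforms. As a minimal sketch (the helper and variable names here are illustrative, not from any official implementation), the deadline for a given request can be computed with timezone-aware arithmetic:

```python
from datetime import datetime, timedelta, timezone

# TAKE IT DOWN Act: covered platforms must remove reported content
# within 48 hours of receiving a valid removal request.
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(received: datetime) -> datetime:
    """Latest time by which the reported content must be removed."""
    return received + REMOVAL_WINDOW

# Hypothetical request received on June 1, 2026 at 14:30 UTC:
request = datetime(2026, 6, 1, 14, 30, tzinfo=timezone.utc)
print(removal_deadline(request).isoformat())  # 2026-06-03T14:30:00+00:00
```

Using timezone-aware datetimes avoids ambiguity when requests arrive from users in different jurisdictions.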

For a full breakdown of federal and state deepfake laws and compliance requirements, see our Deepfake Legislation Tracker.

America's AI Action Plan

On July 23, 2025, the White House released "Winning the Race: America's AI Action Plan," fulfilling a mandate from Trump's January 2025 executive order on removing barriers to American AI leadership. The Plan outlines more than 90 federal policy actions organized around three pillars: accelerating innovation, building American AI infrastructure, and leading in international AI diplomacy and security.

The Plan's deregulatory posture signals the administration's broader approach to AI governance, prioritizing speed and competitive dominance over precautionary regulation. Accompanying the Plan, Trump signed three executive orders on the same day: one promoting the export of American AI technology, one streamlining federal permitting for data center construction, and one directing federal agencies to procure only AI systems deemed ideologically neutral and "truth-seeking."

One provision carries direct implications for state-level compliance. The Plan recommends that OMB assess states' AI regulatory environments when making federal funding decisions, effectively creating a financial incentive for states to align with federal policy rather than maintain stricter local requirements. For defense contractors and other businesses that rely on federal funding, this adds another layer of uncertainty to an already complex compliance picture.

Trump Administration Executive Order

On Dec. 11, 2025, President Trump signed an executive order aimed at blocking state AI laws, arguing that state-by-state regulation creates a burdensome patchwork that threatens American AI leadership. The order encourages the attorney general to challenge what it characterizes as onerous and excessive state laws and calls for development of a national AI framework.

The administration specifically criticized laws like Colorado's anti-discrimination requirements, claiming they may force AI models to produce false results to avoid differential impact on protected groups. The executive order states that state-by-state regulation "by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups." It also claims that some state laws impermissibly regulate beyond state borders, impinging on interstate commerce.

The order establishes an AI Litigation Task Force to coordinate federal challenges to state laws and directs the Secretary of Commerce to publish an evaluation identifying burdensome state AI laws by March 11, 2026. The DOJ's subsequent intervention in xAI's lawsuit against Colorado's AI Act represents the task force's first major enforcement action. On March 20, 2026, the White House released a broader legislative framework urging Congress to formally preempt conflicting state AI laws while preserving state authority over child safety, consumer protection, and government AI procurement.

New York Assemblymember Alex Bores, sponsor of the RAISE Act, argues his bill does not fit the category of onerous regulation. He notes the RAISE Act was largely based on voluntary commitments that AI companies already made and pledged to follow, simply ensuring those rules stayed in law and that companies couldn't backslide. "I don't think it's onerous to require companies to do the things that they're already saying they're going to do," Bores said in a recent NPR interview.

Several Democratic governors, including Colorado's Jared Polis, Connecticut's Ned Lamont, and New York's Kathy Hochul, have expressed concern about the challenges posed by varying state regulations. As Governor Lamont noted, "I just worry about every state going out and doing their own thing, a patchwork quilt of regulations," and the potential burden this creates for AI development. Republican Senator Marsha Blackburn of Tennessee has taken a different view, arguing that states must retain their ability to protect citizens until comprehensive federal legislation is in place: "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."

On the legislative front, Sen. Blackburn introduced the TRUMP AMERICA AI Act in December 2025 -- formally, the Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act -- which would codify the executive order into statute and establish a single federal rulebook for AI. Additional congressional proposals are expected throughout 2026 as lawmakers seek to formalize the administration's deregulatory approach. The FTC faced a March 11, 2026, deadline to publish a policy statement describing how the FTC Act applies to AI and when state laws that require alteration of AI outputs may be preempted under federal law, a statement that could have immediate consequences for several state transparency mandates.

The NIST Standards Approach

A more promising federal pathway has emerged in recent congressional hearings, with growing bipartisan support for leveraging the National Institute of Standards and Technology (NIST) to develop technical standards for AI systems. This approach has proven successful in adjacent domains like cybersecurity and privacy, where the NIST Cybersecurity Framework has achieved widespread voluntary adoption across industries without imposing heavy-handed regulation.

NIST has already developed the AI Risk Management Framework (AI RMF 1.0), which provides a common vocabulary and methodology for identifying, assessing, and mitigating AI risks. The framework emphasizes a flexible, context-specific approach that can accommodate rapid technological changes while still establishing important guardrails. NIST also released a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (NISTIR 8596) in December 2025, offering guidelines for using the NIST Cybersecurity Framework (CSF 2.0) to accelerate secure AI adoption.
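The AI RMF 1.0 organizes risk activities into four core functions: Govern, Map, Measure, and Manage. As a minimal sketch of how a company might use that vocabulary internally (the risk register, field names, and example entries below are illustrative assumptions, not part of the framework itself), each tracked risk can be tagged with the RMF function it falls under:

```python
from dataclasses import dataclass

# The four core functions defined by NIST AI RMF 1.0.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    """One row in a hypothetical internal AI risk register."""
    system: str
    description: str
    function: str   # which AI RMF core function this risk falls under
    severity: str   # e.g. "low" / "medium" / "high"

    def __post_init__(self) -> None:
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.function!r}")

# Illustrative entries only:
register = [
    RiskEntry("resume-screener", "possible disparate impact", "Measure", "high"),
    RiskEntry("chat-assistant", "no incident-response owner assigned", "Govern", "medium"),
]

# Surface high-severity items for review:
high = [r for r in register if r.severity == "high"]
print([r.system for r in high])  # ['resume-screener']
```

Keying the register to the RMF's own function names makes it easier to map internal findings onto the framework's profiles during an assessment.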

For companies already familiar with NIST frameworks for cybersecurity and privacy compliance, particularly in regulated sectors like healthcare, defense, and financial services, this approach provides natural continuity with existing governance structures. Defense contractors subject to CMMC requirements will find the NIST AI RMF integrates directly with their current programs.

The Federalism Debate: National Strategy or Californication?

A House Judiciary Subcommittee hearing on Sept. 18, 2025, titled "AI at a Crossroads: A Nationwide Strategy or Californication?", examined how the current patchwork of state regulations might impact innovation and impose costs on the AI industry. The state-by-state approach, sometimes called "Californication" due to California's outsized influence, raises questions about whether companies operating nationally will effectively be forced to comply with the strictest state standards regardless of where they are based.

The Artificial Intelligence Research, Innovation, and Accountability Act of 2024, a bipartisan Senate proposal, would establish governance frameworks for high-impact AI systems, including required risk assessments and risk-management practices. The bill signals that despite the state patchwork, there is movement at the federal level toward establishing baseline standards. Many industry stakeholders have also adopted voluntary commitments and self-regulation frameworks that complement formal regulations, showing that governance can advance even without formal mandates.

Whether federal courts will ultimately allow the Trump administration to preempt state laws remains uncertain and will likely be determined through litigation in 2026 and beyond. The DOJ's intervention in the Colorado xAI lawsuit is a preview of that legal fight. Until comprehensive federal legislation emerges, states will continue serving as laboratories of democracy, testing different approaches to managing AI's risks while fostering its benefits.

International Approaches to AI Regulation

While U.S. states navigate their regulatory approaches, other nations have taken more coordinated action. The contrast highlights fundamental differences in regulatory philosophy that carry direct compliance implications for U.S. businesses operating internationally.

European Union: The AI Act

The EU has established the most comprehensive regulatory framework globally through its AI Act, which became legally binding on Aug. 1, 2024. The legislation takes a risk-based approach, categorizing AI systems into four tiers: unacceptable risk (prohibited), high risk (stringent requirements), limited risk (transparency obligations), and minimal risk (no additional obligations).

Key compliance milestones include Feb. 2, 2025, when prohibitions on certain AI practices took effect; Aug. 2, 2025, when rules for general-purpose AI models became applicable; Aug. 2, 2026, when the core framework becomes broadly operational for high-risk systems; and Aug. 2, 2027, when remaining provisions take full effect. Penalties for noncompliance are substantial: up to EUR 35 million or 7% of worldwide annual turnover for prohibited practices, up to EUR 15 million or 3% for other violations, and up to EUR 7.5 million or 1% for supplying incorrect information to authorities.
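Because each penalty tier is capped at the higher of a fixed amount or a percentage of worldwide annual turnover (the "whichever is higher" rule the Act applies to most companies), the exposure for a large firm can far exceed the headline figure. A quick arithmetic sketch, using the tiers above and a hypothetical turnover:

```python
def fine_cap(fixed_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Maximum possible fine for a tier: the higher of the fixed cap
    or the given percentage of worldwide annual turnover."""
    return max(fixed_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice tier (EUR 35M or 7%) for a hypothetical firm
# with EUR 2 billion in worldwide annual turnover:
print(fine_cap(35_000_000, 0.07, 2_000_000_000))  # 140000000.0

# For a smaller firm (EUR 100M turnover), the fixed cap dominates:
print(fine_cap(35_000_000, 0.07, 100_000_000))  # 35000000.0
```

The same function applies to the other tiers by substituting the EUR 15M/3% and EUR 7.5M/1% figures.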

The Act's extraterritorial reach means American businesses must comply with EU requirements regardless of where they are headquartered. For U.S. companies serving European customers, partnering with EU-based firms, or operating within the European market, compliance planning should be underway now rather than waiting for U.S. federal action. Companies already familiar with GDPR compliance will find some conceptual overlap, though the AI Act introduces entirely new categories of requirements around risk management, transparency, and prohibited uses.

For a detailed compliance guide including high-risk system requirements, ISO 42001 alignment, and practical steps for U.S. companies, see our EU AI Act Compliance Guide for U.S. Businesses.

Other International Approaches

The United Kingdom has pursued a more flexible approach with its National AI Strategy and a policy white paper titled "AI Regulation: A Pro-Innovation Approach," emphasizing sector-specific regulation through existing regulators rather than creating new AI-specific laws. Canada proposed the Artificial Intelligence and Data Act (AIDA), which would have taken a risk-based approach similar to the EU's but with lighter requirements for lower-risk systems; the bill lapsed before passage when Parliament was prorogued. China has taken a more assertive regulatory stance, requiring AI service providers to register algorithms with the government and conduct security assessments before deployment. Japan has pursued voluntary guidelines and principles, emphasizing human-centric AI development while monitoring developments for potential future intervention.

The EU AI Act creates a unified framework across 27 member states with clear categories, timelines, and enforcement mechanisms. That model stands in direct contrast to the U.S. state-by-state patchwork -- and the federal government's ongoing effort to replace it with a single national standard makes the international comparison an increasingly relevant data point in the domestic debate.

Federal and International Considerations for Michigan Businesses

For Michigan-based companies, particularly those in defense contracting and healthcare, several federal and international considerations warrant attention alongside the state compliance picture.

Federal funding implications are an emerging concern. America's AI Action Plan directs OMB to factor states' AI regulatory environments into federal funding decisions. Defense contractors and other federally funded businesses should monitor how this recommendation develops into formal guidance, as it could affect grant and contract eligibility. The DOJ's active challenge to Colorado's AI law signals how seriously the administration is pursuing this strategy.

CMMC compliance intersections exist with AI governance. Defense contractors already familiar with NIST frameworks will find the NIST AI Risk Management Framework integrates naturally with existing cybersecurity and compliance programs. As AI tools become more embedded in defense contractor workflows, expect CMMC assessors to scrutinize AI governance practices more closely.

EU AI Act considerations apply if you serve European customers, have EU operations, or partner with NATO allies. Defense contractors and healthcare providers with international reach should assess whether their AI systems fall under EU jurisdiction. The Aug. 2, 2026, effective date for high-risk system requirements is approaching. See our EU AI Act Compliance Guide for detailed requirements and preparation steps.

For assistance navigating AI compliance requirements and developing governance frameworks aligned with both state regulations and cybersecurity best practices like NIST and CMMC, contact STACK Cybersecurity.

Cybersecurity Consultation

Do you know if your company is secure against cyber threats? Do you have the right security policies, tools, and practices in place to protect your data, reputation, and productivity? If you're not sure, it's time for a cybersecurity risk assessment (CSRA). STACK Cybersecurity's CSRA will meticulously identify and evaluate vulnerabilities and risks within your IT environment. We'll assess your network, systems, applications, and devices, and provide you with a detailed report and action plan to improve your security posture. Don't wait until it's too late.

Schedule a Consultation Explore our Risk Assessment