[Image: Michigan Capitol Rotunda]

Deep Dive Into State Artificial Intelligence Laws

Sept. 21, 2025

Across the United States, state legislatures have moved decisively to fill the regulatory void left by limited federal action on artificial intelligence. These state-level initiatives reveal several distinct approaches to AI governance, with frameworks that range from comprehensive to narrowly targeted regulations.

Existential Risk Concerns

The regulatory landscape has been influenced by growing concerns about AI's potential long-term risks. In May 2023, over 350 AI executives, researchers, and engineers signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signers included leaders from OpenAI, Google DeepMind, and Anthropic. This unprecedented warning from the very architects of advanced AI systems has added urgency to regulatory discussions at both state and federal levels.

More recently, a 2024 report commissioned by the U.S. State Department concluded that advanced AI systems could, in a worst-case scenario, "pose an extinction-level threat to the human species," based on interviews with executives from leading AI companies, cybersecurity researchers, and national security officials. These high-profile warnings have accelerated debate about appropriate governance frameworks to address both near-term harms and long-term safety concerns.

State Leadership in AI Regulation

California

California remains a major influence on AI regulation: in 2024 the legislature passed, and Governor Newsom signed, 17 AI-related bills. The most comprehensive measure, SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), passed both chambers but was vetoed by Governor Newsom in September 2024, so California's enacted laws instead reflect a targeted approach focused on specific areas of AI regulation.

California has enacted several laws addressing AI-generated deepfakes and sexually explicit content. SB 926 criminalizes the creation or distribution of AI-generated sexually explicit images with the intent to cause serious emotional distress. SB 981 requires social media platforms to establish reporting mechanisms for users to flag deepfake nudes, with requirements to temporarily block such content during investigation and permanently remove it if confirmed. AB 1831 expands existing child pornography laws to cover content generated by AI systems.

For election integrity, AB 2655 (the Defending Democracy from Deepfake Deception Act) requires large online platforms to block or label deceptive AI-generated content related to elections, while AB 2839 prohibits the distribution of materially deceptive election content. AB 2355 mandates that political advertisements using AI-generated content include clear disclosures.

Colorado

Colorado has been at the forefront of AI regulation with its landmark Colorado Anti-Discrimination in AI Law (SB 24-205), enacted on May 17, 2024. This comprehensive law focuses on protecting consumers from algorithmic discrimination in high-risk AI systems that make consequential decisions affecting areas such as employment, housing, education, and healthcare.

The law imposes a duty of reasonable care on both developers and deployers of high-risk AI systems, requiring them to take steps to protect against algorithmic discrimination based on protected characteristics. Developers must provide documentation about data sources, limitations, and risk mitigation strategies, while deployers must conduct impact assessments, provide notice to consumers, and establish appeal processes for adverse decisions.

Previously, Colorado passed SB 21-169 in 2021, which specifically addresses the use of AI in insurance underwriting, prohibiting insurers from using external consumer data and algorithms in ways that unfairly discriminate based on protected characteristics.

Illinois

Illinois has been an early mover in AI regulation, particularly in employment contexts. The Illinois Artificial Intelligence Video Interview Act (820 ILCS 42), effective since January 2020, requires employers who use AI to analyze video interviews to: (1) notify applicants before the interview that AI may be used; (2) explain how the AI works and what characteristics it evaluates; (3) obtain consent from applicants; (4) limit sharing of videos; and (5) delete videos upon request within 30 days.
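
The statute reads as a checklist, which makes it easy to model. The sketch below is a hypothetical illustration of that workflow, not any vendor's actual system; the class and field names are invented, and only the five statutory duties and the 30-day deletion window come from the law itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical model of the 820 ILCS 42 duties; names are illustrative.
@dataclass
class VideoInterviewRecord:
    applicant_id: str
    notified_of_ai: bool = False        # (1) notice before the interview
    ai_explanation_given: bool = False  # (2) how the AI works, what it evaluates
    consent_obtained: bool = False      # (3) applicant consent
    shared_with: list[str] = field(default_factory=list)  # (4) sharing must stay limited
    deletion_requested_at: datetime | None = None

    def may_analyze_with_ai(self) -> bool:
        """AI analysis is permitted only once duties (1)-(3) are satisfied."""
        return self.notified_of_ai and self.ai_explanation_given and self.consent_obtained

    def deletion_deadline(self) -> datetime | None:
        """(5) Videos must be deleted within 30 days of an applicant's request."""
        if self.deletion_requested_at is None:
            return None
        return self.deletion_requested_at + timedelta(days=30)
```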

In August 2024, Illinois enacted HB 3773, which amends the Illinois Human Rights Act to regulate AI in employment more broadly. Effective January 1, 2026, the law prohibits employers from using AI that has a discriminatory effect on employees based on protected characteristics and requires notice to employees when AI is used for recruitment, hiring, promotion, and other employment decisions.

New York

New York City has pioneered regulation of AI in hiring with its Automated Employment Decision Tools (AEDT) law (Local Law 144), which began enforcement on July 5, 2023. The law requires employers and employment agencies using AI tools for hiring or promotion decisions to:

  1. Conduct an annual bias audit of the tool by an independent auditor
  2. Publish a summary of the results on their website
  3. Notify job candidates and employees about the use of AI tools at least 10 business days before use
  4. Disclose the job qualifications and characteristics being evaluated

The law applies to computational processes derived from machine learning, statistical modeling, or data analytics that substantially assist or replace human decision-making in employment contexts.
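
The bias audits at the heart of Local Law 144 turn on two quantities defined in the implementing rules: each category's selection rate and its impact ratio, i.e., its selection rate divided by the selection rate of the most-selected category. A minimal sketch of that arithmetic follows, using invented counts; a real audit draws on historical selection data, covers sex and race/ethnicity categories (including intersections), and must be performed by an independent auditor.

```python
# Illustrative impact-ratio calculation for an AEDT bias audit.
# All counts are invented for the example.
applicants = {"group_a": 400, "group_b": 250, "group_c": 150}
selected   = {"group_a": 120, "group_b": 60,  "group_c": 30}

# Selection rate: fraction of each category's applicants who were selected.
selection_rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each category's rate relative to the most-selected category.
top_rate = max(selection_rates.values())
impact_ratios = {g: rate / top_rate for g, rate in selection_rates.items()}

for g in applicants:
    print(f"{g}: selection rate {selection_rates[g]:.2f}, "
          f"impact ratio {impact_ratios[g]:.2f}")
# group_a: selection rate 0.30, impact ratio 1.00
# group_b: selection rate 0.24, impact ratio 0.80
# group_c: selection rate 0.20, impact ratio 0.67
```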

Washington

Washington's SB 5827, enacted in 2023, addresses algorithmic discrimination by prohibiting covered entities from discriminating against individuals through automated decision systems on the basis of protected characteristics. The law requires reasonable efforts to test automated systems for algorithmic discrimination and establishes frameworks for transparency and accountability.

Emerging State AI Regulations

Michigan

Michigan has begun to address AI regulation, with a focus on election integrity. In 2023, the state enacted laws prohibiting AI-generated deepfake political ads within 90 days of an election unless they clearly disclose the use of AI. The Michigan Campaign Finance Act (Section 169.259) now requires that any qualified political advertisement created, published, or distributed using AI must include a clear statement about its AI-generated nature.

In October 2024, the Michigan Civil Rights Commission passed a resolution establishing guiding principles for AI use in the state, which calls for legislation to prevent algorithmic discrimination, protect privacy, and create a task force to monitor data collection practices. Additionally, the Michigan legislature is considering bills that would impose criminal penalties for developing or using AI to commit crimes.

Other State Initiatives

Several other states have taken steps toward AI regulation:

  • Texas established an AI advisory council through HB 2060 (2023) to study state agencies' use of AI and make policy recommendations; the law also requires agencies to inventory their automated decision systems.
  • Utah's SB 149, the Artificial Intelligence Policy Act (2024), requires businesses to disclose when consumers are interacting with generative AI and created an Office of Artificial Intelligence Policy to run a regulatory learning laboratory.
  • Vermont's S.197 created an AI commission tasked with studying impacts and developing policy recommendations. Act 89 specifically addresses AI use in insurance underwriting.
  • Virginia established an AI advisory council through HB 2360 to develop governance frameworks and incorporated AI-related protections into its Consumer Data Protection Act.
  • Connecticut enacted SB 1103 in 2023, which requires state agencies to inventory their AI systems and assess them for unlawful discrimination; a broader AI bill, SB 2, passed the state Senate in 2024 but did not become law.
  • Massachusetts has passed H.5163, which requires disclosures when AI is used in hiring decisions.

Federal Efforts and Proposed Legislation

The proposed Artificial Intelligence Research, Innovation, and Accountability Act would establish governance frameworks for high-impact AI systems, requiring risk assessments and management practices. This bipartisan effort signals that, despite the state patchwork, there is movement at the federal level toward establishing baseline standards.

Many industry stakeholders have implemented voluntary commitments and self-regulation frameworks that complement formal regulations. Leading tech firms have established internal review processes for high-risk AI applications, showing that governance can advance even without formal mandates.

The Federal Moratorium Debate

A particularly contentious proposal that dominated recent AI policy discussions was an attempt to impose a 10-year moratorium on state AI regulations. Initially introduced as part of President Trump's One Big Beautiful Bill budget reconciliation package, the provision would have prevented states from enforcing their own AI laws for a decade.

The moratorium, championed by Republican Senator Ted Cruz of Texas, was designed to prevent what supporters called a "regulatory cacophony" of conflicting state policies. Proponents argued that navigating 50 different regulatory frameworks would stifle innovation, create compliance burdens particularly harmful to smaller companies, and potentially hamper America's competitive position against China in AI development.

Tech industry leaders, including OpenAI CEO Sam Altman, had expressed support for federal preemption, with Altman noting it would be "very difficult to imagine us figuring out how to comply with 50 different sets of regulation."

However, the proposal faced overwhelming bipartisan opposition from state officials. In a remarkable display of unity, a coalition of 17 Republican governors led by Arkansas Governor Sarah Huckabee Sanders sent a letter to congressional leadership opposing the moratorium. The governors argued that "AI is already deeply entrenched in American industry and society; people will be at risk until basic rules ensuring safety and fairness can go into effect."

After significant pushback, lawmakers attempted to modify the proposal by shortening the timeframe to five years and exempting certain categories of state laws. Despite these concessions, the revised proposal still faced criticism for containing language that would undermine state laws deemed to place an "undue or disproportionate burden" on AI systems.

Ultimately, in a decisive 99-1 Senate vote, the moratorium was stripped from the budget bill, with even Sen. Cruz joining the overwhelming majority. This outcome represented a significant victory for states' rights advocates but leaves unresolved the question of how to balance national interests in AI development with legitimate state concerns about protecting citizens.

The NIST Standards Approach

A more promising federal pathway has emerged in recent congressional hearings, with growing bipartisan support for leveraging the National Institute of Standards and Technology (NIST) to develop technical standards for AI systems. This approach has proven successful in adjacent domains like cybersecurity and privacy, where the NIST Cybersecurity Framework has achieved widespread voluntary adoption across industries without imposing heavy-handed regulation.

NIST has already developed the AI Risk Management Framework (AI RMF 1.0), which provides a common vocabulary and methodology for identifying, assessing, and mitigating AI risks. The framework emphasizes a flexible, context-specific approach that can accommodate rapid technological changes while still establishing important guardrails.
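
AI RMF 1.0 organizes its guidance into four core functions: Govern, Map, Measure, and Manage. As a rough sketch of how an organization might key an internal risk register to those functions, consider the following; the record shape and sample entries are assumptions for illustration, not part of the framework itself.

```python
from dataclasses import dataclass

# The four core functions come from NIST AI RMF 1.0; the record shape
# and sample entries below are illustrative assumptions.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    function: str   # which AI RMF core function the activity supports
    activity: str   # what the organization does
    artifact: str   # the evidence the activity produces

    def __post_init__(self) -> None:
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

register = [
    RiskEntry("Govern",  "Assign accountability for high-risk models",   "RACI chart"),
    RiskEntry("Map",     "Document intended use and affected groups",    "System profile"),
    RiskEntry("Measure", "Track disparity metrics across demographics",  "Audit report"),
    RiskEntry("Manage",  "Define rollback criteria for deployed models", "Runbook"),
]
```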

The NIST standards approach offers several advantages: it leverages multi-stakeholder input from industry, academia, civil society, and government; creates technically sound, practical guidelines; and balances innovation with protection. For organizations already familiar with NIST frameworks for cybersecurity and privacy compliance—particularly in regulated sectors like healthcare, defense, and financial services—this approach provides continuity and integration with existing governance structures.

The Federalism Debate: National Strategy or 'Californication?'

A growing debate has emerged in Congress around whether the U.S. needs a unified national approach to AI regulation. A House Judiciary Subcommittee hearing on Sept. 18, 2025, titled "AI at a Crossroads: A Nationwide Strategy or Californication?" examined how the current patchwork of state regulations might impact innovation and impose costs on the AI industry.

The hearing underscored a pivotal moment for AI regulation in the United States. Witnesses warned that, without a unified national strategy, fragmented state laws could hinder innovation and slow economic progress. As the debate over AI evolves, the hearing's outcomes could shape the direction of future technology and policy decisions across the country.

Several Democratic governors, including Colorado's Jared Polis, Connecticut's Ned Lamont, and New York's Kathy Hochul, have expressed concern about the challenges posed by varying state regulations. As Governor Lamont noted, "I just worry about every state going out and doing their own thing, a patchwork quilt of regulations," and the potential burden this creates for AI development.

Republican leaders have been equally vocal about the issue, though often divided on the approach. Republican Senator Marsha Blackburn of Tennessee has emphasized that states must retain their ability to protect citizens until comprehensive federal legislation is in place, stating, "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."

International Approaches

While U.S. states navigate their regulatory approaches, other nations have taken more coordinated action:

  • European Union: The EU has established the most comprehensive regulatory framework globally through its AI Act, which takes a risk-based approach. The legislation categorizes AI systems based on risk levels, from minimal to unacceptable risk, with corresponding requirements and prohibitions.
  • United Kingdom: The UK has pursued a more flexible approach with its National AI Strategy, followed by a policy white paper titled "AI Regulation: A Pro-Innovation Approach." This framework is designed to be agile and iterative, recognizing the rapid evolution of AI technologies.
  • Canada: Canada proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, focusing on managing risks associated with high-impact AI systems while supporting innovation; the bill died when Parliament was prorogued in early 2025.
  • China: China has taken a more assertive regulatory stance with requirements for security assessments and algorithm registrations, particularly for generative AI.
  • Japan: Japan has pursued a less restrictive approach focused on voluntary guidelines and principles, emphasizing human-centric AI development.

The Path Forward

The contrast between state-level regulation in the U.S. and the more unified frameworks adopted by other nations highlights the fundamental tension between fostering innovation and ensuring responsible AI development. The state-by-state approach, sometimes called "Californication" because of California's outsized influence, raises questions about whether companies will effectively be forced to comply with the strictest state standards nationwide.

As the debate continues, one thing is clear: effective AI governance requires balancing innovation with accountability, providing appropriate protections without stifling technological progress. With dozens more bills pending across legislatures nationwide, the AI regulatory landscape will continue to evolve rapidly in the coming years.

Related Resources

Cybersecurity Risk Assessment

Do you know if your company is secure against cyber threats? Do you have the right security policies, tools, and practices in place to protect your data, reputation, and productivity? If you're not sure, it's time for a cybersecurity risk assessment (CSRA). STACK Cyber's CSRA will meticulously identify and evaluate vulnerabilities and risks within your IT environment. We'll assess your network, systems, applications, and devices, and provide you a detailed report and action plan to improve your security posture. Don't wait until it's too late.
