
Deepfake Legislation Tracker

Feb. 1, 2026


Last Updated: Jan. 31, 2026

Deepfake legislation has exploded across the United States, with 47 states enacting laws targeting AI-generated synthetic media as of mid-2025. The regulatory landscape addresses two primary concerns: election manipulation through deceptive political content and non-consensual intimate imagery (NCII).

The federal TAKE IT DOWN Act, signed into law in May 2025, provides the first nationwide framework for addressing intimate deepfakes, while states continue to lead on election-related restrictions.

This tracker provides an overview of deepfake legislation at both federal and state levels, organized by category and compliance requirements.

Federal Law: TAKE IT DOWN Act

President Trump signed the TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act) on May 19, 2025, following near-unanimous congressional support (409-2 in the House, unanimous in the Senate). The law represents the first major federal response to AI-generated intimate imagery.

Provisions

Criminal Prohibition: The law makes it a federal crime to knowingly publish, or threaten to publish, non-consensual intimate imagery using an interactive computer service, regardless of whether the content is authentic or AI-generated. Penalties include up to two years' imprisonment for offenses involving adult victims and up to three years for offenses involving minors.

Platform Requirements: Covered platforms must implement a notice-and-takedown process by May 19, 2026. Upon receiving a valid takedown request, platforms must remove the content within 48 hours and make reasonable efforts to remove known copies. A "covered platform" includes any website, online service, or mobile application that provides a forum for user-generated content or regularly deals with intimate imagery as part of its business.

Enforcement: The Federal Trade Commission oversees platform compliance. Failure to reasonably comply with takedown requests constitutes an unfair or deceptive trade practice under the FTC Act. Individual criminal enforcement is immediate; platform compliance requirements take effect May 2026.

Consent Clarification: The law explicitly states that prior consent to create an image or share it with another person does not constitute consent for publication.

First Amendment Concerns

Several civil liberties groups, including the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns about the law's vague language and potential for abuse. Critics note the takedown mechanism could be exploited by bad-faith actors to remove legitimate content, mirroring problems long observed under the Digital Millennium Copyright Act (DMCA), the 1998 law that established a notice-and-takedown framework for copyright holders. The TAKE IT DOWN Act requires takedown requesters to act in "good faith" but provides limited mechanisms for challenging improper requests.

Election Deepfakes by State

As of January 2026, 28 states have enacted laws specifically addressing deepfakes in political communications. Most laws focus on disclosure requirements rather than outright bans, requiring political advertisements containing AI-generated content to include clear disclaimers. Typical requirements include statements such as "This ad was generated or substantially altered using artificial intelligence."

California's AB 2839, enacted in September 2024, faced immediate legal challenges. A federal judge struck down portions of the law in August 2025, finding that key provisions conflicted with Section 230 of the Communications Decency Act and were likely unconstitutional under the First Amendment. Minnesota's similar law has also been challenged by X (formerly Twitter), with early rulings suggesting courts remain skeptical of sweeping prohibitions on political deepfakes.

States With Election Deepfake Laws

State | Law | Enacted | Key Requirements
Alabama | HB 172 | May 2024 | Prohibits deceptive political synthetic media during campaign periods
Arizona | HB 2394, SB 1359 | May 2024 | Disclosure requirements for AI-generated election content
California | AB 2355, AB 2839 | Sept. 2024 | Disclaimer requirements; portions struck down Aug. 2025
Colorado | HB 1147 | May 2024 | Disclosure requirements for synthetic media in campaigns
Delaware | HB 316 | Oct. 2024 | Labeling requirements for AI political content
Florida | HB 919 | April 2024 | Disclosure requirements for political deepfakes
Hawaii | SB 2687 | July 2024 | Broad enforcement standing, including local prosecutors
Idaho | HB 664 | March 2024 | Disclosure requirements for AI election communications
Indiana | HB 1133 | March 2024 | Transparency requirements for synthetic media
Kentucky | SB 4 | March 2025 | Disclosure and civil/criminal penalties
Michigan | HB 5144 | Nov. 2023 | Early adopter with disclosure requirements
Minnesota | HF 1370, HF 4772 | May 2023/2024 | Prohibits misleading deepfakes; under legal challenge
Mississippi | SB 2577 | April 2024 | Disclosure requirements for election deepfakes
Montana | SB 25 | 2025 | Injunction, civil fines ($500), criminal referral for repeat offenders
New Mexico | HB 182 | 2024 | Disclosure requirements
South Dakota | 2025 law | 2025 | Disclosure requirements for 2026 midterms
Texas | SB 751 | 2019 | Early adopter; prohibits deceptive political deepfakes
Utah | SB 131 | 2024 | Disclosure requirements
Washington | SB 5152 | 2023 | Early adopter; disclosure requirements
Wisconsin | 2024 law | 2024 | Disclosure requirements

Note: This table includes major enacted laws. Additional states have pending legislation or laws addressing related issues. Visit Public Citizen's tracker for real-time updates.

Non-Consensual Intimate Imagery (NCII)

As of mid-2025, 45 states had enacted laws addressing sexually explicit deepfakes, up from 32 states at the start of the year. All 50 states and the District of Columbia have some form of NCII protection, though many older laws were written before AI-generated content became prevalent and may not explicitly cover synthetic media.

The federal TAKE IT DOWN Act now provides a nationwide baseline, but state laws often provide additional remedies including civil causes of action that allow victims to sue for damages.

State NCII Deepfake Laws

State | Law | Criminal/Civil | Key Features
Alabama | HB 161 | Criminal | Clarifies AI-generated content falls within privacy-harm framework
California | SB 926, AB 1831 | Both | Civil remedies and criminal penalties; specific AI provisions
Georgia | SB 9 | Both | Passed March 2025; comprehensive deepfake protections
New Jersey | April 2025 law | Both | Third-degree crime; up to $30,000 fine; civil damages
New York | A02249 | Civil | Enhanced publicity rights; registration requirements
Oklahoma | HB 1364 | Criminal | Strengthened penalties for synthetic intimate content
Oregon | HB 2299 | Criminal | Unlawful dissemination of synthetic intimate imagery
Pennsylvania | Act 35 (SB 649) | Criminal | Effective Sept. 2025; misdemeanor to felony; satire carve-out
Tennessee | ELVIS Act | Both | Voice/likeness protections; applies to AI replication
Washington | HB 1205 | Criminal | Effective July 2025; "forged digital likeness" prohibition

Note: This table highlights select state laws with explicit AI/deepfake provisions. Most states have general NCII laws that may also apply. Visit Public Citizen's intimate deepfakes tracker for comprehensive coverage.

Right of Publicity and Voice Protection

Several states have expanded traditional right of publicity laws to address AI-generated content. Tennessee's ELVIS Act (Ensuring Likeness Voice and Image Security Act), enacted in 2024, specifically prohibits using AI to mimic a person's voice without permission. New York's 2025 legislation added new civil remedies and registration requirements for protecting individuals from unauthorized AI replication.

These laws are particularly relevant for entertainment industry concerns about AI replication of performers' voices and likenesses, but they also provide broader protections against fraud and identity theft using synthetic media.

Federal Preemption Debate

The "One Big Beautiful Bill" passed by the House of Representatives in May 2025 included a provision imposing a 10-year moratorium on state-level AI regulation, though the Senate stripped the moratorium before final passage. Supporters argue federal preemption is necessary to avoid a patchwork of conflicting state requirements, while critics contend it would leave elections and individuals vulnerable in the absence of comprehensive federal protections.

President Trump's December 2025 executive order on AI directs federal agencies to challenge state AI laws deemed to impede a "minimally burdensome national standard." The order authorizes challenges under the dormant Commerce Clause, First Amendment, and federal preemption doctrines. These efforts face likely legal challenges and create uncertainty for both state enforcement and compliance planning.

Business Compliance Considerations

For Online Platforms

TAKE IT DOWN Act Compliance (by May 19, 2026):

- Implement a notice-and-takedown process for NCII.
- Create clear reporting mechanisms for users.
- Establish 48-hour removal workflows.
- Document good-faith compliance efforts.
- Provide conspicuous notice of the removal process.
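The 48-hour clock is the operational core of the platform requirement. As a minimal illustration only (not a compliance tool), the deadline arithmetic might be tracked like this in Python; `REMOVAL_WINDOW`, the function names, and the sample timestamps are all hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Assumption: the 48-hour window runs from receipt of a valid takedown request.
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(received_at: datetime) -> datetime:
    """Latest time by which the reported content must be removed."""
    return received_at + REMOVAL_WINDOW

def is_overdue(received_at: datetime, now: datetime) -> bool:
    """True once the removal window has elapsed without action."""
    return now > removal_deadline(received_at)

# Example: a request received June 1, 2026 at 09:00 UTC.
received = datetime(2026, 6, 1, 9, 0, tzinfo=timezone.utc)
print(removal_deadline(received))  # 2026-06-03 09:00:00+00:00
```

Using timezone-aware UTC timestamps avoids off-by-hours errors around daylight saving transitions when computing and auditing the deadline.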

For Political Advertisers and Campaigns

Disclosure Requirements:

- Inventory all AI-generated or AI-modified content.
- Add required disclaimers to political communications.
- Track state-specific timeframes (typically 60-90 days before elections).
- Train staff on synthetic media identification and labeling.
- Document compliance efforts.
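Because the pre-election window varies by state, it helps to make the "does the disclaimer apply yet" logic explicit. A hypothetical sketch (the function, the 90-day default, and the dates are assumptions, not any state's actual rule; this is not legal advice):

```python
from datetime import date, timedelta

def needs_disclaimer(ai_generated: bool, run_date: date, election_date: date,
                     window_days: int = 90) -> bool:
    """Assumed rule of thumb: a disclaimer is required only for AI-generated
    content that runs within the state's pre-election window."""
    in_window = timedelta(0) <= (election_date - run_date) <= timedelta(days=window_days)
    return ai_generated and in_window

election = date(2026, 11, 3)
print(needs_disclaimer(True, date(2026, 9, 1), election))  # True  (63 days out)
print(needs_disclaimer(True, date(2026, 6, 1), election))  # False (outside window)
```

In practice each state's statute defines its own window and its own disclaimer wording, so the window length would be looked up per jurisdiction rather than hard-coded.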

For All Companies

Deepfake Detection:

- Implement detection tools for synthetic media targeting employees or executives.
- Train staff to recognize deepfake audio and video in business communications.
- Establish verification protocols for high-value transactions or sensitive requests.
- Consider voice authentication safeguards for wire transfers and similar approvals.
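A common verification protocol is out-of-band confirmation: a high-value request received over voice or video is held until it is confirmed on a different, pre-registered channel, so a single deepfaked call cannot authorize it. A hypothetical sketch of that rule (the threshold, channel names, and function are assumptions):

```python
# Assumed policy threshold, in dollars, above which confirmation is required.
APPROVAL_THRESHOLD = 10_000

def release_transfer(amount: int, request_channel: str,
                     confirmed_channels: set[str]) -> bool:
    """Release only if confirmation arrived on a channel other than the one
    the request came in on, defeating a single spoofed call or video."""
    if amount < APPROVAL_THRESHOLD:
        return True
    independent = confirmed_channels - {request_channel}
    return bool(independent)

print(release_transfer(50_000, "video_call", {"video_call"}))        # False
print(release_transfer(50_000, "video_call", {"registered_phone"}))  # True
```

The key design choice is that confirmation on the same channel as the request counts for nothing: an attacker who can fake the CEO on a video call can "confirm" on that call too.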

Vendor Assessment:

- Evaluate AI vendors' compliance with applicable deepfake laws.
- Review contracts for liability allocation related to synthetic media.
- Assess content moderation practices for user-generated platforms.

Key Dates

Date | Event
May 19, 2025 | TAKE IT DOWN Act signed; criminal provisions effective immediately
July 27, 2025 | Washington HB 1205 takes effect
Sept. 5, 2025 | Pennsylvania Act 35 takes effect
May 19, 2026 | TAKE IT DOWN Act platform compliance deadline
Nov. 2026 | Midterm elections (state disclosure laws apply)

Additional Resources

For real-time tracking of state deepfake legislation, see Public Citizen's Election Deepfakes Tracker and Intimate Deepfakes Tracker. Ballotpedia maintains the Artificial Intelligence Deepfake Legislation Tracker with comprehensive state-by-state data.

For guidance on detecting deepfakes in your enterprise, see our Deepfake Detection Guide and contact STACK Cybersecurity for customized security awareness training.