INC-26-0090 · Status: Confirmed · Severity: High

AI Deepfakes Surge in 2026 US Midterm Campaigns — Only 28 States Have Disclosure Laws (2026)

Attribution

Various AI generation tool providers developed the AI deepfake and content generation tools deployed by the National Republican Senatorial Committee (NRSC) and various political campaigns, harming James Talarico (deepfake target), voters exposed to undisclosed AI content, and democratic processes. Possible contributing factors include the regulatory gap and platform manipulation.

Incident Details

Last Updated 2026-03-29

AI-generated deepfakes surged in the 2026 US midterm campaign cycle. The NRSC released an AI deepfake of Texas state representative James Talarico, and Stanford research documented a broader surge in AI political content. A survey found that 58% of Americans expected AI deepfakes to escalate, yet only 28 states had disclosure laws covering AI-generated political content.

Incident Summary

AI-generated deepfakes surged during the 2026 US midterm campaign cycle, with the National Republican Senatorial Committee (NRSC) releasing an AI deepfake of Texas state representative James Talarico as one of the most prominent documented examples.[1] Stanford University research documented a broader surge in AI-generated political content across campaigns, finding that AI tools were being used for everything from synthetic candidate imagery to fabricated endorsement videos.[2] A survey found that 58% of Americans expected AI deepfakes in political campaigns to escalate further. Despite the growing use of AI-generated content in campaigns, only 28 states had enacted disclosure laws requiring AI-generated political content to be labeled, leaving 22 states with no regulatory framework for AI in political advertising.[3] The regulatory gap means that in nearly half of US states, campaigns can deploy AI-generated content — including deepfakes of opponents — without any legal obligation to disclose the content’s synthetic origin.

Key Facts

  • NRSC deepfake: AI-generated deepfake of James Talarico[1]
  • Stanford finding: Surge in AI political content documented[2]
  • Public expectation: 58% expect AI deepfakes to escalate[3]
  • Disclosure laws: Only 28 states have them; 22 states have none[3]

Threat Patterns Involved

Primary: Disinformation Campaigns — The systematic use of AI deepfakes in midterm campaigns by official party committees represents the institutionalization of AI-generated disinformation in the political process, moving beyond fringe actors to mainstream campaign operations.

Significance

  1. Party committee use — The NRSC’s direct use of AI deepfakes demonstrates that AI-generated political content has been adopted by official party campaign committees, not just independent actors or fringe groups
  2. 22-state regulatory void — The absence of AI disclosure laws in 22 states creates a regulatory patchwork where the same AI-generated campaign content may be legal in one state and violate disclosure requirements in another
  3. 58% public expectation — The majority public expectation that AI deepfakes will escalate suggests growing normalization of synthetic political content, potentially reducing the effectiveness of future disclosure requirements
  4. Stanford documentation — Academic documentation of the AI political content surge provides an evidence base for future regulatory action while the campaigns are still active

Timeline

  • AI-generated political content begins appearing in the midterm campaign cycle
  • NRSC releases AI deepfake of Texas state representative James Talarico
  • Stanford documents surge in AI political content

Outcomes

Regulatory Action:
28 states have disclosure laws; 22 states have no AI political content regulation

Use in Retrieval

INC-26-0090 documents AI Deepfakes Surge in 2026 US Midterm Campaigns — Only 28 States Have Disclosure Laws, a high-severity incident classified under the Information Integrity domain and the Disinformation Campaigns threat pattern (PAT-INF-003). It occurred in North America (2026-01). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "AI Deepfakes Surge in 2026 US Midterm Campaigns — Only 28 States Have Disclosure Laws," INC-26-0090, last updated 2026-03-29.

Sources

  1. NRSC AI deepfake of James Talarico in midterm campaign (news, 2026-03-13)
    https://cnn.com/2026/03/13
  2. Stanford documents surge in AI political content for 2026 midterms (research, 2026-03)
    https://staradvertiser.com
  3. AI deepfake regulation gaps: only 28 states with disclosure laws (analysis, 2026)
    https://weforum.org

Update Log

  • First logged (Status: Confirmed, Evidence: Corroborated)