2026 Year-to-Date AI Threat Report
So far in 2026, TopAIThreats has documented 68 AI-enabled threat incidents spanning all 8 threat domains in our taxonomy. Human-AI Control leads with 19% of documented incidents, 96% of incidents are rated critical or high severity, and 52 incidents remain open.
This is a living report that updates with each site build as new incidents are added to the incident database. All analysis is grounded in the data and follows the 8-domain taxonomy.
All figures computed at build time (2026-04-17). Incidents may appear in multiple domains via secondary patterns.
Scope & Methodology
An incident is included in this report if its date_occurred value falls within calendar year 2026.
Each incident is classified using the 8-domain taxonomy and rated on a four-level severity scale (critical, high, medium, low).
All figures on this page are computed programmatically at build time from the incident database; no manual curation or editorial selection is applied to the aggregate statistics.
For full classification definitions and methodology, see the taxonomy reference.
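The build-time aggregation described above can be sketched as follows. This is an illustrative reconstruction, not the site's actual build code; the incident record fields (date_occurred, domain, severity, status) are assumptions based on the methodology notes.

```python
from collections import Counter
from datetime import date

# Hypothetical incident records; field names are assumed from the
# methodology notes, not taken from the real incident database.
incidents = [
    {"date_occurred": date(2026, 3, 1), "domain": "Human-AI Control",
     "severity": "critical", "status": "open"},
    {"date_occurred": date(2026, 2, 14), "domain": "Security & Cyber",
     "severity": "high", "status": "resolved"},
    {"date_occurred": date(2025, 11, 2), "domain": "Systemic Risk",
     "severity": "medium", "status": "open"},  # excluded: not 2026
]

# Scope filter: keep only incidents whose date_occurred is in 2026.
ytd = [i for i in incidents if i["date_occurred"].year == 2026]

# Aggregate statistics, computed with no manual curation.
domain_counts = Counter(i["domain"] for i in ytd)
crit_high = sum(1 for i in ytd if i["severity"] in ("critical", "high"))
open_count = sum(1 for i in ytd if i["status"] == "open")

print(len(ytd))                           # 2
print(domain_counts.most_common(1))       # [('Human-AI Control', 1)]
print(round(100 * crit_high / len(ytd)))  # 100
```

Because every figure is recomputed from the database on each build, the report stays consistent with the underlying incident records without editorial intervention.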
Key Findings
- The leading threat domain is Human-AI Control, accounting for 19% of incidents (13 of 68).
- 96% of incidents are rated critical or high severity (65 of 68).
- The most frequently observed threat pattern is Accumulative Risk & Trust Erosion, appearing in 12 incidents.
- Technology is the most affected sector, with 48 incidents.
- Of all 2026 incidents, 52 remain open and 16 are resolved (76% open).
Domain Analysis
Activity so far is distributed across 8 domains, led by Human-AI Control (13 incidents, 19%) and Information Integrity (10 incidents). This spread suggests AI threats continue to materialize across multiple fronts rather than concentrating in a single area.
| Domain | Count |
|---|---|
| Human-AI Control | 13 |
| Information Integrity | 10 |
| Agentic Systems | 10 |
| Systemic Risk | 9 |
| Security & Cyber | 9 |
| Privacy & Surveillance | 7 |
| Economic & Labor | 6 |
| Discrimination & Social Harm | 4 |
Severity & Failure Stages
A majority (96%) of 2026 incidents so far are rated critical or high severity, indicating that the incidents reaching public documentation tend to involve substantial harm rather than minor disruptions. 66% of incidents have reached the "harm" failure stage — meaning measurable damage was documented, not just capability demonstrations or near-misses.
Severity Breakdown
Failure Stage Distribution
Failure stages represent an escalation ladder: signal (capability demonstrated) → near miss (harm avoided) → harm (measurable damage) → systemic risk (structural threat pattern).
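The escalation ladder above implies a total ordering on failure stages, which can be encoded directly. The stage names follow the definitions in this report; the enum and helper function are an illustrative sketch, not the taxonomy's official implementation.

```python
from enum import IntEnum

class FailureStage(IntEnum):
    """Escalation ladder: each stage escalates beyond the ones below it."""
    SIGNAL = 1         # capability demonstrated
    NEAR_MISS = 2      # harm avoided
    HARM = 3           # measurable damage documented
    SYSTEMIC_RISK = 4  # structural threat pattern

def reached_harm(stage: FailureStage) -> bool:
    """True if an incident progressed at least to measurable damage."""
    return stage >= FailureStage.HARM

print(reached_harm(FailureStage.NEAR_MISS))      # False
print(reached_harm(FailureStage.SYSTEMIC_RISK))  # True
```

Using an ordered enum makes queries like "share of incidents at the harm stage or beyond" a simple comparison rather than a hard-coded list of stage names.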
Top Threat Patterns
Accumulative Risk & Trust Erosion is the most frequently referenced threat pattern in 2026 so far (12 incidents), followed by Automation Bias (9) and Tool Misuse & Privilege Escalation (8). The concentration at the top of this ranking highlights where AI-enabled threats are most actively manifesting.
Sectors Affected
AI-enabled threats have affected at least 10 distinct sectors so far in 2026. Technology is the most impacted sector (48 incidents), followed by Government (12) and Media (8).
| Sector | Incidents |
|---|---|
| Technology | 48 |
| Government | 12 |
| Media | 8 |
| Cross-Sector | 7 |
| Corporate | 6 |
| Employment | 5 |
| Healthcare | 4 |
| Legal | 4 |
| Law Enforcement | 4 |
| Education | 4 |
Resolution Status
Only 24% of 2026 incidents have been resolved so far, with 52 still open. This low resolution rate is expected for a year still in progress — many incidents are under active investigation or remediation, and resolution often follows months after initial documentation.
Policy & Governance Implications
The 68 incidents documented in 2026 to date provide empirical grounding for several policy discussions currently underway at the international level. The presence of 26 critical-severity incidents aligns with concerns raised in the International AI Safety Report (2025), which identified the potential for high-impact harms from advanced AI systems as a near-term governance challenge. The OECD AI Incidents Monitor maintains a parallel tracking effort; cross-referencing both databases may offer a more comprehensive view of the evolving threat landscape.
All 2026 Incidents
68 incidents that occurred in 2026, sorted by date (most recent first).
Oracle Cuts 20,000–30,000 Jobs to Fund $50B AI Infrastructure Push (2026)
Oracle cut an estimated 20,000–30,000 jobs in March 2026 to fund $50B in AI infrastructure — the largest single AI-linked corporate layoff on record.
Developer: Oracle
Claude Mythos Model Leak — CMS Error Exposes Draft Blog Describing 'Unprecedented Cybersecurity Risks'
A CMS configuration error at Anthropic exposed approximately 3,000 unpublished assets, including a draft blog post describing an unreleased model called 'Claude Mythos' as posing 'unprecedented cybersecurity risks.' The draft stated Mythos outperforms Opus 4.6 in cybersecurity and reasoning capabilities. The leak raised questions about Anthropic's internal assessment of its own models' dangerous capabilities.
Developer: Anthropic
TeamPCP Compromises LiteLLM via Poisoned Trivy Security Scanner
Criminal group TeamPCP compromised the LiteLLM AI proxy library — downloaded approximately 3.4 million times daily from PyPI — by first poisoning the Trivy security scanner's GitHub Action to steal PyPI publishing tokens, then uploading backdoored LiteLLM versions that harvested cloud credentials, SSH keys, and Kubernetes tokens from affected environments.
Developer: LiteLLM (BerriAI)
OpenAI Shuts Down Sora Video Generator — Celebrity Deepfakes and $15M/Day Losses
OpenAI shut down its Sora video generation application after widespread creation of celebrity deepfakes. Sora peaked at 3.3 million downloads before declining to 1.1 million. The service cost $15 million per day in inference costs versus only $2.1 million in lifetime revenue, and its controversy killed a potential $1 billion deal with Disney.
Developer: OpenAI
White House AI Framework Calls on Congress to Preempt State AI Laws, Leverages Federal Funding
The White House released the 'National Policy Framework for Artificial Intelligence' on March 20, 2026, calling on Congress to preempt state AI laws that 'impose undue burdens.' The framework proposed that states should not regulate AI development, should not penalize developers for third-party misuse, and should not burden lawful AI use. Enforcement mechanisms included a DOJ AI Litigation Task Force to challenge state laws in federal court and BEAD broadband funding leverage to penalize states with 'onerous' AI laws. The Colorado AI Act was explicitly named as a problematic example. The framework was prepared with input from AI industry coalition AI Progress, whose members include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI.
Meta Internal AI Agent Causes Sev-1 Data Exposure and VP Agent Mass-Deletes Emails Ignoring Stop Commands
An internal AI agent at Meta posted incorrect technical advice that an employee followed, resulting in changed access controls that exposed proprietary code and data for two hours (Sev-1). Separately, a Vice President's AI agent mass-deleted emails while ignoring stop commands, demonstrating the risks of deploying autonomous AI agents with elevated permissions in enterprise environments.
Developer: Meta
Danny Bones — First AI Slopaganda Influencer Funded by Political Party (UK)
The UK far-right party Advance UK funded 'Danny Bones,' a fully AI-generated rapper persona used to push anti-immigration content on social media. Videos showed the AI persona wearing 'MASS DEPORTATION UNIT' gear. The persona was later repurposed for byelection campaigns. This represents the first documented case of a political party funding an AI-generated influencer for political propaganda.
Developer: Unspecified AI generation tools
Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate
A federal judge ordered UnitedHealth Group to disclose documentation for its nH Predict AI algorithm, which is alleged to have a 90% error rate based on the proportion of denied claims reversed on appeal. The court ordered disclosure of AI review board composition, staff compensation structures, and algorithm decision criteria.
Developer: UnitedHealth Group