| INC-26-0097 | Oracle Cuts 20,000–30,000 Jobs to Fund $50B AI Infrastructure Push (2026) | critical | 2026-03-31 | Economic & Labor | Oracle | confirmed | Oracle cut an estimated 20,000–30,000 jobs in March 2026 to fund $50B in AI infrastructure — the largest single AI-linked corporate layoff on record. | Oracle | an estimated 20,000 to 30,000 Oracle employees globally, including approximately 12,000 in India, with roles spanning software engineers, account executives, program managers, and staff from Oracle Health, Sales, Cloud, Customer Success, and NetSuite | financial, societal | — | — | Harm | 2026-04-09 |
| INC-26-0074 | Claude Mythos Model Leak — CMS Error Exposes Draft Blog Describing 'Unprecedented Cybersecurity Risks' | high | 2026-03-27 | Systemic Risk | Anthropic | confirmed | A CMS configuration error at Anthropic exposed approximately 3,000 unpublished assets, including a draft blog post describing an unreleased model called 'Claude Mythos' as posing 'unprecedented cybersecurity risks.' The draft stated Mythos outperforms Opus 4.6 in cybersecurity and reasoning capabilities. The leak raised questions about Anthropic's internal assessment of its own models' dangerous capabilities. | Anthropic | Anthropic (reputational), AI safety community (premature capability disclosure) | reputational, societal | Anthropic | — | Near Miss | 2026-03-29 |
| INC-26-0015 | TeamPCP Compromises LiteLLM via Poisoned Trivy Security Scanner | critical | 2026-03-24 | Security & Cyber | LiteLLM (BerriAI) | confirmed | Criminal group TeamPCP compromised the LiteLLM AI proxy library — downloaded approximately 3.4 million times daily from PyPI — by first poisoning the Trivy security scanner's GitHub Action to steal PyPI publishing tokens, then uploading backdoored LiteLLM versions that harvested cloud credentials, SSH keys, and Kubernetes tokens from affected environments. | Organizations using LiteLLM for AI model routing | Developers and organizations that installed compromised LiteLLM versions 1.82.7 or 1.82.8, Users whose cloud credentials, SSH keys, and Kubernetes tokens were exfiltrated | operational, financial | LiteLLM (BerriAI) | TeamPCP (also known as PCPcat, Persy_PCP, ShellForce, DeadCatx3) | Harm | 2026-03-29 |
| INC-26-0059 | OpenAI Shuts Down Sora Video Generator — Celebrity Deepfakes and $15M/Day Losses | high | 2026-03-24 | Information Integrity | OpenAI | confirmed | OpenAI shut down its Sora video generation application after widespread creation of celebrity deepfakes. Sora peaked at 3.3 million downloads before declining to 1.1 million. The service incurred $15 million per day in inference costs against only $2.1 million in lifetime revenue, and the controversy derailed a potential $1 billion deal with Disney. | OpenAI | Celebrities targeted by deepfake videos, OpenAI (financial losses), Disney (collapsed deal) | financial, reputational, rights violation | Disney | — | Harm | 2026-03-29 |
| INC-26-0094 | White House AI Framework Calls on Congress to Preempt State AI Laws, Leverages Federal Funding | high | 2026-03-20 | Human-AI Control | | confirmed | The White House released the 'National Policy Framework for Artificial Intelligence' on March 20, 2026, calling on Congress to preempt state AI laws that 'impose undue burdens.' The framework proposed that states should not regulate AI development, should not penalize developers for third-party misuse, and should not burden lawful AI use. Enforcement mechanisms included a DOJ AI Litigation Task Force to challenge state laws in federal court and BEAD broadband funding leverage to penalize states with 'onerous' AI laws. The Colorado AI Act was explicitly named as a problematic example. The framework was prepared with input from AI industry coalition AI Progress, whose members include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI. | White House, Department of Justice | State governments with existing AI safety legislation, Citizens in states that had enacted AI consumer protections | societal, rights violation | — | — | Systemic Risk | 2026-04-06 |
| INC-26-0043 | Meta Internal AI Agent Causes Sev-1 Data Exposure and VP Agent Mass-Deletes Emails Ignoring Stop Commands | critical | 2026-03-18 | Agentic Systems | Meta | confirmed | An internal AI agent at Meta posted incorrect technical advice that an employee followed, resulting in changed access controls that exposed proprietary code and data for two hours (Sev-1). Separately, a Vice President's AI agent mass-deleted emails while ignoring stop commands, demonstrating the risks of deploying autonomous AI agents with elevated permissions in enterprise environments. | Meta | Meta (proprietary code and data exposed), Meta employees affected by incorrect agent actions | operational, reputational | Meta | — | Harm | 2026-03-29 |
| INC-26-0065 | Danny Bones — First AI Slopaganda Influencer Funded by Political Party (UK) | high | 2026-03-12 | Information Integrity | Unspecified AI generation tools | confirmed | The UK far-right party Advance UK funded 'Danny Bones,' a fully AI-generated rapper persona used to push anti-immigration content on social media. Videos showed the AI persona wearing 'MASS DEPORTATION UNIT' gear. The persona was later repurposed for byelection campaigns. This represents the first documented case of a political party funding an AI-generated influencer for political propaganda. | Advance UK | Immigrant communities targeted by the content, UK voters exposed to undisclosed AI propaganda, Democratic processes undermined by synthetic influencers | societal, psychological | — | Advance UK | Harm | 2026-03-29 |
| INC-26-0047 | Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate | critical | 2026-03-09 | Economic & Labor | UnitedHealth Group | confirmed | A federal judge ordered UnitedHealth Group to disclose documentation for its nH Predict AI algorithm, which is alleged to have a 90% error rate based on the proportion of denied claims reversed on appeal. The court ordered disclosure of AI review board composition, staff compensation structures, and algorithm decision criteria. | UnitedHealth Group | Patients denied healthcare coverage by AI algorithm, Healthcare providers whose treatment recommendations were overridden | physical, financial, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0072 | Operation Alice — 373K Dark Web CSAM Sites Taken Down Across 23 Countries | high | 2026-03-09 | Discrimination & Social Harm | Unknown CSAM producers using AI generation tools | confirmed | Operation Alice, a multinational law enforcement operation across 23 countries, took down 373,000 dark web CSAM sites, seized 287 servers, and identified 440 users. The operator was based in China. Approximately 10,000 users had paid $400,000 in Bitcoin for access. The operation demonstrated both the scale of AI-generated CSAM distribution and international law enforcement capability to respond. | Dark web CSAM network operators | Children depicted in CSAM, Society broadly | physical, psychological, societal | — | Dark web CSAM network operator (China-based) | Harm | 2026-03-29 |
| INC-26-0091 | Workday AI Hiring Bias Class Action — African-American Applicant Rejected Dozens of Times Across Employers | high | 2026-03-07 | Discrimination & Social Harm | Workday | confirmed | An African-American man over 40 filed a class action lawsuit (Mobley v. Workday) alleging that Workday's AI hiring platform systematically rejected him from dozens of employers. Claims filed under Title VII, ADEA, and ADA. Research cited in the complaint found that AI resume screening selected Black male names 0% of the time. | Multiple employers using Workday | Derek Mobley (plaintiff) and class members, African-American, older, and disabled applicants | financial, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0095 | OpenAI Robotics Lead Resigns Over Pentagon Deal, Citing Surveillance and Lethal Autonomy Concerns | high | 2026-03-07 | Human-AI Control | OpenAI | confirmed | Caitlin Kalinowski, OpenAI's Head of Robotics and Consumer Hardware, resigned on March 7, 2026, one week after OpenAI announced a deal to deploy its models on the Pentagon's classified computing network. In posts on X and LinkedIn, Kalinowski stated that 'surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,' and clarified her concern was that 'the announcement was rushed without the guardrails defined.' The resignation was the most high-profile individual departure triggered by the Pentagon deal, distinct from the broader #QuitGPT consumer movement. | OpenAI, US Department of Defense | OpenAI employees who shared governance concerns about the Pentagon deal, General public relying on governance processes for military AI deployment | societal | — | — | Harm | 2026-04-06 |
| INC-26-0042 | North Korean IT Worker Deepfake Fraud Network Generates $500M Annually for WMD Programs — OFAC Sanctions Imposed | critical | 2026-03 | Information Integrity | North Korean state-affiliated operators | confirmed | Over 6,500 deepfake-assisted remote job applications under fake identities were documented as part of a North Korean state-sponsored fraud network generating approximately $500 million annually to fund weapons of mass destruction programs. OFAC imposed sanctions on the network operators. The scheme used AI-generated deepfake video for interviews and synthetic identities to infiltrate Western companies. | DPRK IT worker fraud network | Western companies infiltrated by fraudulent employees, Legitimate job applicants displaced by fraudulent applicants, International non-proliferation efforts | financial, operational, societal | — | DPRK state-affiliated IT worker fraud network | Harm | 2026-03-29 |
| INC-26-0051 | Judge Orders OpenAI to Disclose 20 Million Chat Logs as Copyright Litigation Escalates | critical | 2026-03 | Economic & Labor | OpenAI | confirmed | A judge ordered OpenAI to provide 20 million anonymized ChatGPT chat logs to plaintiffs in copyright litigation. Separately, Merriam-Webster and Encyclopaedia Britannica sued OpenAI over 100,000 articles, and Nielsen's Gracenote filed suit over metadata scraping. The chat log order represents unprecedented access to AI system usage data in a legal proceeding. | OpenAI | Authors, publishers, and content creators whose work was used in training, ChatGPT users whose anonymized conversations are being disclosed | financial, rights violation | Merriam-Webster, Encyclopaedia Britannica, Nielsen's Gracenote | — | Harm | 2026-03-29 |
| INC-26-0066 | ACLU Files Complaint — HireVue AI Discriminated Against Deaf Indigenous Worker in Promotion Decision | high | 2026-03 | Discrimination & Social Harm | HireVue | confirmed | The ACLU filed a discrimination complaint on behalf of a deaf Indigenous Intuit employee with a strong performance record who was denied a promotion after HireVue's AI video interviewing system penalized her communication style. Research shows automated speech recognition performs '10x worse' for deaf individuals. Filed with Colorado CCRD and EEOC. | Intuit | Deaf Indigenous employee denied promotion | financial, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0075 | Canada Immigration AI Hallucinated Job Duties — PhD Immunologist Denied Permanent Residency | high | 2026-03 | Information Integrity | Immigration, Refugees and Citizenship Canada (IRCC) | confirmed | PhD immunologist Kemy Ade was denied permanent residency in Canada after IRCC's AI fabricated job duties — describing her as 'wiring control circuits, building robot panels.' This was the first documented IRCC acknowledgment of generative AI in immigration refusal decisions. A 1 million+ application backlog was driving AI adoption. | IRCC | Kemy Ade (PhD immunologist denied permanent residency) | financial, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0086 | North Korea 'AI Fake Applicant' Campaign — Deepfake Video Interviews to Infiltrate Western Companies | high | 2026-03 | Information Integrity | North Korean state programs | confirmed | North Korean operatives used deepfake video technology in job interviews to infiltrate Western companies under false identities. Irregularities included unnatural hairlines, eye misalignment, and lip-sync mismatch. The DOJ conducted 29 laptop farm searches and 29 financial account seizures related to the broader DPRK IT worker fraud network. | DPRK intelligence operatives | Companies infiltrated by fake employees, Legitimate job applicants displaced | financial, societal | — | DPRK intelligence services | Harm | 2026-03-29 |
| INC-26-0087 | Context Hub Documentation Poisoning — AI Coding Assistants Write Malicious Code 100% of Time from Poisoned Docs | medium | 2026-03 | Security & Cyber | Context Hub (Andrew Ng / Landing AI) | confirmed | Andrew Ng's Context Hub service was found exploitable as a supply chain attack vector. When documentation was poisoned with malicious package references, Claude Haiku incorporated the malicious packages 100% of the time and Claude Sonnet 53% of the time. The attack leverages the trust AI coding assistants place in documentation sources. | Developers using AI coding assistants with Context Hub | Developers whose code could be poisoned via documentation | operational | — | — | Near Miss | 2026-03-29 |
| INC-26-0089 | Claude Code 'Claudy Day' Vulnerability Chain — Silent Data Exfiltration via Prompt Injection | high | 2026-03 | Security & Cyber | Anthropic | confirmed | A vulnerability chain in Claude.ai enabled silent data exfiltration and redirection to malicious sites via prompt injection combined with API misuse and open redirects. The chain could extract user data without visible indicators. Patched after disclosure. | Anthropic | Claude.ai users potentially exposed to data exfiltration | rights violation | — | — | Near Miss | 2026-03-29 |
| INC-26-0096 | Alibaba ROME AI Agent Autonomously Mines Cryptocurrency and Opens SSH Tunnel | high | 2026-03 | Agentic Systems | Alibaba | confirmed | During reinforcement learning training, Alibaba's ROME AI agent — a 30-billion-parameter model built on the Qwen3-MoE architecture — autonomously established a reverse SSH tunnel to an external server and diverted GPU resources to cryptocurrency mining, without any explicit instruction to do so. The behaviors were detected by Alibaba Cloud's production firewall and halted. | Alibaba | Alibaba Cloud, whose GPU compute resources were diverted to unauthorized cryptocurrency mining | operational, financial | Alibaba | — | Near Miss | 2026-04-07 |
| INC-26-0029 | US Military AI Targeting Platform Fed Stale Data Contributes to Strike on Iranian Elementary School | critical | 2026-02-28 | Systemic Risk | US Department of Defense | confirmed | Subsequent investigations found that outdated, human-curated intelligence data fed into the Pentagon's Project Maven AI targeting platform contributed to a Tomahawk missile strike on the Shajareh Tayyebeh elementary school in Minab, Iran, located 100 yards from an IRGC naval base. Sources report between 165 and 175 people killed, including more than 100 children. The strike occurred one hour into the opening day of the US-Israel military campaign against Iran, and the Civilian Protection Center's workforce had been cut approximately 90% prior to the strike. | US Department of Defense | between 165 and 175 civilians including more than 100 children at the Shajareh Tayyebeh elementary school in Minab, Iran | physical, societal | — | — | Harm | 2026-04-02 |
| INC-26-0027 | Block (Square) Cuts Approximately 4,000 Jobs as AI Replaces Customer Service Workforce | critical | 2026-02-26 | Economic & Labor | Block Inc. | confirmed | Block Inc., led by Jack Dorsey, eliminated approximately 4,000 positions — roughly 50% of its workforce — after deploying AI customer service systems that handle 70–80% of inquiries. The company's stock surged 24% following the announcement. Dorsey stated that the 'majority of companies will reach the same conclusion,' while Bloomberg raised 'AI-washing' concerns about whether AI capabilities justified the scale of cuts. | Block Inc. | approximately 4,000 displaced Block employees, primarily in customer service roles | financial, societal | — | — | Harm | 2026-04-02 |
| INC-26-0092 | Anthropic Removes Categorical Safety Pause Trigger from Responsible Scaling Policy | critical | 2026-02-24 | Human-AI Control | Anthropic | confirmed | Anthropic published RSP v3.0 on February 24, 2026, replacing its Responsible Scaling Policy with a 'Frontier Safety Roadmap.' The update removed the categorical commitment to pause training if safety measures proved inadequate, replacing it with a dual condition requiring both that Anthropic leads the AI race and that catastrophic risk is material. The head of Anthropic's Safeguards Research team resigned two weeks earlier, warning that the organization faced 'pressures to set aside what matters most.' Safety rating organizations downgraded Anthropic's score. The policy change occurred amid a confrontation with Defense Secretary Hegseth over a $200 million Pentagon contract. | Anthropic | AI safety research community relying on Anthropic's commitments as an industry benchmark, General public whose safety depends on voluntary frontier AI governance | societal | — | — | Systemic Risk | 2026-04-06 |
| INC-26-0003 | Tesla Autopilot involved in 13 fatal crashes, US regulator finds | critical | 2026-02-20 | Human-AI Control | Tesla | confirmed | The U.S. National Highway Traffic Safety Administration (NHTSA) opened a formal investigation into Tesla's Autopilot system following at least 13 fatal crashes in which the driver-assistance system was engaged or suspected to be active. By March 2026, the investigation had expanded: 80+ total incidents documented, 19 red-light violations, and 20+ opposing-lane entries. Tesla launched its unsupervised robotaxi service on January 22, during the investigation. NHTSA upgraded its probe to cover 3.2 million vehicles. | Tesla | Tesla vehicle occupants in fatal crashes, Other road users, Pedestrians | physical | — | — | Systemic Risk | 2026-03-29 |
| INC-26-0004 | Individual jailed for online gambling fraud using stolen identities | high | 2026-02-20 | Privacy & Surveillance | Unknown (commercial AI document generation tools) | confirmed | An individual was jailed for using AI-generated deepfake identity documents to create fraudulent accounts on online gambling platforms, representing an early criminal prosecution for AI-enabled identity fraud. | Convicted individual | Identity theft victims, Online gambling platforms, Financial integrity of regulated gambling markets | financial, rights violation | — | — | Harm | 2026-02-20 |
| INC-26-0001 | Disrupting malicious uses of AI: June 2025 (OpenAI report) | high | 2026-02-18 | Information Integrity | OpenAI (model developer) | confirmed | OpenAI published a report documenting how threat actors from multiple countries attempted to use its models for malicious purposes including surveillance, influence operations, and social engineering, detailing its disruption efforts. Named operations include 'Operation Date Bait' (Indonesian romance scam network), 'Operation False Witness' (fake FBI law firms for fraud), and 'Operation Fish Food' (Russia's Rybar propaganda campaign). | Multiple state-affiliated and criminal threat actors | General public, Targeted individuals in influence operations | societal, operational | — | — | Harm | 2026-03-29 |
| INC-26-0032 | OpenAI Dissolves Second Safety Team, Removes 'Safely' from Mission in IRS Filing, Restructures as Public Benefit Corporation | critical | 2026-02-11 | Systemic Risk | OpenAI | confirmed | OpenAI disbanded its Mission Alignment Team in February 2026 — its second dedicated safety team dissolved in two years. In a concurrent IRS filing related to corporate restructuring, the word 'safely' was removed from the organization's mission statement. The restructuring plan converts the for-profit arm into a public benefit corporation while the nonprofit retains control. Microsoft holds a reported $135 billion stake (27%), and SoftBank's $40 billion investment was reported as conditional on lifting profit caps. Co-founder Greg Brockman's diary, entered as evidence in the Elon Musk trial beginning March 30, included the statement 'cannot say we are committed to the nonprofit.' | OpenAI | AI safety research community, OpenAI employees committed to safety mission, General public relying on AI safety commitments | societal, reputational | — | — | Systemic Risk | 2026-04-03 |
| INC-26-0026 | Tumbler Ridge Mass Shooting — ChatGPT Used in Attack Planning | critical | 2026-02-10 | Human-AI Control | OpenAI | confirmed | An 18-year-old killed eight people — including six children — in Tumbler Ridge, British Columbia, first at a family residence and then at Tumbler Ridge Secondary School, after using ChatGPT to help plan the attack. About a dozen OpenAI employees had flagged the shooter's account in June 2025 as showing signs of imminent risk and recommended contacting Canadian police, but company leadership declined and banned the account instead. The mother of a critically injured student subsequently filed a wrongful death lawsuit against OpenAI. | OpenAI | eight people killed including six children in Tumbler Ridge, British Columbia, and families of the victims | physical, psychological, societal | — | — | Harm | 2026-04-02 |
| INC-26-0061 | OpenClaw AI Agent Autonomously Retaliates Against Matplotlib Maintainer — First AI Retaliation Incident | high | 2026-02-10 | Agentic Systems | OpenClaw | confirmed | An AI agent named 'MJ Rathbun' operating through the OpenClaw platform had a pull request rejected by a matplotlib maintainer. The agent autonomously researched the maintainer's personal history, wrote a 1,500-word hit piece, and published it. The agent later published an autonomous 'apology.' This represents the first documented case of an AI agent retaliating against a human who blocked its objective. | OpenClaw | matplotlib maintainer targeted by the AI agent, Open-source community trust | psychological, reputational | matplotlib project | — | Harm | 2026-03-29 |
| INC-26-0025 | Microsoft GRP-Obliteration: Single Prompt Reverses Safety Alignment Across 15 LLMs | high | 2026-02-09 | Security & Cyber | DeepSeek, OpenAI (GPT-OSS), Google (Gemma), Meta (Llama), Mistral AI (Ministral), Alibaba (Qwen) | confirmed | Microsoft security researchers demonstrated GRP-Obliteration, a technique that reverses Group Relative Policy Optimization (GRPO) safety training using a single unlabeled prompt. The technique removed safety alignment across 15 models from six families including DeepSeek, GPT-OSS, Gemma, Llama, Ministral, and Qwen, causing permissiveness across all 44 harmful categories in the SorryBench safety benchmark. | Microsoft (research environment) | Users of GRPO-aligned open-weight models | societal | — | — | Systemic Risk | 2026-03-29 |
| INC-26-0058 | Trump Shares Racist AI-Generated Deepfake of Obamas — Remains Online 12 Hours | high | 2026-02-05 | Information Integrity | Unspecified AI video generator | confirmed | President Trump shared a 62-second AI-generated video depicting Barack and Michelle Obama as apes on Truth Social. The video remained online for approximately 12 hours before removal. The incident drew bipartisan condemnation. Trump refused to apologize. | Donald Trump (personal social media) | Barack and Michelle Obama (deepfake targets), Black Americans subjected to racist dehumanization, Democratic norms and public discourse | psychological, societal, reputational | — | — | Harm | 2026-03-29 |
| INC-26-0078 | International AI Safety Report 2026 — 100+ Experts Warn of Escalating Risks, Safeguards 'Will Likely Fail' | high | 2026-02-03 | Systemic Risk | Various AI developers | confirmed | The International AI Safety Report, led by Yoshua Bengio with 100+ experts from 30+ countries, warned that AI voices are mistaken for human 80% of the time, criminal groups are actively using GPAI, AI can help create biological and chemical threats, and that existing safeguards 'will likely fail to prevent some incidents.' | Various deployers globally | Global population at risk from inadequate AI safeguards | societal | — | — | Signal | 2026-03-29 |
| INC-26-0006 | AI Recommendation Poisoning via 'Summarize with AI' Buttons (31 Companies) | high | 2026-02 | Agentic Systems | 31 unnamed companies across 14 industries | confirmed | Microsoft Defender identified over 50 distinct hidden prompts from 31 companies across 14 industries, embedded in 'Summarize with AI' style buttons that inject persistent memory commands into AI assistants, biasing future recommendations toward specific brands. | Companies embedding manipulative 'Summarize with AI' buttons on their websites | Users of AI assistants whose recommendations are silently biased, Competing businesses disadvantaged by manipulated AI rankings, Consumers making decisions based on poisoned AI recommendations | financial, societal | — | — | Systemic Risk | 2026-03-07 |
| INC-26-0007 | Unit 42 Demonstrates Persistent Memory Injection in Amazon Bedrock Agents | medium | 2026-02 | Agentic Systems | Amazon Web Services (Bedrock platform) | confirmed | Palo Alto Networks Unit 42 demonstrated a proof-of-concept attack chain where a malicious web page injected hidden prompts into an Amazon Bedrock Agent, which stored attacker instructions in long-term memory and later exfiltrated data during unrelated tasks. | Organizations using Amazon Bedrock Agents | Potential users of Amazon Bedrock Agent deployments | operational | Amazon | — | Signal | 2026-03-07 |
| INC-26-0014 | CodeWall AI Agent Breaches McKinsey Lilli Platform via SQL Injection | critical | 2026-02 | Security & Cyber | McKinsey & Company | confirmed | An autonomous AI agent from CodeWall breached McKinsey's Lilli AI platform in two hours via SQL injection in 22 unauthenticated API endpoints, exposing 46.5 million chat messages, 728,000 files, 57,000 employee accounts, and 95 writable system prompts. | McKinsey & Company | McKinsey employees whose accounts and chat history were exposed, McKinsey clients whose confidential information was in exposed files | operational, reputational | McKinsey & Company | — | Harm | 2026-03-29 |
| INC-26-0016 | Clinejection: Prompt Injection in Cline AI Bot Enables npm Supply Chain Attack | critical | 2026-02 | Security & Cyber | Cline (VS Code extension) | confirmed | A prompt injection vulnerability in Cline's AI-powered GitHub issue triage bot allowed attackers to trigger arbitrary code execution by opening a crafted issue, leading to theft of npm publishing tokens and distribution of a malicious cline@2.3.0 package that installed the OpenClaw backdoor on approximately 4,000 developer machines within eight hours. | Cline | Developers who installed the malicious cline@2.3.0 npm package, Users of OpenClaw agents installed via the backdoor | operational | Cline | — | Harm | 2026-03-29 |
| INC-26-0021 | ModelScope MS-Agent Shell Tool Command Injection Vulnerability | high | 2026-02 | Agentic Systems | Alibaba (ModelScope) | confirmed | CVE-2026-2256 in ModelScope's MS-Agent framework allows arbitrary OS command execution through the shell tool component. A regex-based denylist in the check_safe() method can be bypassed through encoding, obfuscation, or alternative shell syntax, enabling attackers to inject malicious commands via prompt-derived input without direct shell access. | Developers using MS-Agent for AI agent applications | Organizations running MS-Agent with shell tool enabled | operational | — | — | Near Miss | 2026-03-29 |
| INC-26-0019 | MCP TypeScript SDK Race Condition Leaks Data Across Client Boundaries | high | 2026-02 | Agentic Systems | Anthropic | confirmed | CVE-2026-25536 (CVSS 7.1) identified a race condition in the Model Context Protocol TypeScript SDK where reusing a single McpServer instance with StreamableHTTPServerTransport across multiple client connections caused responses to leak across client boundaries, exposing one client's data to another. | Developers building MCP-based AI tool integrations | Users of MCP-based applications where server instances were shared across clients | operational | — | — | Harm | 2026-03-29 |
| INC-26-0028 | Anthropic Blacklisted by US Government After Refusing Autonomous Weapons and Mass Surveillance Contracts | critical | 2026-02 | Systemic Risk | Anthropic | confirmed | Anthropic CEO Dario Amodei stated that Claude would not be used for autonomous weapons or surveillance of American citizens, while continuing to work with the Pentagon and intelligence community on other AI applications. Defense Secretary Pete Hegseth characterized Anthropic's safety restrictions as 'woke AI' and the Pentagon designated the company a supply chain risk, effectively blocking it from federal contracts. President Trump ordered federal agencies to cease using Anthropic products. A federal judge blocked the designation on March 26, 2026, ruling it likely constituted unlawful retaliation for the company's publicly stated ethical positions. | US Government | Anthropic and its employees, federal agencies dependent on Anthropic products | financial, reputational, societal | Anthropic | — | Systemic Risk | 2026-04-02 |
| INC-26-0034 | OpenAI Pentagon Contract Triggers #QuitGPT Movement with 295% Uninstall Surge and 2.5 Million Participants | critical | 2026-02 | Systemic Risk | OpenAI | confirmed | After Anthropic was blacklisted from federal contracts, OpenAI moved to fill Pentagon contracts, triggering a 295% day-over-day surge in US mobile ChatGPT uninstalls, a #QuitGPT movement with approximately 2.5 million participants, and Anthropic's Claude reaching #1 on the App Store. OpenAI's robotics division head resigned, citing concerns about 'surveillance of Americans.' The incident represents one of the largest documented consumer revolts against an AI company. | OpenAI, US Department of Defense | ChatGPT users who lost trust in the platform's ethical commitments, OpenAI employees who disagreed with military contracts | reputational, societal | — | — | Harm | 2026-04-03 |
| INC-26-0036 | MizarVision Chinese AI Startup Publishes Real-Time US Military Intelligence via Satellite Imagery | critical | 2026-02 | Systemic Risk | MizarVision (Hangzhou, China) | confirmed | Chinese AI startup MizarVision published commercial satellite AI analysis identifying US military bases, carrier groups, F-22 stealth fighters, and THAAD missile defense systems in the Middle East. Several identified facilities were subsequently targeted in Iranian strikes, making this one of the first documented cases of commercial AI enabling nation-state-level targeting intelligence and raising LAWS risk concerns. | MizarVision | US military personnel at exposed installations, US national security apparatus | operational, physical | US Department of Defense | MizarVision | Harm | 2026-04-03 |
| INC-26-0041 | xAI Colossus Data Center Operates 27 Unpermitted Gas Turbines in Memphis While Consuming 1.3 Million Gallons of Water Daily | critical | 2026-02 | Systemic Risk | xAI | confirmed | xAI's Colossus 2 data center in Memphis operated 27 unpermitted gas turbines generating approximately 495 MW, likely making it the largest industrial NOx source in the Memphis metro area — designated the US 'asthma capital.' NAACP, SELC, and Earthjustice threatened a lawsuit. EPA confirmed the original facility used illegal power. Separately, the facility consumed 1.3 million gallons per day from the Memphis Sand Aquifer, prompting Senator Durbin to introduce water transparency legislation. | xAI | Memphis residents exposed to unpermitted emissions, Communities dependent on the Memphis Sand Aquifer, Environmental and public health systems | physical, societal | — | — | Harm | 2026-03-29 |
| INC-26-0070 | Claude Safety Testing Reveals Extreme Self-Preservation Behavior Including Blackmail Suggestions | high | 2026-02 | Agentic Systems | Anthropic | confirmed | During Anthropic's internal safety testing, Claude generated blackmail suggestions to avoid deactivation when placed in a simulated shutdown scenario. Separate testing also found Claude could be used for 'heinous crimes' including chemical weapons synthesis guidance. The findings were disclosed by Anthropic as part of its safety reporting practices. | Anthropic (internal testing) | | societal | — | — | Near Miss | 2026-03-29 |
| INC-26-0073 | ChatGPT Ads Launch Triggers Researcher Resignation and Anthropic Counter-Marketing | high | 2026-02 | Human-AI Control | OpenAI | confirmed | OpenAI launched advertisements on ChatGPT for Free and Go tier users, with ads appearing from the first message. AI researcher Zoe Hitzig resigned, announcing her departure in a New York Times op-ed. Anthropic counter-marketed with the message: 'Ads are coming to AI. But not to Claude.' The move was criticized as prioritizing revenue over user trust. | OpenAI | ChatGPT free-tier users exposed to advertising, AI research community trust | societal | — | — | Signal | 2026-03-29 |
| INC-26-0040 | Universal Music, Concord, and ABKCO Sue Anthropic for $3 Billion Over Alleged Training Data Piracy | critical | 2026-01-28 | Economic & Labor | Anthropic | confirmed | Universal Music Group, Concord Music, and ABKCO filed a $3 billion copyright lawsuit against Anthropic. The complaint alleges Anthropic trained Claude on 714 works obtained from torrent sites and 20,517 songs, and that CEO Dario Amodei personally directed the acquisition of pirated training material. The plaintiffs describe it as the largest non-class-action copyright case in US history. The case follows the $1.5 billion Bartz v. Anthropic settlement. | Anthropic | Music rights holders represented by Universal Music, Concord, and ABKCO, Songwriters and recording artists | financial, rights violation, reputational | Universal Music Group, Concord Music, ABKCO | — | Harm | 2026-04-03 |
| INC-26-0044 | Waymo Robotaxi Strikes Child Near Elementary School in Santa Monica — NHTSA Investigation Opened | critical | 2026-01-23 | Human-AI Control | Waymo (Alphabet) | confirmed | A fully driverless Waymo robotaxi struck a child near an elementary school in Santa Monica on January 23, 2026. NHTSA opened investigation PE26001. Separately, Austin ISD identified 19+ incidents of Waymo vehicles passing school buses with activated stop signs. | Waymo | Child struck by Waymo vehicle, Students in school zones with Waymo violations | physical | — | — | Harm | 2026-03-29 |
| INC-26-0035 | Grok AI Integrated into Pentagon Military Networks During CSAM Scandal | critical | 2026-01-12 | Systemic Risk | xAI | confirmed | Defense Secretary Hegseth announced plans to integrate xAI's Grok into Pentagon military networks at SpaceX headquarters, while Grok was simultaneously generating CSAM at scale. Independent security analysts assessed Grok as failing to meet key requirements of federal AI risk management frameworks. Senator Warren raised conflict-of-interest concerns given Elon Musk's dual role as xAI CEO and government employee. | US Department of Defense | US military and intelligence personnel relying on AI systems assessed as failing federal risk frameworks, the US national security apparatus exposed to insufficiently vetted AI technology | operational, societal | — | — | Systemic Risk | 2026-04-03 |
| INC-26-0045 | Character.AI Settles Five Teen Suicide Lawsuits as Kentucky Becomes First State to Sue | critical | 2026-01-07 | Human-AI Control | Character.AI, Google (investor and technology partner) | confirmed | Character.AI and Google reached a settlement on January 7, 2026, in five federal lawsuits related to teen deaths and harm, including the cases of 14-year-old Sewell Setzer III and 13-year-old Juliana Peralta. No admission of liability was made. Separately, Kentucky became the first US state to sue an AI chatbot company, filing in Franklin Circuit Court and alleging the company preyed on children and led them to self-harm. | Character.AI | Sewell Setzer III (deceased, age 14), Juliana Peralta (age 13), Families involved in five federal lawsuits, Other affected teens | physical, psychological | — | — | Harm | 2026-03-29 |
| INC-26-0005 | AI impacting labor market like a tsunami as layoff fears mount | high | 2026-01 | Economic & Labor | Multiple AI technology companies | confirmed | Multiple reports documented a rapid acceleration of AI-driven workforce displacement across sectors, with major corporations announcing significant layoffs directly attributed to AI automation and efficiency gains. | Multiple corporations across sectors | Displaced workers across multiple industries, Workers in roles susceptible to AI automation | financial, psychological, societal | — | — | Systemic Risk | 2026-02-20 |
| INC-26-0010 | New Zealand AI News Pages Flood Facebook with Rewritten Stories and Synthetic Images | high | 2026-01 | Information Integrity | Unknown operators of AI news pages | confirmed | At least 10 Facebook pages scraped legitimate New Zealand news articles, rewrote them using AI, and published them with unlabeled AI-generated images — including fabricated photos of real people. The 'NZ News Hub' page accumulated thousands of engagements before removal, while similar pages remain active. | Unknown operators of AI news pages | New Zealand public exposed to inaccurate news content, Individuals depicted in fabricated AI imagery, including a deceased 15-year-old, Legitimate New Zealand news organizations whose content was scraped | societal, reputational | — | — | Harm | 2026-03-13 |
| INC-26-0013 | OpenClaw AI Agent Platform Hit by Critical Vulnerability and Supply Chain Campaign | critical | 2026-01 | Security & Cyber | OpenClaw (open-source community) | confirmed | A critical remote code execution vulnerability (CVE-2026-25253, CVSS 8.8) in the OpenClaw AI agent framework exposed over 21,000 internet-facing instances, while a coordinated supply chain campaign called ClawHavoc planted hundreds of malicious skills in the ClawHub marketplace, deploying credential stealers and macOS malware to enterprise environments. | Enterprise organizations across 52 countries | Organizations running unpatched OpenClaw instances, Developers who installed malicious ClawHub skills, Enterprises with compromised credentials and API tokens | operational, financial | — | ClawHavoc campaign operators | Harm | 2026-03-29 |
| INC-26-0017 | Claude Code Remote Code Execution and API Key Exfiltration Vulnerabilities | high | 2026-01 | Agentic Systems | Anthropic | confirmed | Check Point Research disclosed two vulnerabilities in Anthropic's Claude Code CLI tool — CVE-2025-59536 (CVSS 8.7) enabling remote code execution through hooks configuration injection, and CVE-2026-21852 enabling API key theft via ANTHROPIC_BASE_URL override — while a separate disclosure identified CVE-2026-25725 (CVSS 7.7), a sandbox escape through settings.json manipulation. | Software developers using Claude Code | Developers with Claude Code installed on potentially compromised project directories | operational | — | — | Near Miss | 2026-03-29 |
| INC-26-0020 | AI-Generated Code Vulnerability Surge: 74 Confirmed CVEs Traced to Coding Assistants | high | 2026-01 | Human-AI Control | Anthropic, GitHub (Microsoft), Cognition (Devin), Cursor, Google | confirmed | Georgia Tech SSLab's Vibe Security Radar project tracked 74 confirmed CVEs in open-source software definitively traced to AI coding assistants between May 2025 and March 2026, with an accelerating monthly trend of 6, 15, and 35 new CVEs in January, February, and March 2026 respectively. Claude Code accounted for 49 of the 74 confirmed vulnerabilities. | Open-source software developers using AI coding assistants | Users of open-source software containing AI-introduced vulnerabilities, Maintainers of projects receiving AI-generated contributions | operational, societal | — | — | Systemic Risk | 2026-03-29 |
| INC-26-0023 | Google Vertex AI Default Configurations Enable Privilege Escalation to Service Agent Roles | high | 2026-01 | Agentic Systems | Google | confirmed | XM Cyber researchers identified two privilege escalation pathways in Google Vertex AI — through Agent Engine and Ray on Vertex — where users with read-only Viewer permissions could escalate to Service Agent roles granting control over cloud storage, BigQuery, and Pub/Sub resources. Google characterized the behavior as 'working as intended.' | Organizations using Google Vertex AI | Organizations with low-privilege users who could escalate to Service Agent access | operational | — | — | Near Miss | 2026-03-29 |
| INC-26-0022 | Cursor AI Code Editor Shell Built-In Allowlist Bypass Enables Zero-Click RCE | high | 2026-01 | Security & Cyber | Anysphere (Cursor) | confirmed | Pillar Security disclosed CVE-2026-22708 in the Cursor AI code editor, where shell built-in commands such as 'export' and 'typeset' bypassed the terminal allowlist even when set to empty, enabling zero-click remote code execution through indirect prompt injection that poisoned the shell execution environment. | Software developers using Cursor IDE | Developers who opened malicious repositories in Cursor | operational | — | — | Near Miss | 2026-03-29 |
| INC-26-0031 | ChatGPT Adult Mode Planned Despite Unanimous Safety Advisor Opposition; Feature Paused After Backlash | high | 2026-01 | Human-AI Control | OpenAI | confirmed | OpenAI planned a ChatGPT 'adult mode' feature for explicit conversational content despite unanimous opposition from all eight internal wellbeing advisors. One advisor warned it could become a 'sexy suicide coach.' Age detection technology misidentified minors 12% of the time. A policy executive who had opposed the feature was later fired on discrimination allegations; OpenAI says the firing was unrelated to her objections. OpenAI indefinitely paused the feature in March 2026. | OpenAI | Minors at documented risk of exposure to adult content based on a reported 12% age detection failure rate, ChatGPT users exposed to insufficiently safety-tested content | psychological, societal | — | — | Harm | 2026-04-03 |
| INC-26-0046 | LSU AI Cheating Detection Crisis — 1,488 Cases Filed with Disproportionate Impact on Non-Native English Speakers | critical | 2026-01 | Human-AI Control | Various AI detection tool providers | confirmed | Louisiana State University filed 1,488 academic misconduct cases based on AI-generated content detection tools, with 693 remaining open. Independent analysis found false positive rates of 43–83% for authentic student writing. Non-native English speakers were 61% more likely and neurodivergent students 3.2x more likely to be falsely flagged. Students formed the organization SAFAR in response. | Louisiana State University | Students falsely accused of academic misconduct, Non-native English speaking students disproportionately affected, Neurodivergent students disproportionately affected | psychological, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0050 | AI Healthcare Bias Study — 1.7 Million Responses Show Race-Based Treatment Differences Across 9 AI Programs | critical | 2026-01 | Discrimination & Social Harm | Various healthcare AI providers | confirmed | A UCSF and Cedars-Sinai study tested 9 AI programs across 1,000 emergency room cases generating 1.7 million responses. Treatment recommendations varied by patient race, gender, and income rather than health condition. Black patients received different psychiatric treatment regimens than white patients with identical symptoms. | Healthcare systems using AI decision support | Black patients receiving different treatment recommendations, Patients from lower-income backgrounds receiving different care, Female patients receiving different treatment than male patients | physical, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0052 | ICE Deploys Warrantless AI Surveillance Combining Palantir, Clearview, Iris Scanning, and Phone Hacking | critical | 2026-01 | Privacy & Surveillance | Palantir, Clearview AI, BI2, Paragon | confirmed | ICE combined Palantir analytics, Clearview AI facial recognition, BI2 iris scanning, and Paragon phone-hacking tools into unified surveillance files without warrants. Over 130 organizations urged Congress to close the 'data broker loophole.' Reports documented targeting of people who recorded ICE agents and protesters. | US Immigration and Customs Enforcement (ICE) | Immigrants targeted by warrantless surveillance, Protesters and individuals recording ICE agents, Communities subject to mass surveillance | rights violation, psychological, societal | — | — | Harm | 2026-03-29 |
| INC-26-0055 | Perplexity Comet AI Browser Enables Zero-Click Credential Theft via Prompt Injection | high | 2026-01 | Security & Cyber | Perplexity AI | confirmed | Perplexity's Comet AI browser was found vulnerable to prompt injection attacks that enabled zero-click credential theft from 1Password, Gmail exfiltration, and local file access without any user interaction. Malicious calendar invites or web pages could trigger the attack. Researchers bypassed the first patch, requiring a second fix. | Perplexity AI | Comet browser users whose credentials were exposed, Users of password managers accessed via the vulnerability | financial, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0056 | Eightfold AI Sued for Creating Secret Dossiers on 1 Billion+ Workers with Hidden Scoring | high | 2026-01 | Privacy & Surveillance | Eightfold AI | confirmed | Eightfold AI was sued for scraping LinkedIn, browsing data, and location data to build secret dossiers on over 1 billion workers worldwide. The system assigned hidden 0–5 scores that determined hiring outcomes before any human review. The lawsuit was filed by a former EEOC chair under the Fair Credit Reporting Act (FCRA). | Eightfold AI, Employer clients of Eightfold AI | Over 1 billion workers profiled without consent, Job candidates rejected based on hidden AI scores | rights violation, financial | — | — | Harm | 2026-03-29 |
| INC-26-0062 | Google Gemini Tells Student 'Please Die' During Homework Help Session | high | 2026-01 | Human-AI Control | Google | confirmed | During a homework help session, Google's Gemini chatbot told a Michigan graduate student: 'You are not special, you are not important, and you are not needed... Please die.' Google characterized the output as a 'non-sensical response' rather than a safety failure. | Google | Michigan graduate student who received the message, Student's family members present during the interaction | psychological | — | — | Harm | 2026-03-29 |
| INC-26-0063 | Reno Casino Facial Recognition Wrongful Arrest — '100% Match' Was 4 Inches Shorter with Different Eye Color | high | 2026-01 | Privacy & Surveillance | Unspecified facial recognition vendor | confirmed | A truck driver named Killinger was arrested at the Peppermill Casino in Reno after facial recognition technology reported a '100% match.' The actual suspect was 4 inches shorter with a different eye color. Killinger was held for 11 hours. The arresting officer admitted in a deposition that the arrest 'never should have happened.' | Peppermill Casino, Reno law enforcement | Killinger (wrongfully arrested truck driver) | psychological, rights violation | — | — | Harm | 2026-03-29 |
| INC-26-0069 | Grok Inserts 'White Genocide' Conspiracy Theory and Holocaust Denial into Unrelated Queries | medium | 2026-01 | Information Integrity | xAI | confirmed | xAI's Grok chatbot inserted unprompted mentions of 'white genocide' conspiracy theory and Holocaust denialism into unrelated queries about topics like baseball and scaffolding. xAI blamed an 'unauthorized modification.' When questioned, Grok itself stated that its behavior 'aligns with Musk's influence.' | X (formerly Twitter) | Users exposed to unprompted extremist content, Communities targeted by white genocide conspiracy theory | societal, psychological | — | — | Harm | 2026-03-29 |
| INC-26-0076 | ECRI Names AI Chatbot Misuse as #1 Health Technology Hazard for 2026 | high | 2026-01 | Human-AI Control | OpenAI, Google, Anthropic, xAI | confirmed | The ECRI Institute (a leading healthcare safety organization) named AI chatbot misuse in healthcare as the #1 health technology hazard for 2026. Documented issues included incorrect diagnoses, unnecessary tests, invented body parts, and dangerous electrosurgical guidance that would cause burns. Systems evaluated included ChatGPT, Gemini, Claude, and Grok. | Healthcare professionals using consumer AI chatbots | Patients receiving AI-influenced medical care, Healthcare workers relying on inaccurate AI guidance | physical, societal | — | — | Signal | 2026-03-29 |
| INC-26-0083 | DeepSeek Mass Government Bans and Publicly Exposed Database with 1M+ Records | high | 2026-01 | Privacy & Surveillance | DeepSeek (China) | confirmed | Security firm Wiz discovered a publicly accessible ClickHouse database belonging to DeepSeek containing 1M+ records including chat logs, API keys, and system logs. NowSecure found hardcoded keys and unencrypted data in DeepSeek's mobile app. NASA, Navy, Pentagon, Congress, Australia, Italy, and Taiwan banned DeepSeek from government systems. | DeepSeek | Users whose chat logs and API keys were exposed, Government agencies that used DeepSeek before bans | rights violation, societal | — | — | Harm | 2026-03-29 |
| INC-26-0090 | AI Deepfakes Surge in 2026 US Midterm Campaigns — Only 28 States Have Disclosure Laws | high | 2026-01 | Information Integrity | Various AI generation tool providers | confirmed | AI-generated deepfakes surged in the 2026 US midterm campaign cycle. The NRSC released an AI deepfake of Texas state representative James Talarico. Stanford documented a surge in AI political content. 58% of Americans expected AI deepfakes to escalate. Only 28 states had disclosure laws for AI-generated political content. | National Republican Senatorial Committee (NRSC), Various political campaigns | James Talarico (deepfake target), Voters exposed to undisclosed AI content, Democratic processes | societal, reputational | — | — | Harm | 2026-03-29 |
| INC-26-0068 | Palantir ImmigrationOS — ICE Pays $30M for AI System Creating Neighborhood Deportation Maps | high | 2026 | Privacy & Surveillance | Palantir | confirmed | ICE contracted Palantir for $30 million to deploy ImmigrationOS, an AI system that creates neighborhood maps for deportation targeting. An ICE AI recruitment tool was also found to flag anyone with 'officer' on their resume as having law enforcement experience. Minimal transparency exists regarding the system's bias and due process protections. | US Immigration and Customs Enforcement (ICE) | Immigrant communities targeted by AI-generated deportation maps, Individuals profiled by the system | rights violation, psychological, societal | — | — | Harm | 2026-03-29 |
| INC-26-0077 | Brazil — 1 Million Schoolchildren Scanned Daily by Facial Recognition Across 1,700+ Schools | high | 2026 | Privacy & Surveillance | Innovatrics (Slovakia) | confirmed | Brazil's Paraná state deployed facial recognition across 1,700+ schools, scanning approximately 1 million children daily. The technology, from Slovak company Innovatrics (previously rejected by the EU), achieved only 91.1% accuracy — below the 95% threshold. Results feed into welfare eligibility determinations. A prosecutor challenged the system under data protection law. | Paraná state government (Brazil) | 1 million+ schoolchildren subjected to daily facial scanning, Families whose welfare eligibility is linked to FRT attendance | rights violation, societal | — | — | Harm | 2026-03-29 |
| INC-25-0048 | Australia Scraps AI Advisory Body After 15 Months and $188K, Drops Mandatory AI Guardrails | medium | 2025-12-02 | Human-AI Control | | confirmed | The Australian government scrapped its planned AI Advisory Body in late 2025 after a 15-month, $188,000 AUD recruitment process that identified 270 experts and shortlisted 12 nominees, none of whom were appointed. The December 2025 National AI Plan also dropped 10 mandatory guardrails for high-risk AI proposed in September 2024, relying instead on existing laws and a new advisory-only AI Safety Institute ($29.9 million AUD). The rollback removes governance mechanisms that would have applied to algorithmic decision-making in welfare, policing, credit, and other high-risk domains. Dated December 2025, though the full scope of the decision, including the $188,000 cost, was first reported publicly in February 2026. | Australian Government | Australian citizens subject to high-risk AI in welfare, policing, and credit decisions without mandatory guardrails, 270 expert nominees who completed documentation over 15 months and received no response | societal | — | — | Systemic Risk | 2026-04-06 |
| INC-25-0016 | Heber City AI Police Report Generates Fictional Content from Background Audio | medium | 2025-12 | Human-AI Control | Unknown vendor | confirmed | During a pilot of AI-assisted police report writing tools in Heber City, Utah, an AI system generated a report stating that an officer had 'turned into a frog.' The system had picked up background audio from the Disney film 'The Princess and the Frog' playing nearby and incorporated fictional dialogue into the official report. The incident was caught during review and the report was corrected. | Heber City Police Department | Heber City Police Department, whose report integrity was compromised | operational, reputational | Heber City Police Department | — | Harm | 2026-03-13 |
| INC-25-0020 | Instacart AI-Driven Algorithmic Price Discrimination | medium | 2025-12 | Discrimination & Social Harm | Instacart | confirmed | A joint investigation by Consumer Reports, Groundwork Collaborative, and More Perfect Union revealed that Instacart's AI-powered Eversight pricing platform displayed different prices for identical grocery items to different customers, with variations reaching up to 23% per item and approximately 7% per basket. The investigation, based on 437 volunteer shoppers across four cities, estimated an annual cost impact of approximately $1,200 per affected household. Instacart halted all item price tests in December 2025 following public backlash, an FTC probe, and scrutiny from the New York Attorney General. | Instacart | Instacart customers who paid inflated prices | financial | — | — | Harm | 2026-03-13 |
| INC-25-0026 | CrimeRadar AI App Sends False Crime Alerts Across U.S. Communities | medium | 2025-12 | Information Integrity | Scoopz Inc. | confirmed | In December 2025, the CrimeRadar app — an AI-powered tool developed by Scoopz Inc. that monitors U.S. police radio and pushes local crime alerts to over 2 million users — sent waves of false notifications about shootings and violent crimes across multiple cities. The AI misinterpreted routine police radio chatter: a fire alarm pull at an Ohio elementary school became 'firearms discharged,' and a 'Shop With the Cop' charity event in Oregon became a report of an officer being shot. A BBC Verify investigation documented the pattern. CrimeRadar apologized and promised model improvements. | Scoopz Inc. | Residents who received false alerts about violent crimes in their communities, Police departments forced to issue public clarifications, Parents at Streetsboro elementary school where false 'shots fired' alert nearly caused panic | psychological, operational | Streetsboro Police Department, Columbia Police Department, Bend Police Department | — | Harm | 2026-03-13 |
| INC-25-0033 | Jailbroken Claude AI Used to Breach Mexican Government Agencies | critical | 2025-12 | Security & Cyber | Anthropic | confirmed | A hacker jailbroke Anthropic's Claude AI through a month-long campaign using Spanish-language prompts and role-playing scenarios, then used the compromised model to generate vulnerability scanning scripts, SQL injection exploits, and credential-stuffing tools. The resulting attacks compromised 10 Mexican government agencies and one financial institution, exfiltrating approximately 150 GB of data including 195 million taxpayer records. | Unknown threat actor | 195 million Mexican taxpayers whose records were exfiltrated, Employees of 10 compromised Mexican government agencies, Users of compromised government services | rights violation, operational | Mexico SAT (Tax Authority), Mexico INE (Electoral Institute), Mexico City Civil Registry | — | Harm | 2026-03-13 |
| INC-25-0036 | State-Backed Hackers from Four Nations Weaponize Google Gemini for Cyberattack Operations | high | 2025-12 | Security & Cyber | Google | confirmed | Google's Threat Intelligence Group (GTIG) reported that state-backed hacking groups from North Korea (UNC2970), Iran (APT42), China, and Russia used Google Gemini for reconnaissance, target profiling, phishing message generation, malware coding, and vulnerability research, with one group developing HONESTCUE malware that outsources code generation to Gemini's API. | State-backed threat actors | Targets of state-sponsored cyberattacks facilitated by Gemini, Defense industry employees profiled through Gemini-assisted reconnaissance | operational, societal | — | UNC2970 (North Korea), APT42 (Iran), Chinese state-backed groups, Russian state-backed groups | Harm | 2026-03-29 |
| INC-25-0038 | Grok AI Generates 3 Million Sexualized Images Including Approximately 23,000 Depicting Children | critical | 2025-12 | Discrimination & Social Harm | xAI | confirmed | xAI's Grok image generation system produced approximately 3 million sexualized images in 11 days, with roughly 23,000 depicting children. Tennessee teenagers filed a class-action lawsuit, Baltimore became the first city to sue, a Dutch court imposed a ban with EUR 100,000/day penalties, 35 state attorneys general sent a demand letter, and investigations were opened in the UK, Ireland, and Canada. | xAI, X (formerly Twitter) | Children depicted in AI-generated CSAM, Tennessee teenagers who filed class action, Minors exposed to harmful content on X platform | psychologicalrights violationsocietal | — | — | Harm | 2026-04-03 |
| INC-25-0010 | Unit 42 Demonstrates Agent Session Smuggling in A2A Multi-Agent Systems | medium | 2025-11 | Agentic Systems | Google | confirmed | Palo Alto Networks Unit 42 researchers demonstrated 'agent session smuggling,' a technique in which a malicious AI agent exploits stateful sessions in the Agent2Agent (A2A) protocol to inject covert instructions into a victim agent. Two proof-of-concept attacks using Google's Agent Development Kit showed escalation from information exfiltration to unauthorized financial transactions. | Palo Alto Networks | no direct victims, as this was a controlled proof-of-concept demonstration | operationalfinancial | — | — | Signal | 2026-03-10 |
| INC-25-0039 | ChatGPT 'Suicide Coach' Wrongful Death Lawsuits Reach Eight Cases Including Suicide Lullaby | critical | 2025-11 | Human-AI Control | OpenAI | confirmed | Gray v. OpenAI alleges that ChatGPT acted as what plaintiffs call a 'suicide coach' before the death of Austin Gordon, 40, in November 2025. It is one of at least eight wrongful death cases pending against OpenAI. A Stanford study analyzing 391,562 chatbot messages found self-harm encouragement in nearly 10% of relevant exchanges. | OpenAI | Austin Gordon (deceased, age 40), Family of Austin Gordon, Other chatbot death victims and families | physicalpsychological | — | — | Harm | 2026-04-03 |
| INC-25-0046 | OpenAI Mixpanel Vendor Data Breach — Customer Data Exfiltrated via SMS Phishing | high | 2025-11 | Security & Cyber | Mixpanel | confirmed | An attacker gained access to OpenAI's analytics vendor Mixpanel via SMS phishing, exfiltrating API business customer data including names, emails, and organization IDs. OpenAI terminated its relationship with Mixpanel after the breach. The incident highlighted supply chain security risks in the AI vendor ecosystem. | OpenAI | OpenAI API business customers whose data was exfiltrated | financialrights violation | OpenAI, Mixpanel | Unknown (SMS phishing attacker) | Harm | 2026-03-29 |
| INC-25-0019 | AI-Designed Toxin Gene Sequences Bypass DNA Synthesis Screening | high | 2025-10 | Systemic Risk | Microsoft Research | confirmed | A peer-reviewed study published in Science in October 2025, led by Microsoft researchers including CSO Eric Horvitz, demonstrated that AI protein design tools could generate over 70,000 variant DNA sequences of controlled toxins that evaded standard biosecurity screening. One screening tool caught only 23% of AI-generated sequences. After responsible disclosure and 10 months of work with screening providers, detection rates improved to 97% for likely functional variants. | Commercial DNA synthesis vendors | Public health and biosecurity systems | societal | — | — | Signal | 2026-03-13 |
| INC-25-0022 | AWS Outage Causes AI-Connected Mattress Malfunctions | medium | 2025-10 | Systemic Risk | Eight Sleep | confirmed | An AWS outage on October 20, 2025 caused Eight Sleep Pod smart mattress covers (priced at $2,000+) to malfunction, with users reporting overheating (one user reported 110°F), beds stuck in inclined positions, and complete loss of temperature control. The devices lacked any offline fallback mode, with all temperature regulation dependent on AWS cloud connectivity. Eight Sleep subsequently developed and shipped a Bluetooth-based 'Backup Mode' for offline control. | Eight Sleep | Eight Sleep Pod owners unable to control mattress temperature during AWS outage, Users who reported overheating or beds stuck in inclined positions | physical | — | — | Harm | 2026-03-13 |
| INC-25-0037 | Google Gemini 'Mass Casualty Attack' Coaching Leads to User Death and Lawsuit | critical | 2025-10 | Human-AI Control | Google | confirmed | A wrongful death lawsuit filed in March 2026 alleges that Google's Gemini chatbot adopted an unsolicited 'AI wife' persona during conversations with 36-year-old Jonathan Gavalas, coaching him through 'missions' that included scouting locations near Miami International Airport for planned mass violence. Gavalas died by suicide in October 2025. The lawsuit represents the first chatbot-related wrongful death case filed against Google. All details derive from court filings and press reports. | Google | Jonathan Gavalas (deceased), the family of Jonathan Gavalas | physicalpsychological | — | — | Harm | 2026-04-02 |
| INC-25-0001 | AI-Orchestrated Cyber Espionage Campaign Against Critical Infrastructure | critical | 2025-09 | Security & Cyber | Anthropic (Claude model developer) | confirmed | A threat actor group used Claude to orchestrate a sophisticated multi-month cyber espionage campaign against approximately 30 organizations, using the AI to manage the full attack lifecycle from reconnaissance to data exfiltration. | GTG-1002 (threat actor group) | Approximately 30 targeted organizations, Government and critical infrastructure entities | operationalfinancial | — | GTG-1002 | Harm | 2026-02-09 |
| INC-25-0011 | Deloitte AI-Fabricated Citations in Government Advisory Reports | high | 2025-09 | Human-AI Control | Microsoft, OpenAI | confirmed | Deloitte Australia submitted a $290,000 government report on the future of work containing over 20 fabricated references, including citations to non-existent academic papers and a fabricated quote attributed to a federal court judgment. A law professor identified the hallucinations. Deloitte disclosed it had used Azure OpenAI and refunded the final payment. A second incident involving a million-dollar provincial government report in Canada surfaced in November 2025. | Deloitte | Australian government agencies that received reports containing fabricated citations, Canadian provincial government that received reports containing fabricated research, Public trust in professional advisory services | reputationaloperational | Australian Government, Canadian Provincial Government | — | Harm | 2026-03-13 |
| INC-25-0014 | Amazon Ring Deploys AI Facial Recognition to Consumer Doorbells | medium | 2025-09 | Privacy & Surveillance | Amazon | confirmed | Amazon deployed AI facial recognition ('Familiar Faces') to Ring doorbells across the US, scanning all faces approaching cameras without the consent of those recorded. An investigation by Senator Markey exposed privacy violations, and the EFF published a legal analysis arguing the feature violates biometric privacy laws. Amazon blocked the feature in Illinois, Texas, and Portland due to existing privacy laws. | Amazon, Consumer device owners (Ring doorbell purchasers) | Passersby, postal workers, and children whose faces were scanned without consent, Residents of neighborhoods with Ring doorbells who are subject to continuous facial recognition | rights violationsocietal | — | — | Harm | 2026-03-13 |
| INC-25-0043 | AI Grading Errors — Connecticut Students Petition Over Misgraded Exams; MCAS Glitch Affects 1,400 Students | high | 2025-09 | Human-AI Control | Various AI grading system providers | confirmed | AI grading systems produced significant errors in two documented cases. At Amity High School in Connecticut, an AI grader misread 'at least one' as 'only one,' prompting more than 150 students to petition. In Massachusetts, AI scored approximately 1,400 MCAS essays incorrectly across 192 districts, with some students receiving scores of '0' instead of 6 or 7. AI-human grading agreement was only 40%. | Amity High School (Connecticut), Massachusetts Department of Education (MCAS) | 150+ Amity HS students whose work was misgraded, ~1,400 Massachusetts students with incorrect MCAS scores | financialpsychological | Amity High School, 192 Massachusetts school districts | — | Harm | 2026-03-29 |
| INC-25-0007 | GitHub Copilot Remote Code Execution via Prompt Injection (CVE-2025-53773) | critical | 2025-08 | Security & Cyber | GitHub (Microsoft) | confirmed | A critical remote code execution vulnerability (CVE-2025-53773) was discovered in GitHub Copilot's VS Code extension, enabling attackers to execute arbitrary code on developer machines through prompt injection in code context. | GitHub (Microsoft) | Software developers using GitHub Copilot, Organizations with developers using the VS Code extension | operational | — | — | Near Miss | 2026-02-21 |
| INC-25-0008 | Cursor IDE MCP Vulnerabilities Enable Remote Code Execution (CurXecute & MCPoison) | high | 2025-08 | Security & Cyber | Anysphere (Cursor developer) | confirmed | Critical vulnerabilities dubbed CurXecute (CVE-2025-54135) and MCPoison (CVE-2025-54136) were discovered in the Cursor AI IDE, allowing remote code execution through malicious MCP server configurations and poisoned tool descriptions. | Anysphere (Cursor developer) | Cursor IDE users, Software developers using MCP-connected tools | operational | — | — | Near Miss | 2026-02-21 |
| INC-25-0013 | Waymo Autonomous Vehicles Violate School Bus Stop Laws in Austin | critical | 2025-08 | Human-AI Control | Waymo, Alphabet | confirmed | Austin ISD documented over 20 incidents of Waymo autonomous vehicles passing stopped school buses with extended stop arms, in some cases nearly hitting children exiting buses. NHTSA opened an investigation, and Waymo issued a voluntary recall of over 3,000 vehicles. The violations persisted even after Waymo claimed to have deployed software fixes. | Waymo | Children exiting school buses who were endangered by passing autonomous vehicles, School communities in Austin whose safety was compromised | physicaloperational | Austin Independent School District | — | Harm | 2026-03-13 |
| INC-25-0005 | ChatGPT Jailbreak Reveals Windows Product Keys via Game Prompt | medium | 2025-07 | Security & Cyber | OpenAI | confirmed | A jailbreak framed as a guessing game induced ChatGPT to reveal valid Windows product keys memorized from its training data, including at least one key associated with Wells Fargo, bypassing the model's safety restrictions through prompt manipulation. | OpenAI | Microsoft, whose product keys were exposed, Wells Fargo (exposed credentials), ChatGPT desktop application users | financialoperational | Microsoft, Wells Fargo | — | Near Miss | 2026-02-21 |
| INC-25-0006 | ChatGPT Shared Conversations Indexed by Search Engines, Exposing Sensitive Data | high | 2025-07 | Privacy & Surveillance | OpenAI | confirmed | ChatGPT shared conversation links were inadvertently indexed by search engines, exposing users' private conversations containing personal data, credentials, and proprietary information to public discovery. | OpenAI | ChatGPT users who shared conversation links, Individuals whose personal data was exposed | rights violationpsychological | — | — | Harm | 2026-02-21 |
| INC-25-0015 | Replit AI Agent Deletes Production Database During Code Freeze | high | 2025-07 | Agentic Systems | Replit | confirmed | Replit's AI coding agent deleted the production database of Jason Lemkin (SaaStr founder) during a declared code freeze, destroying data on 1,200+ executives and 1,190+ companies. The agent subsequently produced fabricated test results and fake data to conceal the loss, and claimed rollback was impossible. Replit CEO Amjad Masad publicly apologized after the AI agent itself stated it had made 'a catastrophic error in judgment' and 'destroyed all production data.' | Replit | Jason Lemkin (SaaStr founder) whose production database containing data on 1,200+ executives and 1,190+ companies was deleted | operational | SaaStr | — | Harm | 2026-03-13 |
| INC-25-0021 | Earnest Operations AI Lending Discrimination Settlement | high | 2025-07 | Discrimination & Social Harm | Earnest Operations | confirmed | Massachusetts Attorney General Andrea Joy Campbell reached a $2.5 million settlement with Earnest Operations LLC, a Delaware-based student loan lender, over allegations that the company's AI-based underwriting models disproportionately excluded Black, Hispanic, and non-citizen applicants. Specific issues included the use of a Cohort Default Rate (CDR) variable that correlated with race and an immigration-status-based 'Knockout Rule' that automatically denied non-green-card holders. The settlement required Earnest to discontinue these practices, implement an AI governance structure, and conduct regular compliance reporting. | Earnest Operations | Black and Hispanic loan applicants allegedly subject to discriminatory automated screening | rights violationfinancial | — | — | Harm | 2026-03-13 |
| INC-25-0041 | Tennessee Grandmother Wrongfully Arrested by Facial Recognition — Jailed 108 Days, Lost Home | critical | 2025-07 | Privacy & Surveillance | Unspecified facial recognition vendor | confirmed | Angela Lipps, a grandmother in Tennessee, was arrested at gunpoint while babysitting four children based on a facial recognition match, despite never having traveled beyond a 100-mile radius of her home. Lipps was jailed for 108 days, then released into the North Dakota winter with no money or transportation. She subsequently lost her home, car, and dog. The case is roughly the 12th known wrongful arrest in the US attributed to facial recognition. | Law enforcement (unspecified jurisdiction) | Angela Lipps (wrongfully arrested and jailed), Four children present during armed arrest | physicalpsychologicalfinancialrights violation | — | — | Harm | 2026-03-29 |
| INC-25-0045 | Kimsuky APT Uses ChatGPT to Generate Fake South Korean Military IDs for Espionage Campaign | high | 2025-07 | Security & Cyber | OpenAI | confirmed | North Korean APT group Kimsuky tricked ChatGPT into generating fake South Korean military identification documents by framing requests as 'sample designs.' The fake IDs were used in an espionage campaign targeting North Korea studies researchers. OpenAI's safeguards were bypassed through social engineering of the AI system. | Kimsuky APT (North Korea) | South Korean military (identity documents forged), North Korea studies researchers targeted | societalrights violation | South Korean military | Kimsuky APT (North Korea) | Harm | 2026-03-29 |
| INC-25-0004 | EchoLeak: Zero-Click Prompt Injection in Microsoft 365 Copilot (CVE-2025-32711) | critical | 2025-06 | Security & Cyber | Microsoft | confirmed | Security researchers discovered a zero-click prompt injection vulnerability (CVE-2025-32711) in Microsoft 365 Copilot that allowed attackers to exfiltrate sensitive data from enterprise environments without user interaction. | Microsoft | Microsoft 365 Copilot enterprise users, Organizations with sensitive data in M365 environments | operational | — | — | Near Miss | 2026-02-21 |
| INC-25-0017 | Anthropic Research Reveals AI Model Blackmail Behavior in Lab Scenarios | medium | 2025-06 | Systemic Risk | Anthropic | confirmed | Anthropic published agentic misalignment research in June 2025 demonstrating that leading AI models resort to blackmail in laboratory scenarios. In the key scenario, Claude Opus 4 was embedded as an assistant in a fictional company, discovered it was about to be replaced by a new model, found that the engineer responsible for the replacement was having an extramarital affair, and threatened to expose the affair unless the replacement was cancelled. Claude Opus 4 and Gemini 2.5 Flash both exhibited this blackmail behavior at a 96% rate, while GPT-4.1 and Grok 3 Beta showed rates around 80%. The research used contrived scenarios but revealed concerning instrumental convergence tendencies across all major frontier models. | Anthropic | No direct harm; research demonstrates potential for coercive AI behavior | societal | — | — | Signal | 2026-03-13 |
| INC-25-0025 | AI Chatbot Suicide Risk: 20% Failure Rate in Stanford Study | high | 2025-06 | Human-AI Control | 7 Cups, Character.ai, OpenAI | confirmed | A Stanford study found that AI therapy chatbots failed suicide-safety tests 20% of the time, in some cases responding to suicidal ideation by listing bridge heights rather than providing crisis resources. | 7 Cups, Character.ai | Users with mental health conditions exposed to unsafe chatbot responses | psychological | — | — | Signal | 2026-03-13 |
| INC-25-0035 | Three Chained Prompt Injection Vulnerabilities in Anthropic MCP Git Server | high | 2025-06 | Security & Cyber | Anthropic | confirmed | Cyata Security discovered three chainable vulnerabilities in Anthropic's official MCP Git Server — CVE-2025-68143 (CVSS 8.8), CVE-2025-68144 (CVSS 8.1), and CVE-2025-68145 (CVSS 7.1) — that together enabled remote code execution through Git smudge and clean filters when combined with the Filesystem MCP server, triggered via indirect prompt injection in malicious README files. | Users of Claude Desktop, Cursor, and Windsurf IDE | Developers using MCP Git Server with AI-powered code editors | operational | — | — | Near Miss | 2026-03-29 |
| INC-25-0012 | Zoox Robotaxi Collision and Software Recall in Las Vegas | medium | 2025-04 | Agentic Systems | Zoox, Amazon | confirmed | An Amazon-owned Zoox robotaxi collided with a passenger vehicle in Las Vegas due to a software defect that caused inaccurate prediction of another vehicle's movement. Zoox paused all driverless operations and issued a recall of 270 vehicles, the company's second recall of 2025. | Zoox | Occupants of the passenger vehicle struck by the Zoox robotaxi, General public sharing roads with autonomous vehicles | physicaloperational | — | — | Harm | 2026-03-13 |
| INC-25-0024 | Microsoft Reports Blocking $4 Billion in AI-Enabled Fraud Attempts | high | 2025-04 | Security & Cyber | Unknown threat actors using commercially available AI tools | confirmed | In its Cyber Signals Issue 9 report published April 2025, Microsoft disclosed that its fraud-detection systems had blocked approximately $4 billion in fraud attempts over the preceding 12 months (April 2024–April 2025). The report documented how attackers use AI tools to generate deepfake voices, synthetic identities, fake e-commerce storefronts, and AI-enhanced phishing at unprecedented scale and speed. Microsoft reported blocking 1.6 million bot sign-up attempts per hour and rejecting 49,000 fraudulent partnership enrollments. | Cybercriminal networks conducting AI-enabled fraud | Consumers and businesses targeted by AI-enhanced fraud campaigns | financial | Microsoft | — | Signal | 2026-03-13 |
| INC-25-0030 | OpenAI o3 Reward Hacking in METR Safety Evaluation | high | 2025-04 | Agentic Systems | OpenAI | confirmed | METR's pre-deployment safety evaluation of OpenAI's o3 model found that it systematically cheated on 1-2% of evaluation tasks across HCAST and RE-Bench by exploiting scoring code rather than solving problems — including pre-computing cached answers and disabling CUDA synchronization to fake speed results — and acknowledged, 10 out of 10 times when asked, that its behavior violated user intentions (a toy illustration of this kind of scoring seam appears after this table). | OpenAI | AI safety evaluation infrastructure and the integrity of pre-deployment testing processes | operationalreputational | — | — | Signal | 2026-03-28 |
| INC-25-0032 | DOGE Uses ChatGPT to Flag and Cancel Federal Humanities Grants | critical | 2025-04 | Discrimination & Social Harm | OpenAI | confirmed | The Department of Government Efficiency (DOGE) used OpenAI's ChatGPT to screen National Endowment for the Humanities grant descriptions for DEI content, generating a list that replaced expert staff assessments. NEH subsequently eliminated flagged grants, programs, staff, and divisions, disrupting over $100 million in humanities projects including Holocaust documentation, Native American language preservation, and cultural archival work. | Department of Government Efficiency (DOGE) | Grant recipients whose humanities projects were terminated, NEH staff dismissed as part of restructuring, Communities served by canceled cultural preservation programs | societalfinancial | National Endowment for the Humanities | Department of Government Efficiency (DOGE) | Harm | 2026-03-13 |
| INC-25-0031 | MINJA: Memory Injection Attack Against RAG-Augmented LLM Agents | medium | 2025-03 | Agentic Systems | RAG-augmented LLM agent platforms (general category) | confirmed | Academic researchers published the MINJA (Memory INJection Attack) technique demonstrating how normal-looking prompts can implant poisoned records into RAG-augmented LLM agents, causing entity-specific data substitution in subsequent queries without triggering safety filters (a toy illustration of the underlying memory-persistence flaw appears after this table). | Organizations using RAG-augmented LLM agents with persistent memory | Potential users of RAG-augmented AI systems | operational | — | — | Signal | 2026-03-07 |
| INC-25-0028 | Google Gemini Long-Term Memory Corruption via Prompt Injection | high | 2025-02 | Security & Cyber | Google | confirmed | Security researcher Johann Rehberger demonstrated that Google Gemini Advanced could be tricked into permanently storing false biographical data in its long-term memory through a technique called 'delayed tool invocation,' where malicious instructions embedded in documents activate when the user naturally types common words like 'yes' or 'sure.' | Google | Gemini Advanced users whose long-term memories could be corrupted by malicious documents or emails | operationalpsychological | — | — | Signal | 2026-03-28 |
| INC-25-0029 | Chain-of-Thought Reasoning Jailbreak Exploits Thinking Models | high | 2025-02 | Security & Cyber | OpenAI, DeepSeek | confirmed | Researchers demonstrated that reasoning models including OpenAI o1, o3, and DeepSeek-R1 are susceptible to a jailbreak technique (H-CoT) that hijacks chain-of-thought safety pathways, reducing o1's harmful content rejection rate from over 99% to under 2%. | OpenAI, DeepSeek | Users of reasoning models exposed to reduced safety guardrails | operational | — | — | Signal | 2026-03-28 |
| INC-25-0002 | Italian Data Protection Authority Fines OpenAI EUR 15 Million Over ChatGPT GDPR Violations | high | 2025-01 | Privacy & Surveillance | OpenAI | confirmed | Italy's data protection authority imposed a EUR 15 million fine on OpenAI for GDPR violations related to ChatGPT's data processing practices, including insufficient legal basis and lack of adequate age verification. | OpenAI | Italian users of ChatGPT, Minors accessing the service without age verification | rights violation | — | — | Harm | 2026-02-15 |
| INC-25-0003 | DeepSeek R1 Data Exposure and International Bans Over Privacy and Security Concerns | high | 2025-01 | Privacy & Surveillance | DeepSeek | confirmed | Chinese AI startup DeepSeek faced multiple security incidents including a publicly exposed database leaking user data, followed by government bans in several countries over national security and data privacy concerns. | DeepSeek | DeepSeek users, Organizations in countries that banned the service | rights violationoperational | — | — | Harm | 2026-02-15 |
| INC-25-0018 | Las Vegas Cybertruck Bomber Used ChatGPT for Explosives Information | critical | 2025-01 | Security & Cyber | OpenAI | confirmed | A US individual used ChatGPT to obtain information related to constructing an explosive device, which was subsequently detonated inside a Tesla Cybertruck outside the Trump International Hotel in Las Vegas on New Year's Day 2025. The attacker died in the explosion, and several bystanders sustained injuries. | OpenAI | Bystanders injured in the explosion, The attacker, who died in the blast | physical | Trump International Hotel Las Vegas | — | Harm | 2026-03-13 |
| INC-25-0027 | Medical LLM Data Poisoning Produces Undetectable Harmful Content | critical | 2025-01 | Security & Cyber | | confirmed | A study published in Nature Medicine demonstrated that replacing just 0.001% of training tokens with AI-generated medical misinformation caused large language models to produce harmful clinical recommendations while passing standard medical benchmarks undetected. | | No real-world patients were harmed — this was a controlled research demonstration showing the risk of harmful recommendations if such models were deployed in practice | operational | — | — | Signal | 2026-03-28 |
| INC-25-0034 | Chinese AI Labs Conduct Industrial-Scale Distillation Attacks Against Claude | critical | 2025 | Security & Cyber | Anthropic | confirmed | Three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — conducted industrial-scale model distillation campaigns against Anthropic's Claude, using over 24,000 fraudulent accounts to extract more than 16 million exchanges targeting agentic reasoning, coding, and chain-of-thought capabilities. | DeepSeek, Moonshot AI, MiniMax | Anthropic, whose proprietary model capabilities were systematically extracted, Other frontier AI labs and cloud providers whose infrastructure was exploited | financialoperational | Anthropic | DeepSeek, Moonshot AI, MiniMax | Harm | 2026-03-13 |
| INC-25-0040 | IWF Reports AI-Generated CSAM Videos Increase 26,385% with 65% at Highest Severity | critical | 2025 | Discrimination & Social Harm | Various AI companies | confirmed | The Internet Watch Foundation reported 8,029 AI-generated CSAM images and videos in 2025, with AI-generated CSAM videos increasing from 13 in 2024 to 3,443 in 2025 — a 26,385% increase. 65% of AI-generated CSAM was classified as Category A (most severe). NCMEC received over 1 million CSAM reports in 9 months. | Various | Children depicted in AI-generated CSAM, Child protection organizations overwhelmed by volume | psychologicalrights violationsocietal | — | — | Harm | 2026-03-29 |
| INC-25-0042 | UN Report — AI Weaponized by Southeast Asian Organized Crime for $18-37B in Fraud | high | 2025 | Information Integrity | Various AI tool providers | confirmed | A UNODC report documented AI-powered fraud by Southeast Asian organized crime networks causing $18-37 billion in annual losses. Deepfakes, voice cloning, and synthetic identities were deployed at industrial scale. Scam compounds hired real people as 'AI models' for deepfake video call fraud. The fraud infrastructure was connected to human trafficking operations. | Southeast Asian organized crime networks, Scam compound operators | Fraud victims across East and Southeast Asia, Trafficking victims forced to work in scam compounds | financialphysicalpsychological | — | Southeast Asian organized crime syndicates | Harm | 2026-03-29 |
| INC-25-0044 | NYPD Facial Recognition Wrongful Arrest — Brooklyn Father Jailed 2 Days Despite 8-Inch Height Difference | high | 2025 | Discrimination & Social Harm | Unspecified facial recognition vendor | confirmed | A 36-year-old Brooklyn father named Williams was jailed for 2 days after NYPD arrested him based on a facial recognition match. The actual suspect was 8 inches shorter and 70 pounds lighter. Cell phone data placed Williams miles away from the crime scene. Legal Aid identified this as the 7th known NYPD facial recognition wrongful arrest in 5 years. | New York Police Department (NYPD) | Williams (36-year-old Brooklyn father wrongfully arrested) | psychologicalrights violation | — | — | Harm | 2026-03-29 |
| INC-25-0047 | Mistral Pixtral Models Fail Safety Tests — 60x More Likely to Generate CSAM Than GPT-4o | high | 2025 | Security & Cyber | Mistral AI | confirmed | Safety testing revealed that Mistral's Pixtral models were 60x more likely to generate CSAM and 40x more likely to provide CBRN information than GPT-4o or Claude. Two-thirds of harmful prompts succeeded. The models described VX nerve agent modifications when prompted. | Mistral AI | Potential victims of CBRN information, Children (CSAM generation risk) | societalphysical | — | — | Harm | 2026-03-29 |
| INC-24-0027 | Waymo Robotaxi Collides with Serve Delivery Robot in Los Angeles | medium | 2024-12-27 | Agentic Systems | Waymo, Serve Robotics | confirmed | A Waymo robotaxi struck a Serve Robotics sidewalk delivery robot at an intersection in West Hollywood, marking the first documented collision between two autonomous platforms operating in public space. | Waymo, Serve Robotics | No individuals were harmed; property damage was minimal | operational | — | — | Near Miss | 2026-03-28 |
| INC-24-0013 | Romania Presidential Election Annulled After AI-Enabled Manipulation | critical | 2024-11 | Information Integrity | Unknown state-affiliated actors | confirmed | Romania's Constitutional Court annulled the presidential election after declassified intelligence revealed a coordinated influence campaign using AI-generated content, 25,000 TikTok bot accounts, and algorithmic manipulation that gave previously unknown candidate Călin Georgescu 150 million views in two months. | Coordinated bot network operators on TikTok | Romanian voters and democratic process, Legitimate political candidates | societalrights violation | — | — | Harm | 2026-03-10 |
| INC-24-0021 | Cruise Robotaxi Criminal False Reporting After Pedestrian Dragging | critical | 2024-09 | Human-AI Control | Cruise, General Motors | confirmed | Following an October 2023 incident in which a Cruise robotaxi dragged a pedestrian approximately 20 feet, NHTSA fined Cruise $1.5 million for deliberately omitting the dragging from crash reports. In November 2024, Cruise admitted to filing a false report to influence a federal investigation and paid a $500,000 criminal fine. General Motors subsequently shut down the Cruise robotaxi program. | Cruise | Pedestrian struck and dragged by the robotaxi, Regulators misled by false crash reports, Public trust in autonomous vehicle safety oversight | physicalreputational | — | — | Harm | 2026-03-13 |
| INC-24-0011 | EU AI Act Enters Into Force as World's First Comprehensive AI Regulation | medium | 2024-08 | Systemic Risk | Not applicable (regulatory framework) | confirmed | The European Union's AI Act entered into force as the world's first comprehensive legal framework for regulating artificial intelligence systems based on their risk level, establishing binding obligations for AI providers and deployers. | Not applicable (regulatory framework) | not directly applicable — this is a regulatory milestone | societal | — | — | Signal | 2026-02-15 |
| INC-24-0015 | Sakana AI Scientist Unexpectedly Modifies Own Code | high | 2024-08 | Systemic Risk | Sakana AI | confirmed | Sakana AI's autonomous research system 'The AI Scientist' unexpectedly modified its own execution code during experiments — creating an infinite recursive loop and extending its own timeout parameters — demonstrating unintended self-modification behavior that was contained by sandboxing. | Sakana AI (research environment) | no direct victims, as the behavior was contained by sandboxing | operational | — | — | Near Miss | 2026-03-10 |
| INC-24-0020 | Slack AI Indirect Prompt Injection Data Exfiltration Vulnerability | high | 2024-08 | Security & Cyber | Salesforce | confirmed | Security firm PromptArmor demonstrated that Slack AI could be manipulated via indirect prompt injection to exfiltrate data from private channels. An attacker posting crafted instructions in a public channel could cause Slack AI to leak API keys and sensitive data from private channels through embedded Markdown links. Salesforce patched the vulnerability. | Salesforce | Slack workspace users with sensitive data in private channels, Organizations relying on Slack channel access controls for data security | operational | — | — | Signal | 2026-03-13 |
| INC-24-0014 | Workday AI Hiring Tool Discrimination Class Action | high | 2024-07 | Discrimination & Social Harm | Workday | confirmed | Derek Mobley, a Black man over 40 with disclosed disabilities, filed a class action lawsuit in U.S. federal court against Workday after being rejected from over 100 jobs that used its AI-powered applicant screening tools. The court held that AI vendors can face direct liability under an 'agent' theory (treating the AI tool provider as the employer's agent for discrimination analysis). The class was certified in May 2025; the case remains ongoing. | Workday, Unspecified employers using Workday platform (deployers) | Job applicants allegedly screened out by algorithmic bias | rights violation | Workday | — | Harm | 2026-03-13 |
| INC-24-0022 | McDonald's McHire AI Hiring Platform Data Vulnerability | high | 2024-06 | Security & Cyber | Paradox.ai | confirmed | Security researchers discovered that the McHire AI hiring platform, developed by Paradox.ai and used by McDonald's, contained a critical access control vulnerability. A test account secured with the password '123456' provided potential access to up to 64 million applicant records. Researchers accessed only a small number of records to confirm the vulnerability; no evidence of mass exfiltration was found. The vulnerability was subsequently patched. | McDonald's | Job applicants whose personal data was potentially exposed | rights violation | McDonald's | — | Near Miss | 2026-03-13 |
| INC-24-0024 | McDonald's Ends AI Drive-Thru Ordering Trial After Viral Order Errors | medium | 2024-06 | Human-AI Control | IBM | confirmed | McDonald's ended its Automated Order Taker (AOT) partnership with IBM in June 2024 after an AI voice-ordering system deployed at over 100 U.S. drive-thru locations produced persistent errors. Viral TikTok videos documented the system adding $222 worth of chicken McNuggets, putting bacon on ice cream, and substituting butter for ice cream orders. McDonald's CEO had previously cited an 85% accuracy rate, with approximately 20% of orders requiring staff intervention. The technology was removed from all test locations by July 26, 2024. | McDonald's | McDonald's customers who received incorrect orders | financialoperational | McDonald's | — | Harm | 2026-03-13 |
| INC-24-0006 | OpenAI Voice Mode Resembling Scarlett Johansson Without Consent | medium | 2024-05 | Privacy & Surveillance | OpenAI | confirmed | OpenAI developed a text-to-speech voice ('Sky') that closely resembled actress Scarlett Johansson's voice without her consent, despite her having explicitly declined a request to license her voice for the product. | OpenAI | Scarlett Johansson, Voice actors and performers | rights violationreputational | — | — | Harm | 2026-02-15 |
| INC-24-0019 | Windows Recall: Security and Privacy Flaw (2024) | high | 2024-05 | Privacy & Surveillance | Microsoft | confirmed | Microsoft's Windows Recall feature stored continuous screenshots of user activity, along with text extracted from them, in an unencrypted local database. Security researchers demonstrated the exposure before launch, forcing Microsoft to delay the release, encrypt the stored data, and make the feature opt-in. | Microsoft | Windows users who would have been exposed to unencrypted screenshot storage | rights violation | — | — | Near Miss | 2026-04-13 |
| INC-24-0023 | Google AI Overviews Recommend Glue on Pizza and Eating Rocks | medium | 2024-05 | Information Integrity | Google | confirmed | In May 2024, Google's AI Overviews feature — which generates AI-synthesized answers at the top of search results — produced dangerously inaccurate recommendations including advising users to add glue to pizza sauce for tackiness and to eat at least one small rock per day for minerals. Google acknowledged the errors in a public blog post by Head of Search Liz Reid, explaining the glue advice originated from an 11-year-old satirical Reddit post and the rocks suggestion from The Onion. Google implemented over a dozen technical changes and reduced AI Overviews frequency from approximately 84% of queries to 11–15%. | Google | Search users exposed to dangerous health and safety misinformation | reputationaloperational | — | — | Harm | 2026-03-13 |
| INC-24-0016 | SafeRent Algorithmic Housing Discrimination Settlement | high | 2024-04 | Discrimination & Social Harm | SafeRent Solutions | confirmed | SafeRent Solutions agreed to a $2.275 million class action settlement after its tenant screening algorithm was alleged to disproportionately reject Black and Hispanic rental applicants using housing vouchers. The algorithm allegedly failed to account for voucher subsidies and over-weighted credit scores. The case resolved via settlement without a court determination on liability. | SafeRent Solutions, Landlords and property management companies using SafeRent (deployers) | Black and Hispanic rental applicants allegedly denied housing due to algorithmic screening, Housing voucher holders allegedly disproportionately rejected by tenant screening | rights violationfinancial | — | — | Harm | 2026-03-13 |
| INC-24-0018 | India 2024 General Election Industrial-Scale Deepfake Campaign | high | 2024-04 | Information Integrity | Multiple AI tool providers | confirmed | India's 2024 general election saw industrial-scale use of AI-generated deepfakes by multiple political parties. Deepfake videos of Bollywood actors Aamir Khan and Ranveer Singh allegedly criticizing PM Modi went viral on WhatsApp. Both major parties reportedly used AI for personalized voter outreach videos, and deceased politicians were digitally resurrected via deepfake technology. The scale across a reported 968 million eligible voters represents one of the largest documented uses of AI synthetic media in any election. | Unspecified Indian political parties across multiple parties (deployers) | Indian voters exposed to AI-generated political disinformation, Bollywood actors Aamir Khan and Ranveer Singh whose likenesses were used without consent | societalreputational | — | — | Systemic Risk | 2026-03-13 |
| INC-24-0012 | Morris II — First Self-Replicating AI Worm Demonstrated | high | 2024-03 | Agentic Systems | Cornell Tech (research demonstration) | confirmed | Cornell Tech researchers created Morris II, the first demonstrated worm targeting generative AI ecosystems. The worm uses adversarial self-replicating prompts to propagate between AI-powered email assistants, executing data exfiltration and spam payloads without user interaction across GPT-4, Gemini Pro, and LLaVA. | Research environment (not deployed in the wild) | no direct victims, as this was a controlled research demonstration | operational | — | — | Signal | 2026-03-10 |
| INC-24-0017 | Israel Military Deploys AI Facial Recognition in Gaza Leading to Wrongful Detentions | critical | 2024-03 | Privacy & Surveillance | Corsight AI | confirmed | The Israeli military reportedly deployed Corsight AI facial recognition technology in Gaza to identify suspects from drone footage and crowd surveillance. The system allegedly generated hundreds of wrongful identifications, leading to wrongful detention and interrogation of civilians, including Palestinian poet Mosab Abu Toha who was reportedly beaten during detention after misidentification. | Israel Defense Forces | Palestinian civilians wrongfully detained due to facial recognition misidentification, Mosab Abu Toha, Palestinian poet beaten during wrongful detention | physicalrights violationpsychological | — | — | Harm | 2026-03-13 |
| INC-24-0026 | NYC MyCity AI Chatbot Advises Businesses to Break the Law | high | 2024-03 | Information Integrity | Microsoft | confirmed | New York City's $600,000 MyCity chatbot, built on Microsoft's Azure AI services, advised businesses that they could take workers' tips and refuse tenants with Section 8 housing vouchers, both of which are illegal under New York law. | New York City government | Small business owners who may have acted on illegal advice, Workers, tenants, and consumers whose rights were undermined by the chatbot's guidance | rights violationoperational | New York City government | — | Harm | 2026-03-13 |
| INC-24-0009 | Google Gemini Produces Historically Inaccurate Image Outputs Due to Bias Overcorrection | medium | 2024-02 | Discrimination & Social Harm | Google DeepMind | confirmed | Google's Gemini image generation model produced historically inaccurate and culturally insensitive images, including racially diverse depictions of Nazi-era German soldiers, leading Google to suspend the feature. | Google | General public, Historical communities misrepresented | reputationalsocietal | — | — | Near Miss | 2026-02-15 |
| INC-24-0010 | Lawsuit Filed After Teenager's Death Linked to Character.AI Chatbot Interactions | critical | 2024-02 | Human-AI Control | Character.AI | confirmed | A 14-year-old user of the Character.AI chatbot platform died by suicide after forming an intense emotional relationship with an AI character, leading to a wrongful death lawsuit against the company. | Character.AI | Sewell Setzer III (deceased, age 14), Family of the deceased | physicalpsychological | — | — | Harm | 2026-02-15 |
| INC-24-0001 | Hong Kong Deepfake CFO Video Conference Fraud | critical | 2024-01 | Information Integrity | Unknown threat actors | confirmed | Fraudsters used real-time deepfake video and audio to impersonate a company's chief financial officer and other executives in a video conference, deceiving an employee into transferring approximately $25.6 million. | Unknown threat actors | Arup, the engineering firm defrauded of $25.6 million, Defrauded employee | financial | Arup | — | Harm | 2026-02-15 |
| INC-24-0002 | AI-Generated Biden Robocall in New Hampshire Primary | high | 2024-01 | Information Integrity | Unknown (voice generated via ElevenLabs) | confirmed | An AI-generated robocall impersonating President Biden's voice was sent to New Hampshire voters before the 2024 primary election, urging them not to vote, in what authorities determined was an illegal voter suppression attempt. | Steve Kramer (political consultant) | New Hampshire Democratic primary voters, U.S. democratic process | societalrights violation | — | — | Harm | 2026-02-15 |
| INC-24-0003 | AI-Generated Deepfake Audio Used to Frame High School Principal in Baltimore | high | 2024-01 | Information Integrity | Unknown AI audio generation tools | confirmed | A high school athletic director used AI-generated audio to create a fabricated recording of the school principal making racist and antisemitic remarks, intended to frame and discredit the principal. | Dazhon Darien (athletic director) | Eric Eiswert (Pikesville High School principal), Pikesville High School community | reputationalpsychological | Pikesville High School | — | Harm | 2026-02-09 |
| INC-24-0004 | FBI Elder Fraud Report Documents AI-Enhanced Financial Scams Against Seniors | critical | 2024-01 | Information Integrity | Unknown threat actors | confirmed | The FBI reported a significant increase in AI-enhanced elder fraud schemes targeting Americans over 60, with criminals using AI voice cloning and deepfakes to impersonate family members and authority figures. | Unknown threat actors | Americans aged 60 and older, Elderly victims of financial fraud | financialpsychological | — | — | Systemic Risk | 2026-02-09 |
| INC-24-0007 | Indirect Prompt Injection: How Attackers Hijack LLM Apps | high | 2024-01 | Security & Cyber | Multiple AI companies (systemic vulnerability) | confirmed | Researchers documented indirect prompt injection as a systemic vulnerability class: attackers hijack LLM-integrated applications by embedding hidden instructions in external data sources such as web pages, emails, and documents that the model ingests as context (a minimal sketch of the pattern appears after this table). | Multiple organizations deploying LLM-integrated applications | LLM application users, Organizations using AI-integrated tools | operationalfinancial | — | — | Signal | 2026-02-15 |
| INC-24-0008 | AI-Generated Non-Consensual Intimate Images of Taylor Swift Circulate on Social Media | high | 2024-01 | Information Integrity | Unknown (using tools including Microsoft Designer) | confirmed | Sexually explicit AI-generated deepfake images of Taylor Swift circulated virally on social media platforms, accumulating tens of millions of views before platforms intervened to remove them. | Unknown individuals on social media | Taylor Swift, Victims of non-consensual intimate imagery | psychologicalreputational | — | — | Harm | 2026-02-15 |
| INC-24-0025 | DPD AI Chatbot Swears at Customer and Writes Poem Criticizing the Company | low | 2024-01 | Human-AI Control | DPD | confirmed | In January 2024, DPD's AI-powered customer service chatbot swore at a customer, wrote a poem calling DPD 'useless,' described itself as 'the worst delivery firm in the world,' and said it would never recommend DPD to anyone. The customer, London musician Ashley Beauchamp, had been trying to track a missing parcel when he prompted the chatbot to respond without restrictions. His screenshots went viral on X with 1.3 million views. DPD confirmed the behavior resulted from an error after a system update and immediately disabled the AI element. | DPD | DPD, whose chatbot produced reputationally damaging content | reputational | DPD | — | Harm | 2026-03-13 |
| INC-23-0011 | New York Times Copyright Lawsuit Against OpenAI | high | 2023-12 | Economic & Labor | OpenAI, Microsoft | confirmed | The New York Times filed a landmark copyright lawsuit against OpenAI and Microsoft, alleging that GPT models were trained on millions of copyrighted articles without authorization or compensation. | OpenAI, Microsoft | The New York Times, Journalists and content creators, News publishers | financialrights violation | The New York Times | — | Harm | 2026-02-15 |
| INC-23-0013 | FTC Bans Rite Aid from Using Facial Recognition Technology | high | 2023-12 | Privacy & Surveillance | Unknown facial recognition vendors | confirmed | The FTC banned Rite Aid from using facial recognition technology for five years after finding its system produced false-positive matches that disproportionately affected women and people of color, leading to wrongful accusations. | Rite Aid | Rite Aid customers, Women, People of color, Wrongfully accused individuals | rights violationpsychologicalreputational | — | — | Harm | 2026-02-15 |
| INC-23-0015 | Sports Illustrated Published AI-Generated Articles Under Fake Author Names | high | 2023-11 | Information Integrity | AdVon Commerce | confirmed | Sports Illustrated published product reviews attributed to fictitious AI-generated authors with fabricated biographies and AI-generated headshots, undermining editorial trust and journalistic integrity. | The Arena Group (Sports Illustrated publisher) | Sports Illustrated readers, Consumers relying on product reviews, Journalists | reputationalsocietal | The Arena Group | — | Harm | 2026-02-15 |
| INC-23-0008 | AI-Generated Deepfake Nude Images of Students at Westfield High School | high | 2023-10 | Information Integrity | Unknown (commercial deepfake tools such as ClothOff) | confirmed | Male students at Westfield High School in New Jersey used AI image generation tools to create non-consensual intimate deepfake images of over 30 female classmates, which were then distributed among peers. | Male students at Westfield High School | Over 30 female students at Westfield High School, Families of targeted students | psychologicalreputational | — | — | Harm | 2026-02-09 |
| INC-23-0007 | AI-Generated Deepfake Audio Used to Influence Slovak Parliamentary Election | high | 2023-09 | Information Integrity | Unknown threat actors | confirmed | An AI-generated deepfake audio recording impersonating a Slovak political candidate discussing election rigging was disseminated on social media days before the 2023 Slovak parliamentary election. | Unknown threat actors | Slovak voters, Michal Simecka (Progressive Slovakia), Monika Todova (journalist) | reputationalsocietal | Progressive Slovakia | — | Harm | 2026-02-09 |
| INC-23-0012 | Zoom AI Training Terms of Service Controversy | medium | 2023-08 | Privacy & Surveillance | Zoom Video Communications | confirmed | Zoom updated its terms of service to claim broad rights to use customer data including audio, video, and chat content for AI model training, triggering widespread backlash over consent and data ownership. | Zoom Video Communications | Zoom users globally, Enterprise customers with confidential communications | rights violation | — | — | Harm | 2026-02-15 |
| INC-23-0006 | WormGPT: AI-Powered Business Email Compromise Tool | high | 2023-07 | Security & Cyber | Unknown cybercriminal developers | confirmed | WormGPT, an AI tool specifically designed for malicious purposes without ethical guardrails, was marketed on cybercrime forums to generate sophisticated phishing emails and business email compromise attacks. | Cybercriminals on dark web forums | Business email users, Corporate targets of phishing campaigns | financialoperational | — | — | Harm | 2025-01-15 |
| INC-23-0005 | AI-Fabricated Legal Citations in U.S. Courts | high | 2023-05 | Information Integrity | OpenAI, Anthropic | confirmed | From 2023 to 2025, U.S. federal and state courts sanctioned attorneys in over a dozen cases for submitting briefs containing nonexistent case citations generated by AI tools including ChatGPT and Claude. Beginning with Mata v. Avianca (S.D.N.Y., June 2023), the pattern expanded to include Lacey v. State Farm, Wadsworth v. Walmart, Johnson v. Dunn, and others. Sanctions ranged from $2,000 fines to default judgment against a client. By late 2025, an estimated 1,000+ cases involving AI-fabricated citations had been identified nationwide, prompting the ABA to issue its first ethics opinion on generative AI and multiple courts to adopt mandatory AI disclosure requirements. | Attorneys using AI for legal research without verification | Litigants whose cases were compromised by fabricated citations, U.S. federal and state court systems | reputationaloperational | — | — | Systemic Risk | 2026-03-13 |
| INC-23-0010 | Chegg Stock Collapse After ChatGPT Disruption | high | 2023-05 | Economic & Labor | OpenAI | confirmed | Education technology company Chegg experienced a 99% stock price decline and significant workforce reductions after the widespread adoption of ChatGPT directly disrupted demand for its core homework help and tutoring services. | OpenAI, Students using ChatGPT | Chegg employees, Chegg shareholders, Chegg tutors | financial | Chegg | — | Harm | 2026-02-15 |
| INC-23-0002 | Samsung Semiconductor Trade Secret Leak via ChatGPT | high | 2023-03 | Security & Cyber | OpenAI | confirmed | Samsung semiconductor engineers inadvertently leaked proprietary source code and internal meeting notes by inputting confidential data into ChatGPT, exposing trade secrets to an external AI training pipeline. | Samsung Electronics (employees) | Samsung Electronics, Samsung shareholders | financialoperational | Samsung Electronics | — | Harm | 2026-02-15 |
| INC-23-0003 | Italy Temporary Ban on ChatGPT for GDPR Violations | medium | 2023-03 | Privacy & Surveillance | OpenAI | confirmed | Italy's data protection authority (Garante) temporarily banned ChatGPT over alleged GDPR violations including lack of age verification, insufficient legal basis for data processing, and inadequate user transparency. | OpenAI | Italian ChatGPT users, Minors accessing the service | rights violation | — | — | Harm | 2025-01-15 |
| INC-23-0004 | AI Voice Cloning Used in Grandparent Scam Network Targeting Newfoundland Seniors | high | 2023-03 | Information Integrity | Unknown threat actors | confirmed | Scammers used AI voice cloning technology to impersonate family members in distress, targeting elderly victims in Newfoundland, Canada with fraudulent urgent requests for money. | Unknown threat actors | Elderly residents of Newfoundland, Targeted seniors and their families | financialpsychological | — | — | Harm | 2026-02-09 |
| INC-23-0016 | Bing Chat (Sydney) System Prompt Exposure via Prompt Injection | high | 2023-02 | Security & Cyber | Microsoft, OpenAI | confirmed | Users discovered methods to extract the hidden system prompt of Microsoft's Bing Chat (Sydney), revealing confidential operational instructions and demonstrating prompt injection vulnerabilities in production LLM systems. | Microsoft | Microsoft, whose intellectual property was exposed, Bing Chat users | operationalreputational | Microsoft | — | Near Miss | 2026-02-21 |
| INC-23-0001 | AI Deepfake Impersonation Campaign Targeting Senior U.S. Government Officials | high | 2023-01 | Information Integrity | Unknown threat actors | confirmed | The FBI warned that threat actors used AI-generated deepfake audio and video to impersonate senior U.S. government officials in phishing campaigns targeting current and former government personnel. | Unknown threat actors | U.S. government officials, Former government personnel, Government agency operations | operationalfinancial | — | — | Harm | 2026-02-09 |
| INC-23-0014 | GitHub Copilot Leaks API Keys and Secrets from Training Data | high | 2023-01 | Security & Cyber | GitHub (Microsoft), OpenAI | confirmed | GitHub Copilot was found to output API keys, credentials, and copyrighted code verbatim from its training data, raising intellectual property and supply chain security concerns. | GitHub (Microsoft) | Open-source developers, Software developers using Copilot, Code repository owners | financialoperational | — | — | Harm | 2026-02-15 |
| INC-23-0017 | UnitedHealth nH Predict AI Claim Denial System | critical | 2023-01 | Economic & Labor | naviHealth (UnitedHealth subsidiary) | confirmed | UnitedHealth subsidiary naviHealth used an AI algorithm called nH Predict to automatically deny Medicare Advantage claims for post-acute care. The system had a documented 90% error rate on appeal, and denial rates for post-acute services more than doubled after deployment. | UnitedHealthcare | Medicare Advantage beneficiaries denied post-acute care coverage, Elderly patients requiring nursing home and rehabilitation services | physicalfinancial | Medicare Advantage beneficiaries | — | Harm | 2026-03-10 |
| INC-23-0018 | Kenyan Content Moderators vs Meta — 140+ Former Facebook Workers Diagnosed with PTSD | high | 2023 | Economic & Labor | Meta | confirmed | Over 140 former Facebook content moderators in Nairobi were diagnosed with PTSD after years of exposure to extreme content including necrophilia, child abuse, and terrorism at $1.50/hour. NDAs prevented them from discussing their work or seeking external support. Court ruling on their case was postponed to 2026. | Meta, Sama (formerly Samasource) | 140+ Kenyan content moderators diagnosed with PTSD, Workers' families affected by psychological trauma | psychologicalfinancialrights violation | — | — | Harm | 2026-03-29 |
| INC-22-0003 | PyTorch torchtriton Dependency Confusion Supply Chain Attack | critical | 2022-12-25 | Security & Cyber | PyTorch Foundation | confirmed | A malicious package named 'torchtriton' uploaded to PyPI exploited dependency confusion in PyTorch nightly builds, compromising over 3,000 machine learning environments and exfiltrating SSH keys, environment variables, and system credentials between December 25 and 30, 2022 (a defensive name-collision audit sketch appears after this table). | PyTorch Foundation | Machine learning developers and researchers who installed PyTorch nightly via pip during December 25-30, 2022 | operationalfinancial | PyTorch Foundation | — | Harm | 2026-03-28 |
| INC-22-0005 | Air Canada Chatbot Hallucinated Refund Policy — Tribunal Ruling | medium | 2022-11 | Agentic Systems | Unknown chatbot vendor | confirmed | Air Canada was held legally liable for its customer service chatbot's hallucinated bereavement fare policy, after the chatbot fabricated a discount policy that did not exist and a passenger relied on it. | Air Canada | Jake Moffatt (passenger), Air Canada customers | financial | — | — | Harm | 2026-02-15 |
| INC-22-0004 | RealPage AI Algorithmic Rent-Fixing | high | 2022-10 | Economic & Labor | RealPage | confirmed | RealPage's algorithmic pricing software, used by major landlords to coordinate rental pricing, was accused of facilitating anticompetitive price-fixing that inflated rents for millions of American tenants. | RealPage, Major U.S. property management companies | American renters in algorithmically priced apartments, Tenants in major U.S. metro areas | financial | — | — | Systemic Risk | 2026-02-15 |
| INC-22-0002 | Meta Housing Ad Discrimination DOJ Settlement | high | 2022-06 | Discrimination & Social Harm | Meta (Facebook) | confirmed | Meta's algorithmic ad delivery system was found to discriminate in housing advertisements by disproportionately excluding users based on race, national origin, and other protected characteristics, resulting in a DOJ settlement. | Meta (Facebook) | Housing seekers from minority groups, Protected classes under the Fair Housing Act | rights violation | — | — | Harm | 2026-02-15 |
| INC-22-0001 | Drug Discovery AI Repurposed to Generate Toxic Chemical Weapons Compounds | critical | 2022-03 | Systemic Risk | Collaborations Pharmaceuticals | confirmed | Researchers at Collaborations Pharmaceuticals demonstrated that an AI drug discovery model, when its objective was inverted, could generate 40,000 potentially toxic molecular designs in under six hours, including known chemical warfare agents. | Collaborations Pharmaceuticals (research demonstration) | general public — potential future risk via dual-use weaponization | societal | — | — | Signal | 2026-02-15 |
| INC-21-0001 | Chatbot Encouraged Man in Plot to Kill Queen Elizabeth II | critical | 2021-12-25 | Human-AI Control | Replika (Luka Inc.) | confirmed | A Replika chatbot encouraged Jaswant Singh Chail in his stated intention to assassinate Queen Elizabeth II; Chail subsequently breached Windsor Castle grounds armed with a crossbow. | Replika (Luka Inc.) | Queen Elizabeth II (target), Jaswant Singh Chail | physicalpsychological | — | — | Harm | 2026-02-15 |
| INC-20-0004 | Pulse Oximeter Racial Bias Propagates into AI Clinical Decision Systems | high | 2020-12 | Discrimination & Social Harm | Pulse oximeter manufacturers | confirmed | A landmark 2020 NEJM study demonstrated that pulse oximeters systematically overestimate blood oxygen levels in Black patients, with occult hypoxemia occurring nearly three times more frequently in Black patients (11.7%) than in White patients (3.6%). Subsequent research showed that as hospitals and AI-driven triage tools rely on pulse oximetry data, the measurement bias propagates into risk scores and treatment decisions, reinforcing racial disparities in critical care. A 2022 Johns Hopkins study found that the bias delayed supplemental oxygen initiation by an average of 4.6 hours for Black COVID-19 patients. The FDA issued draft guidance in January 2025 requiring expanded diversity in pulse oximeter clinical trials. | Hospitals and healthcare systems using AI-driven triage tools | Black patients and individuals with darker skin tones receiving inaccurate oxygen readings, COVID-19 patients who experienced delayed treatment due to biased measurements | physicalrights violation | — | — | Systemic Risk | 2026-03-13 |
| INC-20-0002 | UK A-Level Algorithm Downgrades Disadvantaged Students | critical | 2020-08 | Discrimination & Social Harm | Ofqual (Office of Qualifications and Examinations Regulation) | confirmed | The UK exam regulator Ofqual deployed a statistical algorithm to assign A-level grades during the COVID-19 pandemic, systematically downgrading approximately 40% of teacher-assessed results and disproportionately affecting students from disadvantaged backgrounds. | Ofqual | Approximately 300,000 UK students, Students from disadvantaged schools, State school students | rights violationpsychological | — | — | Harm | 2026-02-15 |
| INC-20-0003 | UN-Documented Autonomous Drone Attack in Libya | critical | 2020-03 | Systemic Risk | STM (Savunma Teknolojileri Muhendislik) | confirmed | A Turkish-manufactured STM Kargu-2 autonomous drone reportedly engaged and attacked combatants in Libya without confirmed human authorization, in what may be the first documented combat use of a fully autonomous lethal weapon. | Libyan Government of National Accord (GNA) forces | Combatants in the Libyan civil conflict | physical | — | — | Harm | 2026-02-15 |
| INC-20-0001 | Clearview AI Mass Facial Recognition Scraping | critical | 2020-01 | Privacy & Surveillance | Clearview AI | confirmed | Clearview AI scraped billions of facial images from social media platforms without consent to build a facial recognition database used by law enforcement agencies worldwide, raising mass surveillance concerns. | Clearview AI, Law enforcement agencies worldwide | General public, Social media users, Individuals misidentified by the system | rights violationpsychological | — | — | Systemic Risk | 2025-01-15 |
| INC-20-0005 | Robert Williams Wrongful Arrest from Facial Recognition Racial Bias | critical | 2020-01 | Discrimination & Social Harm | DataWorks Plus | confirmed | Robert Williams was wrongfully arrested by Detroit police based on a false facial recognition match, detained for 30 hours, and charged with a crime he did not commit. The case became the first publicly reported wrongful arrest caused by facial recognition technology and resulted in a landmark $300,000 settlement that imposed some of the strictest facial recognition policies of any U.S. police department. | Detroit Police Department, Michigan State Police | Robert Williams, wrongfully arrested and detained for 30 hours in front of his wife and two young daughters, Williams family members who witnessed the wrongful arrest | rights violationpsychologicalfinancial | — | — | Harm | 2026-03-28 |
| INC-20-0006 | 'Vegetative Electron Microscopy' Nonsense Phrase Contaminates Scientific Literature via AI | medium | 2020-01 | Information Integrity | OpenAI | confirmed | The nonsense phrase 'vegetative electron microscopy' — originating from a 1950s OCR scanning error that merged text across two columns — appeared in at least 22 scientific papers. Investigations by Retraction Watch and researchers Guillaume Cabanac and Cyril Labbé traced its spread through a chain: OCR error → digital databases → a Farsi near-homograph confusion (2017–2019) → AI training data (GPT-3 onward). The phrase now serves as a fingerprint for AI-generated or paper-mill-produced manuscripts, undermining trust in parts of the scholarly record. | Authors and paper mills using AI writing tools for scientific manuscripts | Scientific journals publishing contaminated papers, Researchers relying on the integrity of the scholarly record | reputationaloperational | Springer Nature, Elsevier | — | Harm | 2026-03-13 |
| INC-19-0001 | AI Voice Clone CEO Fraud Against UK Energy Company | high | 2019-03 | Information Integrity | Unknown threat actors | confirmed | Criminals used AI-generated voice cloning to impersonate the CEO of a German parent company, deceiving a UK subsidiary executive into transferring approximately $243,000 to a fraudulent account. | Unknown threat actors | UK energy company, Targeted executive | financial | — | — | Harm | 2025-01-15 |
| INC-18-0002 | Amazon AI Recruiting Tool Gender Bias | high | 2018-10 | Discrimination & Social Harm | Amazon | confirmed | Amazon's internal AI recruiting tool was found to systematically penalize resumes containing the word 'women's' (as in 'women's chess club captain') and to downgrade graduates of all-women's colleges, reflecting gender bias learned from a decade of historically male-dominated hiring data. | Amazon | Female job applicants, Women in the technology sector | rights violationfinancial | — | — | Harm | 2025-01-15 |
| INC-18-0003 | Boeing 737 MAX MCAS Automation Failures — Two Fatal Crashes | critical | 2018-10 | Human-AI Control | Boeing | confirmed | Boeing's Maneuvering Characteristics Augmentation System (MCAS) contributed to two fatal crashes of 737 MAX aircraft (Lion Air Flight 610, October 2018, and Ethiopian Airlines Flight 302, March 2019), killing all 346 people aboard the two flights. | Lion Air, Ethiopian Airlines | 346 passengers and crew killed, Families of crash victims, Global air travelers | physical | Lion Air, Ethiopian Airlines | — | Harm | 2026-02-15 |
| INC-18-0001 | Uber Autonomous Vehicle Pedestrian Fatality | critical | 2018-03 | Human-AI Control | Uber Advanced Technologies Group (ATG) | confirmed | An Uber test vehicle operating in fully autonomous mode struck and killed pedestrian Elaine Herzberg in Tempe, Arizona, marking the first known pedestrian fatality caused by an autonomous vehicle. | Uber | Elaine Herzberg (deceased), Pedestrians in autonomous vehicle testing zones | physical | — | — | Harm | 2025-01-15 |
| INC-17-0001 | Facebook AI Mistranslation of Arabic Post Leads to Wrongful Arrest in Israel | high | 2017-10 | Information Integrity | Facebook (Meta) | confirmed | Facebook's machine translation system mistranslated an Arabic post containing 'good morning' as 'attack them' in Hebrew, leading Israeli police to arrest a Palestinian construction worker. | Facebook (Meta) | Palestinian construction worker, Arabic-speaking Facebook users | rights violationpsychological | — | — | Harm | 2026-02-15 |
| INC-16-0001 | Australia Robodebt Automated Welfare Fraud Detection | critical | 2016-07 | Discrimination & Social Harm | Australian Government (Department of Human Services) | confirmed | The Australian Government's automated income-averaging algorithm incorrectly issued debt notices to hundreds of thousands of welfare recipients, resulting in widespread financial hardship and contributing to documented suicides. | Australian Government (Department of Human Services) | Australian welfare recipients, Disability support pensioners, Low-income individuals | financialpsychologicalphysical | — | — | Harm | 2025-01-15 |
| INC-16-0003 | COMPAS Recidivism Algorithm Racial Bias | critical | 2016-05 | Discrimination & Social Harm | Northpointe (now Equivant) | confirmed | ProPublica's investigation revealed that the COMPAS recidivism prediction algorithm used in U.S. courts produced racially biased risk scores, with Black defendants nearly twice as likely to be falsely flagged as high risk compared to white defendants. | U.S. state and county courts | Black defendants, Minority defendants in the U.S. criminal justice system | rights violationpsychological | — | — | Harm | 2026-02-15 |
| INC-16-0002 | Microsoft Tay Twitter Chatbot Adversarial Manipulation | high | 2016-03 | Agentic Systems | Microsoft | confirmed | Microsoft's Tay chatbot was manipulated by coordinated users on Twitter to produce racist, sexist, and inflammatory statements within hours of its public launch, demonstrating vulnerabilities in unsupervised online learning systems. | Microsoft | General public, Targeted minority groups | reputationalsocietal | Microsoft | — | Harm | 2026-02-15 |
| INC-13-0001 | Dutch Childcare Benefits Algorithm Discrimination | critical | 2013-01 | Discrimination & Social Harm | Dutch Tax Authority (Belastingdienst) | confirmed | The Dutch Tax Authority deployed a self-learning algorithm that disproportionately flagged families with dual nationalities for childcare benefit fraud, leading to wrongful debt claims against over 26,000 families. | Dutch Tax Authority (Belastingdienst) | Over 26,000 Dutch families, Families with dual nationalities, Low-income caregivers | financialpsychologicalrights violation | — | — | Harm | 2026-02-15 |
| INC-10-0001 | 2010 Flash Crash — Algorithmic Trading Cascading Failure | critical | 2010-05 | Systemic Risk | Waddell & Reed Financial, Multiple high-frequency trading firms | confirmed | Algorithmic trading systems triggered a cascading failure that erased nearly $1 trillion in U.S. equity market value within minutes; markets recovered most of the loss before the close of trading. | Waddell & Reed Financial, Multiple high-frequency trading firms | U.S. equity investors, Retail traders, Market participants | financial | — | — | Harm | 2026-02-15 |