AI Threats Affecting Society at Large
How AI-enabled threats produce diffuse systemic harm to social cohesion, public trust, epistemic integrity, or institutional stability — extending beyond identifiable individuals or organizations.
This page documents AI societal risks and the broader impact of AI on society — diffuse, systemic harms to social cohesion, public trust, epistemic integrity, and institutional stability that extend beyond identifiable individuals or organizations. It is intended for policymakers, civil society organizations, researchers, and anyone concerned with the long-term societal effects of AI.
Society at large is classified under the Systems category — groups where harm manifests at the level of societal structures. This category distinguishes systemic-level harms from individual impacts (affecting natural persons) and organizational impacts (affecting institutions). Society at large is the highest-threshold affected group, used only when harm genuinely diffuses across society and cannot be meaningfully captured by any other affected group. It is distinguished from democratic institutions (which focuses on governance mechanisms) and national security systems (which focuses on defense) by its scope: it encompasses all of society.
This page summarizes recurring AI threat patterns, protective measures, and relevant regulatory context for society at large.
At a glance
- Primary threats: Epistemic degradation, trust erosion, power concentration, existential and catastrophic risk
- 19 documented incidents — including drug discovery AI repurposed for chemical weapons and AI-generated nonsense contaminating scientific literature
- Key domains: Systemic Risk, Information Integrity, Economic & Labor
How AI Threats Appear
The following are recurring patterns of AI-enabled harm documented across incidents affecting society at large. Each pattern reflects real-world events, not hypothetical risks.
| Threat Pattern | Primary Domain | Key Indicator |
|---|---|---|
| Epistemic degradation | Information Integrity | Declining public trust correlated with AI content proliferation |
| Trust erosion | Information Integrity | Increasing difficulty verifying authenticity of public communications |
| Power concentration | Economic & Labor | AI-driven market consolidation across multiple sectors |
| Social fragmentation | Human-AI Control | AI influence on public opinion at population scale |
| Existential and catastrophic risk | Systemic Risk | AI failures cascading across interconnected critical sectors |
- Epistemic degradation — Widespread AI-generated content that erodes the shared capacity to distinguish fact from fabrication, undermining the epistemic foundations of public discourse
- Trust erosion — Cumulative loss of public confidence in institutions, media, and interpersonal communications due to the prevalence of AI-generated synthetic content
- Power concentration — Structural accumulation of economic and informational power by entities controlling advanced AI systems, reducing competitive diversity and democratic accountability
- Social fragmentation — AI-driven recommendation and content generation systems that create incompatible information environments, fragmenting shared reality across population segments
- Existential and catastrophic risk — Evidence-informed concerns about advanced AI systems whose optimization targets diverge from collective human welfare
How systemic AI harm differs from individual harm
Society-level AI threats are distinct because they:
- Emerge from aggregation — No single incident creates the harm; it emerges from the cumulative effect of millions of AI interactions across the population
- Resist attribution — The harm cannot be traced to a specific actor, decision, or system — it is a property of the AI ecosystem as a whole
- Erode shared foundations — The damage is to collective infrastructure (trust, shared knowledge, institutional legitimacy) rather than to identifiable victims
- Manifest over time — Societal harms often develop gradually, making them difficult to detect until structural damage is advanced
Relevant AI Threat Domains
- Systemic Risk — Infrastructure dependency, strategic misalignment, uncontrolled capability escalation, and existential risk
- Information Integrity — Large-scale erosion of public trust in information ecosystems through synthetic content at population scale
- Economic & Labor — Structural market concentration and economic power asymmetry driven by AI capability advantages
- Human-AI Control — Gradual transfer of societal decision-making to AI systems without adequate collective governance
What to Watch For
The preceding section describes threat patterns; this section identifies concrete warning signs that policymakers, researchers, and civil society organizations may encounter, along with the immediate steps they can take in response.
- Measurable decline in public trust in information sources, institutions, or democratic processes, correlated with AI content proliferation — What researchers can do: Track trust metrics longitudinally and disaggregate by exposure to AI-generated content (see the first sketch after this list). Publish findings that connect information-ecosystem changes to measurable trust outcomes.
- Market concentration metrics showing AI-driven consolidation across multiple sectors — What policymakers can do: Monitor market concentration indicators in AI-intensive sectors (see the second sketch after this list). Assess whether competition law frameworks adequately address AI-enabled concentration dynamics.
- Evidence of AI systems influencing public opinion or behavior at population scale — What civil society organizations can do: Fund and support independent research on AI influence at scale. Advocate for transparency requirements on AI-driven content curation and recommendation systems.
- Increasing difficulty attributing authorship or verifying the authenticity of public communications — What policymakers can do: Support the development and adoption of content provenance standards. Invest in public infrastructure for content authenticity verification.
- Cascading effects where AI failures in one domain propagate to create systemic instability — What researchers can do: Study AI dependency chains across critical sectors. Model cascading failure scenarios to identify systemic vulnerabilities before they materialize (see the third sketch after this list).
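To make the first warning sign's research step concrete, here is a minimal sketch of longitudinal trust tracking disaggregated by exposure to AI-generated content. The survey waves, column names, and scores below are hypothetical, and a real study would need representative sampling and controls for confounders.

```python
# Minimal sketch: track mean institutional trust across survey waves,
# split by self-reported exposure to AI-generated content.
# All data below is hypothetical.
import pandas as pd

surveys = pd.DataFrame({
    "wave":        ["2023H1", "2023H1", "2024H1", "2024H1", "2025H1", "2025H1"],
    "ai_exposure": ["low", "high"] * 3,
    "trust_score": [6.4, 6.1, 6.3, 5.6, 6.2, 5.0],  # 0-10 trust scale
})

# Mean trust per wave and exposure group; a widening gap between the
# low- and high-exposure columns is the longitudinal signal to report.
trend = surveys.pivot_table(index="wave", columns="ai_exposure",
                            values="trust_score", aggfunc="mean")
print(trend)
```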
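For the market-concentration warning sign, one standard indicator is the Herfindahl-Hirschman Index (HHI): the sum of squared market shares, expressed in percentage points. The sketch below uses hypothetical firm shares; the thresholds regulators apply vary by jurisdiction and guideline vintage.

```python
# Herfindahl-Hirschman Index (HHI): sum of squared market shares in
# percentage points. Values above roughly 2,500 are often treated as
# highly concentrated, though thresholds differ across jurisdictions.

def hhi(shares: list[float]) -> float:
    """Compute HHI from market shares expressed as fractions summing to 1."""
    if abs(sum(shares) - 1.0) > 1e-6:
        raise ValueError("market shares must sum to 1")
    return sum((s * 100) ** 2 for s in shares)

# Hypothetical AI-intensive sector with four firms.
print(hhi([0.45, 0.30, 0.15, 0.10]))  # ~3250 -> highly concentrated
```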
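For the last warning sign, a toy threshold model over a hand-written dependency graph shows how a single-sector AI failure can propagate. The sectors, dependency edges, and 50% failure threshold are illustrative assumptions, not empirical estimates.

```python
# Toy cascading-failure model: a sector fails once the fraction of its
# failed dependencies reaches a tolerance threshold. All sectors,
# edges, and the threshold itself are illustrative.

dependencies = {          # sector -> sectors it depends on
    "finance":    ["cloud", "power"],
    "healthcare": ["cloud", "power", "logistics"],
    "logistics":  ["cloud"],
    "cloud":      ["power"],
    "power":      [],
}
THRESHOLD = 0.5           # fail when >= 50% of dependencies have failed

def cascade(initial_failures: set[str]) -> set[str]:
    """Propagate failures through the graph until a fixed point."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for sector, deps in dependencies.items():
            if sector in failed or not deps:
                continue
            if sum(d in failed for d in deps) / len(deps) >= THRESHOLD:
                failed.add(sector)
                changed = True
    return failed

print(cascade({"cloud"}))  # a cloud outage takes finance, logistics, healthcare with it
print(cascade({"power"}))  # a power failure cascades through every sector
```

Even this toy model makes the policy point: the sectors most exposed are not those that fail first, but those downstream of shared dependencies.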
Protective Measures
These are practical steps policymakers, civil society organizations, and researchers can take to address systemic AI risks at the societal level.
- Strengthen content authenticity infrastructure — Content provenance and watermarking standards, together with AI-generated text detection, help preserve the integrity of public information ecosystems (a simplified provenance sketch follows this list). See the guides on detecting deepfakes and detecting AI-generated text.
- Monitor for systemic patterns — AI risk monitoring systems can track aggregate trends in AI-related harm, while deepfake detection addresses the erosion of trust in visual and audio media at scale.
- Promote equitable AI design — Bias and fairness auditing tools and human oversight design frameworks help ensure that widely deployed AI systems do not concentrate power or amplify structural inequities.
- Build societal awareness — The AI threat protection overview and AI threat risk assessment guide provide accessible entry points for understanding the landscape of AI-enabled threats and available defenses.
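To make the content-authenticity measure concrete, below is a deliberately simplified provenance sketch: a publisher tags a content hash with a keyed MAC, and a verifier recomputes the tag to confirm the bytes are unmodified. Real standards such as C2PA use public-key signatures and signed manifests; the shared-key HMAC here is only a stand-in to keep the example self-contained.

```python
# Simplified content-provenance check. A real deployment would use
# public-key signatures (as in C2PA) rather than a shared HMAC key.
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    """Publisher side: derive a provenance tag from the content hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_content(content, key), tag)

key = b"demo-key"                          # illustrative only
statement = b"Official agency statement."
tag = sign_content(statement, key)

print(verify_content(statement, tag, key))                  # True
print(verify_content(statement + b" [edited]", tag, key))   # False
```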
Questions policymakers and regulators can ask
Use these when developing AI governance frameworks or evaluating systemic AI risk at the national and international level.
- “What mechanisms exist to detect and respond to AI-driven information ecosystem degradation before public trust is irreversibly damaged?”
- “How are competition frameworks being adapted to address AI-enabled market concentration?”
- “What public investment is being made in content provenance infrastructure and media literacy at scale?”
- “How are existential and catastrophic AI risks being assessed and governed at the national and international level?”
Questions researchers and civil society can ask
Use these when investigating AI-driven societal impacts or advocating for independent AI oversight.
- “What longitudinal data exists on the relationship between AI-generated content proliferation and public trust metrics?”
- “How are AI dependency chains across critical sectors being mapped and stress-tested?”
- “What independent oversight exists for the most capable AI systems, and is it adequate to the scale of potential societal impact?”
- “How are the communities most affected by systemic AI harms involved in governance and policy decisions?”
Regulatory Context
- EU AI Act — Establishes systemic risk assessment requirements for general-purpose AI models with wide societal reach, including mandatory evaluations and incident reporting
- NIST AI RMF — Addresses societal-scale AI risks through organizational risk management, with guidance on measuring and mitigating systemic impacts
- International governance initiatives (UN, OECD, G7) — Address cross-border systemic AI risks including AI safety summits, voluntary commitments, and emerging multilateral frameworks
International governance of systemic AI risks remains fragmented, with no binding global framework addressing AI-driven epistemic degradation, power concentration, or catastrophic risk at the scale required.
Documented Incidents
Based on incident analysis, society at large is most frequently affected by threats in the Systemic Risk and Information Integrity domains, reflecting the convergence of catastrophic dual-use risks and large-scale epistemic contamination.
19 incidents affecting society at large are documented. View all 19 incidents for this group →
For classification rules and evidence standards, refer to the Methodology.
Last updated: 2026-04-02