
AI Threats Affecting Children

How AI-enabled threats affect minors under 18, a group requiring distinct protection because of developmental vulnerability, specific legal safeguards, and the inability to provide informed consent.


This page documents the AI risks for children and minors under 18 — including threats to child safety online from AI chatbots, synthetic imagery, and algorithmic content exposure. It is intended for parents, educators, child protection professionals, platform trust and safety teams, and policymakers focused on youth AI safety.

Children are classified under the Individuals category — groups where harm is experienced by natural persons. This category distinguishes individual-level harms from organizational impacts (affecting institutions) and systems-level harms (affecting societal structures like democracy or national security). Children are treated as a distinct group because of their legal protections (COPPA, GDPR Article 8, UK Age Appropriate Design Code), developmental vulnerability to manipulative design, and structural inability to consent to AI system interactions. When harm extends to the broader population (general public), workplace contexts (workers), or structurally disadvantaged populations (vulnerable communities), those dedicated pages provide more targeted guidance.

This page summarizes recurring AI threat patterns, protective measures, and relevant regulatory context for children.

How AI Threats Appear

The following are recurring patterns of AI-enabled harm documented across incidents affecting children. These threats concentrate on digital platforms and educational systems where minors spend significant time.

Threat Pattern                              | Primary Domain               | Key Indicator
Synthetic imagery and deepfakes             | Privacy & Surveillance       | AI-generated content using children’s images
AI chatbots and companions                  | Human-AI Control             | Emotional dependency on conversational AI without age verification
Recommendation systems and content exposure | Information Integrity        | Escalating content bypassing parental controls
Educational AI bias                         | Discrimination & Social Harm | Opaque grading criteria with demographic disparities
Data collection and profiling               | Privacy & Surveillance       | Biometric data collection through school devices
  • Synthetic imagery and deepfakes — AI-generated explicit or exploitative images of minors, including deepfakes created from publicly available photos. AI-generated content targeting children that mimics trusted sources or authority figures.
  • AI chatbots and companions — Conversational AI systems that form emotional bonds with minors, creating risks of manipulation, dependency, and exposure to harmful content without adequate age verification or safety guardrails. Chatbots and virtual companions that exploit developmental vulnerabilities through addictive design.
  • Recommendation systems and content exposure — AI recommendation algorithms that surface age-inappropriate, harmful, or radicalizing content to young users. Systems that learn to serve increasingly extreme or addictive content based on engagement signals, overriding parental controls through alternative pathways.
  • Educational AI systems — Automated grading, behavioral monitoring, or learning assessment systems that disadvantage students based on demographic characteristics. AI assessment tools deployed without transparency about criteria or validation across diverse student populations.
  • Data collection and profiling — AI systems that profile minors through educational platforms, gaming, or social media, often without parental knowledge or meaningful consent. Collection of biometric and behavioral data from children through school-provided devices.
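The recommendation-system pattern above can be made concrete with a toy feedback loop. This is an illustration only, not any platform's actual ranking algorithm: a ranker that optimizes purely for engagement, under the assumption that more intense content holds attention longer, drifts toward serving the most extreme items available. The item model, "intensity" variable, and user-behavior function are all assumptions for the sketch.

```python
import random

random.seed(0)

# Toy catalog: "intensity" stands in for how extreme/stimulating an item is.
items = [{"intensity": i / 10, "total": 0.0, "serves": 0} for i in range(11)]

def watch_time(intensity):
    # Assumed user behavior: more intense content holds attention longer.
    return intensity + random.uniform(0.0, 0.2)

def score(it):
    # Average engagement per serve; unserved items score infinity so each
    # item is tried at least once.
    return it["total"] / it["serves"] if it["serves"] else float("inf")

for _ in range(1000):
    # 10% random exploration, otherwise serve the best-scoring item.
    it = random.choice(items) if random.random() < 0.1 else max(items, key=score)
    it["total"] += watch_time(it["intensity"])
    it["serves"] += 1

most_served = max(items, key=lambda it: it["serves"])
print("Most-served intensity:", most_served["intensity"])  # drifts to 1.0, the maximum
```

After a brief exploration phase, the engagement-only objective concentrates almost all serves on the highest-intensity item. This is the dynamic the bullet describes: nothing in the objective encodes age-appropriateness, so escalation is the optimizer working as designed.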

Relevant AI Threat Domains

  • Privacy & Surveillance — Data collection and profiling of minors across educational, entertainment, and social platforms
  • Information Integrity — Exposure to AI-generated misinformation and manipulated content that children lack the developmental capacity to evaluate
  • Discrimination & Social Harm — Biased educational and behavioral assessment systems that affect academic outcomes and opportunities
  • Human-AI Control — Manipulative interface design targeting developmental vulnerabilities, including addictive design patterns and emotional exploitation

What to Watch For

While the section above describes threat patterns, this section identifies concrete warning signs that parents, educators, and child protection professionals may encounter, along with the immediate steps they can take in response.

  • AI systems interacting with minors without age-appropriate safeguards or parental controls.
    What parents and educators can do: Review the AI features in platforms children use. Check whether age verification and parental controls are active and effective. Report platforms that allow unfiltered AI interactions with minors.

  • Educational platforms using AI assessment without transparency about criteria or data use.
    What educators can do: Request documentation from vendors about how AI assessment criteria are determined. Ask whether outcomes have been audited for bias across student demographics. Require that AI assessments can be reviewed and overridden by teachers.

  • Social media or gaming platforms with AI recommendation systems that lack youth-specific protections.
    What parents can do: Use platform safety settings to limit AI-driven recommendations. Monitor for patterns of escalating content that suggest algorithmic amplification. Report recommendation content that is age-inappropriate.

  • AI-generated content targeting children that mimics trusted sources or authority figures.
    What educators can do: Teach age-appropriate media literacy that includes AI-generated content awareness. Help children understand that text, images, and voices can be artificially generated to deceive.

  • Collection of biometric or behavioral data from children through school-provided devices.
    What parents and school administrators can do: Review school AI and technology policies for data collection scope. Ask what biometric or behavioral data is collected, how long it is retained, and who has access. Exercise data deletion rights where available.


Protective Measures

These are practical steps parents, educators, and child protection professionals can take to reduce children’s exposure to AI-enabled threats.

Questions parents and educators can ask platforms and schools

Use these when evaluating AI-powered platforms, tools, or educational systems that interact with children.

  • “Does this platform use AI to interact with or recommend content to children, and what safety guardrails are in place?”
  • “What data about my child is collected by AI systems, and how can I access or delete it?”
  • “Has the AI grading or assessment system been audited for bias across student demographics?”
  • “What happens if the AI makes an error that affects my child’s grade, behavioral record, or online safety?”

Questions policymakers and child protection organizations can ask

Use these when engaging with AI providers, platform operators, or regulators on youth AI safety standards.

  • “How does this platform verify the age of users before exposing them to AI-generated content or AI interactions?”
  • “What testing has been conducted to ensure AI recommendation systems do not serve harmful content to minors?”
  • “How does this educational AI system comply with child data protection requirements in the jurisdictions where it operates?”
  • “What independent oversight exists for AI systems that interact directly with children?”

Regulatory Context

  • EU AI Act — Specifically identifies AI systems interacting with children as requiring heightened risk assessment and additional safeguards
  • COPPA (US) — Restricts collection of personal information from children under 13, with AI systems processing children’s data subject to these requirements
  • UK Age Appropriate Design Code — Requires AI-powered services likely to be accessed by children to meet specific design standards prioritizing the best interests of the child
  • GDPR Article 8 (EU) — Requires parental consent for data processing of children under 16 (or lower age set by member states), applying to AI systems that collect or profile minors

These protections often lag behind cross-platform AI deployments and generative AI tools that children can access through third-party applications not covered by platform-specific regulations.
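The regimes above set different digital consent ages, and the applicable threshold depends on where the child is. The following minimal sketch shows how a service might gate data processing on a per-jurisdiction consent age; the jurisdiction codes, the function name, and the UK threshold of 13 (set by the Data Protection Act 2018) are illustrative assumptions, and real compliance logic would depend on member-state law and verified user location.

```python
# Minimum ages below which verifiable parental consent is required before
# processing a child's personal data (illustrative; verify per jurisdiction).
DIGITAL_CONSENT_AGE = {
    "US": 13,  # COPPA: parental consent required for children under 13
    "EU": 16,  # GDPR Art. 8 default; member states may set a lower age
    "UK": 13,  # UK Data Protection Act 2018 sets the threshold at 13
}

def requires_parental_consent(age: int, jurisdiction: str) -> bool:
    """Return True if processing this child's data needs parental consent."""
    # Unknown jurisdictions fall back to the strictest threshold listed.
    threshold = DIGITAL_CONSENT_AGE.get(jurisdiction, 16)
    return age < threshold

print(requires_parental_consent(12, "US"))  # True
print(requires_parental_consent(14, "US"))  # False
print(requires_parental_consent(15, "EU"))  # True
```

The same child can fall above the threshold in one jurisdiction and below it in another, which is one reason cross-border AI deployments are hard to police under platform-specific rules.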


Documented Incidents

Based on incident analysis, children are most frequently affected by threats in the Privacy & Surveillance and Human-AI Control domains, reflecting the intersection of data exploitation and manipulative design targeting developmental vulnerabilities.

Last updated: 2026-04-02