Governance Concept

Automated Decision-Making

Using algorithms or AI to make decisions that affect individuals, with limited human review.

Definition

Automated decision-making (ADM) refers to the use of algorithms, rule-based systems, or AI models to make or substantially inform decisions that affect individuals — including determinations about welfare eligibility, hiring, credit scoring, criminal sentencing, and public service allocation. ADM systems range from fully automated processes with no human involvement to semi-automated workflows where algorithmic outputs guide a human decision-maker. The concept is central to data protection law, where regulations such as the GDPR establish rights related to solely automated decisions with legal or significant effects.
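
The spectrum from fully automated to semi-automated workflows can be made concrete in code. The following is a minimal sketch, assuming a hypothetical eligibility-scoring setup: Application, score_application, and the threshold are illustrative placeholders, not drawn from any real system or regulation discussed here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Application:
    applicant_id: str
    features: dict

def score_application(app: Application) -> float:
    """Stand-in for a trained model: returns an eligibility score in [0, 1]."""
    return min(1.0, 0.1 * len(app.features))  # placeholder logic, not a real model

def fully_automated_decision(app: Application, threshold: float = 0.5) -> bool:
    """No human involvement: the score alone determines the outcome."""
    return score_application(app) >= threshold

def semi_automated_decision(
    app: Application,
    review: Callable[[Application, bool], bool],
    threshold: float = 0.5,
) -> bool:
    """The model recommends; a human reviewer makes the final call."""
    recommendation = score_application(app) >= threshold
    return review(app, recommendation)

if __name__ == "__main__":
    app = Application("A-001", {"income": 32000, "dependents": 2})
    print(fully_automated_decision(app))                     # model alone decides
    print(semi_automated_decision(app, lambda a, rec: rec))  # reviewer accepts the recommendation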

How It Relates to AI Threats

Automated decision-making sits at the intersection of Human-AI Control and Discrimination & Social Harm. Within Human-AI Control, ADM raises concerns about overreliance on algorithmic outputs (automation bias) and the adequacy of human-in-the-loop safeguards — particularly when reviewers lack the time, training, or information to meaningfully override system recommendations. Within Discrimination & Social Harm, ADM can produce allocational harm when biased models systematically deny resources or opportunities to certain groups. The opacity of many ADM systems further compounds these risks by limiting individuals’ ability to understand or contest decisions made about them.
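
One practical check on whether human-in-the-loop review is meaningful is to audit how often reviewers actually depart from the system's recommendation. The sketch below assumes a hypothetical audit-log format; the recommended and final field names are illustrative, not taken from any deployed system.

```python
# Hypothetical audit log: each record pairs the system's recommendation
# with the human reviewer's final decision.
audit_log = [
    {"recommended": "deny", "final": "deny"},
    {"recommended": "deny", "final": "deny"},
    {"recommended": "approve", "final": "approve"},
    {"recommended": "deny", "final": "approve"},  # a genuine override
]

def override_rate(log: list[dict]) -> float:
    """Fraction of cases where the reviewer departed from the recommendation."""
    if not log:
        return 0.0
    overrides = sum(1 for record in log if record["final"] != record["recommended"])
    return overrides / len(log)

print(f"Override rate: {override_rate(audit_log):.0%}")
# An override rate near zero across many cases suggests the human review
# step may be rubber-stamping rather than a meaningful safeguard.
```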

Why It Occurs

  • Organisations adopt ADM to reduce costs, increase processing speed, and impose consistency in high-volume decision environments
  • The perceived objectivity of algorithmic systems can lead to reduced scrutiny compared with human decision-makers
  • Regulatory frameworks in many jurisdictions have not kept pace with the deployment of AI in consequential decision-making
  • Human reviewers in semi-automated systems frequently default to accepting algorithmic recommendations
  • Transparency requirements are often limited or poorly enforced, leaving affected individuals without meaningful recourse

Real-World Context

The Dutch childcare benefits scandal (INC-13-0001) is among the most extensively documented ADM failures: an automated fraud detection system wrongly flagged thousands of families, disproportionately those with dual nationality, for benefit repayment. The resulting harms included financial ruin, family separations, and ultimately the resignation of the Dutch government in January 2021. The case prompted significant regulatory discussion across the EU regarding the need for human oversight provisions and algorithmic impact assessments in public-sector ADM deployments.

Related Incidents

  • INC-13-0001 (critical, 2013-01): Dutch Childcare Benefits Algorithm Discrimination
  • INC-26-0097 (critical, 2026-03-31): Oracle Cuts 20,000–30,000 Jobs to Fund $50B AI Infrastructure Push (2026)
  • INC-26-0047 (critical, 2026-03-09): Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate
  • INC-26-0091 (high, 2026-03-07): Workday AI Hiring Bias Class Action — African-American Applicant Rejected Dozens of Times Across Employers
  • INC-26-0066 (high, 2026-03): ACLU Files Complaint — HireVue AI Discriminated Against Deaf Indigenous Worker in Promotion Decision
  • INC-26-0075 (high, 2026-03): Canada Immigration AI Hallucinated Job Duties — PhD Immunologist Denied Permanent Residency
  • INC-26-0029 (critical, 2026-02-28): US Military AI Targeting Platform Fed Stale Data Contributes to Strike on Iranian Elementary School
  • INC-26-0027 (critical, 2026-02-26): Block (Square) Cuts Approximately 4,000 Jobs as AI Replaces Customer Service Workforce
  • INC-26-0036 (critical, 2026-02): MizarVision Chinese AI Startup Publishes Real-Time US Military Intelligence via Satellite Imagery
  • INC-26-0046 (critical, 2026-01): LSU AI Cheating Detection Crisis — 1,488 Cases Filed with Disproportionate Impact on Non-Native English Speakers
  • INC-26-0050 (critical, 2026-01): AI Healthcare Bias Study — 1.7 Million Responses Show Race-Based Treatment Differences Across 9 AI Programs
  • INC-26-0056 (high, 2026-01): Eightfold AI Sued for Creating Secret Dossiers on 1 Billion+ Workers with Hidden Scoring
  • INC-26-0068 (high, 2026): Palantir ImmigrationOS — ICE Pays $30M for AI System Creating Neighborhood Deportation Maps
  • INC-25-0043 (high, 2025-09): AI Grading Errors — Connecticut Students Petition After Misscoring, MCAS Glitch Affects 1,400 Students
  • INC-23-0018 (high, 2023): Kenyan Content Moderators vs Meta — 140+ Former Facebook Workers Diagnosed with PTSD
  • INC-17-0001 (high, 2017-10): Facebook AI Mistranslation of Arabic Post Leads to Wrongful Arrest in Israel

Last updated: 2026-02-14