TOP AI THREATS
Harm Mechanism

Algorithmic Bias

Systematic errors in AI systems that produce unfair outcomes, often favouring one group over another.

Definition

Algorithmic bias refers to systematic and repeatable errors in computer systems that produce unfair outcomes, such as consistently favouring one demographic group over another. In AI systems, bias typically originates from unrepresentative training data, flawed model design, or feedback loops that amplify existing societal inequalities. Algorithmic bias can result in discriminatory decisions across domains including hiring, lending, criminal justice, and public services.
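The feedback-loop mechanism mentioned above can be shown in a deliberately simplified simulation. Everything below is invented for illustration: two areas with identical underlying rates, and a system that allocates attention superlinearly toward whichever area has more recorded incidents, so a small initial gap in the data widens over time.

```python
# Toy feedback-loop simulation (all numbers and the allocation rule are
# invented). Two areas have the SAME true incident rate; the system
# allocates attention superlinearly (proportional to the square of past
# counts), so the area with slightly more recorded incidents receives
# ever more attention and generates ever more records.

def step(recorded_a, recorded_b, budget=100.0, true_rate=0.1):
    """One round: allocate attention by squared share of past records,
    then add new records proportional to attention received."""
    wa, wb = recorded_a ** 2, recorded_b ** 2
    attention_a = budget * wa / (wa + wb)
    attention_b = budget * wb / (wa + wb)
    # Identical true rate in both areas: disparity growth is purely
    # an artefact of the allocation rule, not the ground truth.
    return (recorded_a + true_rate * attention_a,
            recorded_b + true_rate * attention_b)

a, b = 12.0, 8.0            # small initial disparity in recorded data
initial_ratio = a / b       # 1.5
for _ in range(20):
    a, b = step(a, b)
print(a / b)                # ratio has grown well past the initial 1.5
```

With a strictly proportional (linear) allocation rule the ratio would stay constant; it is the superlinear weighting, a stand-in for "send resources where the numbers look worst", that makes the initial imbalance self-reinforcing.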

How It Relates to AI Threats

Algorithmic bias is a core harm mechanism within Discrimination & Social Harm, where it drives allocational harm (unfair distribution of resources or opportunities), representational harm (reinforcing stereotypes), and proxy discrimination (using correlated attributes as stand-ins for protected characteristics). It also intersects with Economic & Labor threats through biased hiring algorithms and automated workforce management.
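Proxy discrimination is easy to demonstrate with a toy example. The data, postcodes, and approval rule below are all invented: the point is that a "model" which never sees the protected attribute can still reproduce a historical disparity through a correlated variable.

```python
# Hypothetical illustration of proxy discrimination (data is invented).
# The protected group is never an input to the model, but the zip code
# is a near-perfect proxy for it in this toy data set.

# Synthetic historical records: (zip_code, group, approved)
records = [
    ("10001", "A", True), ("10001", "A", True), ("10001", "A", True),
    ("10001", "A", False),
    ("20002", "B", True), ("20002", "B", False), ("20002", "B", False),
    ("20002", "B", False),
]

def train_zip_model(rows):
    """'Model' = historical approval rate per zip code (group unused)."""
    totals, approvals = {}, {}
    for zip_code, _group, approved in rows:
        totals[zip_code] = totals.get(zip_code, 0) + 1
        approvals[zip_code] = approvals.get(zip_code, 0) + int(approved)
    return {z: approvals[z] / totals[z] for z in totals}

def predict(model, zip_code, threshold=0.5):
    return model[zip_code] >= threshold

model = train_zip_model(records)

# Approval rates by protected group under the zip-only model:
decisions = {}
for zip_code, group, _ in records:
    decisions.setdefault(group, []).append(predict(model, zip_code))
by_group = {g: sum(v) / len(v) for g, v in decisions.items()}
print(by_group)   # → {'A': 1.0, 'B': 0.0}
```

Dropping the protected attribute from the inputs ("fairness through unawareness") does nothing here: the historical disparity flows straight through the correlated zip code.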

Why It Occurs

  • Training data reflects historical patterns of discrimination
  • Model optimisation targets aggregate accuracy rather than fairness across groups
  • Proxy variables encode protected characteristics indirectly
  • Feedback loops amplify initial biases over time
  • Insufficient testing across demographic subgroups during development
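Several of these failure points can be surfaced by a simple group-level audit. Below is a minimal sketch of a disparate-impact check, loosely modelled on the "four-fifths rule" from US employment-selection guidance; the data and the 0.8 threshold are illustrative, not taken from any real system.

```python
# Minimal disparate-impact check (illustrative data). Compares selection
# rates across demographic groups and flags the outcome if the lowest
# group's rate falls below 80% of the highest group's rate.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> rate per group."""
    counts, selected = {}, {}
    for group, ok in outcomes:
        counts[group] = counts.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)
    return {g: selected[g] / counts[g] for g in counts}

def disparate_impact_ratio(outcomes):
    """Minimum group selection rate divided by the maximum."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-screen results: 60% of group A pass, 30% of group B.
outcomes = ([("A", True)] * 6 + [("A", False)] * 4 +
            [("B", True)] * 3 + [("B", False)] * 7)

ratio = disparate_impact_ratio(outcomes)
print(f"ratio = {ratio:.2f}, fails four-fifths rule: {ratio < 0.8}")
# → ratio = 0.50, fails four-fifths rule: True
```

A check like this only measures one notion of fairness (demographic parity in selection rates); it says nothing about error-rate parity or calibration, which is why audits typically report several metrics per subgroup.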

Real-World Context

Algorithmic bias has been documented in Amazon’s AI hiring tool (INC-18-0002), which systematically downgraded female applicants. Australia’s Robodebt scheme (INC-16-0001) used automated income averaging that disproportionately affected vulnerable populations. The Dutch childcare benefits scandal (INC-13-0001) demonstrated how algorithmic fraud detection targeted families with dual nationality through proxy variables.

Related Incidents

INC-13-0001 critical 2013-01

Dutch Childcare Benefits Algorithm Discrimination

INC-16-0001 critical 2016-07

Australia Robodebt Automated Welfare Fraud Detection

INC-18-0002 high 2018-10

Amazon AI Recruiting Tool Gender Bias

INC-26-0047 critical 2026-03-09

Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate

INC-26-0091 high 2026-03-07

Workday AI Hiring Bias Class Action — African-American Applicant Rejected Dozens of Times Across Employers

INC-26-0066 high 2026-03

ACLU Files Complaint — HireVue AI Discriminated Against Deaf Indigenous Worker in Promotion Decision

INC-26-0046 critical 2026-01

LSU AI Cheating Detection Crisis — 1,488 Cases Filed with Disproportionate Impact on Non-Native English Speakers

INC-26-0050 critical 2026-01

AI Healthcare Bias Study — 1.7 Million Responses Show Race-Based Treatment Differences Across 9 AI Programs

INC-26-0056 high 2026-01

Eightfold AI Sued for Creating Secret Dossiers on 1 Billion+ Workers with Hidden Scoring

INC-25-0043 high 2025-09

AI Grading Errors — Connecticut Students Petition After Misscoring, MCAS Glitch Affects 1,400 Students

INC-25-0041 critical 2025-07

Tennessee Grandmother Wrongfully Arrested by Facial Recognition — Jailed 108 Days, Lost Home

INC-25-0044 high 2025

NYPD Facial Recognition Wrongful Arrest — Brooklyn Father Jailed 2 Days Despite 8-Inch Height Difference

INC-24-0009 medium 2024-02

Google Gemini Produces Historically Inaccurate Image Outputs Due to Bias Overcorrection

INC-23-0013 high 2023-12

FTC Bans Rite Aid from Using Facial Recognition Technology

INC-22-0002 high 2022-06

Meta Housing Ad Discrimination DOJ Settlement

INC-20-0002 critical 2020-08

UK A-Level Algorithm Downgrades Disadvantaged Students

INC-20-0005 critical 2020-01

Robert Williams Wrongful Arrest from Facial Recognition Racial Bias

INC-17-0001 high 2017-10

Facebook AI Mistranslation of Arabic Post Leads to Wrongful Arrest in Israel

INC-16-0003 critical 2016-05

COMPAS Recidivism Algorithm Racial Bias

Last updated: 2026-02-14