Algorithmic Bias
Systematic errors in AI systems that produce unfair outcomes, often favouring one group over another.
Definition
Algorithmic bias refers to systematic and repeatable errors in computer systems that produce unfair outcomes, such as consistently favouring one demographic group over another. In AI systems, bias typically originates from unrepresentative training data, flawed model design, or feedback loops that amplify existing societal inequalities. Algorithmic bias can result in discriminatory decisions across domains including hiring, lending, criminal justice, and public services.
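One way to make "consistently favouring one group over another" concrete is to compare selection rates between groups, often called the demographic parity difference. The sketch below is illustrative only; the decisions, group labels, and the 0.1 review threshold are assumptions, not values drawn from any cited incident.

```python
# Illustrative sketch: measure the selection-rate gap between two groups.
# All data, group labels, and the 0.1 threshold are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (favourable) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

# Hypothetical decisions from an automated screening model (1 = approved).
decisions_group_a = [1, 1, 0, 1, 1, 0, 1, 1]
decisions_group_b = [0, 1, 0, 0, 1, 0, 0, 0]

rate_a = selection_rate(decisions_group_a)
rate_b = selection_rate(decisions_group_b)
parity_gap = abs(rate_a - rate_b)  # demographic parity difference

print(f"Group A selection rate: {rate_a:.2f}")
print(f"Group B selection rate: {rate_b:.2f}")
print(f"Parity gap: {parity_gap:.2f}")

# A common (but context-dependent) heuristic flags gaps above ~0.1 for review.
if parity_gap > 0.1:
    print("Potential allocational bias: favourable outcomes are unevenly distributed.")
```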
How It Relates to AI Threats
Algorithmic bias is a core harm mechanism within Discrimination & Social Harm, where it drives allocational harm (unfair distribution of resources or opportunities), representational harm (reinforcing stereotypes), and proxy discrimination (using correlated attributes as stand-ins for protected characteristics). It also intersects with Economic & Labor threats through biased hiring algorithms and automated workforce management.
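As a hedged illustration of proxy discrimination, the sketch below assumes a hypothetical postcode field that is strongly associated with a protected group: even when the protected attribute is removed from the training data, any weight the model places on the proxy reproduces the same pattern. The records and field names are assumptions for illustration only.

```python
# Illustrative sketch: a "neutral" proxy feature can encode a protected attribute.
# The records, field names, and strength of association are hypothetical.

from collections import Counter

# Hypothetical applicant records: the model never sees `protected_group`,
# but `postcode` is highly correlated with it.
applicants = [
    {"postcode": "1001", "protected_group": "A"},
    {"postcode": "1001", "protected_group": "A"},
    {"postcode": "1001", "protected_group": "A"},
    {"postcode": "1002", "protected_group": "B"},
    {"postcode": "1002", "protected_group": "B"},
    {"postcode": "1002", "protected_group": "B"},
]

# For each postcode, how concentrated is a single protected group?
by_postcode = {}
for record in applicants:
    by_postcode.setdefault(record["postcode"], []).append(record["protected_group"])

for postcode, groups in by_postcode.items():
    most_common_group, count = Counter(groups).most_common(1)[0]
    share = count / len(groups)
    print(f"Postcode {postcode}: {share:.0%} belong to group {most_common_group}")
    # When this share approaches 100%, any model weight on `postcode`
    # effectively becomes a weight on the protected characteristic.
```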
Why It Occurs
- Training data reflects historical patterns of discrimination
- Model optimisation targets aggregate accuracy rather than fairness across groups
- Proxy variables encode protected characteristics indirectly
- Feedback loops amplify initial biases over time
- Insufficient testing across demographic subgroups during development (a minimal per-group evaluation is sketched after this list)
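A minimal sketch of the per-subgroup evaluation referred to above, using hypothetical labels, predictions, and group assignments: aggregate accuracy looks acceptable while the errors are concentrated in one group, which is exactly the gap that disaggregated testing is meant to surface.

```python
# Illustrative sketch: aggregate accuracy can hide large per-group error gaps.
# Labels, predictions, and group assignments are hypothetical.

def accuracy(pairs):
    """Fraction of (label, prediction) pairs that match."""
    return sum(1 for y, p in pairs if y == p) / len(pairs)

# (true_label, model_prediction, group) triples from a hypothetical evaluation set.
results = [
    (1, 1, "A"), (0, 0, "A"), (1, 1, "A"), (0, 0, "A"), (1, 1, "A"), (0, 0, "A"),
    (1, 0, "B"), (0, 0, "B"), (1, 0, "B"), (0, 1, "B"), (1, 1, "B"), (0, 0, "B"),
]

overall = accuracy([(y, p) for y, p, _ in results])
print(f"Aggregate accuracy: {overall:.2f}")

for group in sorted({g for _, _, g in results}):
    group_pairs = [(y, p) for y, p, g in results if g == group]
    print(f"Group {group} accuracy: {accuracy(group_pairs):.2f}")
# Optimising only the aggregate number never penalises the model for
# concentrating its errors in a single demographic subgroup.
```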
Real-World Context
Algorithmic bias has been documented in Amazon’s AI hiring tool (INC-18-0002), which systematically downgraded female applicants. Australia’s Robodebt scheme (INC-16-0001) used automated income averaging that disproportionately affected vulnerable populations. The Dutch childcare benefits scandal (INC-13-0001) demonstrated how algorithmic fraud detection targeted families with dual nationality through proxy variables.
Related Incidents
Dutch Childcare Benefits Algorithm Discrimination
Australia Robodebt Automated Welfare Fraud Detection
Amazon AI Recruiting Tool Gender Bias
Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate
Workday AI Hiring Bias Class Action — African-American Applicant Rejected Dozens of Times Across Employers
ACLU Files Complaint — HireVue AI Discriminated Against Deaf Indigenous Worker in Promotion Decision
LSU AI Cheating Detection Crisis — 1,488 Cases Filed with Disproportionate Impact on Non-Native English Speakers
AI Healthcare Bias Study — 1.7 Million Responses Show Race-Based Treatment Differences Across 9 AI Programs
Eightfold AI Sued for Creating Secret Dossiers on 1 Billion+ Workers with Hidden Scoring
AI Grading Errors — Connecticut Students Petition After Misscoring, MCAS Glitch Affects 1,400 Students
Tennessee Grandmother Wrongfully Arrested by Facial Recognition — Jailed 108 Days, Lost Home
NYPD Facial Recognition Wrongful Arrest — Brooklyn Father Jailed 2 Days Despite 8-Inch Height Difference
Google Gemini Produces Historically Inaccurate Image Outputs Due to Bias Overcorrection
FTC Bans Rite Aid from Using Facial Recognition Technology
Meta Housing Ad Discrimination DOJ Settlement
UK A-Level Algorithm Downgrades Disadvantaged Students
Robert Williams Wrongful Arrest from Facial Recognition Racial Bias
Facebook AI Mistranslation of Arabic Post Leads to Wrongful Arrest in Israel
COMPAS Recidivism Algorithm Racial Bias
Last updated: 2026-02-14