Liar’s Dividend
The phenomenon where the mere existence of deepfakes and AI-generated media allows individuals to dismiss authentic evidence — including genuine photographs, videos, and audio recordings — as potentially fabricated. The liar’s dividend erodes the evidentiary value of all digital media, benefiting those who wish to deny documented events.
Definition
The liar’s dividend is a concept coined by legal scholars Robert Chesney and Danielle Citron in 2019 to describe a secondary consequence of deepfake technology: as the public becomes aware that convincing fake media can be created, any authentic recording can be plausibly denied as a deepfake. A politician caught on video making a damaging statement can claim the video is AI-generated. A corporation confronted with photographic evidence of environmental violations can question its authenticity. The liar’s dividend inverts the burden of proof — instead of the subject proving a recording is fake, the presenter must prove it is genuine. This effect compounds over time as generative AI capabilities become more widely known.
How It Relates to AI Threats
The liar’s dividend operates within the Information Integrity Threats domain as a second-order harm mechanism. While deepfakes directly create false content, the liar’s dividend degrades the value of true content. This makes it particularly insidious: even if every deepfake were perfectly detected and removed, the liar’s dividend would persist as long as the public believes deepfakes could exist. The phenomenon contributes to consensus reality erosion — a broader pattern where shared agreement on factual events becomes harder to maintain. It undermines journalism, legal proceedings, democratic accountability, and institutional trust.
Why It Occurs
- Public awareness of deepfake capabilities has grown faster than understanding of detection and provenance technologies
- The asymmetry between creating doubt (easy) and proving authenticity (hard) favours those who wish to deny evidence
- Legal and institutional frameworks were designed for an era where audio-visual recordings were presumed authentic
- No universally adopted provenance standard (such as C2PA) yet exists to certify content authenticity at scale
- Political and social incentives to deny unfavourable evidence create strong demand for plausible deniability
Real-World Context
The liar’s dividend has been invoked in multiple documented contexts. Political figures in several countries have dismissed authentic recordings as AI-generated. Courts have seen challenges to the admissibility of digital evidence on the ground that it could be a deepfake. The concept was central to academic analysis of the 2023 Slovak election deepfakes and subsequent attempts to deny authentic counter-evidence. Chesney and Citron’s original 2019 paper has been widely cited in AI policy and information integrity research. Content provenance initiatives (C2PA, SynthID) are explicitly designed to counter the liar’s dividend by enabling proof of authenticity at the point of capture rather than after a dispute arises.
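The provenance idea mentioned above can be illustrated with a minimal sketch: a capture device signs a cryptographic hash of the media at creation time, so authenticity can later be demonstrated rather than merely asserted. This is a toy simplification — real standards such as C2PA use signed manifests and X.509 certificate chains, not the shared-secret HMAC used here, and the key name below is hypothetical.

```python
import hashlib
import hmac

# Hypothetical key material held by a trusted capture device.
# In C2PA this role is played by a certificate-backed private key.
DEVICE_KEY = b"example-capture-device-key"

def sign_at_capture(media: bytes) -> str:
    """Produce a provenance signature over the media's SHA-256 hash."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance(media: bytes, signature: str) -> bool:
    """Check that the media still matches the signature made at capture."""
    expected = sign_at_capture(media)
    return hmac.compare_digest(expected, signature)

original = b"frame data from an authentic recording"
sig = sign_at_capture(original)

print(verify_provenance(original, sig))                # True
print(verify_provenance(b"tampered frame data", sig))  # False
```

The point of the sketch is the asymmetry reversal: with a signature bound to the content at capture, proving authenticity becomes a cheap mechanical check, rather than an open-ended argument against the possibility of fabrication.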
Last updated: 2026-04-03