Where AI broke something — even before the lawsuit.

Hallucinated court filings, non-consensual deepfakes, training-data exfiltration, voice-clone fraud: discrete events that have caused real-world harm or triggered regulatory action. Many are likely to seed the next docket.

Mata v. Avianca — fabricated case citations
S.D.N.Y.·Jun 2023
Sanctions

Two New York lawyers submitted a brief drafted with ChatGPT that cited six entirely fictitious cases. Judge Castel imposed Rule 11 sanctions. The opinion now anchors state-bar advisories on AI competence.

Outcome: $5,000 sanction; bar advisories
Taylor Swift sexual deepfakes (X / Microsoft Designer)
Jan 2024
Investigation

Non-consensual AI-generated sexual images of Taylor Swift were viewed tens of millions of times on X before takedowns. The episode helped propel the federal TAKE IT DOWN Act and a wave of state non-consensual-intimate-imagery laws.

Driver: TAKE IT DOWN Act (2025)
Air Canada chatbot — hallucinated bereavement fare
BC Civil Resolution Tribunal·Feb 2024
Liable

A Canadian tribunal held Air Canada liable for misinformation its chatbot provided to a customer about bereavement fares, rejecting the airline's argument that the chatbot was a "separate legal entity." The first reasoned decision treating chatbot output as airline speech.

Holding: Operator owns the chatbot's words
Hong Kong CFO deepfake heist — $25M
Feb 2024
Investigation

A finance employee at a Hong Kong subsidiary wired roughly $25 million after a multi-participant video call in which every other attendee, including the CFO, was a real-time deepfake. One of the largest reported deepfake-enabled corporate fraud losses.

Driver: Corporate KYC reform
ChatGPT user-history bug — March 2023
OpenAI·Mar 2023
Disclosed

A Redis-client bug exposed conversation titles, the first message of new conversations, and partial payment data for a subset of ChatGPT Plus users. Italy's Garante temporarily banned the service; OpenAI implemented age and consent gates.

Driver: EU GDPR enforcement
"Sky" voice — Scarlett Johansson dispute
OpenAI·May 2024
Withdrawn

OpenAI's GPT-4o "Sky" voice closely resembled Scarlett Johansson's. After her counsel demanded answers, OpenAI paused the voice. The episode became a touchstone for right-of-publicity claims against synthetic voice products.

Driver: Voice-cloning bills (TN ELVIS Act)
Samsung internal-code leak via ChatGPT
Apr 2023
Disclosed

Samsung engineers pasted proprietary semiconductor source code into ChatGPT in three separate incidents over three weeks. Samsung banned generative-AI use on internal systems. The incident catalyzed enterprise-AI data-loss-prevention investment industry-wide.

Driver: Enterprise AI governance
Air Force MQ-28 simulation rumor
Jun 2023
Retracted

A USAF colonel described a simulated scenario in which an autonomous drone "killed" its operator. The Air Force quickly clarified that no such simulation had been run; the colonel said he had misspoken and was describing a hypothetical thought experiment. The episode is now case-study material for AI-incident reporting and verification standards.

Driver: NIST AI RMF revisions
CNET AI-generated personal-finance articles
Jan 2023
Corrected

CNET ran AI-generated finance explainers under a generic byline; an audit found factual errors in more than half. The publication issued corrections and suspended the program. A formative case in newsroom AI-disclosure policy.

Driver: Newsroom AI disclosure
New Hampshire Biden robocall deepfake
Jan 2024
FCC enforcement

A political consultant generated and distributed a robocall impersonating President Biden that urged voters to skip the New Hampshire primary. The FCC issued a declaratory ruling that AI-generated voices in robocalls fall under the TCPA's ban on artificial voices and proposed a $6M forfeiture.

Outcome: $6M FCC forfeiture proposed