AI failures — documented case studies
Every case sourced, structured, and rated. No speculation without a label.
This is the BrokenCtrl case library — documented instances of AI failures, governance breakdowns, corporate conduct contradictions, and harm events. Every case uses a fixed template: what happened, what is verified, what failed, and what should have been in place.
Cases are not commentary. They are structured analyses built from primary sources, regulatory filings, credible reporting, and technical documentation. If the evidence is thin, the confidence label says so.
Looking to evaluate a specific AI tool rather than a documented incident? Ethical AI Reviews scores tools across six dimensions — transparency, privacy, safety, data governance, corporate conduct, and real-world harm evidence.
CONFIDENCE LABELS
Verified: confirmed by primary sources. Can be independently checked.
Probable: supported by multiple credible outlets. Not yet confirmed by primary sources.
Unverified: reported but contested or single-sourced. Included, but not treated as fact.
Have information relevant to a documented case, or a case that should be covered? Use the contact page. Everything submitted is read.
QUESTIONS
What counts as an AI failure?
An AI failure is any documented instance where an AI system produced harmful outcomes, where a company's stated safety commitments were contradicted by observed behaviour, or where governance mechanisms that should have prevented harm were absent or ineffective. The term includes technical failures, deployment failures, and institutional failures.
How are artificial intelligence failures documented here?
Every case follows a fixed template: summary, verified vs unverified claims, mechanism and failure mode, harm assessment, controls that would have reduced harm, and primary sources. Confidence labels — Verified, Probable, Unverified — are applied to individual claims, not just to the case overall. Full methodology on the About page →
What is AI accountability?
AI accountability means that organisations deploying AI systems can be identified, questioned, and held responsible for outcomes their systems produce. It requires transparency about how systems work, documented enforcement mechanisms, and consequences when things go wrong. Most cases in this library document accountability that did not happen.
Are these cases covered from a legal perspective?
No. BrokenCtrl covers cases from an ethics, governance, and risk perspective — not a legal one. Where regulatory or legal proceedings are relevant to a case, they are noted with appropriate confidence labelling. This site does not provide legal advice.