AI Failures — Case Studies
Documented artificial intelligence failures, governance breakdowns and corporate accountability gaps. Every case sourced, every claim rated.
Each case study on BrokenCtrl documents a real AI failure, ethics violation, or governance breakdown. That includes AI systems used to cause harm, companies that publicly committed to responsible AI principles and then violated them, regulatory failures that allowed documented harms to persist, and the gap between what AI companies say in press releases and what they do in practice.
These are not opinion pieces. They are structured documentation — written with primary sources, labelled for confidence level, and updated when new information emerges. The goal is to build a reliable record of AI accountability failures that researchers, journalists, compliance professionals and policymakers can actually use.
Every case carries one of three confidence labels: Verified, Probable, or Unverified. Read the label before the content.
First cases coming soon
We are documenting AI failures and governance breakdowns with sourced evidence. The first case studies will be published shortly.
Read our methodology
Have information about an AI failure?
If you have sourced information about an AI governance failure, ethics violation, or accountability gap that should be documented — contact us. We read everything and assess all submissions against our confidence criteria before publishing.
What counts as an AI failure?
On BrokenCtrl, an AI failure is any documented case where an artificial intelligence system produced harmful outcomes, where a company violated its own stated AI ethics principles, where governance mechanisms failed to prevent or address harm, or where accountability was absent when it should have been present. This includes both technical failures and organisational ones.
How do you verify AI accountability cases?
We require primary sources — official documents, original reporting from credible outlets, or direct evidence — before labelling a case Verified. Where only secondary reporting exists, the case is labelled Probable. Where the claim is significant but unconfirmed, it is labelled Unverified and published with explicit acknowledgement of that uncertainty. We do not publish based on social media posts or unverified viral claims alone.
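For readers who want that rule in compact form, here is a purely illustrative sketch in Python. The names (Confidence, assign_label) and the boolean inputs are hypothetical, invented for this example; BrokenCtrl's actual assessment is editorial judgement, not an automated check.

    from enum import Enum

    class Confidence(Enum):
        VERIFIED = "Verified"      # primary sources: official documents, original reporting, direct evidence
        PROBABLE = "Probable"      # only secondary reporting from credible outlets exists
        UNVERIFIED = "Unverified"  # significant claim, unconfirmed; published with explicit uncertainty

    def assign_label(has_primary_sources: bool,
                     has_secondary_reporting: bool,
                     claim_is_significant: bool) -> Confidence | None:
        # Hypothetical encoding of the labelling rule described above.
        if has_primary_sources:
            return Confidence.VERIFIED
        if has_secondary_reporting:
            return Confidence.PROBABLE
        if claim_is_significant:
            return Confidence.UNVERIFIED
        return None  # not published (e.g. social media posts or viral claims alone)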
Do you cover AI failures from all companies?
Yes. BrokenCtrl is not affiliated with any AI company, government body, or advocacy organisation. Cases are selected based on evidential weight and the significance of the harm or governance failure involved, not based on who the company is or how large it is. That includes the companies whose products we otherwise review in the AI Tools section.
What happens when a case label changes?
If new information upgrades or downgrades a confidence label, we update the case and add a visible correction note at the top stating what changed and when. We do not quietly edit published cases — all changes are documented.