AI failures — documented case studies

Every case sourced, structured, and rated. No speculation without a label.

This is the BrokenCtrl case library — documented instances of AI failures, AI governance failures, AI ethics violations, corporate conduct contradictions, and harm events. Every case uses a fixed template: what happened, what is verified, what failed, and what should have been in place.

Cases are not commentary. They are structured analysis built from primary sources, regulatory filings, credible reporting, and technical documentation. If the evidence is thin, the confidence label says so.

Looking to evaluate a specific AI tool rather than a documented incident? Ethical AI Reviews scores tools across six dimensions — transparency, privacy, safety, data governance, corporate conduct, and real-world harm evidence.


Verified

Confirmed by primary sources. Can be independently checked.

Probable

Supported by multiple credible outlets. Not yet primary-confirmed.

Unverified

Reported but contested or single-sourced. Included; not treated as fact.



Have information relevant to a documented case, or a case that should be covered? Use the contact page. Everything submitted is read.


QUESTIONS

What counts as an AI failure?

An AI failure is any documented instance where an AI system produced harmful outcomes, where a company's stated safety commitments were contradicted by observed behaviour, or where governance mechanisms that should have prevented harm were absent or ineffective. The term includes technical failures, deployment failures, and institutional failures.

What are AI governance failures?

AI governance failures occur when the structures meant to oversee AI systems — internal policies, regulatory compliance, audit procedures, escalation protocols — fail to prevent harm despite their stated purpose. They include cases where governance exists on paper but is not enforced, where enforcement happens too late, or where the governance structure itself is captured by commercial pressure. BrokenCtrl documents these failures with primary sources and confidence labels.

What are AI ethics violations?

AI ethics violations are documented breaches of an AI company's own published ethical commitments — using AI in ways the company said it would not, deploying systems without the safeguards it claimed were in place, or treating known harm as acceptable cost. BrokenCtrl's case library tracks these violations with primary sources, dated entries, and confidence labels applied to every individual claim.

How are artificial intelligence failures documented here?

Every case follows a fixed template: summary, verified vs unverified claims, mechanism and failure mode, harm assessment, controls that would have reduced harm, and primary sources. Confidence labels — Verified, Probable, Unverified — are applied to individual claims, not just to the case overall. Full methodology on the About page →

What is AI accountability?

AI accountability means that organisations deploying AI systems can be identified, questioned, and held responsible for the outcomes their systems produce. It requires transparency about how systems work, documented enforcement mechanisms, and consequences when things go wrong. Most cases in this library are records of accountability that never materialised.

Are these cases covered from a legal perspective?

No. BrokenCtrl covers cases from an ethics, governance, and risk perspective — not a legal one. Where regulatory or legal proceedings are relevant to a case, they are noted with appropriate confidence labelling. This site does not provide legal advice.