by ctrl-admin | Apr 11, 2026 | AI Governance Frameworks — BrokenCtrl
Framework 05 — AI negligence: foreseeable misuse is not an excuse
Once a harmful use case is predictable, "we didn't expect it" is not a defence. What pre-deployment hazard analysis actually requires.
Content label: AGC (AI Generated, Human Reviewed) ...
by ctrl-admin | Apr 8, 2026 | AI Governance Frameworks — BrokenCtrl
HAC (Human + AI Collaboration) | Framework 07
Media capture as a control point — how AI companies buy the narrative
When the companies being scrutinised acquire, fund, or structurally depend on the outlets doing the scrutiny, accountability journalism becomes a managed...
by ctrl-admin | Apr 8, 2026 | AI Accountability Cases — BrokenCtrl
HAC (Human + AI Collaboration) | Case BC-002
Anthropic DMCA takedown: when the AI safety company became a copyright enforcer
Claude Code's source was leaked. Anthropic's response raised harder questions than the leak itself.
Case ID: BC-002 | Actor: Anthropic | Period: 2025 –...
by ctrl-admin | Apr 8, 2026 | AI Governance Frameworks — BrokenCtrl
HAC (Human + AI Collaboration) | Framework 04
Policy vs enforcement — how to tell the difference
Most AI ethics commitments are policy documents. Policy without enforcement is preference, not constraint. Every major AI company has an ethics page. Most have a responsible...
by ctrl-admin | Apr 8, 2026 | AI Accountability Cases — BrokenCtrl
HAC (Human + AI Collaboration)
Anthropic vs. the Pentagon: when ethics commitments meet state power
How the US Department of Defense forced a reckoning with AI safety red lines — and what happened next.
Case ID: BC-001 | Actor: Anthropic / US DoD | Period: Feb – Apr 2026 | Harm...
by ctrl-admin | Oct 25, 2025 | AI Governance Frameworks — BrokenCtrl
What AI Ethics Actually Means — And Why Most Companies Use the Term to Avoid Accountability
The only distinction that matters is whether ethics commitments have enforcement mechanisms. Most don't.
HAC (Human + AI)
The definition problem: AI ethics has a language problem....
by ctrl-admin | Oct 21, 2025 | AI Governance Frameworks — BrokenCtrl
Can AI Grade Its Own Homework? The Self-Assessment Problem in AI Governance
Why self-evaluation is not a safety mechanism — and how to tell when a safety report is a marketing document.
HAC (Human + AI) | Framework 04 — Policy vs Enforcement
The problem in one sentence: A...
by ctrl-admin | Mar 3, 2024 | AI Accountability Cases — BrokenCtrl
Babylon Health: How an AI Health Startup Built a $4 Billion Benchmark Lie
A case study in selective benchmarking, regulatory gaps, and healthcare AI governance failure.
HAC (Human + AI) | Confidence: Verified
Corporate collapse, SPAC filing, and bankruptcy are confirmed by...