by ctrl-admin | Apr 11, 2026 | AI Governance Frameworks — BrokenCtrl
Framework 05: AI negligence — foreseeable misuse is not an excuse. Once a harmful use case is predictable, "we didn't expect it" is not a defence. What pre-deployment hazard analysis actually requires. Content label: AGC (AI Generated, Human Reviewed) ...
by ctrl-admin | Apr 8, 2026 | AI Governance Frameworks — BrokenCtrl
HAC (Human + AI Collaboration) Framework 07: Media capture as a control point — how AI companies buy the narrative. When the companies being scrutinised acquire, fund, or structurally depend on the outlets doing the scrutiny, accountability journalism becomes a managed...
by ctrl-admin | Apr 8, 2026 | AI Accountability Cases — BrokenCtrl
HAC (Human + AI Collaboration) Case BC-002: Anthropic DMCA takedown — when the AI safety company became a copyright enforcer. Claude Code's source was leaked. Anthropic's response raised harder questions than the leak itself. Case ID: BC-002 | Actor: Anthropic | Period: 2025 –...
by ctrl-admin | Apr 8, 2026 | AI Governance Frameworks — BrokenCtrl
HAC (Human + AI Collaboration) Framework 04: Policy vs enforcement — how to tell the difference. Most AI ethics commitments are policy documents. Policy without enforcement is preference, not constraint. Every major AI company has an ethics page. Most have a responsible...
by ctrl-admin | Apr 8, 2026 | AI Accountability Cases — BrokenCtrl
HAC (Human + AI Collaboration) Case BC-001: Anthropic vs. the Pentagon — when ethics commitments meet state power. How the US Department of Defense forced a reckoning with AI safety red lines, and what happened next. Case ID: BC-001 | Actor: Anthropic / US DoD | Period: Feb – Apr 2026 | Harm...