by ctrl-admin | Apr 11, 2026 | AI Governance Frameworks — BrokenCtrl
Framework 05 — AI negligence: foreseeable misuse is not an excuse. Once a harmful use case is predictable, "we didn't expect it" is not a defence. What pre-deployment hazard analysis actually requires.
by ctrl-admin | Apr 8, 2026 | AI Governance Frameworks — BrokenCtrl
Framework 07: Media capture as a control point — how AI companies buy the narrative. When the companies being scrutinised acquire, fund, or structurally depend on the outlets doing the scrutiny, accountability journalism becomes a managed...
by ctrl-admin | Apr 8, 2026 | AI Governance Frameworks — BrokenCtrl
Framework 04: Policy vs enforcement — how to tell the difference. Most AI ethics commitments are policy documents. Policy without enforcement is preference, not constraint. Every major AI company has an ethics page. Most have a responsible...
by ctrl-admin | Oct 25, 2025 | AI Governance Frameworks — BrokenCtrl
What AI Ethics Actually Means — And Why Most Companies Use the Term to Avoid Accountability. The only distinction that matters is whether ethics commitments have enforcement mechanisms. Most don't. The definition problem: AI ethics has a language problem...
by ctrl-admin | Oct 21, 2025 | AI Governance Frameworks — BrokenCtrl
Can AI Grade Its Own Homework? The Self-Assessment Problem in AI Governance. Why self-evaluation is not a safety mechanism — and how to tell when a safety report is a marketing document. Framework 04 — Policy vs Enforcement. The problem in one sentence: A...