What Is AI Governance — Frameworks

Conceptual tools for understanding how AI harms happen, how governance fails, and what accountability actually requires.

What is AI governance

AI governance is the set of rules, principles, institutions and enforcement mechanisms that determine how artificial intelligence systems are built, deployed and held accountable. It covers everything from internal company ethics policies to national legislation like the EU AI Act, from voluntary industry standards to binding regulatory requirements.

The short version: AI governance is how society decides what AI can do, who is responsible when it goes wrong, and what happens when those responsible do nothing.

AI governance vs AI ethics

AI ethics refers to the principles that should guide AI development — fairness, transparency, accountability, safety. AI governance refers to the mechanisms that actually enforce those principles. Ethics without governance is a press release. Governance without ethics is compliance theatre. The frameworks on this site address both — and the gap between them.

Why AI governance fails

Governance fails primarily because the incentive structures inside AI companies reward speed and commercial scale over safety. Ethics boards exist but rarely have veto power. Regulators are outpaced by development. Voluntary commitments disappear when contracts are large enough. The frameworks on this page are tools for identifying exactly where and why those failures happen.


AI governance maturity model

Most organisations sit somewhere on this spectrum. The frameworks on BrokenCtrl are designed to help identify where an organisation is — and what the gap to the next level actually looks like in practice.

1. No governance. AI deployed without policy, oversight or accountability structures. Failures handled reactively when they become public.

2. Policy exists, not enforced. Ethics principles or responsible AI statements published, but with no enforcement mechanism and no accountability when principles are violated.

3. Internal oversight. Ethics board or review process in place, but oversight is internal and advisory only. No external audit or independent verification.

4. External accountability. Third-party audits, regulatory compliance, transparent incident reporting. Accountability has teeth: failures have documented consequences.

5. Systemic governance. Governance embedded in product development, procurement and deployment. Risk assessment mandatory before launch. Public transparency on failures and mitigations.
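
As a rough illustration, the spectrum can be read as a simple decision rule over observable attributes. The sketch below is our own simplification, not part of any published standard; the attribute names are hypothetical, and a real assessment needs evidence rather than booleans.

```python
from dataclasses import dataclass

@dataclass
class GovernanceProfile:
    """Hypothetical, simplified view of an organisation's governance posture."""
    has_written_policy: bool   # level 2: principles or statements published
    has_internal_review: bool  # level 3: ethics board or review process exists
    has_external_audit: bool   # level 4: third-party audit, incident reporting
    gates_launches: bool       # level 5: risk assessment can block a launch

def maturity_level(p: GovernanceProfile) -> int:
    """Map a profile onto the 1-5 spectrum above (highest level met)."""
    if p.has_external_audit and p.gates_launches:
        return 5
    if p.has_external_audit:
        return 4
    if p.has_internal_review:
        return 3
    if p.has_written_policy:
        return 2
    return 1

# Published principles and an internal board, but no external audit: level 3.
print(maturity_level(GovernanceProfile(True, True, False, False)))  # 3
```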


Published frameworks

Frameworks coming soon

We are building conceptual tools for understanding AI governance failures and risk management. The first frameworks will be published shortly.


Frequently asked questions

What is an AI governance framework?

An AI governance framework is a structured set of principles, processes and controls that an organisation uses to manage the development and deployment of AI systems. A framework typically covers risk assessment, human oversight requirements, transparency obligations, incident response procedures and accountability structures. Examples include the NIST AI Risk Management Framework, the EU AI Act's risk classification system, and internal frameworks published by major AI companies. BrokenCtrl documents the gap between what those frameworks say and how they are actually applied.
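
A framework only becomes auditable once its controls are recorded in a form that can be checked. The sketch below is hypothetical; the control names and evidence descriptions are illustrative, not taken from NIST, the EU AI Act or any company's published framework.

```python
# Hypothetical: a framework reduced to controls with verifiable evidence.
FRAMEWORK_CONTROLS = {
    "risk_assessment":   "signed pre-deployment risk assessment",
    "human_oversight":   "documented review-and-override procedure",
    "transparency":      "published summary of model limitations",
    "incident_response": "tested escalation and reporting runbook",
    "accountability":    "named owner for each deployed system",
}

def audit_gap(evidence_on_file: set[str]) -> list[str]:
    """Controls with no recorded evidence: the gap between what the
    framework says and what the organisation can actually show."""
    return sorted(c for c in FRAMEWORK_CONTROLS if c not in evidence_on_file)

print(audit_gap({"risk_assessment", "transparency"}))
# ['accountability', 'human_oversight', 'incident_response']
```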

What is a governance framework example in practice?

A concrete example: under good governance, a financial institution deploying an AI credit-scoring system would document the model's decision logic, test for discriminatory outcomes across protected groups, maintain human review for disputed decisions, log all outputs for audit purposes, and publish a summary of the model's limitations. Most organisations have some of these elements; very few enforce all of them consistently.
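
Two of those controls are straightforward to sketch in code: an outcome-disparity check across groups and an append-only log of decisions. The figures and field names below are illustrative assumptions; in particular, the four-fifths ratio is used as a common heuristic, not as the legal test in any specific jurisdiction.

```python
import json
import time

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to the highest approval rate across groups.

    outcomes maps group name -> (approved, total). Values below ~0.8 are
    commonly flagged for review (the 'four-fifths' heuristic).
    """
    rates = [a / t for a, t in outcomes.values() if t > 0]
    return min(rates) / max(rates)

def log_decision(applicant_id: str, score: float, approved: bool,
                 path: str = "decisions.log") -> None:
    """Append an auditable record of every model output."""
    record = {"ts": time.time(), "applicant": applicant_id,
              "score": score, "approved": approved}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical figures: group A approved 80/100, group B approved 52/100.
ratio = disparate_impact_ratio({"a": (80, 100), "b": (52, 100)})
print(f"{ratio:.2f}")  # 0.65 -> below 0.8, escalate to human review
```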

What is the NIST AI risk management framework?

The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the US National Institute of Standards and Technology that helps organisations identify, assess and manage AI-related risks. It is structured around four core functions — Govern, Map, Measure, Manage — and is designed to be sector-agnostic. It is one of the most widely referenced AI governance frameworks in the US. BrokenCtrl uses it as a baseline reference when assessing whether an organisation's governance meets minimum standards.

Why do AI governance frameworks fail?

Most AI governance frameworks fail for one of three reasons: they are voluntary with no enforcement mechanism, they exist at policy level but are not embedded in actual product and engineering decisions, or they are designed to demonstrate compliance rather than prevent harm. A framework that cannot stop a deployment decision is not a governance framework — it is a document. BrokenCtrl's case studies document exactly these failures with sourced evidence.
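
To make that last distinction concrete, here is a hypothetical pre-deployment gate: a control that can actually stop a release rather than annotate it. The RiskAssessment fields and check order are assumptions for illustration, not any organisation's real pipeline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskAssessment:
    """Hypothetical record of a pre-launch risk assessment."""
    completed: bool
    high_risks_mitigated: bool
    sign_off: Optional[str]  # accountable approver, if any

class DeploymentBlocked(Exception):
    pass

def deploy_gate(a: RiskAssessment) -> None:
    """Refuse to deploy unless governance requirements are met. A framework
    that can raise here is enforcement; one that only logs is a document."""
    if not a.completed:
        raise DeploymentBlocked("no risk assessment on file")
    if not a.high_risks_mitigated:
        raise DeploymentBlocked("unmitigated high-severity risks")
    if a.sign_off is None:
        raise DeploymentBlocked("no accountable approver signed off")

try:
    deploy_gate(RiskAssessment(completed=True, high_risks_mitigated=False,
                               sign_off="chief risk officer"))
except DeploymentBlocked as reason:
    print(f"release blocked: {reason}")  # unmitigated high-severity risks
```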