
AI governance regulations: what companies say vs what they do

BrokenCtrl documents the gap between AI ethics commitments and actual conduct. Every case sourced. Every claim labelled Verified, Probable, or Unverified.

How content is produced on BrokenCtrl — full transparency, always

Every article, case study, and review carries an authorship label. Learn what each one means →

ETHICAL AI REVIEWS

View all reviews →

CASE STUDIES

View all cases →

FRAMEWORKS


FREQUENTLY ASKED QUESTIONS

What are AI governance regulations?

AI governance regulations are the legal frameworks, policy standards, and institutional rules that determine how AI systems may be developed, deployed, and audited. The EU AI Act is the most comprehensive binding framework to date. BrokenCtrl documents how companies respond — and fail to respond — to these requirements in practice.

What is an ethical AI review?

An ethical AI review assesses an AI tool across documented dimensions: transparency, privacy practices, safety controls, data governance, corporate conduct, and real-world harm evidence. BrokenCtrl scores every tool on these six dimensions using publicly verifiable sources, not marketing claims.

How does BrokenCtrl verify claims?

Every claim is labelled Verified (confirmed by primary sources), Probable (supported by credible reporting), or Unverified (contested or unconfirmed). This three-tier system is applied to all case studies and reviews. Read the full methodology →

Does BrokenCtrl use AI to write content?

Yes — and we label it. Every piece of content carries one of three authorship labels: HGC (Human Written), HAC (Human + AI Collaboration), or AGC (AI Generated, Human Reviewed). An ethics publication that hides its own AI use would contradict everything it covers. See the full authorship policy →

Who is BrokenCtrl for?

Practitioners in risk, compliance, governance, journalism, and policy who need sourced, documented analysis of AI conduct — not commentary. It is also useful for anyone evaluating AI tools for organisational use who wants an honest assessment beyond vendor marketing.