AI governance regulations: what companies say vs what they do
BrokenCtrl documents the gap between AI ethics commitments and actual conduct. Every case sourced. Every claim labelled Verified, Probable, or Unverified.
FROM THE EDITOR
My background in compliance trained me to spot the patterns. Regulated industries treat compliance as a burden, implemented out of fear of the law. AI moves faster, has no established rules to follow, and affects everyone, which makes the danger far greater. AI companies publicly commit to ethics and accountability, then quietly break those commitments. BrokenCtrl is the record: independent, fair, sourced, labelled, and dated. Content falls into three categories. Frameworks: established frameworks for AI ethics do not yet exist, so I am building my own. Cases: documented records of failures, ethics violations, and AI misuse. Reviews: independent assessments of AI tools with ethics scores.
ETHICAL AI REVIEWS
View all reviews →
CASE STUDIES
View all cases →
FRAMEWORKS
View all frameworks →
FREQUENTLY ASKED
What are AI governance regulations?
AI governance regulations are legal frameworks, policy standards, and institutional rules that determine how AI systems can be developed, deployed, and audited. Sometimes called artificial intelligence governance, the field covers binding regulation like the EU AI Act, voluntary standards like NIST AI RMF, and internal corporate policies. BrokenCtrl documents how companies respond — and fail to respond — to these requirements in practice.
What are AI governance principles?
AI governance principles are the foundational rules that determine how AI systems should be developed, deployed, and overseen — covering accountability, transparency, fairness, safety, and human oversight. The OECD AI Principles, the EU AI Act, and NIST AI RMF each codify versions of these. BrokenCtrl examines how stated AI governance principles compare to observable corporate conduct.
What are AI governance best practices?
AI governance best practices include documented risk assessments, independent audits, kill-switch protocols, transparency reports, incident response procedures, and enforced internal policies that produce consequences when violated. The gap between stated best practices and observable conduct is what BrokenCtrl's case library documents.
What is an ethical AI review?
An ethical AI review assesses an AI tool across six documented dimensions: transparency, privacy practices, safety controls, data governance, corporate conduct, and real-world harm evidence. BrokenCtrl scores every tool on these dimensions using publicly verifiable sources, not marketing claims.
How does BrokenCtrl verify claims?
Every claim is labelled Verified (confirmed by primary sources), Probable (supported by credible reporting), or Unverified (contested or unconfirmed). This three-tier system is applied to all case studies and reviews. Read the full methodology →
Does BrokenCtrl use AI to write content?
Yes — and we label it. Every piece of content carries one of three authorship labels: HGC (Human Written), HAC (Human + AI Collaboration), or AGC (AI Generated, Human Reviewed). An ethics publication that hides its own AI use would contradict everything it covers. See the full authorship policy →
Who is BrokenCtrl for?
Practitioners in risk, compliance, governance, journalism, and policy who need sourced, documented analysis of AI conduct — not commentary. Also useful for anyone evaluating AI tools for organisational use who needs an honest assessment beyond vendor marketing.