AI Accountability — About BrokenCtrl

Who built this, why it exists, and what it is actually trying to do.

How this started — and why it changed

BrokenCtrl is an independent publication focused on AI accountability — documenting what AI companies actually do, not what their press releases say. It covers AI ethics failures, corporate governance gaps, and the distance between responsible AI principles and real-world conduct. That was not always the plan.

BrokenCtrl started as an affiliate AI tool comparison site. I wanted to build something around AI, and if I am honest, I wanted to make money from it. We all need to live. I started reviewing tools, comparing models, writing about what AI could do. I was enthusiastic about it — the same way most people were in 2023 and 2024.

Then I kept reading. I read about how AI systems were being used by governments in targeting decisions. I read about autonomous weapons and the removal of human oversight from lethal systems. I read about surveillance tools deployed at scale against civilian populations. I read about companies publicly committing to ethical AI principles while quietly negotiating those same principles away when a large contract was on the table.

At some point I could not keep writing product reviews as if none of that was happening.

The pivot was not dramatic. There was no single article that triggered it. It was an accumulation — a growing gap between how AI was being presented and what it was actually being used for. AI governance and regulation cannot keep up with development at this pace, and waiting for governments to catch up is not a strategy. Someone has to document what is happening now, with sources, without the press release framing.

That is what BrokenCtrl is now.


Who is behind this

My name is Bogdan. I am a Player Risk and Compliance specialist in the gambling industry.

Gambling is not the most obvious background for an AI accountability publication. But the overlap is real. My job is to identify harm before it becomes irreversible — to close accounts, apply limits, and intervene when a product designed to be engaging starts destroying someone's life. The industry I work in is built on the same tension that runs through AI: enormous commercial incentive on one side, genuine potential for harm on the other, and regulation that is always a few steps behind reality.

I am not on the casino side of that equation. I am on the protection side. I assess risk, spot patterns, and act on findings. That is the same lens I apply to AI governance failures and corporate accountability gaps on this site.

I am not an AI researcher. I am not a lawyer. I am someone who has spent years learning to spot the gap between what companies say they do and what they actually do — and I think that skill transfers.


What BrokenCtrl means

A broken Ctrl key is a nuisance. It does not stop the keyboard from working. You find workarounds. You adapt. Things are still functional — just not as clean as they should be.

That is an honest description of where we are with AI governance. The control mechanisms exist — safety teams, ethics boards, policy frameworks, regulatory proposals. They are broken in ways that matter. Not broken beyond repair, but broken enough that the system keeps producing outcomes nobody intended and nobody is held accountable for.

We make no pretence of saving the world. We will try to do more good than harm.


What BrokenCtrl publishes

Three types of content, all built around AI accountability and responsible AI governance:

Case studies — structured analysis of documented AI incidents, abuses, and governance failures. Every case is labelled Verified, Probable, or Unverified. Claims are sourced. Uncertainty is stated explicitly. These are not opinion pieces — they are documentation.

Frameworks — conceptual tools for understanding how AI harms happen and persist. Working models for people who need to think clearly about AI risk, ethical AI principles, and corporate accountability.

Templates — practical resources. Incident report formats, AI ethics guidelines and checklists, risk assessment structures, due diligence questions for procurement. Things you can actually use.

What we do not publish: hot takes dressed as analysis, press releases rewritten as news, or speculation presented as fact.


How we assess AI accountability claims — the confidence system

Every case study carries one of three labels. This is how we handle the reality that AI ethics and accountability coverage moves fast and sources vary wildly in quality.

Verified — confirmed by primary sources, official documents, or multiple independent credible outlets. Treated as established fact.

Probable — supported by credible reporting but not independently confirmed by primary sources, or confirmed by a single outlet without corroboration. Likely but not certain.

Unverified — reported but not confirmed. Included when the claim is significant enough to document, with explicit acknowledgement that verification is pending or not possible.

This system exists because both alternatives — treating everything as equally true, or excluding anything uncertain — produce worse analysis. We try to do neither.
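
For readers who want the criteria as an explicit rule, here is a minimal sketch in Python. It simplifies sourcing down to two inputs; the Confidence enum and the label_claim function are invented for illustration and are not the site's actual tooling.

    from enum import Enum

    class Confidence(Enum):
        VERIFIED = "Verified"      # primary sources or multiple independent credible outlets
        PROBABLE = "Probable"      # credible reporting, not independently confirmed
        UNVERIFIED = "Unverified"  # reported but not confirmed

    def label_claim(has_primary_source: bool, independent_outlets: int) -> Confidence:
        # Hypothetical encoding of the criteria above, for illustration only.
        if has_primary_source or independent_outlets >= 2:
            return Confidence.VERIFIED
        if independent_outlets == 1:
            return Confidence.PROBABLE
        return Confidence.UNVERIFIED

In practice the judgement is messier than two inputs, which is why every case study states its sourcing and its uncertainty explicitly rather than relying on the label alone.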


What we want readers to do

Understand AI — not the marketing version, but the operational reality. What these systems actually do, who controls them, who benefits, and who bears the cost.

Question AI outputs. Question company safety statements. Question government assurances. Ask what the incentives are. Ask who benefits from the current lack of AI accountability.

And where possible, push governments to make changes that serve everyone, not only the people who can afford to buy the policy.

The people building these systems are not malicious by default. But the incentive structures they operate inside reward speed, scale, and commercial dominance over safety and accountability. That gap does not close by itself.


Corrections and contact

If something on this site is wrong, we want to know. If a confidence label is incorrect, if a source does not support the claim attributed to it, or if new information changes the analysis — contact us and we will correct it and note the correction publicly.

If you have information relevant to a case we have covered, or a case we have not, use the contact page. We read everything.


Frequently asked questions

What is AI accountability?

AI accountability means that the companies and governments deploying AI systems can be held responsible for the outcomes those systems produce — including harm to individuals, communities, or democratic processes. It requires transparency about how systems work, documentation of decisions made, and mechanisms for redress when things go wrong. Currently, most of those mechanisms are either absent or not enforced.

What are the key principles of ethical AI?

The most cited ethical AI principles include transparency (systems should be explainable), fairness (outputs should not discriminate), accountability (someone must be responsible for outcomes), safety (systems should not cause harm), and human oversight (humans should remain in control of consequential decisions). The gap BrokenCtrl documents is between these stated principles and how AI is actually deployed.

How do you evaluate AI ethics and accountability in a company?

Look at the gap between stated policy and documented behaviour. Does the company publish an AI ethics policy? Does that policy have enforcement mechanisms or is it aspirational? Have there been documented cases where the company acted against its own stated principles? Were there consequences? BrokenCtrl uses a three-tier confidence system — Verified, Probable, Unverified — to assess and document exactly this gap.

Why is AI governance failing?

Primarily because regulation is outpaced by development speed, and because the incentive structures inside AI companies reward shipping fast over shipping safely. Ethics boards exist but rarely have veto power. Governments are aware of the risks but disagree on jurisdiction, definitions, and enforcement. The result is a system where accountability is voluntary — and voluntary accountability tends to disappear when contracts are large enough.

Last updated: March 2026