About BrokenCtrl — AI accountability, documented
What this site is, how it works, and why the methodology matters.
What BrokenCtrl is
BrokenCtrl started as a domain a friend wanted to use to sell Call of Duty codes. When that failed, I inherited the name and thought it was too good to waste.
I spent most of late 2024 and early 2025 building it as an AI tools directory: affiliate reviews, bulk ChatGPT-written content, video generators, voiceover tools. That experiment taught me two things quickly: AI-written content does not perform in search, and some of what these tools could do was genuinely dangerous.
By 2026 I was reading more and more about AI failures, about the speed at which jobs were being replaced, and about billionaires running systems that affected millions of people with no accountability to anyone. I have compliance and player safety experience from regulated gambling, an industry that causes real damage and where I work as a compromise, not a calling. I started to think that experience might be more useful somewhere else. AI governance does not yet have a mature compliance profession behind it. Someone needs to start building one. This site is where I am starting.
Who runs it
This is a one-person operation. I have a background in compliance, player risk, and website building, with earlier experience in banking. I am not an AI researcher. I am not an academic. What I am is someone who has spent years working inside regulated systems, watching the gap between what companies say about responsible conduct and what they actually do.
I bought Bitcoin in 2010 and sold it in 2014. I am not a billionaire. I am also not someone who needs this to be a business immediately, which means I can afford to be honest about what I find.
How claims are verified — three-tier confidence labelling
Every factual claim in a case study or review carries one of three confidence labels. This is the core of the methodology — not a disclaimer, a discipline. The label tells you exactly how much weight to give a claim and what evidence supports it.
- Confirmed by primary sources — official statements, regulatory filings, academic research, technical documentation, or direct primary evidence. Can be independently checked.
- Supported by credible reporting from multiple independent outlets, but not yet confirmed by primary documentation. Treated as likely but subject to revision.
- Reported but contested — based on a single source, or not yet confirmed by independent evidence. Included for completeness, never treated as established fact.
When evidence changes, confidence labels are updated. Every page carries a "Last updated" timestamp. Corrections are noted explicitly — not silently overwritten.
- The Pentagon gave Anthropic a Friday 5:01pm deadline to remove objections to autonomous weapons use — confirmed by Washington Post primary reporting and corroborated by Anthropic's own public statements.
- OpenAI moved quickly to fill the gap left by Anthropic on classified networks — reported consistently across multiple credible outlets, but not confirmed in direct statements from either company at the time of writing.
- Some reporting suggested the pressure on Anthropic included threats beyond standard procurement leverage — based on single-source accounts, not independently corroborated.
The same case, three different levels of evidence. The label does not say whether something matters — it says how confident you should be that it happened.
How AI tools are reviewed — the Ethics Score
Every tool in the Ethical AI Reviews section is scored across six dimensions. Scores are based on publicly verifiable evidence only — not vendor claims, not marketing copy.
| Dimension | What is assessed |
|---|---|
| Transparency | Does the company disclose how the model works, what data it was trained on, and what its limitations are? |
| Privacy | How is user data handled, stored, and shared? Are data practices clearly documented and verifiable? |
| Safety controls | What guardrails exist? Are they technical or purely policy-based? Have they been tested or red-teamed? |
| Data governance | Are training data sources documented? Are third-party data rights respected? |
| Corporate conduct | Does the company's behaviour match its stated ethics commitments? Are there documented contradictions? |
| Real-world harm | Is there documented evidence of harm caused by the tool in deployment? How did the company respond? |
AI accountability requires context, not just scores. A rating is a starting point — the full case notes behind it contain the evidence.
Independence statement
BrokenCtrl is independently operated and will stay that way. No AI company, research institution, or lobby group has paid for coverage or influenced what gets published here. I would consider collaborations with people who think along the same lines. I will not sell the site to any company or entity.
Some tool reviews contain affiliate links. These are labelled. They do not affect Ethics Scores or conclusions.
I will be honest about money too. The long-term goal is to become a working consultant and a recognised voice in AI ethics: to study, publish, and eventually write a guide or a book. That is two or three years away. For now I write here and on Substack, working out whether there is a real audience and community for this kind of work.
Corrections and contact
If something here is wrong — a source does not support what I attributed to it, a confidence label is off, or new evidence changes the picture — use the contact page. I will correct it, note what changed, and update the timestamp.
If you are working on similar things, or you think your experience is relevant to a case, write to me. I am not claiming to be a specialist yet. I will answer.