About BrokenCtrl — AI accountability, documented

What this site is, how it works, and why the methodology matters.

What BrokenCtrl is

BrokenCtrl started as a domain a friend wanted to use to sell Call of Duty codes. When that failed, I inherited the name and thought it was too good to waste.

I spent most of late 2024 and early 2025 building it as an AI tools directory — affiliate reviews, bulk content written by ChatGPT, video generators, voiceover tools. That experiment taught me two things quickly: AI-written content does not perform in search, and some of what these tools could do was genuinely dangerous.

By 2026 I was reading more and more about AI failures, about the speed at which jobs were being replaced, about billionaires operating systems that affected millions of people with no accountability to anyone. I have compliance and player safety experience from regulated gambling — an industry that causes real damage and where I work as a compromise, not a calling. I started to think that experience might be more useful somewhere else. AI governance does not yet have a mature compliance profession behind it. Someone needs to start building that. This site is where I am starting.


Who runs it

This is a one-person operation. I have a background in compliance, player risk, and website building, with earlier experience in banking. I am not an AI researcher. I am not an academic. What I am is someone who has spent years working inside regulated systems, watching the gap between what companies say about responsible conduct and what they actually do.

I bought Bitcoin in 2010 and sold it in 2014. I am not a billionaire. I am also not someone who needs this to be a business immediately, which means I can afford to be honest about what I find.


How claims are verified — three-tier confidence labelling

Every factual claim in a case study or review carries one of three confidence labels. This is the core of the methodology — not a disclaimer, a discipline. The label tells you exactly how much weight to give a claim and what evidence supports it.

Verified

Confirmed by primary sources — official statements, regulatory filings, academic research, technical documentation, or other direct evidence. Can be independently checked.

Probable

Supported by credible reporting from multiple independent outlets, but not yet confirmed by primary documentation. Treated as likely but subject to revision.

Unverified

Reported but contested, based on a single source, or not yet confirmed by independent evidence. Included for completeness — never treated as established fact.

When evidence changes, confidence labels are updated. Every page carries a "Last updated" timestamp. Corrections are noted explicitly — not silently overwritten.

How the labels work in practice — Anthropic / Pentagon case

Verified

The Pentagon gave Anthropic a Friday 5:01pm deadline to remove objections to autonomous weapons use — confirmed by Washington Post primary reporting and corroborated by Anthropic's own public statements.

Probable

OpenAI moved quickly to fill the gap left by Anthropic on classified networks — reported consistently across multiple credible outlets, but not confirmed in direct statements from either company at the time of writing.

Unverified

Some reporting suggested the pressure on Anthropic included threats beyond standard procurement leverage — based on single-source accounts, not independently corroborated.

The same case, three different levels of evidence. The label does not say whether something matters — it says how confident you should be that it happened.
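
To make the discipline concrete, here is a minimal sketch of how a labelled claim could be held as structured data, with label changes recorded rather than overwritten. The field names and update logic are illustrative only; they are not a description of any system running behind this site.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Confidence(Enum):
    VERIFIED = "Verified"      # primary sources, independently checkable
    PROBABLE = "Probable"      # multiple credible outlets, no primary confirmation
    UNVERIFIED = "Unverified"  # single-source or contested


@dataclass
class Claim:
    text: str
    confidence: Confidence
    sources: list[str]
    last_updated: date
    corrections: list[str] = field(default_factory=list)

    def revise(self, new_label: Confidence, note: str, on: date) -> None:
        """Update a label and note the change explicitly, never silently."""
        self.corrections.append(
            f"{on.isoformat()}: {self.confidence.value} -> {new_label.value} ({note})"
        )
        self.confidence = new_label
        self.last_updated = on
```

If the OpenAI claim above moved from Probable to Verified tomorrow, the old label would stay visible in the correction trail, which is exactly what the policy promises.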


How content is produced — authorship policy

A publication about AI ethics that conceals its own use of AI would contradict everything it covers. BrokenCtrl uses AI as a research and drafting tool — and labels it explicitly on every piece of content.

Every article, case study, framework post, and review carries one of three authorship labels, visible at the top of the page:

HGC — Human Generated Content

Written and researched entirely by the author. No AI involvement in the drafting process. Applied to editorial opinion pieces, personal commentary, the About page, and any content where the author's direct voice and judgment are the primary value.

HAC — Human + AI Collaboration

AI researched and structured the draft. The author rewrote the introduction, conclusion, and any section requiring original judgment, professional assessment, or editorial voice. A minimum of 30% of the final text is directly written or substantially rewritten by the author. Applied to most case studies, framework posts, and ethical reviews.

AGC — AI Generated Content, Human Reviewed

AI produced the draft. The author reviewed the content for factual accuracy, verified sources, and approved publication. Applied to factual reference content — tool specifications, pricing tables, FAQ blocks, and structured data summaries — where the primary value is accuracy, not voice.

This labelling system exists because transparency about AI involvement is not optional for a publication covering AI ethics. The label appears at the top of every piece of content — not as a footnote, not as a disclaimer buried in the footer. Readers have a right to know how what they are reading was produced.
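
The HAC threshold is the only numeric rule in the policy, so it is the easiest one to illustrate. A rough sketch follows, assuming a simple character-count measure that the policy itself does not prescribe:

```python
from enum import Enum


class Authorship(Enum):
    HGC = "Human Generated Content"
    HAC = "Human + AI Collaboration"
    AGC = "AI Generated Content, Human Reviewed"


def meets_hac_minimum(author_chars: int, total_chars: int, minimum: float = 0.30) -> bool:
    """At least 30% of the final text written or substantially rewritten
    by the author. Counting characters is one possible measure; the
    policy does not specify how the share is calculated."""
    return total_chars > 0 and author_chars / total_chars >= minimum
```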


How AI tools are reviewed — the Ethics Score

Every tool in the Ethical AI Reviews section is scored across six dimensions. Scores are based on publicly verifiable evidence only — not vendor claims, not marketing copy.

The six dimensions, and what each one assesses:

Transparency
Does the company disclose how the model works, what data it was trained on, and what its limitations are?

Privacy
How is user data handled, stored, and shared? Are data practices clearly documented and verifiable?

Safety controls
What guardrails exist? Are they technical or purely policy-based? Have they been tested or red-teamed?

Data governance
Are training data sources documented? Are third-party data rights respected?

Corporate conduct
Does the company's behaviour match its stated ethics commitments? Are there documented contradictions?

Real-world harm
Is there documented evidence of harm caused by the tool in deployment? How did the company respond?

AI accountability requires context, not just scores. A rating is a starting point — the full case notes behind it contain the evidence.
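
For readers who want the shape of the rubric rather than the prose, here is one way a review's scores could be held as a record. The 0 to 10 scale and the unweighted average are my own assumptions for illustration; published reviews carry the actual scoring and the case notes behind it.

```python
from dataclasses import dataclass


@dataclass
class EthicsScore:
    """One score per dimension, on an assumed 0-10 scale."""
    transparency: float
    privacy: float
    safety_controls: float
    data_governance: float
    corporate_conduct: float
    real_world_harm: float

    def overall(self) -> float:
        # Unweighted average, purely illustrative: the site does not
        # publish a weighting formula, and context matters more than
        # any single number.
        dims = (
            self.transparency, self.privacy, self.safety_controls,
            self.data_governance, self.corporate_conduct, self.real_world_harm,
        )
        return sum(dims) / len(dims)
```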


Independence statement

BrokenCtrl is independently operated and will stay that way. No AI company, research institution, or lobby group has paid for coverage or influenced what gets published here. I am open to collaborating with people who think along the same lines, but I will not sell the site to any company or entity.

Some tool reviews contain affiliate links. These are labelled. They do not affect Ethics Scores or conclusions.

I am not a hypocrite about money. The long-term goal is to become a working consultant and authority in AI ethics — to study, publish, and eventually write a guide or a book. That is two or three years away. For now I write here and on Substack, working out whether there is a real audience and community for this kind of work.


Corrections and contact

If something here is wrong — a source does not support what I attributed to it, a confidence label is off, or new evidence changes the picture — use the contact page. I will correct it, note what changed, and update the timestamp.

If you are working on similar things, or you think your experience is relevant to a case, write to me. I am not claiming to be a specialist yet. I will answer.

Last updated: April 2026