Anthropic DMCA takedown: when the AI safety company became a copyright enforcer
Claude Code's source was leaked. Anthropic's response raised harder questions than the leak itself.
Source code for Claude Code — Anthropic's agentic coding product — was leaked and published publicly. Anthropic responded with DMCA takedown notices to suppress the leak. The leaked material raised questions about Anthropic's internal safety culture and about the gap between its public commitments and its internal practice.
The case is not primarily about the leak. It is about what a company that markets itself on transparency and responsible AI development chooses to do when transparency becomes inconvenient. DMCA as a suppression tool — used not to protect commercial secrets in the conventional sense, but to limit scrutiny of safety-relevant internal content — is a corporate conduct failure regardless of its legal permissibility.
Claude Code ships as an agentic coding product
Anthropic launches Claude Code, a command-line agentic coding tool that can read files, run code, and take real-world actions on a developer's machine. It is positioned as a flagship demonstration of Anthropic's safety-first approach to agentic AI.
Source code extracted and published
Researchers discover that Claude Code ships with minified but recoverable JavaScript source. The code is extracted, de-obfuscated, and published publicly — including the system prompt and internal instruction architecture that governs how Claude Code behaves as an agent.
Anthropic issues DMCA takedown notices
Anthropic files DMCA takedown notices targeting repositories and posts sharing the extracted source code. The notices are filed on copyright grounds. The effect — regardless of legal basis — is to limit public access to internal content that had attracted scrutiny from the safety research community.
Public reaction and safety community response
The DMCA action draws attention from AI safety researchers, journalists, and open-source advocates. The central criticism: a company whose core credibility claim is transparency about AI safety used legal suppression mechanisms when its internal architecture was exposed to external scrutiny. Anthropic does not issue a substantive public response addressing the safety-relevant content of the leak.
Claude Code ships with recoverable source code: the product is distributed as minified JavaScript rather than in a form that prevents extraction. Researchers successfully de-obfuscated and published the internal instruction architecture.
Anthropic filed DMCA takedown notices targeting public repositories sharing the extracted source. The notices are a matter of public record via GitHub's DMCA transparency reporting; a sketch of how that record can be queried follows these findings.
The extracted content included Claude Code's system prompt and agent instruction architecture — the internal rules governing how the model behaves when given agentic capabilities and tool access.
The leaked content contained internal safety-relevant design decisions that Anthropic had not disclosed publicly — including specifics about how the model is instructed to handle sensitive agentic actions. Reported by multiple technical outlets; Anthropic has not confirmed or denied the specific content.
The DMCA action was targeted at suppressing scrutiny of the safety-relevant content specifically, not merely protecting commercial IP in the conventional sense. Inferred from the timing, scope of targets, and absence of a substantive public response to the safety questions raised.
Whether the leaked content revealed specific contradictions between Anthropic's public safety commitments and internal design decisions — as opposed to simply exposing proprietary implementation details — is contested. Technical reviewers disagree on the significance of what was found.
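GitHub publishes the takedown notices it processes as markdown documents in its public github/dmca repository, which is the transparency record referred to above. As a rough illustration of how that record can be checked, the sketch below lists one month's notices through GitHub's REST contents API and filters them by sender name. The month path, the filename convention, and the search term are assumptions made for illustration, not details of the actual filing.

```typescript
// Sketch: listing takedown notices published in GitHub's public DMCA
// repository (github/dmca) for a given month via the REST contents API.
// The month path and the filename filter below are illustrative assumptions.
interface RepoEntry {
  name: string;
  html_url: string;
}

async function listNotices(monthPath: string, sender: string): Promise<string[]> {
  const res = await fetch(
    `https://api.github.com/repos/github/dmca/contents/${monthPath}`,
    { headers: { Accept: "application/vnd.github+json" } },
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);

  // Each directory entry is a single notice stored as a markdown file;
  // filenames commonly include the filing date and the complainant's name.
  const entries: RepoEntry[] = await res.json();
  return entries
    .filter((entry) => entry.name.toLowerCase().includes(sender.toLowerCase()))
    .map((entry) => entry.html_url);
}

// Hypothetical usage: the month shown is not the actual filing date.
listNotices("2025/06", "anthropic").then(console.log).catch(console.error);
```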
DMCA as transparency suppression
Copyright law exists to protect creative and commercial works. Using the DMCA to limit public scrutiny of safety-relevant AI architecture may be legally permissible, but for a company whose credibility depends on transparency claims it is ethically incoherent.
No substantive response to safety questions
Anthropic's response addressed the IP violation, not the safety questions the leak raised. Suppressing the distribution of content while not engaging with its substance is a communications strategy, not a safety response.
Agentic systems require higher transparency, not less
Claude Code operates with tool access, file system permissions, and the ability to take real-world actions. The instruction architecture governing those actions is directly safety-relevant. Far from being ordinary proprietary code, it is precisely the material the public needs to examine in order to assess risk.
Credibility contradiction
Anthropic's public safety research programme — model cards, safety evals, responsible scaling policy — is premised on the value of external scrutiny. Using legal mechanisms to limit that scrutiny when it becomes inconvenient undermines the entire premise.
Read alongside Case BC-001, this case completes a picture. BC-001 shows Anthropic holding its safety red lines under state-level procurement pressure — a genuine commitment with a real cost. BC-002 shows the same company using legal suppression to limit external scrutiny of its internal architecture.
These are not contradictory. They are the same organisation navigating the same tension: transparency is the credibility claim, and transparency is also the liability. When external scrutiny produces reputational benefit — as with the Pentagon refusal — Anthropic leans into it. When external scrutiny produces questions the company cannot answer cleanly, legal tools become available.
This pattern is not unique to Anthropic. It is the standard operating mode for technology companies that have staked their identity on ethical conduct. The governance gap is not bad faith — it is the absence of external mechanisms that make transparency non-optional. When a company decides what to disclose, disclosure is a PR function. The DMCA case is the clearest illustration of that gap in Anthropic's record.
Framework connection: Both BC-001 and BC-002 activate Framework 04: Policy vs Enforcement. Anthropic's transparency commitments are policy. The DMCA action demonstrates that when enforcement of those commitments conflicts with the company's interests, the policy does not hold. That is the definition of a preference, not a constraint.
Mandatory disclosure of agentic system prompts and instruction architecture. For AI systems with real-world action capabilities, the internal instruction set is a safety document — not a trade secret. Regulatory frameworks for high-risk AI should require its disclosure to competent authorities and, where appropriate, the public.
Independent security research safe harbour. Researchers who extract and publish AI system architecture for safety analysis purposes should have legal protection equivalent to security researchers who disclose software vulnerabilities. DMCA should not be available as a tool against good-faith safety scrutiny.
Substantive engagement with disclosed safety concerns. When leaked content raises safety questions, the minimum adequate response is a public technical assessment of those questions — not suppression of the leak followed by silence on the substance.
Separation of IP protection and safety transparency obligations. A company can legitimately protect commercial source code while also being required to disclose the safety-relevant portions of how its systems are instructed to behave. These are not in conflict — they require policy and legal frameworks that distinguish between them.
QUESTIONS
What was leaked from Claude Code?
Claude Code shipped with minified but recoverable JavaScript source code. Researchers extracted and published the internal content, which included the system prompt and agent instruction architecture — the internal rules governing how Claude Code behaves when given agentic capabilities, tool access, and file system permissions. This content is safety-relevant because it determines how the model handles sensitive real-world actions.
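To make concrete what "minified but recoverable" means, the sketch below shows one generic way a researcher might re-format a bundled CLI's JavaScript for reading, using Prettier to restore indentation and line structure. The package path is hypothetical, and this is not a description of the specific tooling used in the leak; formatting recovers structure, not the original identifier names that minification discards.

```typescript
// Sketch: re-formatting a minified JavaScript bundle so it can be read.
// The input path is hypothetical; minification strips whitespace and renames
// identifiers, so formatting restores structure but not the original names.
import { readFile, writeFile } from "node:fs/promises";
import * as prettier from "prettier";

async function prettifyBundle(inputPath: string, outputPath: string): Promise<void> {
  const minified = await readFile(inputPath, "utf8");
  const formatted = await prettier.format(minified, { parser: "babel" });
  await writeFile(outputPath, formatted, "utf8");
}

// Example: a globally installed npm CLI usually ships its bundle under the
// global node_modules tree. The exact path below is illustrative only.
prettifyBundle(
  "/usr/local/lib/node_modules/example-cli/dist/cli.js",
  "./cli.pretty.js",
).catch(console.error);
```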
Was Anthropic's DMCA action legal?
Almost certainly yes — Anthropic owns the copyright in its source code. The question this case raises is not legal permissibility but ethical consistency. When a company that markets itself on AI transparency and external safety scrutiny uses copyright law to suppress that scrutiny the moment it becomes inconvenient, it is not acting illegally; it is acting in contradiction to its stated values. BrokenCtrl covers conduct, not just legality.
Why does an AI system prompt matter for safety?
For a standard conversational AI, system prompts are primarily about behaviour shaping. For an agentic system like Claude Code — which can read files, write code, execute commands, and take real-world actions — the system prompt is the instruction set that governs when the model acts, when it refuses, and how it handles ambiguous or sensitive situations. It is functionally a safety document. External scrutiny of that document is how the safety research community assesses whether the model's real-world behaviour matches its stated design.
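To illustrate why that instruction set reads like a safety document, the sketch below gives a simplified, entirely hypothetical shape for the kind of per-tool policy an agentic coding assistant might encode. None of the names or rules are taken from Claude Code; the point is that rules like these determine when the agent acts, asks, or refuses, which is exactly what external reviewers would want to inspect.

```typescript
// Illustrative only: a simplified per-tool policy of the kind an agentic
// coding assistant's instruction architecture might encode. None of these
// names or rules are taken from Claude Code itself.
type ToolPolicy = {
  tool: "read_file" | "write_file" | "run_command";
  requiresUserApproval: boolean; // must the human confirm before the action runs?
  refuseIf: string[];            // conditions under which the agent should decline
};

const examplePolicies: ToolPolicy[] = [
  { tool: "read_file",   requiresUserApproval: false, refuseIf: ["path is outside the workspace"] },
  { tool: "write_file",  requiresUserApproval: true,  refuseIf: ["write would discard uncommitted changes"] },
  { tool: "run_command", requiresUserApproval: true,  refuseIf: ["command is destructive or irreversible"] },
];

// The transparency question the case raises: can outside researchers read
// rules like these directly, or only infer them from observed behaviour?
console.log(JSON.stringify(examplePolicies, null, 2));
```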
How does this case relate to Anthropic's Pentagon dispute?
The two cases document the same tension from opposite directions. In BC-001, Anthropic's safety commitments held under external pressure — at real commercial cost. In BC-002, those same commitments were superseded by the company's interest in controlling its public narrative. Together they show an organisation with genuine safety commitments that are selectively applied — strong when the cost is commercial, weaker when the cost is reputational. The gap between those two outcomes is where governance infrastructure is absent. See Framework 04: Policy vs Enforcement.
Last updated: April 2026 · Case ID: BC-002 · Methodology →