Anthropic DMCA Blunder Takes Down 8,100 GitHub Repos Over Leaked Code
April 02, 2026 · 3 min read
Anthropic found itself at the center of a rapidly escalating open-source controversy this week after a DMCA takedown request intended for a single repository ended up pulling 8,100 GitHub projects offline. The incident stems from an accidental leak of approximately 512,000 lines of Claude Code source code through NPM — what the company has described as a "packaging error caused by human mistake, not a security flaw." When Anthropic moved to contain the leak, GitHub's automated enforcement system processed the takedown against the entire network of forks, far exceeding the company's intended scope.
The collateral damage was swift and significant. Developers across the platform discovered their repositories had been taken down without warning, sparking widespread frustration and criticism of both Anthropic's approach and GitHub's enforcement mechanisms. Anthropic quickly retracted the mass takedown, narrowing enforcement to a single repository and 96 associated forks. GitHub subsequently restored all other affected projects, but the damage to Anthropic's reputation within the developer community was already done.
While Anthropic scrambled to manage the fallout, security researcher Sigrid Jin, known on social media as @instructkr, seized the moment by creating "claw-code" — a clean-room reimplementation of Claude Code's functionality written in Python and later ported to Rust. Jin built the project using OpenAI's oh-my-codex as a legal foundation, carefully avoiding any direct use of the leaked proprietary code. The approach mirrors the well-established clean-room reverse engineering doctrine that has been upheld in software copyright cases for decades.
The claw-code repository shattered GitHub's existing growth records. It surpassed 50,000 stars in just two hours and climbed past 110,000 stars and 100,000 forks within days, making it the fastest repository in GitHub's history to reach the 100,000-star milestone. The explosive adoption reflects both genuine developer interest in an open alternative to Claude Code and a broader community statement against what many perceived as heavy-handed intellectual property enforcement.
The leaked source code itself has raised uncomfortable questions for Anthropic beyond the copyright issue. Researchers who examined the code before takedowns took effect reported discovering references to unannounced features, including a system called "KAIROS" — apparently a persistent background agent — and an "Undercover Mode" described as enabling stealth contributions to open-source projects. Neither feature has been publicly acknowledged by Anthropic, and their existence in the codebase has fueled speculation about the company's broader product strategy and its relationship with the open-source ecosystem.
The timing compounds Anthropic's difficulties. The Claude Code leak is the second significant exposure of internal Anthropic materials in a matter of days, following the accidental revelation of an internal initiative known as "Project Mythos." The back-to-back incidents raise questions about the company's internal security practices and operational controls, a particularly awkward look for a firm that has built much of its public identity around safety and responsible AI development.
The episode highlights a growing tension in the AI industry between protecting proprietary technology and maintaining trust with the developer communities that companies increasingly depend on. Anthropic's DMCA response, even if unintentionally overbroad, risks alienating the very users it needs to build its ecosystem around Claude Code. Meanwhile, the meteoric rise of claw-code suggests that attempts to contain leaked capabilities may ultimately accelerate the development of open alternatives rather than suppress them.