security

Anthropic Accidentally Exposes Claude Code Source in Public NPM Package

April 02, 2026 · 3 min read

Anthropic, the artificial intelligence company behind the Claude family of models, inadvertently published the complete source code of Claude Code — its AI-powered coding assistant — through a public npm package. The leak exposed approximately 512,000 lines of unminified code, offering an unprecedented look into the inner workings of one of the industry's most prominent AI development tools.

Security researchers discovered the exposure after noticing that the npm package contained the full, unminified source code rather than the intended production build. The leaked material included detailed system prompts that govern Claude's behavior, internal tool implementations for code generation and file manipulation, and proprietary logic that forms the backbone of the assistant's capabilities. For competitors and researchers alike, the exposure amounted to a rare, unfiltered blueprint of how a major commercial AI coding tool actually operates under the hood.

Anthropic moved quickly to contain the damage, pulling the offending package from the npm registry and publishing a corrected version. However, the response came too late to prevent cached copies from being downloaded and archived by third parties. Once a package is published to npm, even briefly, automated mirroring services and attentive developers can capture its contents before a takedown takes effect — a well-known limitation of the platform's distribution model.
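The window for capture is narrow but mechanical: the public registry serves every published version as a tarball at a predictable URL, so a mirror only needs to record the package name and version before the takedown. A minimal sketch of that convention in Python (the URL pattern is npm's long-standing registry convention, not a guaranteed API):

```python
def registry_tarball_url(name: str, version: str) -> str:
    """Build the conventional npm registry tarball URL for a package version.

    For scoped packages ("@scope/pkg"), the tarball basename drops the scope.
    """
    basename = name.split("/")[-1]
    return f"https://registry.npmjs.org/{name}/-/{basename}-{version}.tgz"


# Any mirror that noted the name and version before the takedown can
# reconstruct the download location from those two strings alone.
print(registry_tarball_url("left-pad", "1.3.0"))
# → https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz
```

This predictability is what makes "unpublish and hope" ineffective: removal from the registry index does not claw back tarballs already fetched through this URL scheme.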

The root cause of the incident appears to be a misconfigured CI/CD pipeline that failed to apply proper .npmignore rules before publishing. The .npmignore file tells the npm toolchain which files to exclude from a published package; when it is absent, npm falls back to the project's .gitignore. With neither in place, or with a misconfigured version, the entire project directory, including source files never meant for public distribution, can be bundled and shipped to the registry. It is a deceptively simple mistake that has tripped up organizations of all sizes.
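A common safeguard is to invert the model: rather than maintaining an exclude list, declare an explicit allowlist with the `files` field in package.json, so that anything not named simply cannot ship. A minimal sketch (package name and paths are illustrative, not Anthropic's actual layout):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ]
}
```

Running `npm pack --dry-run` locally then prints exactly which files would land in the tarball, making an accidental inclusion of a `src/` directory visible before anything reaches the registry.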

The leak is notable not just for its scale but for what it revealed about the architecture of modern AI assistants. System prompts, which define how an AI model should behave, interpret instructions, and handle edge cases, are typically treated as closely guarded trade secrets. Their exposure gives outside observers a concrete understanding of the engineering decisions and guardrails that shape Claude Code's responses — information that could inform both legitimate research and potential adversarial efforts to circumvent safety mechanisms.

For the broader software industry, the incident serves as a cautionary tale about the risks inherent in automated package publishing. As companies increasingly rely on continuous integration and continuous deployment pipelines to ship code at speed, the margin for configuration errors shrinks while the potential blast radius grows. A single missing file or misconfigured rule can turn a routine release into a significant intellectual property exposure.
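One way to widen that margin is an automated pre-publish gate that inspects the would-be tarball contents and fails the release if anything sensitive appears. The sketch below is illustrative, not Anthropic's pipeline; the patterns are hypothetical, and in practice the file list would be parsed from the output of `npm pack --dry-run --json`:

```python
import fnmatch

# Patterns that should never appear in a published tarball
# (hypothetical; a real pipeline would tailor these to its repo layout).
FORBIDDEN = ["src/*", "*.env", "internal/*", "prompts/*"]


def leaked_paths(packed_files, forbidden=FORBIDDEN):
    """Return any packaged paths that match a forbidden pattern."""
    return [
        path
        for path in packed_files
        if any(fnmatch.fnmatch(path, pattern) for pattern in forbidden)
    ]


# In CI, a non-empty result would abort the release before publishing.
tarball = ["dist/index.js", "package.json", "src/agent.ts"]
offenders = leaked_paths(tarball)
print(offenders)  # → ['src/agent.ts']
```

The design choice is the same one behind an allowlist: make the dangerous outcome require an explicit, reviewable decision rather than the silent default.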

Anthropic has not publicly commented on the full scope of the leak or whether it plans to revise the exposed system prompts and internal tooling. The company, which has raised billions in funding and positions itself as a leader in AI safety, now faces questions about the security of its own development processes — an ironic twist for a firm that has built its brand on careful, safety-conscious engineering.