Anthropic Accidentally Leaks Claude Code Source in Public NPM Package

April 02, 2026 · 3 min read

Anthropic, the San Francisco-based AI safety company behind the Claude family of models, inadvertently published the complete source code of its Claude Code developer tool through a public npm package, exposing hundreds of thousands of lines of proprietary code to the open internet. The incident, first reported by BleepingComputer, revealed internal system prompts, tool definitions, and architectural details that the company had intended to keep private.

The leaked npm package contained approximately 512,000 lines of unminified source code — the full, human-readable version of the software rather than the compiled and obfuscated distribution typically shipped to end users. Among the exposed materials were system-level prompts that govern Claude Code's behavior, internal tool implementations that define how the assistant interacts with codebases and external services, and the complete architectural logic underpinning one of the most widely used AI coding assistants on the market.

The exposure is particularly significant because system prompts and internal instructions are considered core intellectual property for AI companies. These prompts shape the personality, capabilities, and safety guardrails of AI assistants, and competitors could theoretically study them to replicate or counter Anthropic's approach. Security researchers and AI enthusiasts were quick to archive cached copies of the package before Anthropic could fully retract it, meaning the source code is likely to remain accessible through unofficial channels indefinitely.

Anthropic moved to pull the package once the error was discovered, but the nature of public package registries makes full containment difficult. npm packages are routinely mirrored, cached, and archived by third-party services and individual developers. Once published, even briefly, a package's contents can propagate across multiple systems within minutes — a well-known challenge in the open-source software ecosystem that has previously affected companies ranging from startups to major technology firms.

The incident highlights a recurring tension in the AI industry between the operational need to distribute software through public package managers and the desire to protect proprietary implementation details. Claude Code, which functions as an AI-powered coding assistant available through the command line, IDE extensions, and web interfaces, necessarily ships executable code to users' machines. The line between what should be included in a public distribution and what should remain server-side is a packaging decision that, in this case, appears to have gone wrong.
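A common safeguard against exactly this kind of packaging mistake is an explicit allowlist in package.json. The snippet below is a minimal sketch — the package name and file layout are illustrative, not Claude Code's actual structure. The `files` field restricts the published tarball to the listed paths, so an unbuilt `src/` directory stays out of the registry even when it exists in the repository:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ],
  "bin": {
    "example-cli": "dist/cli.js"
  }
}
```

Running `npm publish --dry-run` (or `npm pack --dry-run`) prints the exact file list that would be uploaded, which makes it straightforward to catch a stray source directory before it ever reaches the public registry.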

For the broader AI development community, the leak offers an unusually transparent look at how a leading AI company structures its assistant tooling. Researchers and developers have already begun analyzing the exposed code for insights into prompt engineering techniques, safety mechanisms, and the software architecture that connects large language models to real-world developer workflows. While Anthropic has not issued a detailed public statement on the scope of the exposure, the company is expected to rotate any compromised credentials or API keys that may have been embedded in the source.
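Rotating keys is the standard response, but teams can also scan a source tree for key-shaped strings before (or immediately after) an accidental publish. The sketch below is illustrative only: the regular expressions are generic example formats commonly used by secret scanners, not patterns tied to Anthropic's actual credentials.

```python
import re

# Illustrative key-shaped patterns; real scanners ship far larger rule sets.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # generic "sk-" style API key prefix
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID format
]

def find_possible_secrets(text: str) -> list[str]:
    """Return every substring of `text` that matches a key-like pattern."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Example: a hard-coded key that should never appear in a published package.
sample = "client = Client(api_key='sk-abc123def456ghi789jkl')"
print(find_possible_secrets(sample))
```

A scan like this is a coarse pre-publish check, not a guarantee — which is why rotation of anything that may have shipped remains the safe default once an exposure has occurred.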

The episode serves as a cautionary tale for AI companies that increasingly rely on public infrastructure to distribute their tools. As AI assistants become more deeply integrated into software development pipelines, the stakes of accidental exposure grow correspondingly higher — not just for intellectual property, but for the security of the systems these tools are designed to interact with.