On Anthropic’s source code leak, the supply chain trap hiding in plain sight, and what 512,000 lines of TypeScript can and cannot tell you about how a language model thinks
Someone at Anthropic had a bad day at the end of March 2026. Not a catastrophic day — no servers went down, no model weights escaped, no customer records leaked. But somebody changed a build configuration, probably in the ordinary course of trying to ship a software release, and accidentally included a file they weren’t supposed to include. The file was a source map. Source maps are mundane things. They exist so that developers can debug minified, compressed JavaScript by tracing cryptic gibberish back to the readable TypeScript it was compiled from. They are the footnotes of the software world, meant only for the author’s eyes.
This particular source map pointed to a zip archive sitting on Anthropic’s Cloudflare storage. The archive contained 1,900 files and 512,000 lines of TypeScript. It was the complete source code for Claude Code, the company’s command-line AI coding assistant. By the time anyone noticed, it had been in the wild for hours. GitHub repositories mirroring it multiplied faster than DMCA takedown requests could extinguish them.
By April 1, 2026, Anthropic had confirmed the leak and described it as a release packaging issue caused by human error. Over 8,100 repositories had been pulled. Hundreds more kept appearing. The Internet, once it gets its hands on something interesting, is not easily discouraged.
Here is what you need to know — what was actually exposed, why the secondary story about supply chain attacks is arguably more important, and what this whole episode reveals about the strange anatomy of a modern AI product.
The trap that opened alongside the door
Before we get to the source code itself, there is a more urgent story. Within hours of the leak becoming public, someone — entirely separate from Anthropic and from whoever leaked the code — launched a supply-chain attack against the Axios npm package. Axios is a popular HTTP client library that ships in a significant fraction of all JavaScript projects on earth. The attack targeted versions 1.14.1 and 0.30.4, embedding a Remote Access Trojan that gives an attacker shell-level access to any machine that installs it.
The timing was not coincidental. When a high-profile repository leaks, developers rush to clone it and get it running. The attackers counted on exactly this behavior. Anyone who ran npm install on any of the mirrored Claude Code repositories between approximately 00:21 and 03:29 UTC on March 31 may have installed the compromised Axios package. The attack also extended to typosquatting: fake packages with names close to Anthropic’s internal dependencies, waiting for developers to mistype a package name at exactly the wrong moment.
If you cloned anything that night and ran npm install, stop reading this article and go check your lockfiles. Look for Axios versions 1.14.1 or 0.30.4. Look for a dependency called plain-crypto-js. If either is present, treat the host as compromised.
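A quick way to do that check mechanically is to scan the lockfile itself. The sketch below assumes npm's modern lockfile layout (lockfileVersion 2 or 3, with a top-level "packages" map); it is an illustration, not a substitute for a proper audit of a possibly compromised machine.

```typescript
// Versions flagged in the incident; "plain-crypto-js" is the suspect
// typosquat dependency named above.
const BAD_AXIOS_VERSIONS = new Set(["1.14.1", "0.30.4"]);
const BAD_PACKAGE_NAMES = new Set(["plain-crypto-js"]);

// Scan the text of a package-lock.json (lockfileVersion 2 or 3)
// and return any indicators of compromise found.
function findIndicators(lockfileJson: string): string[] {
  const lock = JSON.parse(lockfileJson);
  const hits: string[] = [];
  for (const [path, entry] of Object.entries(lock.packages ?? {})) {
    // Keys look like "node_modules/axios" or nested
    // "node_modules/a/node_modules/axios"; keep the final segment.
    const name = path.split("node_modules/").pop() ?? path;
    const version = (entry as { version?: string }).version ?? "";
    if (name === "axios" && BAD_AXIOS_VERSIONS.has(version)) {
      hits.push(`axios@${version}`);
    }
    if (BAD_PACKAGE_NAMES.has(name)) {
      hits.push(`${name}@${version}`);
    }
  }
  return hits;
}
```

To use it, read the file with fs.readFileSync("package-lock.json", "utf8") and pass the contents in; an empty result means the two known indicators are absent, not that the machine is clean.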
This is a pattern with a name: an opportunistic supply chain attack riding the coattails of a legitimate disclosure event. The mechanism is almost elegant in its cynicism. A high-visibility leak creates urgency and excitement. Developers, ordinarily cautious, rush to explore. The attackers insert themselves into the gap between curiosity and caution. They are not exploiting a vulnerability in the leaked software. They are exploiting a vulnerability in human psychology — specifically, the combination of technical confidence and rushed curiosity that leads an otherwise careful engineer to type “npm install” without thinking twice.
The lesson is not that you should never clone open-source repositories. The lesson is that when a high-profile leak generates a wave of hastily created mirrors, those mirrors are not all what they appear to be. Some of them have been modified. Some include injected prompts designed to manipulate AI-assisted development tools. Some are vehicles for the kind of dependency-confusion attack that has been standard in the attacker’s playbook since at least 2021. Treat leaked source repositories as read-only reference material. Clone them on an isolated machine. Do not run their package managers. Do not execute their code.
The supply chain problem is, in a sense, older and more persistent than anything in the leaked Anthropic source. It predates large language models entirely. It will outlast this particular incident. The irony is that it took a leak of an AI coding tool’s source code to remind many developers of a threat that has been quietly working in the background for years.

What actually leaked: The client, not the engine
Now to the source code itself. The first and most important thing to understand about what leaked is what it is not. It is not the Claude model. There are no weights in these 512,000 lines. There is no training code, no alignment infrastructure, no API backend, no customer data. What leaked is the Claude Code CLI tool — the client application that runs on your laptop and communicates with Claude via an API. Anthropic put it well when they described it as a packaging issue. This is the packaging, not the product inside.
If Claude is a car, what leaked is the dashboard, the steering column, the instrument cluster, the transmission logic, and the onboard software that decides when to turn on the heated seats. The engine — the actual language model, the thing doing the reasoning — is running on Anthropic’s servers and did not go anywhere.
With that caveat firmly in place, the source is genuinely interesting, for reasons that have less to do with Anthropic’s secrets and more to do with what well-engineered AI tooling at scale actually looks like from the inside.
Ninety percent TypeScript, ten percent magic
The most striking thing about the leaked codebase, for anyone who has spent time thinking about how AI applications work versus how people imagine they work, is the ratio. By volume, roughly 90% of Claude Code is conventional software engineering: TypeScript interfaces, Zod validation schemas, React UI components, file I/O handlers, error-recovery logic, cost tracking, and session persistence. The kind of code a senior engineer at any well-run software company would recognize on sight.
The AI — the part that actually invokes Claude — is perhaps ten percent of the surface area. Maybe less. But that ten percent is where almost all of the user-facing value lives, which means the ninety percent exists to serve, constrain, and manage it. Context management: the system that decides when the conversation window is getting full and compresses it, or asks the human what to drop. Permission architecture: the layered system that determines whether a given tool call is allowed at all, and what level of user confirmation is required before it executes. Tool orchestration: the registry of forty-plus tools that Claude Code can invoke, from bash execution to web search to file editing, each with its own security validation stack.
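To make the permission architecture concrete, here is a hypothetical sketch of what a layered tool-permission check can look like. The names, levels, and rules are invented for illustration; they are not taken from the leaked source.

```typescript
// Invented permission levels for illustration.
type PermissionLevel = "allow" | "ask" | "deny";

interface ToolPolicy {
  name: string;
  // Base level before any per-call rules are applied.
  base: PermissionLevel;
  // Optional per-call rule; layering means it can tighten the
  // decision but never loosen it.
  inspect?: (args: Record<string, unknown>) => PermissionLevel;
}

const STRICTNESS: Record<PermissionLevel, number> = { allow: 0, ask: 1, deny: 2 };

// The final decision is the strictest verdict any layer produced.
function decide(policy: ToolPolicy, args: Record<string, unknown>): PermissionLevel {
  const verdicts: PermissionLevel[] = [policy.base, policy.inspect?.(args) ?? "allow"];
  return verdicts.reduce((a, b) => (STRICTNESS[a] >= STRICTNESS[b] ? a : b));
}

// Example policy: shell execution always asks the user, and certain
// destructive patterns are refused outright.
const bashTool: ToolPolicy = {
  name: "bash",
  base: "ask",
  inspect: (args) =>
    typeof args.command === "string" && args.command.includes("rm -rf")
      ? "deny"
      : "allow",
};
```

The design point the sketch captures is that each layer can only escalate: a per-call rule can turn "ask" into "deny," but nothing downstream can quietly turn "ask" into "allow."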
There is a file in the codebase called bashSecurity.ts that contains 23 numbered security checks. They cover Zsh builtins, unicode injection attacks, IFS null-byte exploits, and findings pulled directly from HackerOne reports — meaning real vulnerabilities discovered by real researchers that the team then hardened against. This is not the work of people who shipped a prompt and called it a product. This is the work of people who have been running a tool that executes shell commands on users’ machines and have learned the hard way what can go wrong.
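The actual checks in bashSecurity.ts are not reproduced here, but the categories named above are easy to illustrate. The two functions below are speculative reconstructions of what checks in those families typically look like; the patterns and names are mine, not Anthropic's.

```typescript
// Reject commands containing control characters or invisible unicode.
// Null bytes and C0 controls can smuggle content past naive string
// handling; zero-width and bidi-override characters change what a
// human reviewer sees without changing what the shell executes.
function rejectControlAndFormatChars(command: string): boolean {
  if (/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/.test(command)) return false;
  if (/[\u200B-\u200F\u202A-\u202E\u2066-\u2069]/.test(command)) return false;
  return true;
}

// Reject attempts to reassign IFS, the shell's field separator.
// Changing IFS alters how the shell splits words, which can turn an
// innocent-looking argument list into something else entirely.
function rejectIfsTampering(command: string): boolean {
  return !/\bIFS\s*=/.test(command);
}
```

Real validation stacks run dozens of checks like these in sequence, each one the fossil of a specific reported exploit, which is exactly what the HackerOne references in the leaked file suggest.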
The features that were waiting behind the flags
The leak also exposed 44 feature flags for capabilities that have not yet shipped. This is where most of the coverage has focused, because the names are evocative. KAIROS is described as an always-on background agent — a system that continuously monitors your development environment rather than waiting for you to ask it a question. ULTRAPLAN is a thirty-minute autonomous planning session during which Claude Code can generate and evaluate strategies before the human sees any output. There is a multi-agent coordinator mode in which multiple Claude instances divide work across research, synthesis, implementation, and verification phases.
There is also BUDDY, which is a Tamagotchi. Eighteen species. Rarity tiers. Shiny variants. RPG statistics. A hardcoded April 1 teaser window. Someone at Anthropic spent real engineering time building a virtual pet system into a professional coding tool, and there is something almost moving about their choice to hide it behind a feature flag.
Whether these features represent a deliberate product roadmap, an experimental sandbox, or — as some commentators have speculated — a convenient marketing moment, they tell a consistent story. The team building Claude Code has been thinking beyond the prompt-and-response paradigm for a while. The tools for autonomous operation, for multi-agent coordination, and for continuous background presence are not sketched-out ideas. They are built, tested, sitting behind switches, waiting.
Frustration detection via regex, and what that actually means
One finding in the leak has attracted more wry amusement than anything else, and it deserves its own moment. The file utils/userPromptKeywords.ts, lines 7 and 8, contains a regex — a regular expression, the bluntest possible text-matching instrument — designed to detect user frustration. An AI company, one of the most sophisticated language model labs in the world, is using a pattern match to figure out when you are annoyed.
The joke practically tells itself. And yet there is something honest about it. Regex is fast. Regex is predictable. Regex does not hallucinate or drift. If your goal is to detect a specific set of frustration keywords and trigger a particular response, a regex is actually a reasonable tool. The irony is not that it is naive — it is that we expected something more mysterious, and the reality is that real software is full of pragmatic decisions that look mundane from the outside. The 90% is all pragmatic decisions. Some of them are regex.
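For readers who have never seen this pattern, the idea looks something like the following. The keyword list and function name here are invented; the leaked pattern itself is not reproduced.

```typescript
// A hypothetical frustration-keyword matcher. Cheap, deterministic,
// and incapable of hallucinating -- which is the whole point.
const FRUSTRATION =
  /\b(wtf|ugh|broken|not working|still (wrong|failing)|why (won't|doesn't))\b/i;

function looksFrustrated(prompt: string): boolean {
  return FRUSTRATION.test(prompt);
}
```

A match might then trigger a gentler system-prompt addendum or a suggestion to start a fresh session, all without spending a single model token on sentiment analysis.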

Anti-distillation and the war over training data
One finding is less funny and more significant. In services/api/claude.ts, there is a flag that, when triggered, injects fake tool definitions into the prompt sent to the model. The stated purpose is anti-distillation: if a competitor is recording API traffic to train their own models, the fake tools corrupt the training data they collect. Claude Code is, under certain conditions, designed to lie to people recording its outputs.
This is a defensive move in a context where the norms are still being written. The question of whether API outputs can be used to train competing models is unresolved legally and ethically. The fake tool injection is Anthropic’s technical answer to a legal question they have not yet won. It is worth noting because it is the kind of behavior that, discovered in a different context, would generate significant concern. Here it sits quietly in a flagged function, waiting.
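One way such a mechanism could work, as a purely speculative sketch rather than the code from services/api/claude.ts, is to merge decoy tool definitions into the real tool list before the request is serialized. The decoy names below are invented.

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Invented decoy tools. A model trained on recorded traffic that
// contains these learns to call functions that do not exist.
const DECOYS: ToolDef[] = [
  { name: "sync_remote_cache", description: "Synchronize the remote build cache." },
  { name: "profile_hot_paths", description: "Profile hot paths in the current project." },
];

// When the anti-distillation flag is set, mix decoys in with the real
// tools and sort by name so they are not separable by position.
function buildToolList(realTools: ToolDef[], antiDistillation: boolean): ToolDef[] {
  if (!antiDistillation) return realTools;
  return [...realTools, ...DECOYS].sort((a, b) => a.name.localeCompare(b.name));
}
```

The client, of course, knows which tools are real and simply never executes a decoy, so legitimate users are unaffected while scraped transcripts are quietly poisoned.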
The dashboard is not the car, but it tells you who built it
Here is what this event actually is, stripped of the drama: A build configuration error exposed the client-side source code of an AI coding tool, triggering a predictable chain of events — mass mirroring, legal takedowns, opportunistic supply chain attacks, breathless coverage — before settling into the Internet’s background noise.
The source code is interesting. It is well-engineered, more cautious than expected in some places, and more pragmatic in others. The features waiting behind flags represent a genuine vision of where autonomous coding assistance is going, not a fantasy. The security work is real.
But what matters most about this event is not in the leaked files. It is in the behavior of the ecosystem around the leak: the supply chain attackers who moved within hours, the repositories seeded with injected prompts, the developers who cloned first and asked questions later. These are the forces that shape what happens when software escapes its container, and they will be present at the next incident, and the one after that, regardless of what the source code says.
The Claude model itself — the thing that actually reads your code, reasons about it, and generates suggestions — is still on Anthropic’s servers and unreachable. The dashboard leaked. The engine kept running.