(April 1): Anthropic PBC inadvertently released source code for its popular Claude AI coding agent, raising questions about the startup’s operational security and sending developers hunting for clues about its plans.
“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said in an emailed statement. “This was a release packaging issue caused by human error, not a security breach.”
The company’s second security slip-up in a week exposed approximately 1,900 files and 512,000 lines of code related to Claude Code, an agentic coding tool that runs directly inside developer environments and has access to sensitive information, according to cybersecurity analysts. The release first came to light in a post on X that shared a purported link to the code and garnered more than 30 million views.
Developers said they were poring over the code to figure out how the agent works and how the startup intends to evolve the platform. Several experts also raised concerns about potential security vulnerabilities stemming from the unintended exposure.
“Attackers can now study and fuzz exactly how data flows through Claude Code’s four-stage context management pipeline and craft payloads designed to survive compaction, effectively persisting a backdoor across an arbitrarily long session,” said AI cybersecurity firm Straiker in a blog post.
Days ago, Fortune reported that Anthropic accidentally made thousands of files publicly available, including a draft blog post detailing a powerful upcoming model, known internally as both “Mythos” and “Capybara”, that could pose cybersecurity risks.
“We’re rolling out measures to prevent this from happening again,” the Anthropic spokesperson said.
