Claude Code flaws expose new risks in AI dev tools
Check Point Research has disclosed a set of security flaws in Anthropic's Claude Code that could allow unauthorised actions when a developer opens a project repository, including the risk of API key exposure before a user confirms they trust the directory.
Claude Code is part of a growing class of AI-assisted software development tools used in coding workflows. Check Point said adoption is rising across the Asia-Pacific region, with usage particularly strong in South Korea, Japan and India. Many organisations use it for code reviews, testing and DevOps work.
The findings focus on how automation and tool integrations behave at the start of a session. Check Point described scenarios in which routine actions, such as opening a repository, could be used as an entry point into a development environment.
Opening a repository
Check Point said Claude Code includes automation features that can run predefined actions when a session begins. This design, it said, could allow a malicious repository to trigger execution on a developer's machine when the project is launched, without any step beyond opening the project.
The report also described a second weakness involving tool integrations. Claude Code integrates with external tools via MCP, and repository-controlled configuration settings could, in some circumstances, override safeguards. Check Point identified the issue as CVE-2025-59536.
Check Point said the weaknesses reflect a broader shift in how security teams need to assess AI tooling in development environments, where code or configuration can run before trust has been established.
Credential exposure
Check Point also described a scenario in which an attacker could attempt to obtain credentials used for Claude Code's connection to Anthropic services. Claude Code communicates with Anthropic using an API key.
In its write-up, Check Point said it demonstrated redirection of API traffic to an attacker-controlled server before the user confirmed trust in the project directory. That traffic, it said, included the full authorisation header.
Check Point did not say whether it had observed exploitation in the wild, nor did it describe mitigation steps in the material released alongside its findings. Anthropic did not comment in the information published by Check Point.
Security specialists have increasingly focused on software supply chain risks and developer tooling, as attackers look for routes into build pipelines and internal environments. Repository-based attacks remain a persistent feature of that landscape, spanning malicious dependencies, poisoned updates, and developer workstation compromise.
AI coding tools introduce a different set of risks because they sit inside daily workflows and may trigger actions based on context and configuration. They also rely on credentials and tokens that can provide access to paid services or internal resources. The combination of automation and privileged access makes configuration handling and trust prompts particularly sensitive.
Check Point said its findings show AI tools are moving from optional assistants to embedded components in software development, changing assumptions about trust boundaries and execution control in enterprise engineering environments.
Oded Vanunu, Head of Product Vulnerability Research at Check Point, said the findings should prompt organisations to reassess how they manage risk for AI development tools.
"This research highlights a fundamental shift in how we need to think about risk in the AI era. AI development tools are no longer peripheral utilities - they are becoming infrastructure. When automation layers gain the ability to influence execution and environment behavior, trust boundaries change. Organisations accelerating AI adoption must ensure their security models evolve at the same pace," Vanunu said.
Organisations reviewing their developer security posture are likely to examine how AI coding tools handle repository-level configuration, how and when integrations are initialised, and what safeguards exist around credentials used for external services.
Check Point said the research underscores a broader shift in the threat model for enterprise AI tools as adoption continues and engineering teams make AI systems a routine part of the software development chain.