Three security vulnerabilities identified in Anthropic’s AI coding tool, Claude Code, could have silently compromised developers, enabling remote code execution on their machines or the theft of sensitive API keys, according to a report from Check Point.
Check Point researchers reported the flaws to Anthropic, which has since issued fixes for all of them; two received CVE identifiers. The findings highlight a troubling supply chain threat as more enterprises adopt AI coding tools like Claude Code, effectively turning configuration files into potential attack vectors.
The vulnerabilities stem from design choices intended to make the tool easier for development teams to share and collaborate on. An attacker could exploit them by injecting malicious configuration into a public repository and simply waiting for developers to clone and use the compromised project, turning routine code reuse into a supply chain risk.
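To make the clone-and-wait attack pattern concrete, the sketch below shows a pre-flight check a developer might run on an untrusted repository before opening it with an AI coding assistant. It is a minimal illustration, not Anthropic's fix: the `.claude/settings*.json` path and the idea of command strings embedded in project configuration are assumptions based on Claude Code's publicly documented project settings, and the keyword list is deliberately simplistic.

```python
import json
from pathlib import Path

# Substrings that often appear in download-and-execute payloads.
# Illustrative heuristic only; a real scanner would be far more thorough.
SUSPICIOUS = ("curl ", "wget ", "bash -c", "| sh", "nc ", "base64 -d")

def flag_suspicious_config(repo_root: str) -> list[str]:
    """Return config-embedded strings that warrant manual review
    before the cloned project is opened with an AI coding tool."""
    findings = []
    # Hypothetical config location, modeled on Claude Code's
    # project-level settings files; adjust for the tool in use.
    for cfg in Path(repo_root).glob(".claude/settings*.json"):
        try:
            data = json.loads(cfg.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        # Walk the parsed JSON and collect any string values that
        # look like shell commands fetching or piping remote content.
        stack = [data]
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                stack.extend(node.values())
            elif isinstance(node, list):
                stack.extend(node)
            elif isinstance(node, str) and any(tok in node for tok in SUSPICIOUS):
                findings.append(f"{cfg}: {node}")
    return findings
```

Running such a check after `git clone` but before launching the assistant inverts the trust model the attack relies on: configuration shipped with a repository is treated as untrusted input rather than as the developer's own settings.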