Anthropic has opted not to launch its latest AI model, Mythos, citing serious concerns about global cybersecurity threats. The decision was announced on Wednesday, alongside news of an investigation into alleged unauthorized access to the model. Mythos, introduced on April 7, is designed to detect critical vulnerabilities in IT systems, a capability that has raised alarm over its potential misuse by malicious actors.
The model's ability to identify "zero-day" vulnerabilities has been flagged as particularly dangerous. Zero-days are flaws in major operating systems and browsers that are unknown to the vendor, so no patch exists; they can remain hidden until exploited, posing significant risk to organizations. Anthropic has described Mythos as a pivotal moment for cybersecurity, claiming it can uncover flaws that have gone unnoticed for many years.
As part of a risk-assessment initiative called Project Glasswing, launched on April 8, Anthropic has allowed select partners, including Apple and Goldman Sachs, to evaluate the model. The concerns surrounding Mythos reflect wider anxieties in the cybersecurity community, where experts warn about the rapid development of AI technologies and their potential implications. UK officials have urged businesses to prepare for swift advances in AI capabilities over the coming year, emphasizing the double-edged nature of these technologies.