Recent findings indicate that more than half of the AI models assessed in mid-2025 failed to meet basic standards for vulnerability detection. That contrasts sharply with a more recent study reporting that every model tested was able to identify and exploit vulnerabilities, a marked jump in how effective these models have become when turned to malicious purposes.
Rik Ferguson, VP of security intelligence at Forescout, shared these insights during a media briefing at the company's Vedere Labs in Eindhoven. He described a clear shift in the cybercriminal landscape: hackers are increasingly adopting mainstream commercial AI models, such as Anthropic's Claude, rather than relying on underground offerings like WormGPT.
Ferguson observed that conversations on underground forums have evolved, with earlier skepticism toward AI giving way to recommendations and tutorials for using it in cyber attacks. Both Anthropic and OpenAI are aware of the trend and are taking measures to curb misuse; Anthropic warned last September that its tools were being weaponized.