Recent findings from the Google Threat Intelligence Group (GTIG) reveal a disturbing trend: cybercriminals are increasingly turning to artificial intelligence (AI) for malicious ends. Among the technologies being exploited is Google's own Gemini, which has been used in phishing schemes aimed at stealing sensitive data. GTIG identified several groups of "threat actors" that have attempted to leverage AI for activities ranging from intellectual property theft to the creation of sophisticated malware.
One notable example involved the group "UNC6418," which targeted individuals in Ukraine's defense sector with phishing attacks designed to gather sensitive information. AI's ability to generate realistic content has made such scams more effective, allowing attackers to produce emails that closely resemble legitimate communications. Another group, "UNC2970," associated with the North Korean government, used AI to impersonate recruiters in order to lure cybersecurity professionals.
GTIG also highlighted the emergence of the COINBAIT phishing kit, designed specifically to harvest credentials from cryptocurrency investors. Cybercriminals are likewise using AI to automate malware development, employing what GTIG describes as "agentic AI capabilities" to create malicious software with minimal human oversight. One such case is "UNC795," which attempted to develop an AI-integrated code auditing tool.