A recent study reveals that artificial intelligence (AI) is increasingly empowering malicious hackers to identify anonymous social media users through publicly accessible information. Researchers Simon Lermen and Daniel Paleka demonstrated that large language models (LLMs), which drive technologies like ChatGPT, can match anonymous accounts to real identities with surprising accuracy.
The researchers conducted an experiment in which they fed anonymous profiles to an AI system, showing how personal details scattered across posts, such as mentions of academic challenges or a local park, could be aggregated to unmask individuals. Though their scenarios were hypothetical, the potential repercussions are significant. The study raises urgent questions about privacy in the digital age and warns that governments could use AI to monitor dissidents.
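The aggregation step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code: the function name, prompt wording, and example posts are all invented, and in practice the assembled prompt would be sent to an LLM for inference.

```python
# Hypothetical sketch of how scattered public details from an anonymous
# account could be combined into a single de-anonymization prompt for an LLM.
# All names and example data are invented for illustration.

def build_deanonymization_prompt(posts: list[str]) -> str:
    """Combine an account's public posts into one inference prompt."""
    clues = "\n".join(f"- {p}" for p in posts)
    return (
        "The following posts were written by one anonymous account.\n"
        "Infer likely attributes (location, occupation, age range):\n"
        f"{clues}"
    )

posts = [
    "Struggling with my thermodynamics exam this week.",
    "Beautiful morning run around Dolores Park again.",
]
prompt = build_deanonymization_prompt(posts)
print(prompt)
```

The point of the sketch is that each post is individually innocuous; it is the cheap, automated aggregation that makes identification feasible at scale.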
Moreover, the findings indicate that hackers could employ these tools for targeted scams, exploiting the ease with which LLMs aggregate information. Lermen emphasized that publicly available data can readily be weaponized for malicious purposes, including spear-phishing attacks. Peter Bentley, a professor at University College London, warned of the risks posed by products aimed at de-anonymizing users, highlighting the danger of wrongful accusations stemming from LLM inaccuracies.