Healthcare organizations are increasingly recognizing the potential of generative AI to improve operational efficiency, even as they face significant cybersecurity challenges. As these organizations adopt more internet-connected tools, they become prime targets for cyberattacks, raising concerns about the security of AI products themselves. Taylor Lehmann, director of the office of the chief information security officer at Google Cloud, discussed the implications of AI for cybersecurity in a recent interview, emphasizing the dual threat posed by cybercriminals and the technologies they exploit.
Lehmann pointed out that as AI becomes integral to healthcare, it is paramount that organizations identify and mitigate the risks associated with these systems. He expressed concern over the difficulty of detecting inaccuracies in AI outputs, especially when they result from malicious interference. To address these challenges, he highlighted the importance of maintaining transparency in AI models through methods such as model cards and cryptographic signing of model binaries.
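To make the signing idea concrete, the hypothetical sketch below checks a model artifact's integrity before it is loaded. It is a simplified illustration, not Lehmann's or Google Cloud's method: production pipelines would use asymmetric signatures (for example, Sigstore-style signing), whereas this sketch substitutes an HMAC over the file's SHA-256 digest so it runs with only the Python standard library. The key name and model bytes are invented for the example.

```python
import hashlib
import hmac

# Assumption for this sketch: a symmetric key provisioned out of band.
# Real model signing would use an asymmetric key pair instead, so that
# consumers can verify without being able to forge signatures.
SECRET_KEY = b"example-signing-key"

def sign_model(model_bytes: bytes) -> str:
    """Return a hex signature over the SHA-256 digest of the model file."""
    digest = hashlib.sha256(model_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_model(model_bytes), signature)

weights = b"...model weights..."      # stand-in for a real artifact
sig = sign_model(weights)
print(verify_model(weights, sig))                # True: artifact intact
print(verify_model(weights + b"x", sig))         # False: tampering detected
```

A check like this, run before a model is loaded into a clinical workflow, is one way an organization could detect the kind of malicious interference with AI outputs that Lehmann warns about.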
Looking ahead, Lehmann envisions a transformation in the skill sets healthcare organizations will need to manage these risks effectively. He anticipates that new security roles will emerge as organizations strengthen their defenses against AI-related vulnerabilities, and he advocates for robust identity controls and dedicated AI red teams that proactively test system security.