Concerns about integrating generative AI into machine-learning systems have been raised by Michael Lones, a computer scientist at Heriot-Watt University. His recent publication in the Cell Press journal Patterns discusses the risks of relying on large language models (LLMs), arguing that their use can complicate systems and introduce new vulnerabilities.
Lones emphasizes that while generative AI can enhance efficiency in tasks like coding and data labeling, its unpredictable interactions within machine-learning workflows can lead to significant complications. He identifies four key areas where generative AI plays a role: decision-making, pipeline design, data generation, and result analysis. The cumulative effect of these roles can create opaque systems that are difficult to audit.
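To make the data-generation concern concrete, the sketch below shows a hypothetical workflow in which an LLM, rather than a human reviewer, labels the training data for a downstream classifier. The scenario, the `llm_label` function, and the example texts are illustrative assumptions, not material from Lones's paper; the point is only that a systematic labeling mistake leaves no trace in the trained model.

```python
# Illustrative sketch only: an LLM stand-in labels training data, and its
# systematic mistake is silently baked into the downstream classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def llm_label(text: str) -> str:
    """Hypothetical LLM-based labeler. A real call would be non-deterministic;
    here one plausible-but-wrong rule is hard-coded for demonstration."""
    lowered = text.lower()
    if "refund" in lowered or "broken" in lowered:
        return "complaint"  # mislabels routine billing requests as complaints
    return "other"


raw_texts = [
    "The product arrived broken and unusable",
    "Please process my refund for the duplicate charge",  # billing request, not a complaint
    "Loving the new update, great work",
    "How do I change my shipping address?",
]

# Labels come from the LLM, not from human review, so any bias in
# llm_label() becomes part of the training data unnoticed.
labels = [llm_label(t) for t in raw_texts]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(raw_texts, labels)

# Training succeeds and reports nothing unusual; the labeling error is
# invisible unless someone audits the generated labels themselves.
print(model.predict(["I was charged twice and want a refund"]))
```

In a workflow like this, the finished model looks unremarkable, which is the kind of opaque, hard-to-audit behavior the article describes.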
He urges developers to weigh the gains offered by generative AI against its inherent risks, particularly in critical fields such as healthcare and finance. Lones illustrates the stakes with examples of a hospital triage system and a loan-approval process that rely on generative AI, where undetected errors could have severe consequences.