The UK government is considering a standardized testing framework for general-purpose AI systems used by financial institutions. The idea emerged from discussions last month between the Department for Science, Innovation and Technology and Harriet Rees, Chief Information Officer at Starling Bank and the designated AI champion for financial services.
The Bank of England (BoE) has raised concerns about the adequacy of AI model assessments, warning that banks monitor their AI models too infrequently. Rees, who also co-chairs the BoE's AI task force, noted that while many firms deploy AI models, these systems have yet to undergo a comprehensive independent evaluation.
The proposed initiative aims to create consistency in testing practices, reduce duplicated effort across institutions, and verify that US-developed algorithms meet the required standards. Currently, there is no legal requirement for AI systems to undergo independent evaluation before deployment in regulated areas, although internal reviews are common practice among banks.
Rees suggested that an independent entity, specifically the AI Security Institute (AISI), should oversee the assessment of these AI models rather than a single financial regulator, since the use of AI extends well beyond financial services. Following a meeting in early March, the proposal received a positive response from Ollie Ilott, the director-general for AI and founder of AISI.