US Banks Brace for Potential Common AI Testing Standards as UK Sets Precedent

The UK government may implement a uniform testing standard for AI models in finance to ensure safety and reduce redundancy, responding to concerns from the Bank of England.


The UK government is considering a standardized testing framework for general-purpose AI systems used by financial institutions. The idea emerged from discussions last month between the Department for Science, Innovation and Technology and Harriet Rees, Chief Information Officer at Starling Bank and the government's designated AI champion for financial services.

The Bank of England (BoE) has raised concerns about the adequacy of AI model assessments, warning that banks monitor their AI models too infrequently. Rees, who also co-chairs the BoE's AI task force, noted that while many firms deploy AI models, none of these systems has yet undergone a comprehensive independent evaluation.

The proposed initiative aims to make testing practices consistent, cut redundant effort across institutions, and verify that US-developed algorithms meet the required standards. There is currently no legal requirement for AI systems to pass independent evaluation before deployment in regulated areas, though internal reviews are common practice among banks.

Rees suggested that assessment of these AI models be overseen by an independent entity, specifically the AI Security Institute (AISI), rather than a single financial regulator, since the use of AI extends well beyond financial services. Following a meeting in early March, the proposal received a positive response from Ollie Ilott, director-general for AI and founder of AISI.
