CISA Faces Scrutiny as Official Compromises Security by Using ChatGPT for Sensitive Files

A senior CISA official's upload of sensitive documents to ChatGPT exposes a serious lapse in federal cybersecurity protocol and raises urgent questions about AI data safety.

In an incident that raises alarms about federal cybersecurity protocols, Madhu Gottumukkala, a senior official at the Cybersecurity and Infrastructure Security Agency (CISA), uploaded documents marked "for official use only" to OpenAI's ChatGPT. The lapse underscores how inconsistently security measures are being followed within government agencies, particularly as they experiment with generative AI technologies.

The documents related to government contracting and were submitted to the consumer version of ChatGPT, which by default may use user input to improve its models. Sensitive government information could therefore end up in OpenAI's training datasets, where it would be accessible to company employees and potentially exposed to other users. The Department of Homeland Security has approved specific AI platforms for official work precisely to mitigate such risks.

Users of the free version of ChatGPT may unknowingly feed their documents into OpenAI's ecosystem, since the opt-out process is poorly understood among government staff. Unlike enterprise versions, which contractually isolate customer data, the consumer service grants OpenAI broad rights to use submitted content, a serious concern when that content is a government document.
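One mitigation agencies could deploy is a pre-submission check that blocks marked documents before they ever reach a consumer AI tool. The sketch below is purely illustrative, not an official DHS control or a complete CUI detection scheme; the marking list and function names are assumptions made for the example.

```python
import re

# Hypothetical guardrail: scan outgoing text for common U.S. government
# dissemination-control markings before it leaves the organization.
# The pattern list below is illustrative and deliberately incomplete.
MARKING_PATTERNS = [
    r"\bFOR OFFICIAL USE ONLY\b",
    r"\bFOUO\b",
    r"\bCONTROLLED UNCLASSIFIED INFORMATION\b",
    r"\bCUI\b",
    r"\bLAW ENFORCEMENT SENSITIVE\b",
]
MARKING_RE = re.compile("|".join(MARKING_PATTERNS), re.IGNORECASE)

def is_safe_to_upload(text: str) -> bool:
    """Return False if the text carries a dissemination-control marking."""
    return MARKING_RE.search(text) is None

if __name__ == "__main__":
    doc = "FOR OFFICIAL USE ONLY\nDraft statement of work for contract..."
    if not is_safe_to_upload(doc):
        print("Blocked: document carries a control marking; "
              "do not submit it to a consumer AI service.")
```

A check this simple obviously cannot catch sensitive content that lacks a marking, but it would have flagged documents explicitly stamped "for official use only" like those at issue here.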

Moreover, once information enters a large language model's training pipeline, removing it completely is extremely difficult. Research has shown that these models can occasionally regurgitate training data verbatim, depending on how the data was incorporated, an awkward position for CISA, the agency charged with defending federal networks from digital threats.
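The extraction research referenced above typically works by prompting a model with the start of a string suspected to be in its training set and checking whether the model reproduces the remainder word-for-word. The following is a minimal, illustrative sketch of that idea, assuming the open-source Hugging Face transformers library and the small public gpt2 model; it says nothing about ChatGPT's internals, and the helper name is an invention for this example.

```python
# Toy verbatim-completion probe in the spirit of training-data extraction
# research: give the model a prefix and check whether greedy decoding
# reproduces the expected continuation exactly. Requires the Hugging Face
# `transformers` package; gpt2 stands in for any locally runnable model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def completes_verbatim(prefix: str, expected: str) -> bool:
    """Greedy-decode a continuation and compare it to the expected text."""
    inputs = tokenizer(prefix, return_tensors="pt")
    n_new = len(tokenizer(expected)["input_ids"])
    output = model.generate(
        **inputs,
        max_new_tokens=n_new,
        do_sample=False,  # greedy decoding: the model's most likely continuation
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return continuation.strip().startswith(expected.strip())

# A famous, widely duplicated phrase is likely (though not guaranteed)
# to be memorized by even a small model trained on web text.
print(completes_verbatim("We hold these truths to be",
                         " self-evident, that all men are created equal"))
```

Even a probe this crude illustrates why data that should never have been submitted is so hard to claw back once a model has trained on it.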
