
US Cybersecurity Chief Shared Sensitive Files on ChatGPT: Report

In a development that has sparked concern across Washington’s cybersecurity establishment, the acting head of the United States’ top cyber defence agency reportedly uploaded sensitive internal documents to the public version of ChatGPT. According to a media report, Madhu Gottumukkala, the Indian-origin acting director of the Cybersecurity and Infrastructure Security Agency (CISA), shared contracting and cybersecurity-related materials with the AI platform last summer for official work purposes.

The incident has drawn particular attention because of the irony involved: CISA is the federal agency tasked with safeguarding US government networks from sophisticated cyber threats, including those backed by hostile states. The reported use of a publicly accessible AI tool to handle sensitive material triggered automated security alerts and prompted an internal review within the agency.

What the Report Claims

According to a report by Politico, Gottumukkala uploaded documents related to contracting and cybersecurity operations to ChatGPT to assist with work-related tasks. While there is no indication that the materials were classified at the highest level, they were reportedly sensitive enough to raise red flags within the agency's internal monitoring systems.

The uploads reportedly occurred last summer and were flagged by automated safeguards designed to detect potential data leaks or policy violations. Following these alerts, CISA initiated an internal review to assess whether agency protocols governing the use of external digital tools had been breached.

Why the Incident Matters

The episode has generated debate because of CISA’s central role in protecting federal civilian networks and critical infrastructure from cyberattacks. As acting director, Gottumukkala is responsible for overseeing defences against advanced cyber operations, including those linked to foreign governments and organised hacking groups.

Cybersecurity experts have repeatedly warned government officials against uploading sensitive or proprietary information to publicly available AI platforms. Such tools often store user inputs for system improvement, raising concerns about data retention, access, and potential exposure.

Growing Scrutiny of AI Use in Government

The reported incident comes amid increasing scrutiny of how artificial intelligence tools are used within government agencies. In recent years, US federal departments have issued advisories and internal guidelines restricting the use of generative AI platforms like ChatGPT for official work, especially when it involves sensitive or non-public information.

Several agencies have either banned or limited access to such tools on official networks, citing risks related to data security, confidentiality, and compliance with federal information protection standards.

CISA’s Role and Responsibilities

The Cybersecurity and Infrastructure Security Agency operates under the US Department of Homeland Security and plays a pivotal role in coordinating national responses to cyber incidents. Its mandate includes securing election infrastructure, protecting critical sectors such as energy and transportation, and assisting both public and private entities in strengthening cyber resilience.

Any perceived lapse in internal cybersecurity practices by its leadership, therefore, attracts heightened attention and criticism, even if no actual data breach has been confirmed.

No Evidence of External Breach So Far

As of now, there is no public indication that the documents shared on ChatGPT were accessed by unauthorised parties or misused. The report does not suggest malicious intent, and it remains unclear whether the internal review has led to any disciplinary action.

However, the incident has renewed calls within policy circles for clearer, stricter rules on the use of generative AI in sensitive government environments.

Broader Implications

The controversy highlights the challenges governments face as they adapt to rapidly evolving AI technologies. While tools like ChatGPT offer efficiency and analytical support, their use raises complex questions about data governance, accountability, and security.

For India and other countries observing the US experience, the episode serves as a cautionary tale on balancing innovation with robust safeguards. Official guidance on digital tools, including AI platforms, is increasingly seen as essential to prevent inadvertent security lapses.
