Hacker Breaches OpenAI’s Internal Systems, Steals Sensitive AI Design Information

A hacker accessed OpenAI’s internal messaging systems last year, extracting sensitive information about AI technologies, but did not compromise core systems or customer data. OpenAI has since taken measures to counter covert operations exploiting its AI models, amidst ongoing governmental efforts to safeguard advanced AI technologies.


5 July 2024 – The New York Times reported on Thursday that a hacker infiltrated OpenAI’s internal messaging systems last year and obtained sensitive information about the company’s artificial intelligence technologies. The hacker lifted details from an internal online forum where OpenAI employees discussed the company’s latest AI advancements, according to sources familiar with the matter.

Despite the breach, the hacker did not access the core systems where OpenAI develops and maintains its AI models, including the widely used ChatGPT. OpenAI, which is backed by Microsoft Corp, did not immediately respond to a Reuters request for comment on the incident. The company’s executives disclosed the breach to employees at an all-hands meeting in April last year and informed the board, but chose not to make the incident public because no customer or partner information had been compromised.

The executives did not consider the breach a national security threat, believing the hacker to be an independent actor with no known ties to any foreign government, and consequently did not report the incident to federal law enforcement. Separately, OpenAI disclosed in May that it had thwarted five covert influence operations seeking to exploit its AI models for deceptive purposes, underscoring ongoing concerns about the misuse of AI technology.

The Biden administration is preparing new measures to protect U.S. AI technology from adversaries such as China and Russia, with preliminary plans calling for safeguards around advanced AI models including ChatGPT. In May, sixteen companies developing AI pledged at a global meeting to advance the technology safely, as regulators worldwide struggle to keep pace with rapid innovation and emerging risks.

[source]

Author: Terry KS
