OpenAI was hacked; year-old breach wasn't reported to the public
Hackers breached OpenAI's internal messaging systems, exposing details of the company's AI technology and raising national security concerns. In response, OpenAI strengthened its security measures, dismissed a program manager, and established a Safety and Security Committee.
Hackers breached OpenAI's internal messaging systems, exposing details of the company's AI technologies. The hacker gained access to an internal online forum where employees discussed the company's latest technology, raising concerns about national security and the potential for leaks to foreign adversaries. While sensitive information was exposed, OpenAI's core systems remained uncompromised. The breach, disclosed to employees in April 2023 but never made public, drew criticism of the company's security practices. OpenAI dismissed a program manager for leaking information, denying that the dismissal was politically motivated. In response, the company has strengthened its security measures and established a Safety and Security Committee. The incident heightened fears that AI technology could leak to countries such as China, although OpenAI maintains that its current systems do not pose a significant national security threat. The breach has also prompted discussion of regulating AI technologies and imposing penalties for misuse. Meanwhile, Chinese AI researchers are advancing rapidly, prompting calls for tighter controls on AI development to mitigate future risks and maintain technological competitiveness.
Related
OpenAI's ChatGPT Mac app was storing conversations in plain text
OpenAI's ChatGPT Mac app had a security flaw: it stored conversations in plain text, leaving them easily accessible. After fixing the flaw by encrypting the data, OpenAI emphasized user security. The issue raised concerns about unauthorized access.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. The incident raised concerns about foreign theft. OpenAI responded by enhancing security measures and exploring regulatory frameworks.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. Concerns over national security risks arose, leading to internal security debates and calls for tighter controls on A.I. labs.
Hacker Stole Secrets from OpenAI
A hacker breached OpenAI in 2023, stealing discussions but not critical data. Concerns arose about security measures. Aschenbrenner was fired for raising security concerns. The incident raised worries about AGI technology security.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing discussions on A.I. tech. No code was compromised. The incident sparked internal debates on security and A.I. risks amid global competition.
The usual pattern is that if there are no logs saying something bad actually happened, then there's certainly nothing to say that it did, even though some terribly guessable credentials were used for ages on something publicly exposed. I know, they know, but I was told in no uncertain terms to drop it.
Nothing to see here, move along. Work to be done, money to be made.
Right now my ChatGPT-4 history is full of chats I didn't create, on subjects ranging from corporate governance to Roblox scripting to somebody's math homework. It's only a matter of time before this bug leaks sensitive personal data. I spent 10 minutes looking for a way to report it, but they have successfully insulated themselves from any contact with their (paying) customers.
Pretty annoying, and not something you expect from a supposedly security-savvy company... although that expectation is certainly changing.
Actual article: https://www.nytimes.com/2024/07/04/technology/openai-hack.ht...
More discussion: https://news.ycombinator.com/item?id=40887619
A poorly written article regurgitating the NYT story with uninformed, alarmist, podcast-tier 'analysis'.
Jog on.
If the internal culture is to keep problems under wraps to maintain appearances, this seems like it might backfire at some point.
The article just rambles about unnamed, uninformed AI-phobes being concerned about US national security in relation to China because of some unspecified OpenAI internal information that might have leaked.