SAPwned: SAP AI vulnerabilities expose customers' cloud environments and private AI artifacts
The Wiz Research Team identified vulnerabilities in SAP AI Core, enabling unauthorized access to customer data. Reported issues included network bypass, AWS token leaks, and exposure of sensitive information. SAP addressed and resolved all vulnerabilities.
The Wiz Research Team discovered vulnerabilities in SAP AI Core that could allow malicious actors to compromise the service and access customer data. By exploiting these vulnerabilities, attackers could gain access to customers' private files, cloud credentials, and internal artifacts. The issues included bypassing network restrictions, leaking AWS tokens, exposing user files through unauthenticated EFS shares, compromising the internal Docker Registry and Artifactory, and exposing Google access tokens and customer secrets through an unauthenticated Helm server. These vulnerabilities could have led to unauthorized access to sensitive data, manipulation of AI models, and potential supply-chain attacks. The research highlights the importance of improving isolation and sandboxing standards in AI infrastructure to prevent such security breaches. All vulnerabilities were reported to SAP and have been fixed. No customer data was compromised during the research.
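As a rough illustration of the attack surface described above (not the researchers' exact steps), the sketch below shows how code running inside a tenant's training workload could probe internal services to see whether they answer without authentication. The hostnames, ports, and paths are hypothetical placeholders, not SAP AI Core's real internal addresses.

```python
# Minimal, hypothetical sketch of the pattern described above: code running
# inside a tenant's AI workload probes internal cluster services for
# unauthenticated responses. All endpoints below are illustrative placeholders.
import requests

# Internal services a tenant workload should never be able to reach directly.
INTERNAL_ENDPOINTS = [
    "http://loki.logging.svc.cluster.local:3100/loki/api/v1/labels",          # log store (possible token leakage)
    "http://docker-registry.internal.example:5000/v2/_catalog",               # private image registry listing
    "http://artifactory.internal.example:8081/artifactory/api/repositories",  # internal artifact store
]


def probe(url: str, timeout: float = 3.0) -> None:
    """Print whether an internal endpoint responds without any credentials."""
    try:
        resp = requests.get(url, timeout=timeout)
        print(f"[+] {url} -> HTTP {resp.status_code}, {len(resp.content)} bytes, no auth required")
    except requests.RequestException as exc:
        print(f"[-] {url} unreachable: {exc}")


if __name__ == "__main__":
    for endpoint in INTERNAL_ENDPOINTS:
        probe(endpoint)
```

In a properly isolated setup, per-tenant network policies or separate clusters would make every one of these probes fail.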
Related
'Skeleton Key' attack unlocks the worst of AI, says Microsoft
Microsoft warns of "Skeleton Key" attack exploiting AI models to generate harmful content. Mark Russinovich stresses the need for model-makers to address vulnerabilities. Advanced attacks like BEAST pose significant risks. Microsoft introduces AI security tools.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. The incident raised concerns about foreign theft. OpenAI responded by enhancing security measures and exploring regulatory frameworks.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. Concerns over national security risks arose, leading to internal security debates and calls for tighter controls on A.I. labs.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing discussions on A.I. tech. No code was compromised. The incident sparked internal debates on security and A.I. risks amid global competition.
OpenAI was hacked, year-old breach wasn't reported to the public
Hackers breached OpenAI's internal messaging systems, exposing AI technology details, raising national security concerns. OpenAI enhanced security measures, dismissed a manager, and established a Safety and Security Committee to address the breach.
- Some commenters emphasize that the vulnerabilities are due to poor cloud computing platform security, not the AI product itself.
- There are concerns about the ethics of companies like Wiz conducting unauthorized network penetration to find vulnerabilities.
- Commenters are surprised by outdated software configurations, such as the presence of deprecated Tiller (Helm v2) instances; a minimal audit sketch follows this list.
- Questions are raised about SAP's internal security measures and alert systems, suggesting a need for better monitoring and response.
- Some see the incident as a promotional opportunity for SAP's AI products, despite the security flaws.
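On the point about deprecated Tiller instances, here is a hedged sketch (assuming the kubernetes Python client and valid cluster credentials) of how an operator might sweep a cluster for leftover Helm v2 Tiller deployments. "tiller-deploy" in kube-system is only Tiller's default install location, so this scan matches on name across all namespaces.

```python
# Hedged sketch: audit a cluster for leftover Helm v2 Tiller deployments.
# Assumes the kubernetes Python client is installed and kubeconfig (or
# in-cluster credentials) grants list access to deployments.
from kubernetes import client, config


def find_tiller_deployments() -> list[str]:
    """Return namespace/name pairs of deployments that look like Helm v2 Tiller."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    hits = []
    for deploy in apps.list_deployment_for_all_namespaces().items:
        name = deploy.metadata.name or ""
        if "tiller" in name.lower():
            hits.append(f"{deploy.metadata.namespace}/{name}")
    return hits


if __name__ == "__main__":
    for hit in find_tiller_deployments():
        print(f"[!] possible Tiller (Helm v2) deployment: {hit}")
```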
All the major clouds use VM boundaries and separate K8s clusters between customers. Microsoft was similarly bitten a few years ago with one of their function products that expected K8s to be the primary security boundary.
“We thanked them for their co-operation”. Sounds kinda like extortion.
It's possibly the fastest rocket for an enterprise software company ever: $100M ARR in just 1.5 years, and $350M by the end of year three.
https://www.wiz.io/blog/100m-arr-in-18-months-wiz-becomes-th...
And the first test is running, and no one is screaming yet, so fingers crossed.
https://www.bleepingcomputer.com/news/security/researcher-re...