The Rabbit R1 has been logging users' chats – with no way to wipe them
The Rabbit R1 AI assistant device stored chat logs without a deletion option. A recent update adds a Factory Reset option, reduces on-device logging, and prevents stored pairing data from accessing the Rabbithole journal, addressing privacy concerns and a security breach.
The Rabbit R1 AI assistant device has been storing users' chat logs without a way to delete them, as reported in a company security bulletin. A recent software update adds a Factory Reset option to wipe the device completely, addressing this privacy concern. The update also prevents stored pairing data from accessing the Rabbithole journal, reducing the risk of exposing users' saved requests and photos if a device is stolen or hacked, and Rabbit has reduced the amount of log data stored on the device. The company acknowledged a security breach involving leaked API keys traced back to an employee, who has since been terminated. Rabbit says it is strengthening its security measures and conducting a thorough review of its device logging practices to prevent similar incidents, and that there is no evidence pairing data was misused to access former device owners' journal data.
Related
Rabbit data breach: all r1 responses ever given can be downloaded
A data breach at Rabbit Inc. exposed critical API keys for ElevenLabs, Azure, Yelp, and Google Maps, compromising personal information and enabling malicious actions. Rabbit Inc. has not addressed the issue, urging users to unlink Rabbithole connections.
Researchers Prove Rabbit AI Breach by Sending Email to Us as Admin
Researchers found a security flaw in Rabbit R1 AI assistant, exposing hardcoded API keys. Hackers could access sensitive data, impersonate the company, and send emails. Rabbitude group aims to improve security and functionality.
Rabbit failed to properly reset keys: emails can be sent from rabbit.tech domain
Rabbit Inc. failed to reset all keys, leaving a fifth API key active, potentially exposing email history and user data. Despite investigations, no evidence of data breaches or system compromises found.
R1 jailbreakers find security flaw in Rabbit's code
A group of R1 jailbreakers discovered a security flaw in Rabbit's code, exposing hardcoded API keys. Rabbit took action after a month, revoking most compromised keys. The breach complicates Rabbit's recovery from R1 AI gadget issues.
OpenAI's ChatGPT Mac app was storing conversations in plain text
OpenAI's ChatGPT Mac app had a security flaw storing conversations in plain text, easily accessible. After fixing the flaw by encrypting data, OpenAI emphasized user security. Unauthorized access concerns were raised.
Nothing about this feels like a well-run engineering team. I understand it’s a startup, but all of this is just weird.
But stepping back, it is not at all surprising to hear about this type of flaw. MOST startups have these kinds of flaws. They spend as little as possible on things like security or privacy: enough to minimally meet some checklist, but not enough to actually respect customers and their data. In some ways it is understandable, since they have to do whatever they can to survive and the odds of that are low to begin with - so their energy and time go to other things.
I bet most customers would be frightened if they knew exactly how cavalier most startups truly are.
I'll make a short shameless plug: If you would like to use generative AI (OpenAI/Google/Anthropic/Open source) but don't want to run everything yourself, Cognos[0] stores your conversations encrypted so there are no risks of hacks, leaks or your data being used for training.
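For illustration, here is a minimal sketch of what encrypting chat logs at rest can look like, using Python and the cryptography package's Fernet API. This is a generic example, not Cognos's or Rabbit's actual implementation; the file name and key handling are hypothetical, and a real deployment would keep the key in a secrets manager or KMS rather than generating it next to the data.

    # Minimal sketch: encrypt chat logs before they touch disk.
    # Assumes `pip install cryptography`; names here are illustrative only.
    from cryptography.fernet import Fernet
    import json, pathlib

    key = Fernet.generate_key()   # in practice: load from a secrets manager
    fernet = Fernet(key)

    def save_conversation(path: pathlib.Path, messages: list[dict]) -> None:
        """Serialize a conversation and write only the ciphertext to disk."""
        plaintext = json.dumps(messages).encode("utf-8")
        path.write_bytes(fernet.encrypt(plaintext))

    def load_conversation(path: pathlib.Path) -> list[dict]:
        """Read the ciphertext back, decrypt it, and deserialize."""
        return json.loads(fernet.decrypt(path.read_bytes()))

    log = pathlib.Path("chat_log.enc")
    save_conversation(log, [{"role": "user", "content": "What's the weather?"}])
    print(load_conversation(log))

The point is simply that anything written to storage is ciphertext, so a stolen or resold device (or a leaked backup) exposes nothing without the key - the failure mode described in both the Rabbit and ChatGPT Mac app stories above.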