Grok chats show up in Google searches
Grok AI chats that users meant to share with individual people were in fact published to the broader web and searchable by anyone.
I’m starting to feel like a broken record, but you should know that yet another AI has been found sharing private conversations in a way that let Google index them, so they now turn up in search results.
It’s déjà vu in the world of AI: another day, another exposé about chatbot conversations being leaked, indexed, or made public. We have written about the share option in ChatGPT that was swiftly removed because users seemed oblivious to the consequences, and about Meta AI first making conversations discoverable via search engines and later exposing them due to a bug. In another leak we looked at an AI bot used by McDonald’s to process job applications. And, not to forget, the AI girlfriend fiasco, where a hacker was able to steal a massive database of users’ interactions with their sexual partner chatbots.
In some of these cases, the developers assumed it was clear to users that the “Share” option made their conversations publicly accessible, but in reality the users were just as surprised as the people who found those conversations.
The same thing must have happened at Grok, the AI chatbot developed by Elon Musk’s xAI and launched in November 2023. When Grok users pressed a button to share a transcript of their conversation, it also made that conversation searchable, and, according to Forbes, this sometimes happened without users’ knowledge or permission.
For example, when a Grok user wants to share a conversation with another person, they can use the “Share” button to create a unique URL and send it to that person. What many users didn’t realize is that pressing “Share” also made the conversation visible to search engines like Google, Bing, and DuckDuckGo, and therefore findable by anyone.
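For context: search engines index any publicly reachable page by default, so a “share” page stays out of search results only if the service explicitly opts out. Below is a minimal sketch, in Python with Flask, of how a service could serve shared transcripts while telling crawlers not to index them. The route, names, and in-memory store are hypothetical illustrations, not xAI’s actual implementation.

```python
# Minimal sketch (Flask; hypothetical names, not xAI's implementation):
# serving a shared transcript while opting the page out of search indexing.
from flask import Flask, Response, abort

app = Flask(__name__)

# Stand-in for a real transcript store.
TRANSCRIPTS = {"abc123": "User: ...\nGrok: ..."}

@app.route("/share/<share_id>")
def shared_transcript(share_id: str):
    text = TRANSCRIPTS.get(share_id)
    if text is None:
        abort(404)
    resp = Response(text, mimetype="text/plain")
    # The standard opt-out: compliant crawlers (Googlebot, Bingbot, etc.)
    # will neither index this page nor follow links on it.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

Well-behaved crawlers honor the X-Robots-Tag header (or an equivalent noindex robots meta tag in the page’s HTML), which is why shared pages that lack such an opt-out can end up in search results.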
Even though account details may be hidden in shared chatbot transcripts, the prompts (the instructions written by the user) may still contain personal or sensitive information about someone.
Forbes reported that it was able to view “conversations where users asked intimate questions about medicine and psychology.” And in one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab.
I have said this before, and I’ll probably have to say it again until privacy is baked deeply into the DNA of AI tools, rather than patched on as an afterthought: We have to be careful about what we share with chatbots.
How to safely use AI
While we continue to argue that AI is developing too fast for security and privacy to be baked into the technology, there are some things you can keep in mind to make sure your private information stays safe:
- If you’re using an AI developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you are not logged in to that social media platform. Your conversations could be tied to your social media account, which might contain a lot of personal information.
- When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
- Do not feed any AI your private information.
- Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to summarize the main points of concern.
- Never share personally identifiable information (PII).
We don’t just report on threats – we help protect your social media
Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.