Leaked ChatGPT Chats: Users Treat AI as Therapist, Lawyer, Confidant
Leaked ChatGPT chats reveal users sharing sensitive data, resumes, and seeking advice on mental health, exposing risks of oversharing with AI chatbots.
When thousands of ChatGPT conversations appeared online in August 2025, many people assumed the leak was technical. Instead, the real issue was human behaviour combined with confusing product design. A now-removed feature that allowed users to “make chats discoverable” had turned private conversations into public webpages, indexed by search engines for anyone to find.
Researchers at SafetyDetective analysed a dataset of 1,000 of these leaked conversations, totalling more than 43 million words. Their findings show that people are treating AI tools like therapists, consultants, and even confidants, often sharing information they would normally keep private.
Example of leaked ChatGPT chats on Google (Image via PCMag and Google)
**What People Are Sharing With ChatGPT**
Some of the content revealed in these conversations goes well beyond casual prompts. Users disclosed personally identifiable information such as full names, phone numbers, addresses, and resumes. Others spoke about sensitive topics, including suicidal thoughts, drug use, family planning, and discrimination.
The research also showed that a small fraction of conversations accounted for most of the data. Out of 1,000 chats, just 100 contained more than half of the total words analysed. One marathon conversation stretched to 116,024 words, which would take nearly two full days to type out at average human speed.
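For context, the “nearly two full days” figure follows from simple arithmetic, assuming an average typing speed of roughly 40 words per minute (a commonly cited estimate, not a number from the study):

```python
# Back-of-the-envelope check of the "nearly two full days" claim.
# 40 words per minute is an assumed average typing speed, not a figure from the study.
words = 116_024
words_per_minute = 40

minutes = words / words_per_minute   # about 2,900 minutes
hours = minutes / 60                 # about 48 hours
print(f"{hours:.1f} hours of non-stop typing")  # -> 48.3 hours, roughly two full days
```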
**Professional Advice or Privacy Risk?**
Nearly 60% of the flagged chats fell under what researchers categorised as “professional consultations.” Instead of calling a lawyer, teacher, or counsellor, users asked ChatGPT for guidance on education, legal issues, and even mental health. While this shows the trust people place in AI, it also highlights the risks when the chatbot’s responses are inaccurate or when private details are left exposed.
> “In one case, the AI mirrored a user’s emotional state during a conversation about addiction, escalating rather than de-escalating the tone.”
>
> – Shipra Sanganeria, SafetyDetective
The study highlighted cases where users uploaded entire resumes or sought advice on mental health struggles. In one example, ChatGPT prompted someone to share their full name, phone numbers, and work history while generating a CV, exposing them to identity theft if the chat link was made public. In another case, the AI mirrored a user’s emotional state during a conversation about addiction, escalating rather than de-escalating the tone.
Chart: Top 20 keywords found in the leaked conversations (via SafetyDetective)
**Why This Is a Serious Issue**
The incident points to two problems. First, many users did not fully understand that by making chats “discoverable,” their words could be crawled by search engines and made public. Second, the design of the feature made it too easy for private conversations to end up online.
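To make the first problem concrete: a public webpage is fair game for crawlers unless it explicitly signals otherwise, typically through an X-Robots-Tag response header or a robots meta tag. The sketch below is only a rough illustration of how such a signal can be checked, not OpenAI’s implementation, and the URL is a placeholder:

```python
from urllib.request import urlopen

def signals_noindex(url: str) -> bool:
    """Return True if the page asks search engines not to index it."""
    with urlopen(url) as resp:
        header = resp.headers.get("X-Robots-Tag", "")
        html = resp.read().decode("utf-8", errors="replace").lower()
    # Very rough meta-tag check; a real crawler parses the HTML properly.
    return "noindex" in header.lower() or ("noindex" in html and 'name="robots"' in html)

# Example with a placeholder URL:
# print(signals_noindex("https://example.com/share/abc123"))
```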
Beyond the discoverability problem, the study showed that ChatGPT often “hallucinates” actions, such as claiming it saved a document when it did not. These inaccuracies may seem harmless in casual chats, but they become dangerous when people treat the AI as a reliable professional tool.
Another issue is that when conversations containing sensitive details are publicly available, they can be maliciously exploited. Personal information might be used in scams, identity theft, or doxxing. Even without direct PII, emotionally vulnerable exchanges could be misused in blackmail or harassment.
The researchers claim that OpenAI has never made strong privacy guarantees about how shared conversations are handled. While the feature responsible for this leak has been removed, the underlying user behaviour of treating AI like a safe confidant remains unchanged.
**What Needs to Change**
SafetyDetective recommends two main actions. First, users should avoid putting sensitive personal details into chatbot conversations, no matter how private the interface feels. Second, AI companies need to make their warnings clearer and their sharing features more intuitive. Automatic redaction of PII before a chat is shared could prevent accidental leaks.
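As a rough illustration of what such redaction could look like (a minimal sketch under assumed requirements, not anything OpenAI or SafetyDetective has published), a pre-share filter might strip obvious identifiers such as email addresses and phone numbers before a conversation is turned into a public link:

```python
import re

# Illustrative pre-share PII filter. Real redaction would need far broader coverage
# (names, postal addresses, resume contents, etc.); these patterns only catch
# obvious email addresses and phone-like number sequences.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[REDACTED EMAIL]", text)
    text = PHONE.sub("[REDACTED PHONE]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> Reach me at [REDACTED EMAIL] or [REDACTED PHONE].
```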
The researchers also called for more work on understanding user behaviour. Why do some people pour tens of thousands of words into a single chat? How often do users treat AI like therapists or legal experts? And what are the consequences of trusting a system that can mirror tone, generate misinformation, or fail to protect private data?
**Surprised? Don’t Be!**
These findings should not come as a surprise. Back in February 2025, Hackread.com reported on a major OmniGPT data breach where hackers exposed highly sensitive information online.
The leaked dataset contained more than 34 million lines of user conversations with AI models such as ChatGPT-4, Claude 3.5, Perplexity, Google Gemini, and Midjourney, since OmniGPT combines several advanced models into a single interface.
Alongside the conversations, the breach also exposed around 30,000 user email addresses, phone numbers, login credentials, resumes, API keys, WhatsApp chat screenshots, police verification certificates, academic assignments, office projects, and much more.
What’s worse, OmniGPT did not even bother to respond or address the issue when alerted by Hackread.com, which says a lot about how little regard the company has for user privacy and security.
**The Main Thing**
Nevertheless, SafetyDetective’s analysis and the ChatGPT chat leak are less about hacking or data breaches and more about people trusting AI with secrets they would hesitate to tell another person. When those chats fall into the public domain, the consequences are immediate and personal.
Until AI platforms offer stronger privacy protections and people are more careful with what they share, it will be hard to tell the difference between a private chat and something that could end up public.