
OpenAI kills “short-lived experiment” where ChatGPT chats could be found on Google

OpenAI removed a short-lived experiment that allowed ChatGPT users to make their conversations discoverable by search engines


A little-known ChatGPT “feature” is now gone. It could be a good thing.

On X, OpenAI Chief Information Security Officer Dane Stuckey announced that OpenAI “removed a feature from ChatGPT that allowed users to make their conversations discoverable by search engines, such as Google.” Stuckey called the whole thing a “short-lived experiment to help people discover useful conversations.”

The feature was entirely opt-in, meaning users had to make certain selections to participate, including “picking a chat to share, then by clicking a checkbox for it to be shared with search engines.”

Explaining why the company rolled back the experiment, Stuckey wrote:

“Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning.

Security and privacy are paramount for us, and we’ll keep working to maximally reflect that in our products and features.”

I was unable to find out when the option was officially introduced; there was no big announcement, which may be one reason for the uproar that followed.

Such an announcement might have helped users make informed decisions. The absence of that guidance, or of any firm information about the feature during its short life, also highlights the way Artificial Intelligence (AI) companies view their users. As one commenter put it:

“The friction for sharing potential private information should be greater than a checkbox or not exist at all.”

Many users are conditioned to check boxes before they can use something new, and they don’t read EULAs or other warnings. They just rapidly tick every box that stands between them and the result they have in mind, as fast as possible.

Even if this experiment was well intentioned, it reminds us of other leaks of private conversations, whether or not a bug was to blame. Either way, it does not help efforts to get the general public to trust AI chatbots.

Many people confide deeply personal secrets to chatbots and seek support for issues that would typically require hours of professional counseling.

OpenAI removed the option that allowed conversations with ChatGPT to be indexed, so newly shared chats will not appear in search results going forward. Still, OpenAI warns that some conversations that were already indexed may remain visible for a while because of search engine caching, even as it works to have that content removed.

Tips for using AI chatbots more safely

Besides the obvious (but often ignored) advice of reading any warnings and privacy policies before using these apps, there are some additional precautions and habits that can help keep your personal conversations private:

  • Don’t share without knowing all the consequences and implications.
  • Anonymize your input. Don’t use (real) names or other Personally Identifiable Information (PII) in your conversations (see the sketch after this list).
  • Don’t share sensitive work or client data.
  • Use up-to-date active anti-malware protection.
  • Limit the data you provide and delete it when possible.
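
To illustrate the anonymization tip, here is a minimal Python sketch of scrubbing a prompt before pasting it into a chatbot. The `redact` helper and its regex patterns are hypothetical examples, not part of any chatbot’s tooling, and simple patterns like these will miss plenty of PII, names included; a dedicated detection tool is a better fit for anything serious.

```python
import re

# Hypothetical patterns for a few common PII types. Real PII detection
# is much harder than this: names, addresses, and account numbers will
# slip straight through simple regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Contact me at jane.doe@example.com or +1 555-867-5309."
    print(redact(prompt))
    # Prints: Contact me at [EMAIL] or [PHONE].
```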

In short, trust an AI chatbot with your private info the same way you would trust a “blabbermouth”—not a whole lot.

