Chrome extension slurps up AI chats after users installed it for privacy
The extension technically disclosed its AI data collection, but not in a way most users would recognize—and so far outside user expectations that most people would never knowingly agree to it.
The next time you tell an AI chat assistant your deepest secrets, think twice; you never know who (or what) might be listening. More than seven million users of a VPN extension for Google Chrome and Microsoft Edge found that out the hard way this week after researchers at Koi Security revealed the browser extension had been logging users’ AI chats and sending them to a data broker.
Urban VPN Proxy looked like a reputable program. It sat in the Chrome Web Store with a 4.7-star rating and Google’s “Featured” badge, which is meant to indicate that an extension meets higher standards for user experience and design. More than six million people downloaded the Chrome version of the tool, which ironically claims to protect users’ online privacy. Another 1.3 million installed it on Microsoft Edge.
The extension was originally benign, but on July 9, 2025, its publisher, Urban Cybersecurity, shipped version 5.5.0. According to Koi Security, that update introduced code that intercepted every conversation users had with eight AI assistant platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), and Meta AI.
The extension intercepted users’ chat prompts in the browser, along with the AI’s responses. It then reportedly packaged them up and sent them to Urban Cybersecurity’s parent company, BiScience (B.I Science (2009) Ltd), a data broker that collects browsing history and device IDs from millions of users.
As Koi Security points out in its report, Urban Cybersecurity does mention AI data collection in the consent screen that it shows during product setup. It says:
“…we process certain browsing data such as pages you visit, your network connection, ChatAI communication, and security signals, as outlined in our Privacy Policy.”
The privacy policy further explains what data it collects and says that it discloses the data for marketing analytics purposes.
There are two problems with that.
First, the extension silently auto-updates on Chrome and Microsoft Edge browsers, as Koi Security points out. That means users who installed an earlier, non-harvesting version would have been automatically upgraded to the chat-slurping version in July and been none the wiser—unless they happened to read updated privacy policies for fun.
Second, the Chrome Web Store listing reportedly described the product as protecting people from entering personal information into AI chatbots. That claim is hard to square with the fact that the extension captures and exfiltrates AI chats regardless of whether its protection features are turned on, according to Koi.
The researchers also found that seven other extensions from the same publisher contained identical harvesting code, bringing the total number of affected users to more than eight million. Koi Security published a list of those extensions in its report.
Why data brokers love AI chat
With people inclined to tell AI assistants increasingly personal things, this secretive harvesting should worry us. Set aside for a minute those who confess to crimes, or those who pour out deeply private thoughts while using AI as a form of therapy. Many other chats are filled with things that feel more mundane but are still highly sensitive: job advice (people have shared full resumes with AI chats), medical symptoms, family planning questions, study materials, and legal queries.
Those conversations are often far more detailed and revealing than traditional search engine queries. If data brokers obtain them, they can mine the content for personal insights and link it with other data they already hold about you.
This episode reinforces two pieces of advice that we’ve given before. The first is to be careful which browser extensions you install, and which VPN services you use. Not all are what they seem.
Second, be careful what you tell AI assistants. Even if nothing is intercepting the conversation locally, the company operating the AI itself might be made to hand over chat data, as OpenAI was earlier this month.
Google’s “Featured” badge says extensions “follow our technical best practices and meet a high standard of user experience and design,” but this one seems to have slipped through the net. As of last night, Urban VPN Proxy and Urban Cybersecurity’s other apps appeared to have been removed from the Chrome Web Store. The ones identified by Koi Security on the Microsoft Edge Add-ons store were still available, though.
If you used any of these extensions and were unaware of the data collection, you should assume that any AI chats since July 9 this year may have been compromised. You can remove extensions in Chrome by visiting chrome://extensions and in Edge by visiting edge://extensions. You may also want to reset passwords and clear browser caches and cookies.
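For readers comfortable with a terminal, checking for a flagged extension can also be done from disk. The sketch below is a minimal, hedged example: it looks in Chrome’s standard profile locations (the folder names there are the same extension IDs shown on chrome://extensions) and compares them against a watchlist. The IDs in WATCHLIST are placeholders, not the real ones—substitute the IDs published in Koi Security’s report.

```python
from pathlib import Path

# Standard Chrome extension directories per OS.
CANDIDATE_DIRS = [
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
]

# Placeholder IDs only -- replace with the real IDs from Koi Security's report.
WATCHLIST = {"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"}

def installed_extension_ids(dirs=CANDIDATE_DIRS):
    """Return the set of extension IDs found in any existing profile directory."""
    ids = set()
    for d in dirs:
        if d.is_dir():
            ids.update(p.name for p in d.iterdir() if p.is_dir())
    return ids

def flagged(ids, watchlist=WATCHLIST):
    """Return any installed IDs that appear on the watchlist."""
    return ids & watchlist

if __name__ == "__main__":
    found = flagged(installed_extension_ids())
    if found:
        print("Flagged extensions installed:", ", ".join(sorted(found)))
    else:
        print("No flagged extensions found.")
```

This only covers Chrome’s default profile; Edge and secondary Chrome profiles keep extensions in their own directories, so removing them through the browser UI remains the most reliable route.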
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
About the author
Danny Bradbury has been a journalist specialising in technology since 1989 and a freelance writer since 1994. He covers a broad variety of technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector. He hails from the UK but now lives in Western Canada.