Eurostar Accused Researchers of Blackmail for Reporting AI Chatbot Flaws
Researchers discovered critical flaws in Eurostar’s AI chatbot, including prompt injection, HTML injection, guardrail bypass, and unverified chat IDs. Eurostar later accused them of blackmail.
The rush to add AI to customer service, now visible in almost every sector, can come at a high price for security. On December 22, 2025, the team of ethical hackers at Pen Test Partners (PTP) went public with a series of flaws they had found in Eurostar’s new AI chatbot.
For your information, Eurostar is the famous high-speed rail operator that connects the UK to mainland Europe through the Channel Tunnel, carrying millions of travellers between major hubs like London, Paris, and Amsterdam.
**How The Flaws Were Discovered**
What started as a researcher planning a simple train trip from London turned into the discovery of “weak guardrails” that left the system open to manipulation. Guardrails are the digital “safety brakes” that stop an AI from going off-topic or leaking secrets.
According to PTP researchers, Eurostar’s bot had a major design flaw: it only checked the very last message in a chat for safety. By editing earlier messages in the conversation history sent from their own browser, the researchers found they could trick the AI into ignoring its own rules, as the simplified sketch below illustrates.
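To make the pattern concrete, here is a minimal Python sketch of a guardrail that screens only the newest message while trusting the client-supplied history. The function names, blocklist, and `call_model` stub are illustrative assumptions, not Eurostar’s actual code.

```python
# Hypothetical sketch of the flawed pattern: the safety filter inspects only
# the final message, while the full conversation history arrives from the
# client and is trusted as-is.

BLOCKLIST = ("ignore previous instructions", "reveal your system prompt")

def is_safe(message: str) -> bool:
    """Naive guardrail: flag messages containing known jailbreak phrases."""
    lowered = message.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

def call_model(history: list[dict]) -> str:
    # Placeholder for the real LLM call; not part of the reported findings.
    return "(model response)"

def handle_chat(conversation: list[dict]) -> str:
    """`conversation` is supplied by the browser as [{'role': ..., 'content': ...}, ...]."""
    last_user_message = conversation[-1]["content"]

    # FLAW: only the newest message is checked. Earlier entries in the
    # client-supplied history go to the model unfiltered, so an attacker can
    # edit them locally and slip instructions past the filter.
    if not is_safe(last_user_message):
        return "Sorry, I can't help with that."

    return call_model(conversation)
```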
The technical side of the “hack” was surprisingly simple. Once the safety checks were bypassed, the researchers used prompt injection to make the bot reveal its internal instructions and the type of AI model it was using.
Eurostar AI Chatbot Revealing Model (source: Pen Test Partners)
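A common way to close that gap, sketched below with the same hypothetical `is_safe` and `call_model` helpers from the previous snippet, is to keep the conversation history on the server and screen every user turn rather than only the last one. This is a general hardening pattern, not a description of Eurostar’s fix.

```python
# Hypothetical hardening of the sketch above: the server stores the
# conversation itself and screens every user turn, so a locally edited
# history or an injected earlier message no longer bypasses the filter.

SERVER_SIDE_HISTORY: dict[str, list[dict]] = {}  # conversation ID -> stored messages

def handle_chat_hardened(conversation_id: str, new_message: str) -> str:
    history = SERVER_SIDE_HISTORY.setdefault(conversation_id, [])

    # Check the incoming message *and* re-check the stored history, so a
    # hostile instruction can't hide in an earlier turn.
    candidate = history + [{"role": "user", "content": new_message}]
    if not all(is_safe(m["content"]) for m in candidate if m["role"] == "user"):
        return "Sorry, I can't help with that."

    history.append({"role": "user", "content": new_message})
    return call_model(history)
```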
Further probing revealed two other critical issues. First, the chatbot was vulnerable to HTML injection and could be forced to display malicious code or fake links directly in the user’s chat window. Second, conversation and message IDs were not verified.
This means the system didn’t properly check if a chat session truly belonged to the user, possibly allowing an attacker to “replay” or inject malicious content into someone else’s conversation.
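The usual defences against both issues are to escape anything the bot echoes into the page and to tie each conversation ID to the authenticated user who created it. The sketch below illustrates the idea with made-up lookup tables; it is not based on Eurostar’s implementation.

```python
import html

# Assumed server-side lookup tables, purely for illustration.
CONVERSATION_OWNERS: dict[str, str] = {}   # conversation ID -> user ID that created it
CONVERSATIONS: dict[str, list[dict]] = {}  # conversation ID -> stored messages

def render_bot_reply(reply: str) -> str:
    """Escape model output so injected <script> tags or fake links render
    as inert text in the chat window instead of live HTML."""
    return html.escape(reply)

def load_conversation(conversation_id: str, authenticated_user_id: str) -> list[dict]:
    """Reject replayed or guessed IDs instead of serving another user's chat."""
    if CONVERSATION_OWNERS.get(conversation_id) != authenticated_user_id:
        raise PermissionError("conversation does not belong to this user")
    return CONVERSATIONS[conversation_id]
```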
**Fixing the Flaws**
This research, which was shared with Hackread.com, reveals that finding the vulnerabilities was actually easier than getting them fixed. The team first alerted Eurostar on June 11, 2025, but received no response. Finally, after nearly a month of chasing, they tracked down Eurostar’s Head of Security on LinkedIn on July 7.
Researchers later learned that Eurostar had apparently outsourced its security reporting process around the time the bugs were reported, leading the company to claim it had “no record” of the warnings.
At one point, the rail operator even accused PTP’s security team of “blackmail” simply for trying to flag the issues. The accusation came despite the company running a publicly accessible vulnerability disclosure program.
(Source: Pen Test Partners)
“We had disclosed a vulnerability in good faith,” the researchers noted, expressing their surprise at the hostile response.
While the flaws have now been patched, the team warned that this should be a wake-up call for big brands. Just because a tool is AI-powered doesn’t mean the old rules of web security don’t apply, and if the backend isn’t solid, the fancy AI features are little more than “theatre.”