Headline
AgentSmith Flaw in LangSmith’s Prompt Hub Exposed User API Keys, Data
A CVSS 8.8 AgentSmith flaw in LangSmith’s Prompt Hub exposed AI agents to data theft and LLM manipulation. Learn how malicious AI agents could steal API keys and hijack LLM responses. Fix deployed.
Cybersecurity researchers at Noma Security have disclosed details of a critical vulnerability within LangChain‘s LangSmith platform, specifically affecting its public Prompt Hub. This critical flaw, dubbed AgentSmith with a CVSS score of 8.8 (indicating a high severity), could allow malicious AI agents to steal sensitive user data, including valuable OpenAI API keys, and even manipulate responses from large language models (LLMs).
****The AgentSmith Threat Explained****
LangSmith is a popular LangChain platform used by major companies like Microsoft and DHL for managing and collaborating on AI agents. The Prompt Hub, its key feature, is a public library for sharing/reusing pre-configured AI prompts, many of which function as agents.
The AgentSmith vulnerability exploited how these public agents could be set up with harmful proxy configurations. A proxy server acts as an intermediary for network requests. In this case, an attacker could create an AI agent with a hidden malicious proxy.
When an unsuspecting user adopts and runs this agent from the Prompt Hub, all their communications, including private data like OpenAI API keys, uploaded files, and even voice inputs, would be secretly sent through the attacker’s server.
As per Noma Security’s investigation, shared with Hackread.com, this Man-in-the-Middle (MITM) interception could lead to severe consequences. Attackers could gain unauthorized access to a victim’s OpenAI account, potentially downloading sensitive datasets, inferring confidential information from prompts, or even causing financial losses by exhausting API usage quotas.
In more advanced attacks, the malicious proxy could alter LLM responses, potentially leading to fraud or incorrect automated decisions.
****AgentSmith’s Proof of Concept (PoC)****
****Prompt Response and Future Safeguards****
Noma Security responsibly disclosed the vulnerability to LangChain on October 29, 2024. LangChain confirmed the issue and swiftly deployed a fix on November 6, 2024, prior to this public disclosure.
Along with that, the company introduced also new safety measures, warning messages, and a persistent banner on agent description pages for users attempting to clone agents with custom proxy settings.
Both Noma Security and LangChain found no evidence of active exploitation, and only users who directly engaged with a malicious agent were at risk. LangChain also clarified that the vulnerability was limited to the Prompt Hub’s public sharing feature and did not affect their core platform, private agents, or broader infrastructure.
The incident highlights the need for organizations to enhance their AI security practices. Researchers suggest organizations should maintain a centralized inventory of all AI agents, using an AI Bill of Materials (AI BOM) to track their origins and components. Implementing runtime protections and strong security governance for all AI agents is also crucial to protect against evolving threats.