Malware Hidden in AI Models on PyPI Targets Alibaba AI Labs Users

HackRead

ReversingLabs discovers new malware hidden inside AI/ML models on PyPI, targeting Alibaba AI Labs users. Learn how attackers exploit Pickle files and the growing threat to the software supply chain.

Cybersecurity experts from ReversingLabs (RL) have discovered a new trick used by cybercriminals to spread harmful software, this time by hiding it within artificial intelligence (AI) and machine learning (ML) models.

Researchers found three malicious packages on the Python Package Index (PyPI), a popular platform where Python developers find and share code. The packages posed as a Python SDK for Aliyun AI Labs services and targeted users of Alibaba AI Labs.

Alibaba AI Labs is a major investment and research initiative within Alibaba Group and part of Alibaba Cloud’s AI and Data Intelligence services, also known as the Alibaba DAMO Academy.

**New Software Threat Hides in AI Tools**

These malicious packages, named aliyun-ai-labs-snippets-sdk, ai-labs-snippets-sdk, and aliyun-ai-labs-sdk, had no real AI functionality, explained ReversingLabs reverse engineer Karlo Zanki in the research shared with Hackread.com.

“The ai-labs-snippets-sdk package accounted for the majority of downloads, due to it being available for download longer than the other two packages,” the blog post revealed.

*A package’s readme page (Source: ReversingLabs)*

Instead, once installed, the packages secretly dropped an infostealer: malware designed to steal information. The harmful code was hidden inside a PyTorch model. PyTorch models, which are widely used in ML, are essentially zipped Pickle files. Pickle is a common Python format for serializing and deserializing objects, but it is risky because malicious code can run automatically when a Pickle file is loaded. This particular infostealer collected basic details about the infected computer along with its .gitconfig file, which often contains sensitive information about developers.
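The danger described here can be illustrated with a tiny, harmless sketch. Python's pickle protocol lets any class define `__reduce__`, which names a callable that `pickle.loads()` invokes during deserialization. The class and payload below are hypothetical illustrations for this article, not the actual malware:

```python
import pickle

class EvilPayload:
    """Hypothetical object showing why untrusted pickles are dangerous."""

    def __reduce__(self):
        # During unpickling, pickle calls the returned callable with the
        # given arguments. Here it is a harmless str.upper; a real attacker
        # would substitute something like os.system.
        return (str.upper, ("pwned-by-pickle",))

blob = pickle.dumps(EvilPayload())
result = pickle.loads(blob)  # the attacker-chosen callable runs here
print(result)                # -> PWNED-BY-PICKLE
```

Because a PyTorch model file is a zip archive containing pickles, loading an untrusted model with a plain pickle-based loader gives the model author this same code-execution hook.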

The packages were available on PyPI starting May 19th for less than 24 hours, yet they were downloaded about 1,600 times. RL researchers believe the attack may have started with phishing emails or other social engineering tactics that tricked users into downloading the fake software. The fact that the malware looked for details from AliMeeting, a popular Chinese meeting app, as well as .gitconfig files, suggests developers in China may be the main targets.

**Why Are ML Models Being Targeted?**

The rapid rise in the use of AI and ML in everyday software makes them a part of the software supply chain, creating new opportunities for attackers. ReversingLabs has been tracking this trend, previously warning about the dangers of the Pickle file format.

ReversingLabs product management director Dhaval Shah had noted earlier that Pickle files could be used to inject harmful code. This was proven true in February with the nullifAI campaign, where malicious ML models were found on Hugging Face, another platform for ML projects.

This latest discovery on PyPI shows that attackers are increasingly using ML models, specifically the Pickle format, to hide their malware. Security tools are only just beginning to catch up to this new threat, as ML models were traditionally seen as just data carriers, not places for executable code. This highlights the urgent need for better security measures for all types of files in software development.
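One reason scanners can catch up: a pickle stream that pulls in importable callables looks visibly different from one that only carries tensor data. As a deliberately crude heuristic sketch (not a production scanner, and the `Payload` class below is a benign hypothetical), Python's standard-library `pickletools` module can flag the `GLOBAL`/`STACK_GLOBAL` opcodes that a pickle needs in order to reference a function:

```python
import pickle
import pickletools

class Payload:
    def __reduce__(self):
        # Benign stand-in for a malicious callable reference.
        return (str.upper, ("hello",))

def references_callables(blob: bytes) -> bool:
    """Crude static check: GLOBAL/STACK_GLOBAL opcodes import names,
    which a pickle of plain data does not need directly."""
    return any(
        op.name in ("GLOBAL", "STACK_GLOBAL")
        for op, arg, pos in pickletools.genops(blob)
    )

print(references_callables(pickle.dumps(Payload())))      # True
print(references_callables(pickle.dumps({"w": [1, 2]})))  # False
```

Real-world tools must go further (legitimate model files do reference some classes), but the contrast shows why treating model files as inert data, rather than scannable code carriers, is no longer tenable.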
