GHSA-36fq-jgmw-4r9c: Keras is vulnerable to Deserialization of Untrusted Data
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True.
One can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use Keras's Lambda layer feature, which allows arbitrary Python code in the form of pickled code. Both steps can appear in the same archive: the call to keras.config.enable_unsafe_deserialization() simply needs to appear first in the archive, and the Lambda layer carrying the arbitrary code needs to come second.
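For context, below is a minimal sketch of the loading pattern the advisory describes as vulnerable. The filename is hypothetical; the point is that, per this advisory, passing safe_mode=True to load_model is not sufficient protection on affected Keras versions, because the crafted archive re-enables unsafe deserialization before the Lambda layer is processed.

```python
import keras

# Sketch of the vulnerable pattern described above (hypothetical filename).
# Per this advisory, a crafted config.json inside the .keras archive can call
# keras.config.enable_unsafe_deserialization() during loading, so a Lambda
# layer that follows is deserialized and its pickled code executed, even
# though safe_mode=True was requested here.
model = keras.models.load_model("untrusted_model.keras", safe_mode=True)
```

Until an upgrade to a patched Keras release is possible, the practical mitigation is to load .keras archives only from trusted sources.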
References
- https://nvd.nist.gov/vuln/detail/CVE-2025-9906
- keras-team/keras#21429
- keras-team/keras@713172a