Tag
#git
At New Zealand's Kawaiicon cybersecurity convention, organizers hacked together a way for attendees to track CO2 levels throughout the venue—even before they arrived.
A China-nexus threat actor known as APT24 has been observed using a previously undocumented malware dubbed BADAUDIO to establish persistent remote access to compromised networks as part of a nearly three-year campaign. "While earlier operations relied on broad strategic web compromises to compromise legitimate websites, APT24 has recently pivoted to using more sophisticated vectors targeting
### Overview

OpenFGA v1.4.0 to v1.11.0 (openfga-0.1.34 <= Helm chart <= openfga-0.2.48, v1.4.0 <= docker <= v1.11.0) are vulnerable to improper policy enforcement when certain Check and ListObjects calls are executed.

### Am I Affected?

You are affected by this vulnerability if you meet the following preconditions:

- You are using OpenFGA v1.4.0 to v1.11.0.
- The model has a relation directly assignable by a [type bound public access](https://openfga.dev/docs/concepts#what-is-type-bound-public-access) with a [condition](https://openfga.dev/docs/modeling/conditions).
- The same relation is not assignable by a type bound public access without a condition.
- You have a type bound public access without a condition assigned for the same relation.

### Fix

Upgrade to v1.11.1. This upgrade is backwards compatible.

### Workaround

None.
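The preconditions above describe a model where a relation is publicly assignable only with a condition, yet an unconditional public-access tuple exists for it. A minimal way to observe whether enforcement changes across the upgrade is to issue a Check against such a relation via the OpenFGA HTTP API; the sketch below is illustrative, and the server URL, store ID, model ID, and object/relation names are placeholders, not values from the advisory.

```python
# Probe sketch: run the same Check before and after upgrading to v1.11.1 and
# compare the results. All identifiers below are placeholders.
import requests

FGA_URL = "http://localhost:8080"        # assumed local OpenFGA instance
STORE_ID = "<store-id>"                  # placeholder
MODEL_ID = "<authorization-model-id>"    # placeholder

resp = requests.post(
    f"{FGA_URL}/stores/{STORE_ID}/check",
    json={
        "authorization_model_id": MODEL_ID,
        "tuple_key": {
            "user": "user:anne",
            "relation": "viewer",          # relation matching the preconditions above
            "object": "document:budget",
        },
        # Intentionally no condition context: on an affected version the
        # conditional public-access grant may be evaluated incorrectly.
    },
    timeout=10,
)
print(resp.json())  # compare the "allowed" result across versions
```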
### Impact

Minder users may fetch content in the context of the Minder server, which may include URLs which the user would not normally have access to (for example, if the Minder server is behind a firewall or other network partition).

### Patches

https://github.com/mindersec/minder/commit/f770400923984649a287d7215410ef108e845af8

### Workarounds

Users should avoid deploying Minder with access to sensitive resources. Unfortunately, this could include access to systems like OpenFGA or Keycloak, depending on the deployment configuration.

### References

Sample ruletype:

```yaml
version: v1
type: rule-type
name: test-http-send
display_name: Test that we can call http.send
short_failure_message: Failed http.send
severity:
  value: medium
context:
  provider: github
description: |
  ...
guidance: |
  ....
def:
  in_entity: repository
  rule_schema:
    type: object
    properties: {}
  ingest:
    type: git
    git: {}
  eval:
    type: rego
    violation_format: text
    rego: ...
```
Samsung is under fire again for shipping phones in parts of the world with a hidden system app, AppCloud, that users can’t easily remove.
In `authkit-nextjs` version 2.11.0 and below, authenticated responses do not defensively apply anti-caching headers. In environments where CDN caching is enabled, this can result in session tokens being included in cached responses and subsequently served to multiple users. Next.js applications deployed on Vercel are unaffected **unless** they manually enable CDN caching by setting cache headers on authenticated paths.

### Impact

This vulnerability may lead to session caching, potentially allowing unauthorized users to obtain another user's session token. The severity depends on deployment configuration, caching policy, and whether authenticated routes are inadvertently cached.

### Patches

Patched in `authkit-nextjs` 2.11.1, which applies anti-caching headers to all responses behind authentication.

### Notes

Authentication middleware should set anti-caching headers for authenticated routes as a defense in depth measure, but cannot guarantee these headers will not be overwritten els...
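As a quick sanity check that an authenticated route is not emitting cacheable responses, a small probe like the one below can inspect the caching headers. The URL and session cookie name here are placeholders (not taken from the advisory), and the exact header value set by the patched version is not asserted.

```python
# Sketch: fetch an authenticated page and inspect its Cache-Control header.
# The URL and cookie are placeholders for a real deployment behind a CDN.
import requests

resp = requests.get(
    "https://example.com/dashboard",              # assumed authenticated path
    cookies={"wos-session": "<session-cookie>"},  # placeholder session cookie
    timeout=10,
)

cache_control = resp.headers.get("Cache-Control", "")
print("Cache-Control:", cache_control or "<missing>")

# Authenticated responses should carry a non-cacheable directive such as
# "no-store" (or at least "private"); a publicly cacheable response here is a
# sign a CDN could store and replay session material.
if "no-store" not in cache_control and "private" not in cache_control:
    print("warning: authenticated response looks cacheable")
```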
Due to an error in sed command parsing, it was possible to bypass the Claude Code read-only validation and write to arbitrary files on the host system. Users on standard Claude Code auto-update will have received this fix automatically. Users performing manual updates are advised to update to the latest version. Thank you to Adam Chester - SpecterOps for reporting this issue!
### Summary

The /v1/chat/completions and /tokenize endpoints allow a `chat_template_kwargs` request parameter that is used in the code before it is properly validated against the chat template. With the right `chat_template_kwargs` parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests.

### Details

In serving_engine.py, the chat_template_kwargs are unpacked into kwargs passed to chat_utils.py `apply_hf_chat_template` with no validation on the keys or values in that chat_template_kwargs dict. This means they can be used to override optional parameters in the `apply_hf_chat_template` method, such as `tokenize`, changing its default from False to True.

https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814
https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610

Both serving_...
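Since the root cause is that request-supplied `chat_template_kwargs` keys are unpacked unvalidated into the template-rendering call, one way to picture a mitigation is an allow-list filter applied before unpacking. The sketch below is illustrative only; the function name and the allowed key are assumptions, not vLLM's actual code.

```python
# Illustrative hardening sketch (not vLLM's implementation): only pass through
# explicitly allowed chat-template keys, so a request cannot override internal
# parameters such as `tokenize` in the template-rendering call.
from typing import Any

ALLOWED_CHAT_TEMPLATE_KEYS = {"enable_thinking"}  # example allow-list (assumption)

def sanitize_chat_template_kwargs(raw: dict[str, Any] | None) -> dict[str, Any]:
    """Drop any request-supplied key that is not explicitly allowed."""
    if not raw:
        return {}
    return {k: v for k, v in raw.items() if k in ALLOWED_CHAT_TEMPLATE_KEYS}

# A request trying to flip `tokenize` has that key silently dropped:
safe_kwargs = sanitize_chat_template_kwargs({"tokenize": True, "enable_thinking": False})
print(safe_kwargs)  # {'enable_thinking': False}
```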
### Summary

Users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct `ndim` but incorrect `shape` (e.g. the hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). The issue has existed ever since we added support for image embedding inputs, i.e. #6613 (released in v0.5.5).

### Details

Using image embeddings as an example:

- For models that support image embedding inputs, the engine crashes when scattering the embeddings to `inputs_embeds` (mismatched shape).
- For models that don't support image embedding inputs, the engine crashes when validating the inputs inside `get_input_embeddings` (validation fails).

This happens because we only validate the `ndim` of the tensor, but not the full shape, in the input processor (via `MultiModalDataParser`).

### Impact

- Denial of service by crashing the engine.

### Mitigation

- Use an API key to limit access to trusted us...
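Because only `ndim` is validated, a tensor with the right rank but the wrong hidden dimension slips through and crashes the engine later. The following sketch shows the kind of full-shape check the advisory implies; the function name and the 2-D layout are illustrative assumptions, not vLLM's actual validator.

```python
# Illustrative shape check (not vLLM's actual code): reject image-embedding
# tensors whose hidden dimension does not match the model, instead of only
# checking the number of dimensions.
import torch

def validate_image_embeds(embeds: torch.Tensor, hidden_size: int) -> torch.Tensor:
    # ndim check alone (what the advisory describes) lets wrong hidden sizes through
    if embeds.ndim != 2:
        raise ValueError(f"expected a 2-D (num_tokens, hidden_size) tensor, got ndim={embeds.ndim}")
    # full-shape check: the trailing dimension must match the model's hidden size
    if embeds.shape[-1] != hidden_size:
        raise ValueError(f"expected hidden dimension {hidden_size}, got {embeds.shape[-1]}")
    return embeds

# A tensor with the right ndim but a wrong hidden size is rejected up front
# instead of crashing the engine later when scattered into inputs_embeds.
validate_image_embeds(torch.zeros(16, 4096), hidden_size=4096)    # ok
# validate_image_embeds(torch.zeros(16, 1024), hidden_size=4096)  # raises ValueError
```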
### Summary

A memory corruption vulnerability that leads to a crash (denial of service) and potentially remote code execution (RCE) exists in vLLM versions 0.10.2 and later, in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM.

### Details

A vulnerability that can lead to RCE from the Completions API endpoint exists in vLLM, where due to missing checks when loading user-provided tensors, an out-of-bounds write can be triggered. This happens because the default behavior of `torch.load(tensor, weights_only=True)` since Py...
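As a hedged illustration of the failure mode and one possible hardening, the sketch below loads user-supplied tensor bytes and rejects anything that is not a plain dense (strided) tensor before it can be densified. This is not vLLM's actual patch; the function name and shape constraints are assumptions.

```python
# Illustrative hardening sketch (not vLLM's patch): load user-supplied tensor
# bytes with weights_only=True, then refuse anything that is not a plain dense
# (strided) CPU tensor before ever calling .to_dense() or using it.
import io
import torch

def load_prompt_embeds(raw: bytes, hidden_size: int) -> torch.Tensor:
    tensor = torch.load(io.BytesIO(raw), weights_only=True, map_location="cpu")
    if not isinstance(tensor, torch.Tensor):
        raise ValueError("payload is not a tensor")
    # Since PyTorch 2.8.0, sparse-tensor integrity checks are off by default, so
    # a crafted sparse tensor can corrupt memory when densified; reject any
    # non-strided layout outright.
    if tensor.layout != torch.strided or tensor.is_sparse:
        raise ValueError("only dense (strided) tensors are accepted")
    if tensor.dim() != 2 or tensor.shape[-1] != hidden_size:
        raise ValueError("unexpected prompt-embedding shape")
    return tensor

# Usage with a benign payload:
buf = io.BytesIO()
torch.save(torch.zeros(4, 4096), buf)
print(load_prompt_embeds(buf.getvalue(), hidden_size=4096).shape)  # torch.Size([4, 4096])
```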