Security Headlines: Latest CVEs (source: GHSA)
GHSA-xrhh-hx36-485q: Strimzi allows unrestricted access to all Secrets in the same Kubernetes namespace from Kafka Connect and MirrorMaker 2 operands

### Impact

In some situations, Strimzi creates an incorrect Kubernetes `Role` that grants the Apache Kafka Connect and Apache Kafka MirrorMaker 2 operands `GET` access to all Kubernetes Secrets that exist in the given Kubernetes namespace. This happens when:

* Apache Kafka Connect is deployed without at least one of the following options configured:
  * TLS encryption with configured trusted certificates (no `.spec.tls.trustedCertificates` section in the `KafkaConnect` CR)
  * mTLS authentication (no `type: tls` in the `.spec.authentication` section of the `KafkaConnect` CR)
  * TLS encryption with configured trusted certificates for `type: oauth` authentication (no `.spec.authentication.tlsTrustedCertificates` section in the `KafkaConnect` CR)
* Apache Kafka MirrorMaker 2 is deployed without at least one of the following options configured for the target cluster:
  * TLS encryption with configured trusted certificates (no `.spec.target.tls.trustedCe...
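Operators who want to check whether such an overly broad `Role` exists can audit the namespace for rules granting `get` on all Secrets. Below is a minimal sketch, assuming cluster access via the official `kubernetes` Python client; the `kafka` namespace name is illustrative.

```python
# Audit sketch: flag Roles that grant "get" on Secrets without a
# resourceNames restriction (i.e., on *all* Secrets in the namespace).
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

namespace = "kafka"  # hypothetical namespace; adjust for your cluster
for role in rbac.list_namespaced_role(namespace).items:
    for rule in role.rules or []:
        if ("secrets" in (rule.resources or [])
                and "get" in (rule.verbs or [])
                and not rule.resource_names):
            print(f"Role {role.metadata.name}: unrestricted GET on Secrets")
```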

GHSA-xrv8-2pf5-f3q7: nitro-tpm-pcr-compute may allow kernel command line modification by an account operator

## Summary

Default PCR12 validation is added to ensure that account operators cannot modify kernel command line parameters and thereby bypass root filesystem integrity validation.

Attestable AMIs are based on the systemd Unified Kernel Image (UKI) concept, which uses systemd-boot to create a single measured UEFI binary from a Linux kernel, its initramfs, and kernel command line. The embedded kernel command line contains a dm-verity hash value that establishes trust in the root file system. When UEFI Secure Boot is disabled, systemd-boot appends any command line it receives to the kernel command line. Account operators with the ability to modify UefiData can install a boot variable with a command line that deactivates root file system integrity validation while preserving the original PCR4 value. systemd-boot provides separate measurement of command line modifications in PCR12.

## Impact

In line with the TPM 2.0 specification and systemd-stub logic, KMS policies that do not inc...
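To see why binding a policy to PCR12 catches command-line tampering, here is a simplified Python model of the TPM 2.0 SHA-256 extend operation; the command-line strings are made up for illustration, and real measurements involve the full event-log machinery.

```python
import hashlib

def pcr_extend(pcr: bytes, event: bytes) -> bytes:
    # Simplified SHA-256 extend: new = H(old || H(event))
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

pcr12 = bytes(32)  # PCRs start zeroed
original = pcr_extend(pcr12, b"root=/dev/mapper/root roothash=abc123")
tampered = pcr_extend(pcr12, b"root=/dev/mapper/root")  # verity args stripped

# Any command-line change yields a different PCR12 value, so a KMS
# policy bound to PCR12 refuses to release keys for the tampered boot.
assert original != tampered
```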

GHSA-cmp6-m4wj-q63q: yawkat LZ4 Java has a possible information leak in Java safe decompressor

### Summary

Insufficient clearing of the output buffer in the Java-based decompressor implementations in lz4-java 1.10.0 and earlier allows remote attackers to read previous buffer contents via crafted compressed input. In applications where the output buffer is reused without being cleared, this may lead to disclosure of sensitive data. JNI-based implementations are *not* affected.

### Details

During decompression, the LZ4 algorithm may have to repeat data that was previously decompressed in the same input frame. In the Java implementation, this is implemented by copy operations within the output buffer. With a crafted input, an attacker may induce the Java implementation to copy from a region of the output buffer that does not yet contain decompressed data. If that region contains sensitive information because the output buffer was not cleared prior to decompression, that data will then be copied to the decompressed output.

- `LZ4Factory.nativeInstance().safeDecompresso...
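The copy-from-stale-buffer behavior can be illustrated with a small Python toy model; this is a sketch of the bug class, not lz4-java's actual code, and the buffer contents are invented.

```python
# Toy model: a reused, uncleared output buffer retains bytes from a
# previous decompression, and a crafted match copies from a region
# that has not yet been written in the current one.
out = bytearray(b"PREVIOUS-SECRET-RESIDUE" + bytes(9))  # reused buffer
written = 4                        # bytes legitimately decoded so far
out[0:written] = b"DATA"

# Crafted match: copy 6 bytes from offset 9 -- past `written`, where
# stale data still lives. A correct decoder would reject src >= written.
src, length = 9, 6
out[written:written + length] = out[src:src + length]
print(bytes(out[:written + length]))   # b'DATASECRET' -- stale bytes leaked
```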

GHSA-4qg8-fj49-pxjh: Sigstore Timestamp Authority allocates excessive memory during request parsing

### Impact

**Excessive memory allocation**

Function [api.ParseJSONRequest](https://github.com/sigstore/timestamp-authority/blob/26d7d426d3000abdbdf2df34de56bb92246c0365/pkg/api/timestamp.go#L63) currently splits (via a call to [strings.Split](https://pkg.go.dev/strings#Split)) an optionally-provided OID (which is untrusted data) on periods. Similarly, function [api.getContentType](https://github.com/sigstore/timestamp-authority/blob/26d7d426d3000abdbdf2df34de56bb92246c0365/pkg/api/timestamp.go#L114) splits the `Content-Type` header (which is also untrusted data) on an `application` string. As a result, in the face of a malicious request with either an excessively long OID in the payload containing many period characters or a malformed `Content-Type` header, a call to `api.ParseJSONRequest` or `api.getContentType` incurs allocations of O(n) bytes (where n stands for the length of the function's argument).

Relevant weakness: [CWE-405: Asymmetric Resource Consumption (Amplification)](h...
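The amplification is easy to reproduce in miniature. The sketch below uses Python's `str.split` as a stand-in for Go's `strings.Split`; the numbers in the comments describe this Python model, while in Go each resulting element is a 16-byte string header (the constant factor cited in the Fulcio advisory below).

```python
import sys

oid = "." * 1_000_000         # 1 MB of attacker-controlled input
parts = oid.split(".")        # 1,000,001 empty-string segments
print(len(parts))             # 1000001
print(sys.getsizeof(parts))   # ~8 MB for the result list's references alone
```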

GHSA-f83f-xpx7-ffpw: Fulcio allocates excessive memory during token parsing

Function [identity.extractIssuerURL](https://github.com/sigstore/fulcio/blob/main/pkg/identity/issuerpool.go#L44-L45) currently splits (via a call to [strings.Split](https://pkg.go.dev/strings#Split)) its argument (which is untrusted data) on periods. As a result, in the face of a malicious request with an (invalid) OIDC identity token in the payload containing many period characters, a call to `extractIssuerURL` incurs allocations to the tune of O(n) bytes (where n stands for the length of the function's argument), with a constant factor of about 16.

Relevant weakness: [CWE-405: Asymmetric Resource Consumption (Amplification)](https://cwe.mitre.org/data/definitions/405.html)

### Details

See [identity.extractIssuerURL](https://github.com/sigstore/fulcio/blob/main/pkg/identity/issuerpool.go#L44-L45)

### Impact

Excessive memory allocation
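A common mitigation pattern for this weakness class is to bound the untrusted input before performing any O(n) operation. A minimal sketch follows; the cap and function name are illustrative, not Fulcio's API.

```python
MAX_TOKEN_LEN = 4096  # illustrative cap; choose one sized for real tokens

def issuer_segments(token: str) -> list[str]:
    # Reject oversized untrusted input before the O(n) split, so a
    # period-heavy payload cannot force large allocations.
    if len(token) > MAX_TOKEN_LEN:
        raise ValueError("token too large")
    return token.split(".")
```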

GHSA-2xpw-w6gg-jr37: urllib3 streaming API improperly handles highly compressed data

### Impact

urllib3's [streaming API](https://urllib3.readthedocs.io/en/2.5.0/advanced-usage.html#streaming-and-i-o) is designed for the efficient handling of large HTTP responses by reading the content in chunks, rather than loading the entire response body into memory at once. When streaming a compressed response, urllib3 can perform decoding or decompression based on the HTTP `Content-Encoding` header (e.g., `gzip`, `deflate`, `br`, or `zstd`). The library must read compressed data from the network and decompress it until the requested chunk size is met; any resulting decompressed data that exceeds the requested amount is held in an internal buffer for the next read operation.

The decompression logic could cause urllib3 to fully decode a small amount of highly compressed data in a single operation. This can result in excessive resource consumption (high CPU usage and massive memory allocation for the decompressed data; CWE-409) on the client side, even if the application only requ...
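As defense in depth, an application can enforce its own decompressed-size budget while streaming. Note that on affected versions a single read may still over-allocate internally, so upgrading remains the primary fix. A sketch, where the URL and budget are placeholders:

```python
import urllib3

MAX_DECODED = 10 * 1024 * 1024   # application-chosen budget (placeholder)

http = urllib3.PoolManager()
resp = http.request("GET", "https://example.com/data", preload_content=False)

total = 0
for chunk in resp.stream(64 * 1024):      # decoded (decompressed) chunks
    total += len(chunk)
    if total > MAX_DECODED:
        resp.release_conn()
        raise RuntimeError("response exceeded decompression budget")
    # ... process chunk ...
resp.release_conn()
```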

GHSA-gm62-xv2j-4w53: urllib3 allows an unbounded number of links in the decompression chain

## Impact

urllib3 supports chained HTTP encoding algorithms for response content according to RFC 9110 (e.g., `Content-Encoding: gzip, zstd`). However, the number of links in the decompression chain was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps, leading to high CPU usage and massive memory allocation for the decompressed data.

## Affected usages

Applications and libraries using urllib3 version 2.5.0 and earlier for HTTP requests to untrusted sources, unless they disable content decoding explicitly.

## Remediation

Upgrade to at least urllib3 v2.6.0, in which the library limits the number of links to 5. If upgrading is not immediately possible, use [`preload_content=False`](https://urllib3.readthedocs.io/en/2.5.0/advanced-usage.html#streaming-and-i-o) and ensure that `resp.headers["content-encoding"]` contains a safe number of encodings before reading the response content, as in the sketch below.
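The workaround from the remediation can look like the following sketch; the cap mirrors the limit v2.6.0 enforces, and the URL is a placeholder.

```python
import urllib3

MAX_ENCODINGS = 5   # mirrors the bound urllib3 v2.6.0 applies internally

http = urllib3.PoolManager()
resp = http.request("GET", "https://example.com/data", preload_content=False)

encodings = [e.strip() for e in
             resp.headers.get("content-encoding", "").split(",") if e.strip()]
if len(encodings) > MAX_ENCODINGS:
    resp.release_conn()
    raise RuntimeError(f"refusing decompression chain: {encodings}")

body = resp.read()   # decode only after the chain length was checked
```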

GHSA-rwjg-c3h2-f57p: Envoy's TLS certificate matcher for `match_typed_subject_alt_names` may incorrectly treat certificates containing an embedded null byte as valid matches

### Summary

Envoy's mTLS certificate matcher for `match_typed_subject_alt_names` may incorrectly treat certificates containing an embedded null byte (`\0`) inside an `OTHERNAME` SAN value as valid matches.

### Details

This occurs when the SAN is encoded as a `BMPSTRING` or `UNIVERSALSTRING` and its UTF-8 conversion result is truncated at the first null byte during string assignment. As a result, `"victim\0evil"` may match an `exact: "victim"` rule and be accepted by Envoy.

### PoC

1. Create a CA and a server certificate signed by that CA.
2. Create two client certificates signed by the same CA:
   * `client_evil` with `OTHERNAME` `BMPSTRING` = `"evil"`
   * `client_null` with `OTHERNAME` `BMPSTRING` = `"victim\0evil"`
3. Configure Envoy with `require_client_certificate: true` and a `match_typed_subject_alt_names` entry for the `OTHERNAME` OID with `matcher.exact: "victim"`.
4. Connect without a client cert → connection rejected.
5. Connect with `client_evil` → connection rejected.
6. Connect with `client_null` → connection accepted (but s...
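The truncation hazard itself is language-independent. This small Python sketch shows why a NUL-truncating string assignment turns `"victim\0evil"` into an exact match for `"victim"`; Envoy's actual code is C++, so this is only a model of the bug class.

```python
san = "victim\x00evil"          # attacker-controlled OTHERNAME SAN value

# Buggy path: a C-style string assignment stops at the first NUL byte.
truncated = san.split("\x00", 1)[0]
assert truncated == "victim"    # now matches the exact: "victim" rule

# Correct path: compare the full value and reject embedded NULs outright.
assert san != "victim"
assert "\x00" in san            # grounds for rejecting the certificate
```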

GHSA-rj35-4m94-77jh: Envoy forwards early CONNECT data in TCP proxy mode

## Summary

Forwarding of early CONNECT data in TCP proxy mode.

## Details

Per [RFC 7231, Section 4.3.6](https://www.rfc-editor.org/rfc/rfc7231#section-4.3.6), the sender of a CONNECT request (and all inbound proxies) switches to tunnel mode only after receiving a 2xx response. In TCP proxy mode, however, Envoy accepts client data before it has issued a 2xx response and eagerly proxies it to an established TCP connection. This creates the possibility of a desynchronized tunnel state if a proxy upstream from Envoy responds with a status other than 2xx. The RFC does not specify the behavior when early CONNECT data is received, and sending early CONNECT data is a common latency-reduction mechanism. To prevent disruption to existing deployments, Envoy allows early CONNECT data by default. Setting the `envoy.reloadable_features.reject_early_connect_data` runtime flag to `true` causes CONNECT requests that send data before the 2xx response to be rejected. This option should be enabled if there are intermediaries ...
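For illustration, early CONNECT data is simply bytes written before the proxy's response is read. A minimal Python client sketch follows; the host, port, and payload are placeholders.

```python
import socket

s = socket.create_connection(("proxy.example", 3128))
s.sendall(b"CONNECT upstream.example:443 HTTP/1.1\r\n"
          b"Host: upstream.example:443\r\n\r\n")

# "Early data": sent before reading the CONNECT response. The tunnel can
# desynchronize if the proxy then answers with a non-2xx status.
s.sendall(b"\x16\x03\x01")   # e.g., the start of a TLS ClientHello

status = s.recv(4096)        # only now read the proxy's response
print(status.splitlines()[0])
```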

GHSA-mp85-7mrq-r866: Envoy crashes when JWT authentication is configured with remote JWKS fetching

### Summary

Envoy crashes when JWT authentication is configured with remote JWKS fetching, `allow_missing_or_failed` is enabled, multiple JWT tokens are present in the request headers, and the JWKS fetch fails.

### Details

This is caused by a re-entry bug in `JwksFetcherImpl`. When the first token's JWKS fetch fails, the `onJwksError()` callback triggers processing of the second token, which calls `fetch()` again on the same fetcher object. The original callback's `reset()` then clears the second fetch's state (`receiver_` and `request_`), which causes a crash when the async HTTP response arrives.

### PoC

* `allow_missing_or_failed` or `allow_missing` is enabled
* The client sends two `Authorization` headers
* The remote JWKS fetch fails
* Envoy crashes

### Impact

DoS and crash

### Mitigation

* Disable the `allow_missing_or_failed` or `allow_missing` option
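The re-entrancy can be modeled in a few lines. The sketch below is a Python toy, not Envoy's C++ implementation; the names only loosely mirror the advisory.

```python
class JwksFetcher:
    # Toy model of the re-entry bug; not Envoy's JwksFetcherImpl.
    def __init__(self):
        self.receiver = None         # stands in for receiver_ / request_

    def fetch(self, receiver):
        self.receiver = receiver     # state for the in-flight request

    def on_jwks_error(self, process_next_token):
        process_next_token()         # re-enters fetch() for token 2 ...
        self.receiver = None         # ... then wipes token 2's state

fetcher = JwksFetcher()
fetcher.fetch("receiver for token 1")
fetcher.on_jwks_error(lambda: fetcher.fetch("receiver for token 2"))

# When the async response for token 2 arrives, the fetcher has no state
# left for it -- the analogue of the crash described above.
assert fetcher.receiver is None
```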