Source: ghsa
Authorization Bypass Through User-Controlled Key vulnerability in Apache ZooKeeper. If SASL Quorum Peer authentication is enabled in ZooKeeper (quorum.auth.enableSasl=true), authorization is performed by verifying that the instance part of the SASL authentication ID is listed in the zoo.cfg server list. The instance part of the SASL auth ID is optional, and if it is missing, as in 'eve@EXAMPLE.COM', the authorization check is skipped. As a result, an arbitrary endpoint could join the cluster and begin propagating counterfeit changes to the leader, effectively giving it complete read-write access to the data tree. Quorum Peer authentication is not enabled by default. Users are recommended to upgrade to version 3.9.1, 3.8.3, or 3.7.2, which fix the issue. Alternatively, ensure the ensemble election/quorum communication is protected by a firewall, as this will mitigate the issue. See the documentation for more details on correct cluster administration.
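The flaw can be pictured with a minimal, hypothetical sketch of the vulnerable pattern; the names (`QuorumAuthSketch`, `isPeerAllowed`, `quorumHosts`) and hostnames are invented for illustration and are not ZooKeeper's actual code. The point is that the host check only runs when the SASL authorization ID contains an instance part, so an ID without one slips through.

```java
// Hypothetical illustration of the flawed authorization pattern described above.
// Class, method, and field names are invented for this sketch.
import java.util.Set;

public class QuorumAuthSketch {
    // e.g. hostnames taken from the server.N entries in zoo.cfg (illustrative values)
    private final Set<String> quorumHosts = Set.of("zoo1.example.com", "zoo2.example.com");

    boolean isPeerAllowed(String saslAuthzId) {
        // Kerberos-style IDs look like "primary/instance@REALM",
        // e.g. "learner/zoo1.example.com@EXAMPLE.COM"; the instance (host) part is optional.
        int slash = saslAuthzId.indexOf('/');
        int at = saslAuthzId.indexOf('@');
        if (slash < 0 || at < 0 || at < slash) {
            // Instance part missing (e.g. "eve@EXAMPLE.COM"):
            // the flawed logic skips the server-list check and authorizes the peer.
            return true;
        }
        String host = saslAuthzId.substring(slash + 1, at);
        return quorumHosts.contains(host);
    }
}
```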
Cross-Site Request Forgery (CSRF) in GitHub repository snipe/snipe-it prior to v.6.2.3.
### Impact

The MsQuic server will continue to leak memory until no more is available, resulting in a denial of service.

### Patches

The following patch was made:

- Fix Memory Leak from Multiple Decodes of TP - https://github.com/microsoft/msquic/commit/d364feeda0dd8b729eca6fef149c1ef98630f0cb

### Workarounds

Beyond upgrading to the patched versions, there is no other workaround.
A client might overload the server by issuing frequent RST frames. This can put a massive amount of load on the remote system and so be used for a DDoS attack.

### Impact

This is a DDoS attack; any HTTP/2 server is affected, so you should update as soon as possible.

### Patches

This is patched in version 4.1.100.Final.

### Workarounds

A user can manually limit the number of RST frames that are accepted per connection over a time frame, using either a custom `Http2FrameListener` implementation or a `ChannelInboundHandler` implementation (depending on which HTTP/2 API is used); a sketch of this approach follows after the references below.

### References

- https://www.cve.org/CVERecord?id=CVE-2023-44487
- https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/
- https://cloud.google.com/blog/products/identity-security/google-cloud-mitigated-largest-ddos-attack-peaking-above-398-million-rps/
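A minimal sketch of the listener-based workaround mentioned above, assuming Netty's `Http2FrameListenerDecorator`; the class name, threshold, and window size are illustrative choices for this sketch, not values recommended by the Netty project.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.Http2Error;
import io.netty.handler.codec.http2.Http2Exception;
import io.netty.handler.codec.http2.Http2FrameListener;
import io.netty.handler.codec.http2.Http2FrameListenerDecorator;

// Illustrative only: wraps an existing Http2FrameListener and closes the
// connection once a client has sent more RST_STREAM frames than allowed
// within a time window.
public class RstLimitingFrameListener extends Http2FrameListenerDecorator {
    private static final int MAX_RST_PER_WINDOW = 200;          // illustrative limit
    private static final long WINDOW_NANOS = 30_000_000_000L;   // 30-second window

    private long windowStart = System.nanoTime();
    private int rstCount;

    public RstLimitingFrameListener(Http2FrameListener delegate) {
        super(delegate);
    }

    @Override
    public void onRstStreamRead(ChannelHandlerContext ctx, int streamId, long errorCode)
            throws Http2Exception {
        long now = System.nanoTime();
        if (now - windowStart > WINDOW_NANOS) {
            windowStart = now;
            rstCount = 0;
        }
        if (++rstCount > MAX_RST_PER_WINDOW) {
            // Treat the flood of RST frames as a connection error and close the connection.
            throw Http2Exception.connectionError(Http2Error.ENHANCE_YOUR_CALM,
                    "too many RST_STREAM frames");
        }
        super.onRstStreamRead(ctx, streamId, errorCode);
    }
}
```

Because Netty invokes a connection's handlers on a single event-loop thread, the plain counter fields in this sketch need no additional synchronization.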
In the mtproto_proxy (aka MTProto proxy) component through 0.7.2 for Erlang, a low-privileged remote attacker can access an improperly secured default installation without authenticating and achieve remote command execution.
Improper Input Validation vulnerability in Apache Tomcat. Tomcat from 11.0.0-M1 through 11.0.0-M11, from 10.1.0-M1 through 10.1.13, from 9.0.0-M1 through 9.0.80, and from 8.5.0 through 8.5.93 did not correctly parse HTTP trailer headers. A specially crafted, invalid trailer header could cause Tomcat to treat a single request as multiple requests, leading to the possibility of request smuggling when behind a reverse proxy. Users are recommended to upgrade to version 11.0.0-M12 onwards, 10.1.14 onwards, 9.0.81 onwards, or 8.5.94 onwards, which fix the issue.
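For context, trailer headers are additional header fields that appear after the final chunk of a chunked request body. The well-formed example below (with an illustrative `X-Checksum` trailer and hostname) only shows where trailers sit in a request; the specific malformed trailer that triggers the parsing flaw is not reproduced here.

```
POST /app HTTP/1.1
Host: backend.example
Transfer-Encoding: chunked
Trailer: X-Checksum

4
data
0
X-Checksum: 1a2b3c

```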
### Impact

ZITADEL administrators can enable a setting called "Ignoring unknown usernames" which helps mitigate attacks that try to guess/enumerate usernames. While this setting worked properly during the authentication process, it did not work correctly in the password reset flow. This meant that even with this feature active, an attacker could use the password reset function to verify whether an account exists within ZITADEL.

### Patches

This bug has been patched in versions >2.27.2, beginning with [2.37.3](https://github.com/zitadel/zitadel/releases/tag/v2.37.3) and [2.38.0](https://github.com/zitadel/zitadel/releases/tag/v2.38.0).

### Workarounds

None available; we advise updating if this is needed.

### References

None
When you have transforms at the root level, or a single source with transforms, and the client sends the same query with different variables, the initial variables are reused in all following requests until the cache evicts the DocumentNode. For example, if a token is sent via variables, subsequent requests behave as if the original token had been sent, even when they carry different tokens. This can also cause a small memory leak, but it grows per distinct operation rather than per request, until the cache evicts the DocumentNode through its LRU mechanism.
# Summary

Attackers with access to a user's device can gain persistent account access. This is caused by missing verification of session tokens after password changes and/or elapsed inactivity periods.

# Details

`uptime-kuma` issues JWT tokens to users after successful authentication. These tokens have the following design flaws:

- After successful login, a JWT token is created and stored in `sessionStorage` or `localStorage`; which of the two is decided by the `Remember Me` button. The user's token is valid without any time limitation, even after long periods of inactivity. This increases the risk of session hijacking if, for example, a user forgets to log off and leaves the PC.
- Sessions are only deleted on the client side when a user logs out, meaning a local attacker could reuse said token and gain deep system access through the browser.
- If a user changes a password, any previously logged-in clients are not logged out, and previously issued tokens remain valid forever...
In the Harbor jobservice container, the comparison of secrets in the authenticator type is prone to timing attacks. The vulnerability occurs due to the following code: https://github.com/goharbor/harbor/blob/aaea068cceb4063ab89313d9785f2b40f35b0d63/src/jobservice/api/authenticator.go#L69-L69 To avoid this issue, constant-time comparison should be used:

```
// ConstantTimeCompare returns 1 when the two secrets are equal,
// so == 0 detects a mismatch without leaking timing information.
subtle.ConstantTimeCompare([]byte(expectedSecret), []byte(secret)) == 0
```

### Impact

This attack might be possible theoretically, but no workable proof of concept is available, and access complexity is set at High. The jobservice exposes these APIs:

```
Create a job task --- POST /api/v1/jobs
Get job task information --- GET /api/v1/jobs/{job_id}
Stop job task --- POST /api/v1/jobs/{job_id}
Get job log task --- GET /api/v1/jobs/{job_id}/log
Get job execution --- GET /api/v1/jobs/{job_id}/executions
Get job stats --- GET /api/v1/stats
Get job service configuration --- GET /api/v1/config
```

It is used to create jobs/stop job...