GHSA-xgpc-q899-67p8: Fleet doesn’t validate a server’s certificate when connecting through SSH

Impact

A vulnerability has been identified in Fleet: by default, Fleet automatically trusts a remote server's SSH host key when connecting through SSH if that key is not present in the known_hosts file. This could allow a man-in-the-middle (MitM) attack against Fleet. If the server being connected to does have a trusted entry in the known_hosts file, Fleet correctly checks the authenticity of the presented key.

Please consult the associated MITRE ATT&CK technique, Adversary-in-the-Middle, for further information about this category of attack.
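For reference, entries in a known_hosts file are plain-text lines of the form `<hostname> <key-type> <base64-encoded-public-key>`; the key material below is a placeholder, not a real host key:

```
# hostname (optionally [host]:port), key type, base64-encoded public key
github.com ssh-ed25519 AAAA...placeholder...
[git.example.com]:2222 ssh-rsa AAAA...placeholder...
```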

Patches

The patched releases are v0.10.12, v0.11.7, and v0.12.2.

The fix covers the following areas:

  • Git latest commit fetcher: sources known_hosts entries from the following locations, in decreasing order of priority:

    1. a secret referenced in a GitRepo’s clientSecretName field;
    2. if no secret is referenced, a gitcredential secret located in the GitRepo’s namespace;
    3. if that secret does not exist, a (new) known-hosts config map installed by Fleet, statically populated with public entries published by a few git providers: GitHub, GitLab, Bitbucket, Azure DevOps.
  • Git cloner: same as above.

  • fleet apply command: same as above. The command reads entries from a FLEET_KNOWN_HOSTS environment variable. It is typically run within a container, inside a job pod created by Fleet to update bundles from a new commit. However, users may also run it locally, possibly with multiple concurrent executions on the same machine. To cater for this, fleet apply writes the contents of FLEET_KNOWN_HOSTS, if any, to a temporary file with a random name and deletes that file once bundles have been created, reducing the risk of conflicts between concurrent runs.
    This happens regardless of the git repository URL (SSH or not), since a repository may reference artifacts to be retrieved over SSH anyway.

Note about sourcing known_hosts entries: if entries are found in a supported source, whatever that source may be, then those entries will be used. For instance, if wrong entries, or an incomplete set of entries (e.g. only Bitbucket entries for a GitRepo pointing to GitHub), are found in a secret referenced in a GitRepo’s clientSecretName field, they will still be used. This will lead to errors if strict host key checks are enabled, even if matching, correct entries exist in a lower-priority source such as the known-hosts config map. Fleet will not use one source to complement another.

Note: the Fleet v0.9 release line does not have the fix for this CVE. Fixing v0.9 was considered too complex and risked introducing instabilities right before the version reaches end of life (EOL), as documented in SUSE’s Product Support Lifecycle page. Please see the section below regarding workarounds, or consider upgrading to a newer, patched version of Rancher.

Workarounds

There are no workarounds for this issue. Users are advised to upgrade, as soon as possible, to a version of Fleet that contains the fixes.

For more information

If you have any questions or comments about this advisory:

  • Reach out to the SUSE Rancher Security team for security-related inquiries.
  • Open an issue in the Rancher repository.
  • Consult our support matrix and product support lifecycle.

References

  • GHSA-xgpc-q899-67p8
  • rancher/fleet#3571
  • rancher/fleet#3572
  • rancher/fleet#3573
  • https://github.com/rancher/fleet/releases/tag/v0.10.12
  • https://github.com/rancher/fleet/releases/tag/v0.11.7
  • https://github.com/rancher/fleet/releases/tag/v0.12.2
