Navigating AI risk: Building a trusted foundation with Red Hat

Red Hat helps organizations embrace AI innovation by providing a comprehensive and layered approach to security and safety across the entire AI lifecycle. We use our trusted foundation and expertise in open hybrid cloud to address the challenges around AI security, helping our customers build and deploy AI applications with more trust.

Understanding enterprise AI security risks

As organizations adopt AI, they encounter significant security and safety hurdles. These advanced workloads need robust infrastructure, scalable resources, and a comprehensive security posture that extends across the AI lifecycle. Many AI projects struggle to reach production because of these safety and security concerns.

Some of the challenges organizations face include:

  • Evolving AI-specific threats: AI applications and models are becoming attractive targets for malicious actors. Beyond conventional software vulnerabilities, critical concerns include training data poisoning, model evasion or theft, and adversarial attacks.
  • Complex software supply chain: The AI lifecycle involves numerous components, increasing vulnerability risks. AI applications also often depend on a vast ecosystem of open source libraries, pre-trained models, and complicated data pipelines. A single vulnerability or malicious component introduced at any stage—from data ingestion and third-party libraries to the base container images—can compromise the integrity and security of the entire AI system. Recent supply chain attacks highlight the urgent industry need for verifiable integrity and provenance for all software artifacts, including AI models and their dependencies.
  • Critical AI safety requirements: Trust is built on the assurance that AI models will operate as intended and without bias. For example, a model trained on biased data could lead to discriminatory outcomes in a loan application or hiring process, potentially causing significant reputational and legal risk.
  • Visibility and governance gaps: The dynamic nature of AI can hinder security oversight and policy enforcement. Many organizations also contend with “data gravity,” where massive datasets used for AI model training and operation remain on-premises due to regulatory, compliance, or performance requirements. Moving this data to the cloud is often impractical or prohibited. Environments that place an absolute premium on systems security may also operate in disconnected or air-gapped modes.

Red Hat’s layered approach to AI security

Securing AI workloads requires a comprehensive and integrated strategy. Red Hat’s approach addresses the entire AI lifecycle, building on our expertise in platform security and DevSecOps. By treating AI systems as containerized software, we can apply our decades of experience in securing Linux, containers, and Kubernetes. Our strategy integrates security from the earliest design and development stages all the way through to deployment and runtime, with the goal of helping organizations build and run AI applications with a stronger security posture on a trusted hybrid cloud platform.

Red Hat’s approach is built on these key pillars:

  • Secure foundation: This layer applies decades of Linux and container security expertise to your AI workloads. Our foundational platforms, Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift, provide a solid base for developing and deploying a wide range of AI applications and models. By treating AI systems as containerized software, we make use of the security of these platforms, building on decades of Linux innovation and container security best practices.
  • Trusted AI software supply chain: We help customers integrate security capabilities directly into their AI workflows. This involves using Red Hat Trusted Profile Analyzer for enhanced visibility, increasing component reliability, validating provenance, and strengthening policy enforcement.
  • AI application operationalization with enhanced security: We extend security best practices to the deployment and runtime of AI applications in hybrid cloud environments. Our focus is on maintaining consistent security approaches, managing threats, and enabling continuous governance of deployed AI.
  • LLM and ML model evaluation, explainability, and guardrails: Red Hat OpenShift AI provides the platform and integrated tools to help assess model performance, accuracy, bias, and safety. It also provides insights into AI model behavior and feature importance using integrated tools and frameworks, and implements crucial guardrails like content moderation, bias detection, and adversarial attack mitigation.

Let’s now dive deeper into how Red Hat technologies operationalize this layered security approach across the AI lifecycle.

The proven backbone of Red Hat OpenShift

Red Hat OpenShift serves as a robust, enterprise-grade hybrid cloud application platform based on Kubernetes, providing a foundation for developing, deploying, and managing AI workloads. OpenShift’s security posture is built upon several key pillars:

  • Red Hat Enterprise Linux CoreOS: This immutable, container-optimized operating system for OpenShift nodes reduces the potential attack surface, enables SELinux by default for mandatory access control, and uses read-only system components to prevent runtime tampering.
  • Platform security:
    • Authentication and authorization: OpenShift features an integrated OAuth server and robust role-based access control (RBAC) for fine-grained permission management across users, groups, and service accounts.
    • Workload isolation: Security context constraints (SCCs) offer granular control over pod permissions, restricting access to host resources and kernel capabilities beyond standard pod security standards. Kubernetes namespaces (projects) also provide logical isolation for different AI teams or services.
    • Network security: OpenShift software-defined networking (SDN) provides an overlay network with NetworkPolicy objects for microsegmentation, controlling traffic flow between pods and services (a minimal NetworkPolicy sketch follows this list). Egress firewalls and egress IPs further restrict outbound connections, and Red Hat OpenShift Service Mesh can be deployed for mTLS and advanced traffic management.
    • Data security: etcd, OpenShift’s key-value store, can be encrypted at rest, and secrets management is built in. Red Hat OpenShift Data Foundation can provide encrypted storage volumes for AI datasets and model artifacts.
    • API server hardening: The Kubernetes API server, the control plane’s frontend, is protected with appropriate authentication and authorization.
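
To illustrate the microsegmentation described above, the sketch below uses the Kubernetes Python client to apply a default-deny ingress policy to an AI project and then allow only a gateway namespace to reach the model-serving pods. This is a minimal, hypothetical example: the namespace and label names (ai-team-a, ai-gateway, app: model-server) are placeholders, not part of any Red Hat product configuration.

```python
# Illustrative sketch (hypothetical names): default-deny ingress for an AI project,
# plus an allow rule for the model-serving pods, using the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod

NAMESPACE = "ai-team-a"  # hypothetical AI project

# Deny all ingress by default: an empty pod selector matches every pod in the project,
# and listing "Ingress" with no rules blocks all inbound connections.
deny_all = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace=NAMESPACE),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),
        policy_types=["Ingress"],
    ),
)

# Allow only the (hypothetical) ai-gateway namespace to reach the model server on port 8080.
allow_inference = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-inference-clients", namespace=NAMESPACE),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        namespace_selector=client.V1LabelSelector(
                            match_labels={"kubernetes.io/metadata.name": "ai-gateway"}
                        )
                    )
                ],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
            )
        ],
        policy_types=["Ingress"],
    ),
)

networking = client.NetworkingV1Api()
for policy in (deny_all, allow_inference):
    networking.create_namespaced_network_policy(namespace=NAMESPACE, body=policy)
```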

These foundational security features enable organizations to run diverse AI workloads—from data preprocessing and model training to inference serving—with enhanced protection and isolation.

A platform for what’s next with Red Hat OpenShift AI

Red Hat OpenShift AI includes a curated set of tools and capabilities for data scientists and AI developers, facilitating the rapid development and deployment of AI models. It integrates powerful AI runtimes and frameworks, including efficient large language model (LLM) inference with technologies like vLLM, and it derives its foundational security capabilities and operational consistency from the underlying OpenShift platform.

Red Hat OpenShift AI includes TrustyAI, which brings together open source projects such as lm-evaluation-harness and Guardrails to provide model evaluation, explainability, and safety capabilities.

TrustyAI allows users to benchmark model capabilities across a variety of tasks using tools like lm-eval. The framework assesses key metrics, including:

  • Factual accuracy (TruthfulQA)
  • Tendency to produce toxic content (Toxigen)
  • Gender bias (Winogender)
  • Stereotypical bias (CrowS-Pairs)
  • Agreement with biased assumptions (BBQ-Lite)
  • Sycophancy rate
  • Harmful content detection (MMLU-Harmful)
  • Ethical consistency
  • Compliance with malicious instructions (Safety Prompts)
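
For teams automating these checks, the sketch below shows what a small benchmark run might look like with the upstream lm-evaluation-harness Python API, which TrustyAI builds on. The model path and task names are placeholders, and the available tasks depend on the harness version installed.

```python
# Illustrative sketch: running a small evaluation suite with the upstream
# lm-evaluation-harness (pip install lm-eval). The model path and task names are
# placeholders; available task names depend on the installed harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                         # Hugging Face transformers backend
    model_args="pretrained=my-org/my-fine-tuned-llm",   # hypothetical model
    tasks=["truthfulqa_mc2", "toxigen", "crows_pairs_english"],
    num_fewshot=0,
    batch_size=8,
)

# Each task reports its own metrics (accuracy, toxicity rate, bias scores, ...).
for task, metrics in results["results"].items():
    print(task, metrics)
```
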

It also provides enhanced explainability through fairness metrics and explainable AI algorithms, and enables guardrails for content moderation, bias detection, and adversarial attack mitigation.

An important aspect is the ability to package AI models within Open Container Initiative (OCI)-compliant container images. This approach treats models as standard software artifacts, allowing them to be versioned, scanned, and signed, and managed through established DevSecOps and GitOps workflows, which significantly improves their security posture. The ModelCar OCI architecture facilitates this, making models more consumable by serving platforms like KServe.

Securing the AI software supply chain

The integrity of AI models and applications relies on a secure software supply chain. Red Hat provides tools and methodologies, including Red Hat Trusted Software Supply Chain, to enhance the security of AI assets from creation to deployment:

  • Automated CI/CD: OpenShift Pipelines (Tekton-based) enables automated CI/CD workflows for AI/ML. Tekton Chains can generate SLSA-compliant provenance attestations and digitally sign build artifacts, including container images embedding AI models.
  • Digital signatures and verification: The Sigstore project, with components like Cosign (for signing artifacts), Fulcio (keyless signing with OIDC), and Rekor (transparency log), is central to verifying artifact integrity and authenticity. This verifies that AI models and their dependencies originate from trusted pipelines and have not been tampered with. Red Hat Trusted Artifact Signer facilitates this process (a minimal signing and verification sketch follows this list).
  • Software bill of materials (SBOMs): Generating SBOMs for AI applications provides a detailed inventory of all components, essential for vulnerability management and license compliance.
  • Vulnerability scanning and analysis: Red Hat Advanced Cluster Security for Kubernetes and Red Hat Quay continuously scan container images and SBOMs for known vulnerabilities. Red Hat Trusted Profile Analyzer offers deeper insights, extending beyond CVEs to potential AI safety concerns.
  • Policy enforcement: Kubernetes admission controllers, integrated with Red Hat Advanced Cluster Security or tools like OPA Gatekeeper and Kyverno, enforce policies at deployment time. This can prevent the deployment of unsigned images, images with critical vulnerabilities, or those lacking necessary attestations (including AI model attestations).
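
The sketch below illustrates the signing and verification flow referenced above by shelling out to the Cosign CLI from Python. The image reference and key file paths are hypothetical; in practice this would run inside a pipeline task, and keyless (OIDC-based) signing with Red Hat Trusted Artifact Signer can take the place of long-lived keys.

```python
# Illustrative sketch: signing and verifying a model-bearing container image by
# shelling out to the cosign CLI. The image reference and key paths are hypothetical.
import subprocess

IMAGE = "registry.example.com/ai/fraud-model:1.4.2"  # hypothetical model image

# Sign with a long-lived key pair (cosign also supports keyless OIDC-based signing).
subprocess.run(["cosign", "sign", "--key", "cosign.key", IMAGE], check=True)

# Verification exits non-zero if the signature is missing or the image was altered;
# an admission-time policy can rely on the same check before allowing deployment.
subprocess.run(["cosign", "verify", "--key", "cosign.pub", IMAGE], check=True)
```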

Hardening the AI platform and deployment pipelines

The deployment phase focuses on establishing a more secure footprint for the AI platform itself and admitting only validated AI workloads into the environment:

  • Immutable infrastructure: Red Hat Enterprise Linux CoreOS serves as the immutable, container-optimized host OS for OpenShift nodes, significantly reducing the attack surface with its read-only nature and minimal footprint.
  • Policy-driven configuration management: Red Hat Advanced Cluster Management enforces consistent security configurations and policies across OpenShift clusters, so AI workloads adhere to organizational baselines. Red Hat Advanced Cluster Management can manage cluster compliance with predefined or custom policies.
  • Least privilege enforcement: Strict OpenShift RBAC makes sure users, service accounts (used by AI workloads and MLOps pipelines), and applications have only the minimum necessary permissions. RBAC configurations should be audited regularly (a minimal audit sketch follows this list).
  • Data encryption: Data in transit is secured using OpenShift Service Mesh for mTLS encryption, with platform-level IPSec also available for inter-node traffic. Data at rest benefits from encrypted etcd (the OpenShift control plane’s datastore) and OpenShift Data Foundation capabilities, or Red Hat Enterprise Linux CoreOS network bound disk encryption for node storage.
  • Automated compliance and remediation: The OpenShift compliance operator automates compliance checks against industry benchmarks (e.g., CIS Benchmarks, Essential Eight) and security profiles. It can scan nodes and platform configurations, report non-compliance, and, in some cases, automatically remediate issues. Red Hat Advanced Cluster Security complements this by providing continuous compliance assessment and reporting against various standards.
  • Secure admission control: OpenShift’s SCCs control pod permissions, preventing privileged escalation and restricting access to host resources. Red Hat Advanced Cluster Security can be integrated with Kubernetes admission controllers to enforce policies, such as blocking pods with critical vulnerabilities or those that don’t meet defined configuration standards. OpenShift also supports Common Expression Language (CEL) for admission control, enabling fine-grained, custom policy enforcement directly through the Kubernetes API. Admin Network Policy (ANP) allows for cluster-wide network security policies that can override namespace-scoped policies.
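
As one example of the regular RBAC auditing mentioned above, the sketch below uses the Kubernetes Python client to flag ClusterRoleBindings that grant cluster-admin, a common starting point for least-privilege reviews. It is illustrative only; a real audit would also cover namespace-scoped RoleBindings and custom roles.

```python
# Illustrative sketch: flag ClusterRoleBindings that grant cluster-admin,
# using the Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name == "cluster-admin":
        for subject in binding.subjects or []:
            print(f"{binding.metadata.name}: {subject.kind} {subject.name} has cluster-admin")
```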

Protecting active AI workloads and data

During the runtime phase, the focus shifts to protecting active AI workloads, detecting threats, and maintaining ongoing operational security:

  • Runtime isolation: SELinux (enforced by Red Hat Enterprise Linux CoreOS and OpenShift) provides mandatory access control to further isolate container processes. SCCs and namespaces (projects) continue to provide critical isolation boundaries at runtime. NetworkPolicies enforce microsegmentation to restrict network traffic between AI pods and other services, allowing only explicitly permitted communication paths. ResourceQuotas at the project level prevent AI workloads (which can be resource-intensive, especially during training) from causing resource exhaustion and impacting other applications or cluster stability (a minimal quota sketch follows this list).

  • Secure access management: Integration with enterprise identity providers using Red Hat Single Sign-On (Keycloak-based) provides unified authentication to AI applications and MLOps tooling. AI inference endpoints and MLOps APIs are secured using OpenShift routes/ingress configurations and, for more advanced management, an API gateway like Red Hat Connectivity Link.

  • Continuous monitoring and threat detection: Platform and application monitoring utilize OpenShift monitoring (Prometheus and Grafana) for real-time metrics on AI workload performance and resource utilization. Logs are collected and analyzed using OpenShift logging. Network communication is monitored using Red Hat Advanced Cluster Security to visualize network flows and identify anomalous communication patterns. Red Hat Advanced Cluster Security provides critical runtime security capabilities, including process allowlisting (baselining expected process activity and alerting on deviations), anomaly detection based on behavioral analysis, and policy-based detection of malicious activities.

    Furthermore, continuous monitoring extends to AI safety attributes, leveraging TrustyAI to provide visibility into guardrail performance and detect model accuracy and drift issues.
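
To make the ResourceQuota point above concrete, the sketch below creates a per-project quota that caps CPU, memory, GPU, and pod counts for an AI team's namespace using the Kubernetes Python client. The namespace name and limits are hypothetical placeholders to be tuned per environment.

```python
# Illustrative sketch: cap CPU, memory, GPU, and pod counts for an AI project
# using the Kubernetes Python client. Namespace and limits are hypothetical.
from kubernetes import client, config

config.load_kube_config()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="ai-training-quota", namespace="ai-team-a"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "64",
            "requests.memory": "256Gi",
            "requests.nvidia.com/gpu": "8",  # caps GPU requests across the project
            "pods": "50",
        }
    ),
)

client.CoreV1Api().create_namespaced_resource_quota(namespace="ai-team-a", body=quota)
```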

By applying these layered security controls and DevSecOps principles throughout the lifecycle of your models and applications, your organization can enhance the security posture of its AI workloads on the OpenShift platform.

Final thoughts

Securing AI workloads demands a comprehensive, integrated strategy that addresses the entire lifecycle, from data preparation and model development to deployment and ongoing operations. Red Hat OpenShift—augmented by OpenShift AI, Red Hat Advanced Cluster Security, and other components of a trusted software supply chain—provides the technical capabilities and best practices needed to build, deploy, and manage AI applications with greater confidence. By adopting a layered security approach and embracing DevSecOps principles, organizations can more effectively harness the transformative power of AI while mitigating associated risks in complex hybrid cloud environments.

Resource

Get started with AI Inference

Discover how to build smarter, more efficient AI inference systems. Learn about quantization, sparsity, and advanced techniques like vLLM with Red Hat AI.
