9 strategic articles defining the open hybrid cloud and AI future
In this October roundup, we cut through the noise to focus on the essential technical blueprints and policy foundations required to succeed. These articles, from key platform updates and critical security integrations to the future of open source legality, represent the core strategic reading for Q4. We highlight how Red Hat Ansible Automation Platform 2.6 streamlines operations, how Red Hat AI 3 and its intelligent control plane transform GPU infrastructure, and how our strategic partnership with NVIDIA simplifies the AI software stack. This is the quarter for planning that prepares your organization not just for the next fiscal year, but for the next technological decade.
- What’s new in Red Hat Ansible Automation Platform 2.6
Ansible Automation Platform 2.6 is now generally available, providing new features and platform enhancements designed to help teams build resilient, trusted foundations for IT operations. Headlining the release are 3 new features that advance key outcomes: the automation dashboard to unlock value through measurement and reporting; the Red Hat Ansible Lightspeed intelligent assistant to operate more efficiently using gen AI; and the self-service automation portal to achieve new levels of scale across the enterprise. This release introduces a streamlined experience, significant architecture improvements, and easier access to opinionated reference architectures. It is also important to note that Ansible Automation Platform 2.6 is the last release supporting RPM-based installation, with future versions moving exclusively to containerized installations.
- Red Hat AI 3 delivers speed, accelerated delivery, and scale
Red Hat AI 3, generally available in November, delivers production-ready capabilities across the AI portfolio for greater enterprise efficiency and scale. The release focuses on delivering speed and predictable scale for gen AI applications, primarily through SLA-aware inference capabilities. Key features include the generally available llm-d for reliably scaling Large Language Models (LLMs) and support for the emerging Model Context Protocol (MCP) and Llama Stack API (in Developer/Technical Preview) to accelerate agentic AI development. The platform also offers an extensible toolkit for model customization, enhanced RAG (Retrieval-Augmented Generation) capabilities, and intelligent GPU-as-a-Service (GPUaaS) features for maximizing hardware efficiency across the hybrid cloud.
- Red Hat to distribute NVIDIA CUDA across Red Hat AI, RHEL, and OpenShift
Red Hat has formalized a major collaboration with NVIDIA to distribute the NVIDIA CUDA Toolkit directly across its portfolio, including Red Hat Enterprise Linux (RHEL), Red Hat OpenShift, and Red Hat AI. This agreement directly addresses operational complexity, a significant barrier to enterprise AI adoption, by allowing developers and IT teams to access the essential tools for GPU-accelerated computing from a single, trusted source. The goal is to provide a simplified, consistently supported environment for AI workloads regardless of deployment location: on-premises, public cloud, or the edge. This new level of integration simplifies the developer experience, provides operational consistency, and sets the foundation for future innovations with NVIDIA hardware and software.
- What to know before you install or upgrade to Red Hat Ansible Automation Platform 2.6
This guide provides critical information for deploying or upgrading to Ansible Automation Platform 2.6, focusing on recommended and deprecated installation methods. For new installations, the containerized installation on RHEL and the operator-based installation on Red Hat OpenShift Container Platform are the recommended paths. The RPM-based install is now deprecated and will be removed in the upcoming 2.7 release, making container-based methods the standard going forward. Existing users can upgrade directly from Ansible Automation Platform 2.4 and 2.5, but those currently on RHEL 8 or RPM-based installs must first migrate to a supported RHEL version (RHEL 9 or 10) or to a containerized deployment before upgrading. PostgreSQL 15, 16, or 17 is a strict requirement for a successful upgrade.
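The version gates above can be summarized in a short preflight sketch. This is purely illustrative: the supported-version values come from the release notes, but the helper names and structure are ours, not the installer's actual checks.

```python
# Hypothetical preflight sketch for an AAP 2.6 upgrade. The version
# requirements mirror the documented ones; everything else is illustrative.

SUPPORTED_PG_MAJORS = {15, 16, 17}   # strict requirement for the upgrade
SUPPORTED_RHEL_MAJORS = {9, 10}      # RHEL 8 hosts must migrate first

def upgrade_blockers(pg_version: str, rhel_major: int, install_method: str) -> list[str]:
    """Return human-readable blockers; an empty list means clear to upgrade."""
    blockers = []
    pg_major = int(pg_version.split(".")[0])
    if pg_major not in SUPPORTED_PG_MAJORS:
        blockers.append(f"PostgreSQL {pg_version} unsupported; need 15, 16, or 17")
    if rhel_major not in SUPPORTED_RHEL_MAJORS:
        blockers.append(f"RHEL {rhel_major} unsupported; migrate to RHEL 9 or 10")
    if install_method == "rpm":
        blockers.append("RPM-based install is deprecated; move to containerized")
    return blockers
```

Running such a check before scheduling the maintenance window makes the RHEL and PostgreSQL migrations explicit prerequisites rather than surprises mid-upgrade.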
- Announcing Fedora 43
The Fedora Project has announced the general availability of Fedora Linux 43, delivering major updates to its free and open source operating system. This release brings an enhanced focus on security with RPM 6.0, which now supports OpenPGP v6 keys and multiple package signatures. Key changes to the distribution include updates to the Anaconda installer (now using DNF 5) and the default use of the Anaconda WebUI for Fedora Spins. Fedora Workstation 43 features GNOME 49 and is now entirely Wayland-only, preparing for the upcoming removal of X11 support in GNOME 50. Additionally, Fedora CoreOS is now buildable using a Containerfile from the Fedora bootc image, simplifying the build process with Podman, and Kinoite introduces unattended, background updates via Plasma Discover.
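The Containerfile-based CoreOS build mentioned above can be very short. The sketch below is illustrative (the base image tag and the packages layered in are examples, not a prescribed configuration):

```dockerfile
# Minimal sketch of a derived image built from the Fedora bootc base.
FROM quay.io/fedora/fedora-bootc:43

# Layer in extra packages exactly as in any ordinary container build
RUN dnf install -y tmux vim-enhanced && dnf clean all
```

From there it is a standard container workflow, for example `podman build -t my-custom-os .`, which is what makes the new build process so approachable.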
- Open source and AI-assisted development: navigating the legal issues
This piece explores the primary legal and quasi-legal concerns being debated within open source communities regarding the use of AI tools in software development. Red Hat advocates for a responsible and transparent approach to ensure AI use is reconciled with open source values, reflecting a “default to open” philosophy. Core community issues include: attribution, where marking substantial AI-assisted contributions (e.g., using an Assisted-by: commit trailer) is recommended to preserve trust and legal clarity, as opposed to requiring disclosure for trivial uses; clarity on licensing formalities, where existing license grants apply to human-authored content since AI-generated material is typically noncopyrightable; and the concern that AI models are “plagiarism machines,” a risk that experience suggests is not systematic and can be mitigated through disclosure and human oversight. Finally, the authors affirm that the Developer Certificate of Origin (DCO) remains a practical tool for maintaining trust and legal clarity, even with AI-assisted contributions, provided human responsibility and due diligence are applied.
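As a concrete illustration, a commit carrying such a trailer might look like the following (the exact trailer wording, and whether one is required at all, varies by project; this example is hypothetical):

```
Add retry logic to the connection pool

The backoff implementation was drafted with an AI coding assistant,
then reviewed, tested, and adapted by the author.

Assisted-by: AI coding assistant
Signed-off-by: Jane Developer <jane@example.com>
```

The Signed-off-by line is the DCO attestation; the Assisted-by trailer simply records, for future maintainers, that substantial AI assistance was involved.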
- From tokens to caches: How llm-d improves LLM observability in Red Hat OpenShift AI 3.0
As enterprises scale LLMs, traditional metrics are insufficient; reliability is now defined by factors like Time to First Token (TTFT), Time per Output Token (TPOT), and cache efficiency. This article explores how llm-d, an open source, Kubernetes-native project integrated into Red Hat OpenShift AI 3.0, solves this observability gap. llm-d disaggregates inference into composable services (like the Endpoint Picker for cache-aware routing) and exposes LLM-specific metrics—including cache hit ratios and token-level latencies—via Prometheus and OpenTelemetry. This deep visibility allows site reliability engineers (SREs) and platform operators to move beyond guesswork, quickly diagnose performance bottlenecks (routing, caching, or GPU saturation), and confidently meet demanding AI Service Level Objectives (SLOs) at production scale.
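To make the two headline metrics concrete, here is a small sketch of what TTFT and TPOT mean when computed from per-token timestamps. llm-d exposes these via Prometheus; this snippet only illustrates the definitions, not the project's implementation.

```python
# Illustrative definitions of Time to First Token (TTFT) and mean
# Time per Output Token (TPOT) from raw per-token arrival timestamps.

def ttft_and_tpot(request_start: float, token_times: list[float]) -> tuple[float, float]:
    """Return (TTFT, mean TPOT) in seconds for one streamed response."""
    ttft = token_times[0] - request_start
    if len(token_times) > 1:
        # TPOT averages the gaps between successive tokens after the first
        tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    else:
        tpot = 0.0
    return ttft, tpot
```

A request that waits half a second for its first token but then streams steadily has a very different user experience from one with the same total latency concentrated at the start, which is exactly why the two metrics are tracked separately.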
- Beyond the model: Why intelligent infrastructure is the next AI frontier
The critical challenge in scaling LLMs is the leap from a proof of concept (PoC) on a single server to production-grade, distributed inference, which traditional infrastructure cannot handle efficiently. This piece argues that the solution is intelligent, AI-aware infrastructure—a specialized control plane designed to manage the unpredictable and resource-intensive nature of AI workloads. The open source llm-d project, co-initiated by Red Hat and IBM Research (with partners including Google and NVIDIA), is developing this control plane. llm-d enhances Kubernetes by introducing features like semantic routing (i.e., using real-time data to route requests optimally) and workload disaggregation (i.e., separating compute-heavy prefill from memory-intensive decode) to maximize throughput, meet SLOs, and ensure efficient use of heterogeneous hardware across the open hybrid cloud.
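The idea behind cache-aware routing can be sketched in a few lines. This is a toy model in the spirit of llm-d's Endpoint Picker, with invented data shapes and scoring; the real component consumes live telemetry rather than static fields.

```python
# Toy cache-aware endpoint picker: prefer replicas whose KV cache already
# holds the prompt prefix, then break ties by the shortest request queue.
from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    queue_depth: int                                        # load signal
    cached_prefixes: set[str] = field(default_factory=set)  # KV-cache contents

def pick_endpoint(prompt_prefix: str, replicas: list[Replica]) -> Replica:
    def score(r: Replica) -> tuple[int, int]:
        cache_hit = 1 if prompt_prefix in r.cached_prefixes else 0
        return (-cache_hit, r.queue_depth)  # hits sort first, then least loaded
    return min(replicas, key=score)
```

Even this toy version shows why routing must be AI-aware: sending a request to a replica that already holds the prompt's KV cache skips recomputing the prefill, which a generic round-robin load balancer cannot know to do.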
- Red Hat OpenStack VMware Migration toolkit deep-dive
The Red Hat OpenStack VMware Migration toolkit provides an Ansible collection designed to simplify and automate the transition of VMware virtual machine (VM) workloads to Red Hat OpenStack Services on OpenShift. The toolkit can significantly reduce the complexity and downtime of migrations by offering features like discovery mode, network mapping, and warm VM migration support. A key technical advantage is its use of OpenStack APIs and Changed Block Tracking (CBT), which allows for efficient incremental synchronization of only modified data blocks, thus minimizing service disruption. The entire migration is fully automated via Ansible Playbooks, taking advantage of Ansible Automation Platform for scalability, and utilizes a Conversion Host within the target environment to optimize data transfer directly from vCenter.
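The CBT-driven incremental sync described above can be modeled simply: after an initial full copy, each pass transfers only the blocks the hypervisor reports as changed. The sketch below is a toy model; the names and data structures are illustrative, not the toolkit's actual API.

```python
# Toy model of Changed Block Tracking (CBT)-style incremental sync: disks
# are maps from block ID to block data, and only changed blocks move.

def incremental_sync(target: dict[int, bytes],
                     source: dict[int, bytes],
                     changed_blocks: set[int]) -> int:
    """Copy only the changed blocks from source to target; return bytes moved."""
    moved = 0
    for block_id in changed_blocks:
        target[block_id] = source[block_id]
        moved += len(source[block_id])
    return moved
```

Because the final cutover only needs to move the blocks modified since the last pass, the VM's downtime window shrinks from "copy the whole disk" to "copy the last delta," which is the core of warm migration.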
**What’s next?**
The collection of content above demonstrates that the path forward isn’t dictated by single-vendor solutions or walled gardens, but by informed choice and open collaboration. Whether you’re planning a massive virtualization migration with the OpenStack toolkit, debating the ethics of AI code generation in your community projects, or simply upgrading your automation to version 2.6, every decision reinforces your future architecture. Red Hat remains committed to providing the flexible, transparent foundation, from the Linux kernel up through the AI control plane, that allows you to integrate the industry’s best hardware (like NVIDIA’s CUDA) and open source innovations. We encourage you to use these insights as your guide, ensuring every modernization step you take builds a platform that is ready for any workload, anywhere.