Security beyond the model: Introducing AI system cards
AI is one of the most significant innovations to emerge in the last 5 years. Generative AI (gen AI) models are now smaller, faster, and cheaper to run. They can solve mathematical problems, analyze situations, and even reason about cause‑and‑effect relationships to generate insights that once required human expertise.
On its own, an AI model is merely a set of trained weights and mathematical operations, an impressive engine, but one sitting idle on a test bench. Business value only emerges when that model is embedded within a complete AI system: data pipelines feed it clean, context‑rich inputs; application logic orchestrates pre‑ and post‑processing; guardrails and monitoring enforce safety, security, and compliance; and user interfaces deliver insights through chatbots, dashboards, or automated actions. In practice, end users engage with systems, not raw models, which is why a single foundational model can power hundreds of tailored solutions across domains. Without the surrounding infrastructure of an AI system, even the most advanced model remains untapped potential rather than a tool that solves real‑world problems.
What are AI model cards?
AI model cards are files that accompany and describe the model, helping AI system developers make informed decisions about which model to choose for their applications. Model cards present a concise, standardized snapshot of each model’s strengths, limitations, and training information, summarizing performance metrics across key benchmarks, detailing the data and methodology used for training and evaluation, highlighting known biases and failure modes, and spelling out licensing terms and governance contacts. With this information in one place, it’s easier to assess whether a model aligns with accuracy targets, fairness requirements, deployment constraints, and compliance obligations, reducing integration risk and accelerating responsible adoption.
Introducing AI system cards
In November 2024, we authored a paper addressing the rapidly evolving ecosystem of publicly available AI models and their potential implications for security and safety. In that paper, we proposed standardizing model cards and extending them to include safety, security, and data governance and pedigree information.
Today, we extend this work and introduce AI system cards. An AI system card contains information about how a particular AI system is built: its architecture and components, including the models used by the system and the data used to train and augment those models. More importantly, the system card contains security and safety information about the AI system. This includes the intent and scope of the system's security and safety posture, and a link to the security and safety issues that have been fixed, along with when they occurred. Similar to reading a label before buying a product, end users can read the system card before deciding to buy, subscribe to, or even use the services of that AI system.
AI system cards embody the transparency ethos that drives open source software. By openly documenting each deployment (architecture diagrams, constituent models, training and augmentation data sources, evaluation benchmarks, and a changelog of security and safety fixes), they invite the broader community to inspect, audit, and improve the stack just as they would review code on GitHub. Additionally, open licensing, such as CC BY 4.0, and a standard schema make these cards remixable across tooling, enabling automated policy checks and side‑by‑side comparisons of competing systems.
This radical visibility helps to lower the barrier to independent verification, accelerates collaborative hardening against novel threats, and helps users make informed choices grounded in objective facts rather than marketing claims—precisely the trust‑through‑transparency model that has made open source ecosystems thrive. As this ecosystem grows, we also envision deployment and operations tooling that can both generate and consume system cards as part of real-time pipelines and governance workflows.
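To make the idea of a machine-readable card and an automated policy check concrete, here is a minimal sketch in Python. The field names (`intent`, `models`, `data_sources`, `license`, `security_fixes`) and the `policy_check` rules are illustrative assumptions, not a published schema:

```python
# Hypothetical sketch of an AI system card as machine-readable data,
# plus a simple automated policy check that consumes it.
from dataclasses import dataclass, field


@dataclass
class SystemCard:
    """A minimal, hypothetical AI system card (illustrative fields only)."""
    name: str
    intent: str            # purpose and scope of the system
    models: list           # constituent models
    data_sources: list     # training and augmentation data
    license: str           # card license, e.g. "CC BY 4.0"
    security_fixes: list = field(default_factory=list)  # changelog entries


def policy_check(card: SystemCard, required_license: str = "CC BY 4.0") -> list:
    """Return a list of policy violations; an empty list means the card passes."""
    problems = []
    if card.license != required_license:
        problems.append(f"license {card.license!r} != {required_license!r}")
    if not card.models:
        problems.append("no constituent models documented")
    if not card.data_sources:
        problems.append("no data sources documented")
    return problems


card = SystemCard(
    name="example-chatbot",
    intent="answer product questions for subscribers",
    models=["example-llm-7b"],
    data_sources=["product documentation"],
    license="CC BY 4.0",
)
print(policy_check(card))  # [] -> the card passes this minimal check
```

With a shared schema, checks like this could run automatically in deployment pipelines, flagging systems whose cards are incomplete or out of policy before they reach users.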
Looking forward
While the concept of documenting AI systems is not entirely new, we recognize that multiple efforts are underway across the industry to define what such transparency should look like. We expect the format and surrounding ecosystem will evolve rapidly, and we encourage open collaboration toward establishing a common, interoperable, and machine-readable standard that can be broadly adopted.
Demonstrating our commitment to transparency and responsible AI development, we are introducing the AI system card for the recently released “Ask Red Hat” conversational chatbot, which can be accessed by Red Hat subscribers. This system card captures essential details about how the AI system has been built, including its core components and data sources. It also clearly articulates the system’s intent and scope, offering stakeholders a concise view into its purpose, boundaries, and trust posture.
We see this as an important step toward building AI systems that are not only powerful, but also more explainable, auditable, and aligned with user expectations. We invite the broader community to engage with this initiative and help shape a more transparent, secure, and accountable future for AI.
Learn more
- Red Hat AI
- AI on the Red Hat Blog