
The open source paradox: Unpacking risk, equity and acceptance


Red Hat Blog

Open source has always been paradoxical: it’s software developed by passionate developers and given away for free, yet it’s monetized and funded by some of the largest companies in the world. An underdog, once called “a cancer,” and yet it’s the single largest driver of innovation and technological progress we have ever seen. In the world of open source, paradox will always exist, but nowhere more so than in the understanding of security vulnerabilities.

Twenty-five years ago, the Common Vulnerabilities and Exposures (CVE) program was established to standardize the naming and tracking of software flaws. In an era where identifying a specific vulnerability was often ambiguous, with multiple issues in common software like sendmail, CVE emerged to bring clarity and organization. While early efforts like Security Focus and Bugtraq existed, MITRE’s CVE provided a much-needed global system. In its first year, 1999, there were 894 vulnerabilities cataloged, highlighting the early need for consistent identification even with a relatively smaller volume. This historical context is crucial for understanding the challenges we face with CVEs today.

The landscape of software vulnerabilities has dramatically shifted over this time. In the program’s first six years, the number of CVEs assigned surged by over 450%, driven by wider adoption. This growth continued exponentially, reaching nearly 15,000 CVEs by 2017, a 125% increase in just two years. By 2023, the volume had climbed another 50% to over 29,000. This explosive growth underscores the increasing complexity of software, the increased availability of software, and more vendors adopting CVE.

The vulnerability landscape continues to expand dramatically. In 2024, assigned CVEs surged by 39% to over 40,000, partly due to the Linux kernel’s CVE Numbering Authority (CNA) status. This growth trajectory could accelerate significantly if other software sectors, like mobile app or game development, begin formally tracking CVEs. This sheer volume necessitates a critical re-evaluation of our vulnerability management strategy.

“Patch everything” is unsustainable

We’ve long maintained that the long-standing “patch everything” mantra, while perhaps manageable in simpler times, is unsustainable and strategically unsound in today’s complex environments. It operates devoid of genuine risk assessment. Not every vulnerability warrants immediate, resource-intensive remediation. Factors like exploitability and potential impact are paramount. Treating every identified flaw identically is akin to recommending aggressive surgery for both benign and malignant growths – it ignores the actual level of threat and the risk inherent in the treatment itself.

The crucial metric for prioritizing action is actual exploitation. Data analysis, leveraging sources like Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog, consistently shows that exploitation rates remain remarkably low – historically well under 0.5% annually. This translates to roughly 1 in 200 vulnerabilities being actively weaponized. Knowing this, we should focus on more pragmatic, risk-based approaches.
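The triage this implies can be sketched in a few lines. This is an illustrative example, not a production tool: CISA publishes the KEV catalog as a JSON feed, but here a tiny inline set of made-up CVE IDs stands in for it so the sketch is self-contained.

```python
# Hypothetical sketch: split a CVE backlog by presence in CISA's KEV catalog.
# The IDs below are illustrative placeholders, not real KEV entries.
kev_catalog = {"CVE-2024-0001", "CVE-2024-0002"}

backlog = ["CVE-2024-0001", "CVE-2024-1111", "CVE-2024-2222", "CVE-2024-0002"]

# Known-exploited issues go to the front of the remediation queue;
# everything else becomes a candidate for risk-based deferral.
urgent = [cve for cve in backlog if cve in kev_catalog]
deferred = [cve for cve in backlog if cve not in kev_catalog]

print(urgent)    # ['CVE-2024-0001', 'CVE-2024-0002']
print(deferred)  # ['CVE-2024-1111', 'CVE-2024-2222']
```

In practice the `kev_catalog` set would be refreshed from CISA's published feed, but the shape of the decision is exactly this: exploitation evidence first, raw counts second.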

Our focus must shift to targeting the vulnerabilities most likely to be exploited that could also cause significant damage – typically those enabling remote, unauthenticated access with high privilege escalation, what we would call Critical or Important. By concentrating remediation efforts on these high risk/high impact vulnerabilities, we maximize risk reduction with available resources. This inevitably means strategically accepting the lower residual risk posed by vulnerabilities unlikely to be targeted or result in material impact if exploited, which are most Low and Moderate issues.
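A crude version of that "high risk/high impact" filter might look like the following. This is a deliberately simplified rule of thumb, not Red Hat's actual severity methodology; the function and its parameters are hypothetical.

```python
# Simplified, illustrative triage rule: flag flaws that are remotely
# reachable, require no authentication, and yield elevated privileges.
# This is NOT a real severity-rating algorithm.
def needs_urgent_fix(attack_vector: str, auth_required: bool, privilege_gain: str) -> bool:
    return (attack_vector == "network"
            and not auth_required
            and privilege_gain in {"high", "root"})

# Remote, unauthenticated root compromise: fix now.
print(needs_urgent_fix("network", False, "root"))  # True
# Local flaw requiring authentication, limited gain: candidate for accepted risk.
print(needs_urgent_fix("local", True, "low"))      # False
```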

Effective risk management isn’t about eliminating all vulnerabilities; it’s about prioritizing those that pose a genuine, probable threat and consciously accepting manageable risk elsewhere.

What does this have to do with open source?

Concerns about unpatched vulnerabilities often center on open source, primarily due to its transparency. We can all see both the code and the CVEs. In contrast, proprietary vendors frequently don’t disclose low-impact flaws they deem unworthy of fixing, creating an opaque risk landscape. A minor flaw generating a public CVE in open source might go unreported, unfixed or be silently patched in proprietary software.

This visibility difference creates a double standard. Policies demanding “no known vulnerabilities” inherently target open source’s transparency, not necessarily its higher risk. Critically, your organization already implicitly accepts the risk of undisclosed minor flaws in the proprietary software you use daily. True risk management requires acknowledging this. We must apply a consistent, explicit risk assessment focused on likely exploitability and impact to all software, rather than penalizing the visibility inherent in open source.

True equity in the vulnerability management space requires us to acknowledge that open source is different: it's infinitely more transparent by design (and that is a good thing!) and it will appear to have more vulnerabilities. Open source requires explicitly accepting the risk that we implicitly accept in proprietary software.

And there is some interesting data to back this up. Comparing the data from Red Hat's 2024 Risk Report to that of a well-known large proprietary software vendor yields insights that support this hypothesis. Unless the proprietary vendor only creates bugs that are highly impactful (read: easily exploitable), we would expect to see a similar distribution of Critical and Important to Moderate and Low issues. In other words, unless the security bugs they introduce are always big, bad and ugly, you'd expect to see a higher number of less impactful security bugs. But the numbers show an almost insignificant number of low-severity issues. Incidentally, this vendor uses the same four-point severity rating scale that Red Hat does.

Vulnerability reporting metrics can paint a deceptive picture. The higher volume of lower-severity CVEs reported transparently by open source projects (92% of Red Hat's vulnerabilities were Moderate and Low in 2024) compared to this proprietary vendor (with 5.5% rated Moderate and Low in 2024) doesn't actually illustrate relative risk; it primarily reflects differing disclosure philosophies.

Instead of chasing sheer volume or severity ratings, it’s better to focus on the critical factor: actual exploitation. In 2024, only 0.26% of open source vulnerabilities (11 from over 4,200) that affected Red Hat software were known to have been exploited on any platform in real world situations. Prioritizing based solely on counts leads to significant resource drain on issues posing minimal practical threat.
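The arithmetic behind that figure is worth making explicit, since it anchors the whole argument:

```python
# Exploitation rate from the article's 2024 numbers:
# 11 known-exploited issues out of just over 4,200 CVEs affecting Red Hat software.
exploited, total = 11, 4200
rate = exploited / total
print(f"{rate:.2%}")  # 0.26%
```

Roughly 1 in 380 for this data set, in the same ballpark as the historical "well under 0.5%" exploitation rate noted earlier.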

Both proprietary and open source vendors inherently prioritize fixing what matters most. The key difference is often transparency about the unpatched, lower-impact vulnerabilities. We already implicitly accept this residual risk from closed-source software; it’s time we apply this same pragmatic, risk-based assessment explicitly across all software, including open source. Philosophically, we all align: fix the things that matter, deprioritize the things that don’t.

As security leaders, we believe you should steer your program away from simply counting vulnerabilities. Champion a strategy centered on exploitability intelligence and potential business impact. Implement processes that rigorously prioritize the fraction of threats likely to be weaponized, and foster a culture that understands and consciously accepts manageable, residual risk. Direct your resources to mitigating the threats that truly endanger your organization.

If you want to dive deeper, I spoke on this topic at OpenSSF's SOSS/Fusion conference last year using 2023 data, and again this year at VulnCon 2025 with updated data reflecting 2024.
