Everyone in IT operations now knows what Log4J is, whether they wanted to or not.
Most risk managers will know of SolarWinds (which, as a reminder, is the name of a company—not a vulnerability).
But only some may be aware of attacks on open-source packages, such as typosquatting or the so-called “protestware” that can potentially wipe the hard drive of any server located in specific geographic regions.
What do these have in common, and more importantly, what should the GRC community pay attention to in order to manage and avoid similar risks?
These recent events all fall under the broader topic of software supply chain security, which brings into focus our dependence on the varied patchwork of software components upon which modern businesses are built.
Each of these incidents (and many more like them) is the consequence of a malicious party interfering with software as it is being created and released. But managing risk in the software supply chain is not, by its very nature, something that applies only to software developers.
Readers may have heard the adage “everyone is part of the supply chain—and most people are in the middle.” This illustrates the key point that enterprises (and vendor program managers, in particular) must pay close attention to the software that they receive, where it comes from, and how it has been built – as well as what goes into services they deliver to end customers.
Let’s take a closer look at three areas that are critical to managing software supply chain risk: secure software development environments, trusted component selection, and ongoing monitoring and remediation capability.
Once we decompose the software supply chain security challenge into these subtopics, we can relate them back to controls that we already know well, and link to some newer frameworks that help evaluate an organization’s capability.
Let’s start with secure software development environments, an area that is often overlooked because build environments are not production systems and do not host customer data. Sometimes, software builds may even reside on forgotten-about infrastructure within the depths of ‘shadow IT.’ But this need not be the case – the technical capabilities to require multi-factor authentication for code commits, enforce strong audit controls on every activity, and perform secondary review on changes are already well-known principles in transactional business systems and supported features of many of the tools in use.
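To make this concrete, here is a minimal sketch of how the “secondary review” control might be audited programmatically, assuming GitHub-hosted repositories and a token with permission to read repository settings; the repository list and environment variable are placeholders rather than a recommendation of any particular toolchain:

```python
# Minimal sketch: audit whether repositories enforce secondary review on changes.
# Assumes GitHub-hosted source control; endpoint and field names follow the public
# GitHub REST API at the time of writing.
import os
import requests

GITHUB_API = "https://api.github.com"
TOKEN = os.environ["GITHUB_TOKEN"]  # hypothetical environment variable holding a read-only token
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

def review_policy(owner: str, repo: str, branch: str = "main") -> str:
    """Report whether the given branch requires pull-request review before merge."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/branches/{branch}/protection"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    if resp.status_code == 404:
        return f"{owner}/{repo}: no branch protection configured"
    resp.raise_for_status()
    reviews = resp.json().get("required_pull_request_reviews") or {}
    count = reviews.get("required_approving_review_count", 0)
    return f"{owner}/{repo}: {count} approving review(s) required"

if __name__ == "__main__":
    # Hypothetical repository list; in practice this would come from an asset inventory.
    for owner, repo in [("example-org", "payments-service")]:
        print(review_policy(owner, repo))
```

The same idea applies to any source control platform that exposes its protection settings through an API.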
Taking a step further, some development groups adopt a ‘zero trust’ mindset for their software factories, doing away with dedicated build servers entirely and running everything as ephemeral resources that are spun up on the fly and disposed of when the task is complete. Frameworks such as SLSA, which originated as an internal project at Google and is now publicly available, help navigate the range of options.
The most robust secure pipeline setups give us complete traceability of who did what, when, where, and how – and automatically capture records of these actions within distributed systems to provide a single source of truth.
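As one illustration of what such a record can look like, the sketch below inspects a build provenance attestation to answer “who built this artifact, and from what?” It assumes a SLSA v0.2-style in-toto statement saved to a local file; later SLSA versions arrange the fields slightly differently, so treat the paths as illustrative.

```python
# Minimal sketch: summarize a build provenance attestation (SLSA v0.2-style in-toto statement).
import json

def summarize_provenance(path: str) -> None:
    with open(path) as f:
        statement = json.load(f)

    predicate = statement.get("predicate", {})
    builder_id = predicate.get("builder", {}).get("id", "<unknown builder>")
    build_type = predicate.get("buildType", "<unknown build type>")

    print(f"Builder:    {builder_id}")
    print(f"Build type: {build_type}")
    for subject in statement.get("subject", []):
        sha256 = subject.get("digest", {}).get("sha256", "<no digest>")
        print(f"Artifact:   {subject.get('name')} sha256:{sha256}")

if __name__ == "__main__":
    summarize_provenance("provenance.json")  # hypothetical file produced by the build pipeline
```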
Understanding the origin of software during managed intake of components is very similar to the vetting process already used in vendor management today, with some differences where open source is concerned. The good news here is that efforts are underway to proactively evaluate and score the practices used by the top open-source projects, to ensure that the most critical packages follow the kind of change management rigor we would expect from a professional development organization. In this regard, the OpenSSF Scorecard is a valuable open-source community resource.
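For teams that want to fold these scores into an intake workflow, a sketch along the following lines could query the public Scorecard API for a candidate dependency; the endpoint shape and the minimum-score threshold are assumptions for illustration, not part of the Scorecard project’s guidance:

```python
# Minimal sketch: pull the OpenSSF Scorecard result for a dependency before approving its intake.
# Assumes the public Scorecard API at api.securityscorecards.dev; threshold is a hypothetical
# policy value set by the vendor-management team.
import requests

SCORECARD_API = "https://api.securityscorecards.dev/projects"
MIN_SCORE = 7.0  # hypothetical intake threshold

def check_scorecard(repo: str) -> bool:
    """repo is of the form 'github.com/owner/name'."""
    resp = requests.get(f"{SCORECARD_API}/{repo}", timeout=10)
    resp.raise_for_status()
    result = resp.json()
    score = result.get("score", 0.0)
    print(f"{repo}: aggregate score {score}")
    for check in result.get("checks", []):
        print(f"  {check.get('name')}: {check.get('score')}")
    return score >= MIN_SCORE

if __name__ == "__main__":
    if not check_scorecard("github.com/ossf/scorecard"):
        print("Below intake threshold - route to manual review")
```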
In the future, software components will come with a Software Bill of Materials (SBOM). An SBOM serves as an inventory of all the software packages included, allowing correlation with vulnerabilities and other attestations. On top of the SBOM sit Vulnerability Exploitability eXchange (VEX) information, which describes the exploitability and remediation status of vulnerabilities found in the software, and Vulnerability Disclosure Reports (VDR), which describe what testing was performed. When combined with other operational risk indicators, these allow for much more informed decisions on component usage.
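As a simple illustration of why an SBOM is useful in practice, the sketch below reads a CycloneDX-format SBOM and flags components that appear in a vulnerability feed keyed by package URL; the feed itself is hypothetical and would normally come from a scanner, with VEX data then indicating whether each finding is actually exploitable in context.

```python
# Minimal sketch: correlate a CycloneDX SBOM (sbom.json) with known vulnerabilities.
# The vulnerability feed below is hypothetical and keyed by package URL (purl).
import json

def affected_components(sbom_path: str, known_vulns: dict[str, list[str]]) -> None:
    with open(sbom_path) as f:
        sbom = json.load(f)

    for component in sbom.get("components", []):
        purl = component.get("purl", "")
        cves = known_vulns.get(purl, [])
        if cves:
            print(f"{component.get('name')} {component.get('version')}: {', '.join(cves)}")

if __name__ == "__main__":
    feed = {"pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1": ["CVE-2021-44228"]}
    affected_components("sbom.json", feed)
```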
Finally – and probably the most crucial area when we consider the experiences of Log4J and others – how well equipped is the organization to respond? The field of software development has spent the last couple of decades becoming more and more agile – integrating changes more rapidly and accelerating the pace of releases. This works in the security team’s favor when it comes to releasing patches too: first, because the necessary security checks must be fully automated (the world of DevSecOps teaches us this), including policy-based selection of components; and second, because rolling out a patch should be a lot easier when we are accustomed to shorter cycle times. For a frame of reference in this domain, the DevOps community gives us the DORA metrics, intended to measure the maturity of DevOps practices.
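As a rough illustration, two of the DORA metrics – deployment frequency and lead time for changes – can be computed from nothing more than deployment records; the record format below is a hypothetical export from a CI/CD system, since DORA defines the metrics rather than any particular schema.

```python
# Minimal sketch: compute deployment frequency and lead time for changes from deployment records.
from datetime import datetime
from statistics import median

deployments = [
    # (commit timestamp, deploy timestamp) - illustrative data only
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 15, 0)),
    (datetime(2024, 3, 3, 11, 0), datetime(2024, 3, 4, 10, 0)),
    (datetime(2024, 3, 7, 14, 0), datetime(2024, 3, 7, 16, 30)),
]

lead_times_hours = [(deploy - commit).total_seconds() / 3600 for commit, deploy in deployments]
days_observed = (deployments[-1][1] - deployments[0][1]).days or 1

print(f"Deployments per day:      {len(deployments) / days_observed:.2f}")
print(f"Median lead time (hours): {median(lead_times_hours):.1f}")
```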
Putting this all together hopefully clarifies that managing software supply chain risk is not necessarily something completely new – but a case of tying up loose ends in areas that we already know well.