There's a category of security failure that gets less attention than it deserves, partly because it tends to be quiet, and partly because the organisations affected can argue — with some technical justification — that they did nothing wrong. The failure mode is inherited trust: the security posture you implicitly accept every time you pull in a dependency, run a build pipeline, or deploy software that relies on components you didn't write and have never audited.

The XZ Utils backdoor, the SolarWinds compromise, the npm package hijacking campaigns — these events are the visible edge of a much broader structural problem. For every incident that makes headlines, there are dozens of quiet failures: outdated packages with known vulnerabilities running in production, build pipelines with write access to source repositories, open-source maintainers with commit rights to widely-deployed code who haven't been heard from in two years. The failure mode isn't dramatic. It accumulates.

What Inherited Trust Actually Means

When you include a dependency in your software, you are not merely including code. You are inheriting the entire trust chain of that code — its maintainers, their operational security practices, their personal threat surface, the infrastructure their project depends on, and every transitive dependency they've pulled in. This inheritance happens without a handshake, without diligence, and usually without awareness.

The typical enterprise application has hundreds of direct dependencies. Each of those dependencies has its own dependency tree. The total count of packages a single application trusts implicitly — transitively — often runs into the thousands. You have, in effect, granted implicit trust to code written by people you've never heard of, maintained on infrastructure you've never examined, potentially controlled by parties whose interests may not align with yours.
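Making that count concrete is straightforward wherever a lockfile exists. A minimal sketch for npm's lockfile format (v2/v3), where every entry under a node_modules path is a concrete package the build will install, direct or transitive:

```python
import json

def count_locked_packages(lock_path: str) -> int:
    """Count every resolved package in an npm lockfile (v2/v3 format).

    Each entry under "packages" whose key contains "node_modules" is a
    package your build will actually install -- direct or transitive.
    """
    with open(lock_path) as f:
        lock = json.load(f)
    return sum(1 for key in lock.get("packages", {}) if "node_modules" in key)
```

Run against a real enterprise lockfile, this number is usually an order of magnitude larger than the dependency list anyone on the team could recite from memory.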

This isn't a new observation. Ken Thompson described the core problem in "Reflections on Trusting Trust" back in 1984, and the security community has been discussing supply chain risk ever since. What's changed is the scale, the speed at which dependencies are introduced, and the growing sophistication of adversaries who've noticed that the software supply chain is a high-leverage attack surface.

[GABRIEL TO REVIEW] — Add a specific statistic or data point here about average transitive dependency counts in enterprise software, or a concrete example from your experience of auditing a codebase and finding the actual dependency tree size.

The Trust Model Nobody Designed

Here's the thing that bothers me most about inherited trust: it's not a deliberate design choice. It's an accidental architecture that emerged from a combination of developer productivity norms, package registry design, and the social dynamics of open-source communities.

The de facto trust model for most software supply chains works like this: if a package is available on a public registry (npm, PyPI, Maven Central, etc.) and has enough downloads to seem legitimate, developers treat it as safe. The download count functions as a proxy for trust. So does the presence of stars on GitHub, or the fact that a known company appears to use it. None of these signals are actually evidence of security. They're popularity metrics.

This is the same failure mode as security questionnaires in TPRM — trusting signals that measure something adjacent to security rather than security itself. A package can be widely downloaded, actively maintained, and thoroughly backdoored. SolarWinds had an impressive customer list right up until it didn't.

The Maintainer Problem

Open-source maintainers are the unacknowledged infrastructure of the modern software supply chain. Many are individuals working on widely-deployed software in their spare time, with no meaningful security support, no incident response capability, and no consistent security review process. The trust organisations place in their code is wildly disproportionate to the resources those maintainers have to defend it.

The XZ Utils case illustrated this starkly. A sophisticated, patient actor spent nearly two years building a trusted identity as a maintainer of a widely-used compression library, gradually accumulating commit rights, and ultimately inserting a backdoor that would have affected a significant portion of Linux systems worldwide if it hadn't been caught by an accident of performance profiling. The attack surface wasn't a technical vulnerability in the traditional sense — it was the social and operational structure of open-source maintenance itself.

[GABRIEL TO REVIEW] — Add your own analysis of the XZ Utils case or a comparable incident here. What specifically about the attack methodology do you think is underappreciated? Are there similar patterns you've observed in your work?

Build Pipelines as Attack Surface

If the dependency problem is about what code runs in production, the build pipeline problem is about who has write access to the path that gets it there. CI/CD pipelines have become one of the most consequential and least scrutinised parts of the modern software supply chain.

A typical build pipeline touches an enormous amount of sensitive infrastructure: source repositories, signing keys, deployment credentials, cloud provider APIs, artifact registries. Compromise the pipeline and you don't need to compromise the application code — you can inject malicious behaviour upstream and have it deployed as a trusted artefact, signed with legitimate credentials, through legitimate channels.

The Codecov breach of 2021 is the canonical example. An attacker modified a bash uploader script used by thousands of organisations in their CI pipelines, exfiltrating environment variables — including credentials — from every build that ran it. The attack was low-noise, affected a huge number of organisations, and demonstrated exactly why build pipeline security should be treated as a first-class concern rather than an afterthought.

Most organisations treat their CI/CD configuration the way they treat their .bashrc: something that grew organically over time, nobody fully understands, and everyone is slightly nervous to touch. The security properties of that configuration are rarely formally reviewed. The principle of least privilege is rarely applied. The blast radius of a pipeline compromise is rarely modelled.

[GABRIEL TO REVIEW] — Add specific observations from pipeline security reviews you've conducted. What are the most common misconfigurations you've found? What's the gap between how teams think their pipeline is configured and how it actually is?

The SBOM Promise and Its Limits

Software Bill of Materials documents — SBOMs — have emerged as the primary policy response to software supply chain risk. The logic is sensible: if organisations don't know what's in their software, they can't manage the risk. An SBOM provides an inventory of components, versions, and licences, creating the foundation for vulnerability tracking and risk assessment.
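For a sense of what that inventory looks like in practice, here is a sketch of reading component names and versions out of a CycloneDX JSON SBOM (real SBOMs carry many more fields: licences, hashes, supplier data):

```python
import json

def inventory(sbom_json: str) -> list[tuple[str, str]]:
    """List (name, version) pairs from a CycloneDX JSON SBOM.

    CycloneDX stores components in a top-level "components" array;
    each entry carries at least a name and usually a version.
    """
    doc = json.loads(sbom_json)
    return [(c.get("name", "?"), c.get("version", "?"))
            for c in doc.get("components", [])]
```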

The US Executive Order 14028 mandated SBOM adoption for federal software suppliers. The NTIA published minimum element standards. The tooling ecosystem has matured considerably. SBOMs are, by most measures, a genuine improvement on the previous state of affairs, which was "nobody knows what's in this software."

But SBOMs have limits that are worth being clear-eyed about. A complete, accurate SBOM tells you what components are present. It doesn't tell you how they're used, how they're configured, what permissions they have at runtime, or whether they contain malicious code that hasn't been detected yet. Vulnerability scanning against an SBOM is valuable, but vulnerability management — identifying known-bad components — isn't the same thing as supply chain security.

The gap is significant. The XZ Utils backdoor was inserted into a version of a legitimate package. An SBOM that listed the correct package name and version would have told you nothing about it. The threat that most worries practitioners isn't the known vulnerability in a named package — it's the malicious code that doesn't have a CVE yet because nobody's found it.

What Realistic Mitigation Looks Like

I want to be careful not to write a piece that ends with "and therefore the problem is unsolvable." It isn't unsolvable. It's hard, resource-intensive, and requires treating software supply chain security as a continuous programme rather than a one-time project. Here's what realistic mitigation looks like:

Dependency pinning and lock file hygiene. Lock files — package-lock.json, Pipfile.lock, Cargo.lock — pin dependency versions and (crucially) their hashes. This means a compromised upstream package that changes its content without changing its version number won't silently land in your builds. Dependency pinning is table stakes. It's alarming how often it's absent.
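Whether hashes are actually present is easy to audit. A sketch for npm lockfiles (v2/v3): flag any installed package entry that pins a version but carries no "integrity" hash (some legitimate entries, such as linked local packages, also lack one, so treat the output as a review list rather than a verdict):

```python
import json

def missing_integrity(lock_path: str) -> list[str]:
    """Return lockfile entries that pin a version but not a content hash.

    npm records an "integrity" field (an SRI hash) per package; an
    entry without one can change content upstream without the install
    failing, which defeats the point of the lockfile.
    """
    with open(lock_path) as f:
        packages = json.load(f).get("packages", {})
    return [name for name, entry in packages.items()
            if "node_modules" in name and "integrity" not in entry]
```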

Private registries and artifact proxying. Running your own artifact registry (Artifactory, Nexus, or similar) and proxying public registries through it gives you a control point. You can enforce policies — block packages older than a certain date, flag packages with no upstream maintenance, require internal review before a new package is approved. This doesn't eliminate the risk; it creates friction in the right places.
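The policies such a control point enforces can be very simple. A sketch of an admission check over registry metadata — the `meta` dict shape here ("last_release", "maintainers") is a simplified stand-in I've invented for illustration; real proxies like Artifactory or Nexus expose equivalents through their own APIs:

```python
from datetime import datetime, timedelta, timezone

def admit_package(meta: dict, max_staleness_days: int = 730) -> tuple[bool, str]:
    """Decide whether a proxied package passes local policy.

    `meta` is a simplified stand-in for registry metadata: "name",
    "last_release" (ISO date string), and "maintainers" (a count).
    """
    last = datetime.fromisoformat(meta["last_release"]).replace(tzinfo=timezone.utc)
    if datetime.now(timezone.utc) - last > timedelta(days=max_staleness_days):
        return False, "no upstream release in policy window"
    if meta.get("maintainers", 0) < 1:
        return False, "no active maintainer on record"
    return True, "admitted"
```

The point isn't the specific thresholds; it's that the decision happens at a chokepoint you control, before the package reaches a developer's machine.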

Pipeline hardening. CI/CD systems should follow the same least-privilege principles as any other sensitive system. Pipeline credentials should be scoped to what they need and rotated. Secrets shouldn't live in environment variables accessible to arbitrary pipeline steps. Infrastructure-as-code for pipelines should be reviewed with the same rigour as application code.
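Even crude automated checks catch the most common gaps. A heuristic sketch — not a real policy engine — that flags two of them in a GitHub Actions workflow: no explicit `permissions:` block (so the token gets default, broad scopes) and secrets exported wholesale into environment variables:

```python
import re

def lint_workflow(yaml_text: str) -> list[str]:
    """Flag two common CI hardening gaps in a GitHub Actions workflow.

    A text-level heuristic only: it checks for a top-level
    `permissions:` key and for env entries of the form
    NAME: ${{ secrets.X }}, which expose the secret to every
    step that shares that environment.
    """
    findings = []
    if not re.search(r"^permissions:", yaml_text, re.MULTILINE):
        findings.append("no explicit permissions block; token gets default scopes")
    for m in re.finditer(r"^\s*\w+:\s*\$\{\{\s*secrets\.\w+\s*\}\}",
                         yaml_text, re.MULTILINE):
        findings.append(f"secret exported to environment: {m.group().strip()}")
    return findings
```

Production-grade tools parse the workflow properly and model token scopes per job; the value of even this sketch is that it runs in review, before the pipeline does.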

[GABRIEL TO REVIEW] — Add your recommended toolset here: specific tools for dependency review, pipeline security scanning, SBOM generation. Include any tools you've built or significantly customised for this purpose.

Supplier security assessments that ask the right questions. When evaluating software vendors, the relevant questions are about their development and release practices: Do they publish SBOMs? Do they have a documented secure development lifecycle? What's their process for responding to discovered vulnerabilities in their dependencies? How are their build pipelines secured? These questions are more useful than generic "do you have a security policy" inquiries.

The Ownership Question

One question that comes up in almost every supply chain security conversation: who owns this? Is software supply chain security an application security problem? A TPRM problem? An infrastructure problem? A policy problem?

The honest answer is that it spans all of these, which means that in most organisations, it's owned by nobody in particular. Security teams focus on external threats. Development teams focus on shipping software. TPRM teams focus on vendors with contracts. The build pipeline and the dependency tree fall into the gap.

Closing that gap requires deliberate ownership assignment, which usually means a cross-functional programme with a dedicated owner and mandate. It also requires making the invisible visible — actually knowing what's in your software, how it gets built, and where the trust boundaries are. Most organisations have never done that exercise. The ones that have are consistently surprised by what they find.

[GABRIEL TO REVIEW] — Close with a practical call to action or framing that's specific to your audience (risk teams, security practitioners). What's the single most important thing an organisation that hasn't started on this should do first?

The software supply chain is not an exotic attack surface reserved for nation-state adversaries. It's the infrastructure every organisation depends on, built on a trust model that was never designed with adversarial conditions in mind. Inherited trust is quiet, pervasive, and consequential. The organisations that are serious about managing it are starting from a position of honest accounting — understanding what they own, what they've inherited, and where the trust chains actually lead.
