Dependency Pinning Only Works If You Actually Review the Updates

[Header image: a group of objects that are all dependent on each other, exploding (DALL·E)]

Software Composition Analysis (SCA) tools have a well-deserved reputation for being very noisy. Most of them fire off an alert at the merest whiff of a dependency version matching a vulnerability report. A few, like Semgrep, now check whether there is a call to the vulnerable method, but none check whether the code in the repo is even deployed, let alone whether it is important. It's usually noise, not signal, and it just results in application security busywork.

As a reaction to the onslaught of vulnerable dependencies and the associated SCA noise, some application security teams are now promoting automatic updates, similar to the way many companies deploy operating system security patches.

I get why teams are doing it, and I have sometimes argued for it myself. Not because it is the right answer, but as an acceptance of reality rather than an attempt to fix a deep-rooted problem and implement the best possible security controls. When I have done this, my argument was not based on reducing tool noise (that is a failure of the tools) but on the reality that developers live in dependency hell and simply don't update dependencies at the velocity needed for applications to stay safe.

To make it worse, and this is the real kicker, when developers do update dependencies manually, they very rarely review the code in the new versions anyway, which makes manual updates versus auto-updates little more than security theatre.

Using npm as an example, package.json gives you a lot of configuration choices to control exactly which versions you will get (different package managers behave in different ways; I am using npm for illustration only). See the package.json sketch after this list:

  • version Must match version exactly

  • >version Must be greater than version

  • >=version Must be greater than or equal to version

  • <version Must be less than version

  • <=version Must be less than or equal to version

  • ^version Compatible with version (allows minor and patch updates)

  • ~version "Approximately equivalent to version" (allows patch updates)

  • 1.2.x 1.2.0, 1.2.1, etc., but not 1.3.0

  • * Matches any version

  • "" (just an empty string) Same as *

  • version1 - version2 Same as >=version1 <=version2

  • range1 || range2 Passes if either range1 or range2 is satisfied

  • tag A specific version tagged and published as tag
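To make this concrete, here is a sketch of a dependencies block mixing several of these ranges (the package names and version numbers are illustrative only, not recommendations):

    {
      "dependencies": {
        "express": "4.18.2",
        "lodash": "^4.17.21",
        "debug": "~4.3.4",
        "chalk": "1.2.x",
        "left-pad": "*"
      }
    }

The first entry is pinned to an exact version, the caret and tilde entries accept minor and patch updates respectively, the 1.2.x entry accepts any 1.2 patch release, and * takes whatever the registry serves up next.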

When deciding whether to auto-update or not, you are essentially choosing which you consider to be the lesser of two evils.

Do you want to protect yourself from rogue package maintainers (or hijacked accounts)?

or

Do you want to have old and vulnerable versions of dependencies deployed to production?

To protect yourself from rogue package maintainers or their hijacked accounts, you can use dependency pinning. Pinning ensures you only use a specific version that is known to be safe, and it is very simple: you use a plain, exact version (see the list above).
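If you pin, the lockfile is what actually enforces it at install time. A minimal sketch for CI, assuming package-lock.json is committed to the repo:

    # Install exactly what package-lock.json records, and
    # fail the build if it has drifted from package.json
    npm ci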

Dependency pinning is generally considered to be "security best practice" because package maintainers, or their hijacked accounts, can release an updated version that contains malware and have it pulled straight into your projects. This threat has become mainstream. npmjs.org has introduced controls to make it harder, such as preventing a deleted package name from being republished within 24 hours, and preventing a republished package with the same name from reusing the version numbers it had before it was deleted. You have to increase the version, which protects the pinners.

Auto-updates require some minor trickery: run ncu -u in your CI, and remember to then run npm install to update the lockfile. As long as dependencies were not pinned, or do not have other conditions preventing it, you will get the latest and greatest dependencies in your project, hopefully free of vulnerabilities. The problem is that all of those latest versions may not be compatible with each other or with your code.
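A minimal sketch of that CI step, assuming npm-check-updates is available and your project has a test script:

    # Rewrite the version ranges in package.json to the latest published versions
    npx npm-check-updates -u
    # Resolve the new ranges and regenerate package-lock.json
    npm install
    # Catch obvious breakage before anything ships
    npm test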

A way to improve this, albeit manually, is to use ncu --interactive --format group.

You can then interactively choose which packages to update, with a higher degree of confidence that you will not break the build. This is also the best time to run an SCA tool with the least chance of generating noise: it is a build you know you can live with, so testing it makes sense. You will still get noise, that is the nature of the beast, but less of it.
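A sketch of that flow, with the scan step as a placeholder for whatever SCA tool you use:

    # Interactively pick upgrades, grouped by patch / minor / major
    ncu --interactive --format group
    # Sync the lockfile with the choices you made, then test
    npm install
    npm test
    # Now scan the build you have decided you can live with,
    # e.g. with npm audit or your SCA tool of choice
    npm audit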

You can do this with even less manual control by using ncu --doctor -u to iteratively install upgrades and run tests to identify breaking upgrades, as sketched below.
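A minimal sketch of doctor mode (it needs a working npm test script to iterate against):

    # Iteratively installs upgrades and runs your test script
    # to identify which upgrades break the build
    ncu --doctor -u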

So the million-dollar question is this.

Do you auto-update and have fewer vulnerable libraries

or

do you pin and protect yourself from upstream maintainer hijacking?

I am going to hazard a guess and say that 99% of development teams don't review the code when they update dependencies manually, and so pinning in those cases is just security theatre.

I am also going to hazard a guess and say that while the package hijacking trend is very real indeed, there are still far more easily exploitable vulnerable libraries deployed into production than there are malicious dependencies pulled in from rogue maintainers.

I hate that you can't have seamless, friction-free protection from both vulnerable libraries and malicious maintainers without manually reviewing the code of every updated dependency, but the simple truth of the matter is that you can't. You have to both pin dependencies and do a manual review for each dependency you update.

If you don't manually review the updates, then you might as well auto-update everything, because anything less is security theatre.

Author's note: I understand the instability caused by auto-updates. It is a nightmare. I chose not to consider it in this blog, but I acknowledge it is a significant problem that should be weighed alongside security when considering pinning.