Is Shadow Development Really A Problem?

A LOOK AT HOW THE BYOD PROBLEM WAS SOLVED AND HOW WE CAN USE THAT APPROACH TO SOLVE THE SHADOW DEVELOPMENT PROBLEM

I have heard a few security people recently talking about 'shadow development' and describing it as a big corporate problem. One person said something to the effect that you need to hunt it out and shut it down. What total and utter hogwash. I don't think it's a corporate problem, or even a problem at all, if you take a few simple, modern software security engineering steps. Devs are gonna dev; get over it. Let them choose the right tools for the job with as little friction as possible. Sure, it introduces a security visibility problem, but that's nothing new, and there's enough prior art that we know how to solve it. Let me explain.

The name 'shadow development' comes from 'shadow IT': the use of non-sanctioned resources in a company that bypasses business policies, controls and, of course, corporate security policies. In recent years we have seen 'shadow cloud', a similar problem where teams of engineers stick in their credit cards and spin up their own cloud infrastructure. It's devops at its finest. The latest of these shadow computing problems is 'shadow development', where developers spin up their own development pipelines. They augment their shadow cloud with their own GitHub orgs, their own container and package registries, and their own build tools: a complete development pipeline with zero corporate control. It's not just the corporate world, either. The majority of OWASP flagship projects use their own GitHub orgs; I just counted, and only 6 of the 15 flagship projects use the official OWASP org.

Bring Your Own Device, or BYOD, was the classic shadow IT problem, something I first saw when I worked at MSFT in the 2000s. Despite the company issuing free Windows phones to staff, everyone carried on using their iPhones, something I remember was most evident when you looked at the Exchange server logs. Users will always find a way around a control if there is enough benefit in doing so, and the choice between a phone with no apps and a phone with all your killer apps was like chumming the water with blood and baiting great white sharks.

Of course, people bringing in their own devices, or even connecting to corporate systems from home PCs, did indeed introduce very real risks. Do those devices have anti-virus software? Do they have screen locks? Are they shared computers, shared with a random roommate who looks like Mr. Robot? Who knows, and that's the point.

BYOD is a solved problem and now the norm in most companies. The first investor in SourceClear was a lovely human called Frank Marshall. Frank was the first ever VP of Engineering at Cisco and went on to invest in loads of security companies, taking a few public. One of those was MobileIron, a mobile device management company that checked controls on mobile devices, such as ensuring a device had a PIN code set and the latest security patches applied, before allowing it to connect to corporate resources. It was the classic trust-but-verify security model. Frank told me that MobileIron took off when Apple realized that the only way to get corporate adoption was through BYOD, and it became no longer acceptable for CIOs to say people couldn't use iPhones at work. It was the perfect storm. Instead of trying to ban the devices, companies embraced them and said 'if you are going to show up with your own device, show us you have taken basic measures and come on in'.

The shadow development case is nothing new; it's just another flavor of BYOD, and we should apply the history lesson. Where BYOD meant you didn't know whether a device met your security policy, shadow development means you have no way to know whether the code complies with it. Has it been scanned with SAST and SCA tools, and is it free of defects? Was it built using approved containers from a trusted registry? The list goes on, as long as the list of security controls in your policy.
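To make that list concrete, here is a minimal sketch, in Python, of the kind of per-build record those questions boil down to and the policy check you would run against it. The field names and the trusted registry URL are made up for illustration; substitute whatever controls your policy actually cares about.

```python
from dataclasses import dataclass, field

@dataclass
class BuildAttestation:
    """One build's answers to the policy questions above (illustrative fields)."""
    source_repo: str        # where the code came from
    commit: str             # exactly what was built
    base_image: str         # which container image the build used
    sast_passed: bool       # SAST ran and found no blocking defects
    sca_passed: bool        # SCA ran and found no blocking defects
    extra_checks: dict[str, bool] = field(default_factory=dict)  # the rest of your policy

# Hypothetical internal registry; substitute your own.
TRUSTED_REGISTRIES = ("registry.internal.example/",)

def meets_policy(a: BuildAttestation) -> bool:
    """The control decision: every control in the policy ran and passed."""
    return (
        a.sast_passed
        and a.sca_passed
        and a.base_image.startswith(TRUSTED_REGISTRIES)
        and all(a.extra_checks.values())
    )
```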

The TL;DR here is that it is yet another visibility problem: you have zero visibility into whether any security controls have been applied in accordance with your security policy.

Developers should be encouraged to use whatever tools are best for the job, with as little friction as possible, and security people need to embrace that by building visibility into the pipeline. With that visibility you can then make control decisions, such as whether to allow the code to be deployed into production. You do this with build and code provenance: collect information during the build about where the code came from, who created it, what it consists of, what quality checks have been run and what the results were. You then sign that record and present it to a control step in your CI/CD. That control step can run locally or in a central pipeline, i.e. against builds from shadow development systems or central ones, because the signature verifies the attestation either way.
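Here is a minimal sketch of that trust-but-verify flow. It signs with a shared HMAC key purely to keep the example inside the Python standard library; a real pipeline would use asymmetric signing (for example Sigstore's cosign, or in-toto attestations) so one build machine cannot forge another's record. All names and fields here are illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for this sketch; real pipelines use asymmetric keys.
SIGNING_KEY = b"replace-with-a-real-key"

def sign_attestation(attestation: dict) -> tuple[bytes, str]:
    """Build side: serialize the provenance record and sign it."""
    payload = json.dumps(attestation, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature

def verify_and_gate(payload: bytes, signature: str) -> bool:
    """Control step in CI/CD: reject anything unsigned, tampered, or failing policy."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # not produced by a trusted build, or modified in transit
    attestation = json.loads(payload)
    # Enforce whichever controls your policy cares about:
    return bool(attestation.get("sast_passed") and attestation.get("sca_passed"))
```

A shadow pipeline runs the signing half at build time; the central deployment gate runs the verification half, so it no longer matters where the build happened.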

It's just like BYOD, but for secure software development: trust but verify. There are easy ways to do this, such as with Chalk, our free open source project, but this isn't about Chalk. This is about a mindset. Shadow development is not a development problem; it's a security visibility problem, and security people need to stop complaining and trying to manhandle it. They need to add telemetry to the production deployment pipeline, enforce the controls they care about, and let devs dev.

Fun side note: I once built a fun app that looked at all of the AWS accounts known to an IT department, parsed from the AWS Cost Explorer API, and then scanned the corporate expense system looking for AWS invoices whose accounts did not match. It was a very effective way of uncovering 'shadow clouds', and I suspect it would be fun to dust it off to hunt for billing from development tools too.
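For anyone who wants to rebuild it, here is a sketch of the idea. The Cost Explorer call is real boto3; the expense-system side is a stand-in, since every finance system exports invoices differently.

```python
import re
import boto3

def known_aws_accounts(start: str, end: str) -> set[str]:
    """Collect every AWS account ID that Cost Explorer has billing data for."""
    ce = boto3.client("ce")  # Cost Explorer
    accounts, token = set(), None
    while True:
        kwargs = dict(
            TimePeriod={"Start": start, "End": end},  # e.g. "2024-01-01"
            Dimension="LINKED_ACCOUNT",
        )
        if token:
            kwargs["NextPageToken"] = token
        resp = ce.get_dimension_values(**kwargs)
        accounts |= {v["Value"] for v in resp["DimensionValues"]}
        token = resp.get("NextPageToken")
        if not token:
            return accounts

ACCOUNT_ID = re.compile(r"\b\d{12}\b")  # AWS account IDs are 12 digits

def shadow_accounts(invoice_texts: list[str], known: set[str]) -> set[str]:
    """Account IDs found on expensed AWS invoices that IT doesn't know about."""
    expensed: set[str] = set()
    for text in invoice_texts:  # stand-in for your expense system's export
        expensed |= set(ACCOUNT_ID.findall(text))
    return expensed - known
```

Any account ID that shows up on an expensed invoice but not in Cost Explorer is a cloud somebody is paying for outside IT's view.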

This article is cross-posted on LinkedIn, where feedback and comments are welcome.