Back

How AI might affect DevSecOps

DALL·E 2023-04-05 10.webp

THE SINGULARITY IS COMING

This article is cross posted on LinkedIn here for comments and discussions. 

Yes, this is a bandwagon article on AI. Yes, you can’t get away from them. Yes, pure speculation. And yes, here goes all the same. 

Yesterday I was chatting to Rich Smith. Rich recently joined the leadership team at Crash Override. He's been a long time personal friend of us all. He was the CSO at Etsy when he co-wrote the O’Reilly book Agile Application Security in 2017 and like John, Brandon and myself, also took a number of years away from application security before returning to start Crash Override, now in the world called DevSecOps. While away Rich spent time running Duo Labs and recently Gemini and SuperLunar in the web3 and crypto space. 

We were chatting about future article topics and one thing Rich said struck a chord. “What we are seeing now is people trying to socially engineer AIs rather than historically socially engineering humans”. I am sure he is going to write an article about that and how things like smart-contacts may be affected but it got me thinking, how will AI affect DevOpsSec?

On reflection I am not sure any of this is ‘new new’, it’s just that it will be amplified beyond anything we could likely imagine yesterday. I read Ray Kurzweils book many years ago, The Singularity is Near which describes when and what will happen when technology overtakes biology. It’s scary. 

"I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045"

So here is a short list of a few things that I think we should keep an eye on. 

Threat modelling - before I even present my list, the obvious one is that security assessors should now, today, take AI into account when they build threat models. It's no longer only going to be human bad actors attacking systems. 

The supply chain - I previously wrote an article Why supply chain security is so much more than open source code and CVE’s in which I argue that we need to start thinking about the software supply chain as APIs, cloud services and more. It’s not just open-source code in the form of libraries. Behind those APIs and services will be AI models that in turn may well be daisy chained to other models. The trust boundary shifted as we moved to the cloud and it feels prudent now to think about that boundary as becoming even more squishy behind the wall. 

SAST (including SCA and IaaC) and DAST - The sophistication of AI led attacks will no doubt raise the bar. Ever increasing intelligent fuzzing and neural networks learning from previous findings are two obvious things to watch for. There have been some theater demos using Chat-GPT to perform security code reviews, but NCC group published a solid paper Security Code Review With ChatGPT.

TL;DR: Don’t use ChatGPT for security code review. It’s not meant to be used that way, it doesn’t really work (although you might be fooled into thinking it does), and there are some other major problems that make it impractical. Also, both the CEO of OpenAI and ChatGPT itself say that you shouldn’t.

Some security people keep pointing out that your code is being uploaded to OpenAI but it's almost definitely being uploaded to Github and AWS anyways. AI will make those Dinosaurs extinct for sure. 

Risk Analysis - There is clearly a resurgence in the security industry looking again at modern technology to solve the age old question, what should you fix Now, Next or Never. It's fundamentally what we are doing at Crash Override. AI and large data sets from security findings and alerts (the vast majority of which are noise), live architecture maps of networks, code deployments, and data inventories, time-series data from configuration changes to security event history and more will all be used to train models to answer that simple question. 

Attack detection - Bots are already here. My 14 year old coded one up as a school project to buy and sell trainers online. AI automation will challenge the controls that are in place today such as how we deal with distributed denial of service attacks, mass account registration and OSSINT.

And that leads us back to people socially engineering AI itself. Chatbots and customer service are increasingly powered by AI. Those services often have deep access to users' data like their bank accounts. What could possibly go wrong.

Note : I have been using Dalle by Open-AI to create the thumbnails for all of our articles since we launched.