
Ransoming the CISO Role. Words of caution after the Joe Sullivan legal case.


HOW THE JOE SULLIVAN CASE WILL AFFECT THE INFORMATION SECURITY INDUSTRY

A few weeks ago, it seemed like everyone in the tech industry was glued to Mudge’s congressional testimony. Not only is he one of the most irreproachable people in the security space, but the drama around Twitter is fun, partly because Elon Musk plays almost like a cartoon villain.

While most eyes were on Mudge, testimony was being given at exactly the same time across the country, in a California court, in a case involving a well-respected CISO and a 2016 breach at Uber. That trial was mostly ignored (even though the breach got some coverage in early 2017).

And then last night, while all eyes were again on the Musk / Twitter soap opera, a verdict came down in the Uber case that, while reported on, wasn’t given the prominence it deserved. Because, if the verdict stands, it is going to have massive consequences for the industry. It will change the CISO role dramatically and have cascading effects on every organization that needs a CISO, particularly technology companies.

The Uber CISO at the time, Joe Sullivan, was prosecuted for covering up the breach and for obstructing the investigation. Last night, he was found guilty. CISOs who are paying attention and who have looked at the facts believe he did his job, and that he did it with integrity. In fact, they believe he didn’t do anything differently from what the typical CISO would have done.

The most obvious consequence is that every CISO is suddenly worried that, in the course of just doing their jobs competently and ethically, they may still be at risk of criminal charges that could include jail time.

Being CISO is already an incredibly stressful job; there’s a reason most big CISO jobs easily command seven figures of comp, and it’s not just because you’re basically always on call. As CISO, your title could easily be, “Chief (Internal) Scapegoat Officer”. The average tenure for a CISO is just about two years, and generally nobody should bat an eyelash at hiring a CISO who has been fired multiple times. We all know that it’s a core responsibility of the job to take the bullet: even an excellent CISO who does everything right will not prevent all breaches, and when an inevitable breach becomes an issue, the CISO is the designated sacrifice. This has proven true even in cases where a breach was due to product team failures that the CISO had raised well in advance.

CISOs who know Joe believe that he did everything they would have done (essentially, engaging with the people who found the issue via a bug bounty program, and then notifying the legal department). If Joe could face jail time just for doing his job, well, so could they.

Some people seem to think this isn’t a big deal in general, that Joe was just unlucky.

I’ve heard two flip comments:

  1. CISOs will have to get added to D&O (directors and officers) insurance policies. NO! D&O policies do not cover criminal charges. How would insurance protect Joe from whatever sentence is coming?

  2. CISOs will have to partner more closely with legal. What do you think CISOs have been doing all these years? In Joe’s case, the lawyers weighed in, and they still scapegoated Joe. They weren’t his lawyers; they were the company’s lawyers. They protected the company… at his expense!

From the Washington Post reporting:

Clark, the designated legal lead on breaches, was given immunity to testify against his former boss. On cross-examination, he acknowledged advising the team that the attack would not have to be disclosed if the hackers were identified, agreed to delete what they had taken and could convince the company that they had not spread the data further, all of which eventually came to pass.

For those without the background, the facts are that Uber’s security team was alerted to an issue and paid out the people who reported it via the company’s bug bounty program, in a way that is incredibly common across the industry.

Bug bounty programs, where companies pay people for reporting security vulnerabilities in their products, are generally regarded as a net positive for the industry and, at the same time, a bit shady.

On the positive side, bug bounty programs help companies find and fix problems they weren’t otherwise able to address. Security researchers, often with a very refined ethical code (Mudge, for example), build their skills by proactively finding issues in major vendors’ products. The security industry finds this function quite important, not for the skill development (though that is a plus), but to hold companies accountable for fixing issues that could otherwise be used against them, or their customers.

We used to live in a world where vendors ignored friendly reports, and then the bad guys would find the same bugs, and covertly leverage them, often for years. Knowing that security researchers would disclose vulnerabilities in the public interest gave companies the kick they needed to actually care about fixing known problems.

Even more than making them jump, bug bounty programs incentivise companies to invest in finding vulnerabilities themselves, because they can often find and fix most things much more cheaply than if they just wait to pay out bounties. Ideally, once the low-hanging fruit is quickly plucked, bug bounty programs turn up more interesting, esoteric stuff, and it’s good for the industry if vulnerability researchers are pushing to get good at such things, as it will inevitably lead to improvements in defence as well.

Unfortunately, bug bounty programs intrinsically live in a grey area. The people who submit bugs and get paid are often legitimate security researchers. But, without such programs, their right to find these flaws is not so clear. Sure, the DMCA ostensibly lets people reverse engineer code, but now so much of the code is SaaS living in a black box; it’s not clear that even legitimate researchers have the legal right to try to find flaws in many cases.

And, when companies run bug bounty programs, they realise that they are paying people to find problems. And it should be clear to anyone that the people they’re paying are generally living in a grey area. Yes, sometimes someone might be quite clearly on one side of the line or the other, but it’s often a judgement call.

Today, if someone, in the course of proving they found an issue with an app, saw sensitive data as part of legitimate security research, and was willing to sign a non-disclosure agreement, delete the data, and attest that the data had not been distributed, then is that a breach that all customers need to know about, or is it legitimate security research with no risk to end users?

Still, plenty of CISOs have been concerned about these issues and have run them by their legal teams, and legal teams across the industry have generally said that when someone does sign the non-disclosure agreement, it’s not a breach that merits disclosure. That was clearly stated in the testimony of Uber’s Clark. Yet, it didn’t matter.

Up until today, CISOs across the industry have been paying the bug bounty and notifying legal, letting them, as the legal experts, make the judgement call over what’s a breach and what’s not. They’re the ones who are trained to interpret the law, and who can weigh in on the grey areas.

That doesn’t mean they always feel good about it. They are smart people, who see the grey area. I have even heard one particularly jaded CISO refer to bug bounty programs as “Ransom Payout Programs”. But those CISOs have always deferred to their legal team to act as the arbiters, balancing the legal risk against the other risks involved.

What should we expect to change, now that jail time is on the table for CISOs?

CISOs might feel the need to review anything even remotely in a grey area with their own outside counsel, retained to represent them, not the company. If that’s the case, expect CISO salaries to skyrocket even more.

I’m not sure that will happen, but either way, I think the industry is going to, in many ways, regress to much darker days. But in some ways, it will be even worse.

In the early 2000s, people used to joke that the security team was “the Department of ‘No’”, and large parts of the business tried to route around security, resulting in far more business risk. Over time, CISOs have learned how to be better business partners and help find the middle ground. They typically don’t try to get in the way of business decisions; instead, they figure out how to adapt to those decisions and continually drive new improvements, while trying to minimise friction for the business. They’ve even gotten pretty good at effecting broader culture change, by selling, not dictating.

With jail time now on the table for CISOs, the desire to be a good business partner will be dwarfed by the drive to avoid criminal prosecution. We won’t just go back to the ‘bad old days’ of the “Department of No”; it will get much worse. CISOs will not just say “no” to everything, they will develop an itchy trigger finger, ready to blow the whistle if anything that might put them at legal risk breaks against them.

CISOs will try to turn over as many rocks as possible, while the rest of the organization will live in fear of time wasted on busywork, and will try to keep the CISO’s eyes off anything that needs to move fast. The rest of the business is going to go back to hating the security team.

At first, one might think the world benefits if CISOs push even harder to address all the issues. At least, it seems better than today, where data breach laws can incentivise “Ostrich security”: if you are doing enough to tick best-practice boxes, but your head is in the sand and you don’t know that you got breached, then some companies find that preferable to paying the cost of trying to do better, especially when pursuing better security increases the risk of bad PR when you do discover you’ve been hacked.

But think about our grey area with bug bounty programs. No CISO at this point should run a program unless the company is willing to disclose everything that runs through it, because they’d take on too much personal risk. Yet there’s a huge industry of vulnerability research, staffed by professionals who would decidedly not break the law. If companies have to incur a lot of cost dealing with such researchers, even where there’s NO risk to end customers, then companies will prefer to treat legitimate security researchers as criminals.

And that’s not good for the industry.

And while whistleblowing is beneficial when there’s been clear wrongdoing that doesn’t benefit shareholders, needless whistleblowing will have big costs. CISOs will be forced to blow the whistle more often to mitigate their own legal risks, even when there is no wrongdoing, or when the company has accidentally and benignly dipped a toe into what is now a grey area.

What’s best for customers? It’s certainly not crap security, but it’s also probably not often blowing the budget and distracting companies from their mission… many people have accepted some risk of data loss, and would rather have more functionality from their vendors than less risk.

In short, the security industry is being propelled back into the dark ages, due to one bad court decision.

Let’s hope sanity prevails on appeal.