
Five Questionable Things About Top Ten Security Lists


Everything is not as it seems when you look under the hood


Matt Konda recently wrote an excellent piece about the OWASP Top Ten. I started this post nearly a year ago, but it sat incomplete in my drafts pile until yesterday. His post prompted me to wrap it up, although, if I am being honest, it is more of a rewrite.

The OWASP Top Ten has probably been ‘the’ project that made OWASP visible to almost everyone in the security community. As a result, and perhaps more importantly, it is almost definitely the piece of application security awareness material that has been in front of more developers than anything else, ever. So what is not to like?

At its essence, any top ten is a marketing document, marketing awareness of security issues to the security industry and to people like software developers and end-users. After the OWASP Top Ten’s runaway success, consultants and tools vendors realized the power of community-driven marketing for their own commercial and personal objectives. Existing top ten lists started to see a new wave of lobbying and influence, and new security top ten lists started springing up everywhere. Today it feels like we have a top ten list for everything, and a new one comes out every month. The latest bandwagon, of course, is top tens for LLMs.

Like others, I find myself increasingly asking pointed questions about top ten security lists.

  1. Can you trust the authors?

  2. Can you trust the process?

  3. Can you trust the data?

  4. Are top tens even effective?

  5. Why the bloody hell are they so poorly written?

Can you trust the authors?

I know there will be blowback from some people about that question, so I am going to tackle it upfront and head-on. The blowback is going to come from the people who won’t like the implication of the quote below: the authors of the lists themselves.

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

You always have to ask yourself why someone does something. I am writing this article because it's part of the content marketing program at Crash Override, designed to indirectly get eyeballs from prospective users on our platform. 

Most top tens started as, and to this day still are, lists created by consultants or tools vendors to help them sell more tools or more services. They are either directly related to their business, or indirectly related through brand recognition. That's a fact. That is also everyday life, and it’s just business. You have to get over it, but you also have to be aware of it.

Facebook gives you a free platform in return for using your activity to sell adverts and make money. Again, it's just business, and as long as you are aware of how it works, you can decide if you want to play the game.

Security top ten lists are not predominantly driven by operational security teams. When you understand that, you understand the natural biases and why things are the way they are. If you talk to CSOs about the top ten application issues they are facing, I can tell you that they aren’t a collection of tactical flaws and vulnerabilities. They certainly ain’t things like Injection, whatever that means unless it’s security anti-vaxxers, or Software and Data Integrity Failures. They are things like educating developers to write more robust code, improving security controls in their development pipelines, and implementing secure development frameworks and libraries. I wouldn't be surprised if a few had “explaining to people why the OWASP Top Ten issues are not their top ten issues” on the list.

The first blowback rebuttal is probably “But we have a process to ensure it's transparent”. 

Can you trust the process?

The simple answer is no. 

Last year in San Francisco, a new startup CEO asked to meet me for coffee. He asked me outright, “How can I fast-track a top ten into OWASP?” I asked him outright, “Is this your sales data sheet?” and he said “Yes”. He told me that the OWASP Kubernetes Top Ten and the API Security Top Ten were examples of security top tens that had been created as sales tools, and very effective sales tools at that, and if he didn't do the same, he was going to be at a disadvantage. I liked this guy. He was straight up and honest, which is why I am not naming and shaming him.

Don’t hate the player, hate the game. 

A well-known case of this same behavior was when Contrast Security (and other vendors like Shape) were accused of unfairly pushing to add A7: Insufficient Attack Protection into the OWASP Top Ten. The outline of A7 even mentions Runtime Application Self-Protection (RASP) directly, which is what Contrast Security offers.

In another case, a venture capital firm connected me with one of their CEOs. They were developing an API security platform, and had been watching the API Security Top Ten being debated before its initial publication. They were horrified at the bunfight between two vendors, essentially arguing for issues to be included or excluded in a way that mapped directly to their conflicting approaches of being inline or out-of-band.

Many top tens are made up by a few people, and sometimes their mates weigh in to supposedly give them credibility. Some are even published without any supporting data or a legitimate methodology. Here is a statement about the OWASP Kubernetes Top Ten from their web page:

“In the future we hope for this to be backed by data collected from organizations varying in maturity and complexity.”

The OWASP board knows all too well about this problem, but in keeping with its general approach to running the community, it offers no leadership, hand-waves it all off, and fails to deal with important issues.

So if you can’t trust the process, can you at least trust the data, usually presented as fact? 

Can you trust the data?

The simple answer again is no. 

Most top tens have no data to support them. They are, like the example above, opinions.

The ones that do have supporting data often have fundamental flaws. Hats off, and I really mean that, to the OWASP Top Ten team, who publish a methodology for how they collect the data, but on their web page they also state:

“This installment of the Top 10 is more data-driven than ever but not blindly data-driven. We selected eight of the ten categories from contributed data and two categories from the Top 10 community survey at a high level.”

Guess what? Eight of the issues were derived from the following security vendors' tools or services: AppSec Labs, Cobalt.io, Contrast Security, GitLab, HackerOne, HCL Technologies, Micro Focus, PenTest-Tools, Probely, Sqreen, Veracode, WhiteHat (NTT).

No wonder those eight issues feature; they are the very things that those tools find.

And that is before we even ask ourselves about the integrity of that data. I get that supporting security data needs to be anonymized, but there is no independent verification that the data is even real, in an industry that has a track record of questionable behavior. 

The TL;DR here is that some top tens are simply made up, while others that do have data carry heavy biases from data that can’t be verified.

And so to the next question. 

Are top tens actually effective?

Who knows? I think they are most likely not effective outside of the security echo chamber, and that their supposed impact is a myth. I have not seen any case studies showing a direct correlation between marketing these issues to developers and a reduction in their prevalence. In fact, it's almost the opposite. The OWASP Top Ten has hardly changed in 15 years, and the changes coming soon are largely taxonomy changes. That's right: despite millions, and millions, and millions of dollars spent on developer education, tools licenses, and time taking developers away from building software, we are still dealing with the same old crap 15 years after it was first published. That's surely a failure for everyone, apart of course from those who are making money off of it.

There is an argument that this is circular causation. If you train a load of security consultants to find these issues more effectively, train tools to find these issues more effectively, and then use the resulting data to support updated versions of the list, you have the very definition of circular causation.

If top ten security lists were effective at marketing security issues to the people who could address and avoid them, we wouldn't see the same issues on these lists year after year. And I don't buy the bullshit that it takes time to engineer them out of environments. When the OWASP Top Ten was first published in 2004, it was three years before the first iPhone and two years before AWS launched. It was the year Bluetooth started getting adoption and Google IPO’d. The tech world has changed beyond recognition, but appsec vulnerabilities haven’t? That doesn't even pass the sniff test.

Why are they usually written so poorly?

Maybe one reason they aren't effective is that most of them are poorly written, and that is being nice. I am an engineer, not an English major, as you can see from this article, but it doesn't take an English major to look at the majority of top ten issues and draw some conclusions about why they might be ineffective, or at least how they could be improved.

For example, issues are not actionable. From the OWASP Top Ten for LLM Applications:

LLM09: Overreliance

Systems or people overly depending on LLMs without oversight may face misinformation, miscommunication, legal issues, and security vulnerabilities due to incorrect or inappropriate content generated by LLMs.

Straight from the realm of Captain Obvious. And as a security team, are you really going to go to the business leaders and tell them they need to put loads of security oversight in place to deal with the biggest opportunity in tech this century, and one of the biggest issues facing society? Maybe you argue that is the very point, but are you going to do that when security people have yet to address simple issues like Injection flaws in web apps?

I couldn’t resist a ChatGPT question. Sorry, not sorry. 

Can you rely on the efficacy of the owasp top ten?

The OWASP (Open Web Application Security Project) Top Ten is a widely recognized and respected resource in the field of web application security. It provides a list of the top ten most critical web application security risks, offering guidance on common vulnerabilities that organizations should be aware of and take measures to mitigate.

I guess the issue describing misinformation at least has some legitimacy, given the constructive criticism above. That ChatGPT answer is clearly wrong.

Another example: what does the OWASP Top Ten ‘A01:2021 Broken Access Control’ mean, and what is a developer meant to do about it? It describes a broad problem, ‘stuff is broken’, without an actionable solution, and expects developers to do something. If you are going to take that approach, why not be done with it all and just have The Top Two: 1. Insufficient Security, and 2. Broken Security?
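To be fair to developers, the gap between that category heading and a concrete fix is enormous. As a purely hypothetical sketch (the names, data, and functions below are invented for illustration, not taken from any OWASP material), here is roughly what a single instance of broken access control, an insecure direct object reference, looks like next to the ownership check that fixes it:

```python
# Hypothetical sketch of one concrete "Broken Access Control" instance (an
# insecure direct object reference) and its fix. All names and data are invented.
from dataclasses import dataclass


@dataclass
class Invoice:
    id: int
    owner: str
    amount: float


INVOICES = {
    1: Invoice(1, "alice", 120.00),
    2: Invoice(2, "bob", 980.50),
}


def get_invoice_broken(requesting_user: str, invoice_id: int) -> Invoice:
    # Broken: any authenticated user can read any invoice by guessing its ID,
    # because ownership of the specific object is never checked.
    return INVOICES[invoice_id]


def get_invoice_fixed(requesting_user: str, invoice_id: int) -> Invoice:
    # The actionable fix: authorise the specific object against the caller
    # and fail closed when the check does not pass.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice.owner != requesting_user:
        raise PermissionError("not authorised to view this invoice")
    return invoice


if __name__ == "__main__":
    print(get_invoice_broken("alice", 2))  # alice reads bob's invoice: the flaw
    print(get_invoice_fixed("alice", 1))   # allowed: alice owns invoice 1
    try:
        get_invoice_fixed("alice", 2)
    except PermissionError as exc:
        print(f"denied: {exc}")            # the check the category never spells out
```

That is the level a developer can actually act on, and it is a long way from a one-line category heading.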

I could go on and on, but as a final example, the OWASP mobile guide has M10: Insufficient Cryptography and marks the issue as having a severe technical impact. Full disk encryption is automatically enabled on every iPhone, straight from the manufacturer. There is a secure cryptographic boot sequence, so only trusted apps can load.

Sure, phones have issues, but that ain’t one you want to be putting in front of mobile developers. Maybe developers building mobile apps implement things poorly, but sensationalist headlines cause developers to raise their eyebrows and run for the hills. This top ten was clearly created by a security vendor pushing an agenda.

Closing thoughts

As Spider-Man said, “with great power comes great responsibility”, and as an industry, we have a collective responsibility to do better.

We must uphold and improve the integrity of top tens before the concept becomes so overrun by bullshit that the good intentions of the majority of top ten creators and maintainers, and the power of simple, digestible, and actionable lists, get drowned out by the bad ones, and no one trusts any of them anymore.

The industry should demand better, and the technology space deserves better.

As always, this article is published on LinkedIn for comments and feedback: https://www.linkedin.com/feed/update/urn:li:activity:7114945397852119041/