
A Personal History of the AppSec Industry


A REFLECTION ON MY JOURNEY FROM ACADEMIA TO INDUSTRY OVER THE LAST TWO DECADES

A few weeks ago, Jim Manico reached out to get information about the history of Application Security, in preparation for a keynote he’ll be giving tomorrow at the OWASP conference. Writing down my remembrances produced far more material than Jim was obviously going to be able to use, so we agreed that I’d post my full note to him.

Pre-history

While I was part of the birth of the industry, there is obviously a long pre-history. Certainly, the notion that software could have security flaws can be traced back to the birth of networked computers. And some of the attacks on the properties in the CIAAA model (Confidentiality, Integrity, Authentication, Authorization, Availability) pre-date modern software entirely… the man-in-the-middle attack, for instance, is well over 100 years old, going back to the early days of ham radio.

By the mid ’60s, there were definitely known security vulnerabilities, but I’ve never met anyone who knows the details. Lots of old timers have said that William Mathews of MIT found a CTSS bug in 1965 that allowed him to read the password file, which I’m told ultimately led to password hashing (crypt dates back to the early ’70s), once people realized that kind of thing could happen again.

The whole idea of setuid and setgid dates from the early ’70s as well, attempting to put access control around software. That approach was clearly problematic by the end of the ’70s. chroot() started a slow move toward process isolation; that, I believe, was the late ’70s too.

The term “hacker” being applied to computer intrusion (as opposed to the way most of us wanted it to be used) dates back to 1983 (https://www.edn.com/hacker-is-used-by-mainstream-media-september-5-1983/). It was popularized further by Sneakers in 1992. Somewhere in there was the Morris Worm (1988), which was the first time people started talking about buffer overflows, even though I’m 100% sure the problem was a known thing by the early ’70s.

When I got on the Internet in the early ’90s, everybody who understood anything about software knew that there were plenty of bugs. Social engineering and hacking definitely went hand in hand, and the perception many people seemed to have through the early ’90s was that it was possible to get remote access via an exploit, but that privilege escalation was more about getting credentials, either through password cracking or social engineering.

Still, there were already plenty of known local exploits in use. In 1994, students in my school’s CS department were generally allowed root (to my recollection, anyway), but when there was an issue getting it when needed (and the sysadmin wasn’t around), I remember people downloading exploits to get the access they needed.

I also remember in ’94 discovering, quite by serendipitous accident, a command injection problem in the SGI Iris login, where I could enter my username as “|xterm” and get an xterm as root. Then I could start up the window manager. I’m sure command injection was a thing way before that… I’m quite sure I’d seen at least one other bug like it previously, and the term was probably already a thing. But I was inspired when I learned how the login prompt actually worked!
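
For the curious, here is a minimal sketch of that class of bug in C. This is hypothetical code, not SGI’s (and “finger” as the downstream command is my invention); the point is the shape of the flaw, where untrusted input gets pasted into a string that is handed to a shell:

    /* Hypothetical sketch of the command injection class described
     * above -- NOT the actual SGI code. The program builds a shell
     * command using the untrusted username; a name like "|xterm" or
     * "; xterm" makes the shell run xterm with the program's
     * privileges (root, in the case of a login prompt). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char user[64], cmd[128];

        printf("login: ");
        if (!fgets(user, sizeof user, stdin))
            return 1;
        user[strcspn(user, "\n")] = '\0';   /* strip the newline */

        /* The bug: untrusted input is interpolated into a command
         * line that /bin/sh will interpret. */
        snprintf(cmd, sizeof cmd, "finger %s", user);
        return system(cmd);
    }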

By the mid ‘90s, there was plenty of awareness in tech circles that software security could have real consequences, but it wasn’t yet an industry.

Birth of an Industry

Frankly, the seed of software security becoming an industry, in my view, wasn’t the government, despite the fact that it had spent decades understanding the core problems and trying to build itself secure computer systems (they were even using exploitation themselves no later than the mid-’90s).

I think software security started moving towards being a real commercial industry with the birth of Java in 1995. When Sun Microsystems released the original Java white paper, they made a big deal out of security, touting it as one of the language’s primary advantages. While they didn’t charge for Java, it was immediately clear that the language would be popular for enterprise applications, and its emphasis on security suggested that companies using it would start taking application security seriously.

The security features in Java themselves were not anything too esoteric. Plenty of other languages of the day had bounds checking on arrays, and lacked raw pointers. And Java certainly (over-)emphasized untrusted-code security, under the assumption that everyone would be running Java applets in their browser all the time. For instance, they helped promote code signing and verification, sandboxing, and a lattice-based permissions model. Again, none of this stuff was new; the permissions model was based on Bell-LaPadula, which dates back to the early 1970s.

Still, it built awareness among the people who built software, more than anything that had come before it. Before Java, it seemed like people in the know assumed that getting hacked might be inevitable if you ran a system on the Internet, but that it wasn’t likely to be a big deal, and that an attacker getting root was far less likely without some social engineering.

That’s not to say it put most developers into a panic… virtually no developer thought about anything more than password authentication or encryption before Java. After Java launched… a few people started to care, and it ended up on the radar of a few more people.

I wasn’t actually into security at that point, myself. I was into compilers and programming languages. When I got out of grad school in the fall of ’98, I went to work for Reliable Software Technologies (RST, which was renamed Cigital in 2000 or so). Gary McGraw, the CTO, hired me, and he was one of those people Java had inspired; he had already been doing Java security research. When he got there, the company had been focused on software testing, and that was still almost all of their business (they were mostly doing gov’t research grants).

When I got to RST, Anup Ghosh had recently received an ATP (Advanced Technology Program) grant from NIST, but their original idea failed fast, and they needed something to replace it that NIST would like enough to keep the funding in place. I was already interested in program analysis, and saw the opportunity to apply it to security. The code name of the project was “Mjölnir”. We focused on C code, and it was WAY WAY WAY too slow; the computational complexity of the algorithms was a real problem on the hardware of the time. Getting it working on any real C programs was a huge issue too, due to the complexities of the language and the diversity of language extensions in various compilers. It was clearly going to be a lot of work to get something practical. As far as I know, this was the earliest tool for static analysis for security vulnerabilities, but it was overly ambitious: while it worked on toy programs, it never could handle anything large, and it certainly never would have scaled to most real software.

By early ’99, it was clear to us that companies were definitely interested in getting code reviews. Gary started selling them, and two of us started delivering them: myself and Brad Arkin (long time Adobe security chief, and now Cisco’s Chief Security and Trust Officer). We did engagements for companies like Symantec, Visa, and Schlumberger (lots of smart card work was starting around that time). The most fun engagements were for online gambling and casino companies! If there were other people doing commercial code reviews by then, I wouldn’t be surprised (I’d expect some of the people sharing vulns on the Bugtraq mailing list were getting engaged by forward-thinking people who read Bugtraq). Still, I have never seen anyone claim to have offered commercial security code reviews before we did. Tons of companies popped up shortly after, though! And many of those people ended up doing work for governments… there’s a secret history there that I’d personally be very interested to learn!

The start of SAST

Sometime later in ’99, Gary somehow heard that the l0pht might be for sale. I went up to Watertown and spent time with Mudge and co., where they showed me a project they’d recently been working on, which they called “slint” (i.e., “security lint”).

I’d been so busy trying to make fairly accurate analysis practical that I’d missed the fact that you could still get plenty of value out of a pretty cursory analysis. Slint opened my eyes. At the time, it was essentially little more than grepping the output of the “strings” command (it required the binary to have symbol information), which meant it was going to spam people with false positives. BUT! It would tell someone doing a manual audit where to start looking.
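
To make that concrete, here is a rough reconstruction of that style of check (my sketch of the idea, not the l0pht’s actual code), assuming a binary that still has its symbol information:

    /* Rough sketch of the early slint approach described above: dump
     * the strings from a binary and flag the names of historically
     * dangerous libc calls. Extremely noisy by design, but it points
     * an auditor at binaries worth a closer manual look. */
    #include <stdio.h>
    #include <string.h>

    static const char *risky[] = {
        "strcpy", "strcat", "sprintf", "gets", "system", "popen", NULL
    };

    int main(int argc, char **argv) {
        char cmd[512], line[512];
        FILE *fp;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary>\n", argv[0]);
            return 1;
        }
        snprintf(cmd, sizeof cmd, "strings -a '%s'", argv[1]);
        if (!(fp = popen(cmd, "r")))
            return 1;
        while (fgets(line, sizeof line, fp)) {
            line[strcspn(line, "\n")] = '\0';
            for (int i = 0; risky[i]; i++)
                if (strcmp(line, risky[i]) == 0)
                    printf("possible use of %s() -- worth auditing\n",
                           risky[i]);
        }
        pclose(fp);
        return 0;
    }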

We hadn’t been able to use our own tool on code review engagements, but we sure could use something like this.

So that weekend, I built ITS4, which was the first publicly available static analysis tool for security, and a couple of my co-workers helped me add to it over the next few days. It tokenized code, and did basic pattern matching on the token stream. I wanted to open source it, but RST wouldn’t allow it… they went with a “free for non-commercial use” license, and insisted on calling it “open”, which caught them some grief at the time. But the tool took off, and we definitely got a lot of mileage out of it in our own engagements.
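
For a sense of what that looks like, here is a compressed sketch of the technique as I’ve described it (not the actual ITS4 source): lex the code into identifier tokens, and report watchlisted identifiers that are used as calls. The real tool had a much larger vulnerability database, handled comments and string literals, and attached severity ratings and advice to each finding:

    /* Token-stream pattern matching in the style of ITS4 (a sketch,
     * not the real source): read one identifier at a time, and if a
     * watchlisted name is immediately followed by '(', report it. */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    static const char *risky[] = {
        "strcpy", "strcat", "sprintf", "gets", "system", "popen", NULL
    };

    int main(int argc, char **argv) {
        FILE *fp;
        char tok[128];
        int c, line = 1;

        if (argc != 2 || !(fp = fopen(argv[1], "r"))) {
            fprintf(stderr, "usage: %s <file.c>\n", argv[0]);
            return 1;
        }
        while ((c = fgetc(fp)) != EOF) {
            if (c == '\n') { line++; continue; }
            if (!isalpha(c) && c != '_')
                continue;
            size_t n = 0;
            do {                               /* consume one identifier */
                if (n < sizeof tok - 1)
                    tok[n++] = (char)c;
                c = fgetc(fp);
            } while (c != EOF && (isalnum(c) || c == '_'));
            tok[n] = '\0';
            while (c == ' ' || c == '\t')      /* allow spaces before '(' */
                c = fgetc(fp);
            if (c == '(')
                for (int i = 0; risky[i]; i++)
                    if (strcmp(tok, risky[i]) == 0)
                        printf("%s:%d: call to %s() -- review\n",
                               argv[1], line, tok);
            if (c == '\n')
                line++;
        }
        fclose(fp);
        return 0;
    }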

In the early days after the tool came out, I heard from plenty of people using it to find real security vulnerabilities, which were mainly being disclosed through Bugtraq and, eventually, the Full Disclosure mailing list. There was already a big community focused on getting software vendors to fix their security issues, and it had realized very early on that nobody was going to start fixing bugs until there was a very real threat of those bugs being used against customers, or, perhaps more likely, of embarrassment from negative publicity. Before too long, this community spawned plenty of other people doing paid code audits.

Books and a broader community

Not long after, I left RST, and spent most of my waking hours in Dec 2000 / Jan 2001 writing the book Building Secure Software (co-authored w/ Gary), which was the first security book for developers. I don’t remember why it took so long after the first draft was done to actually hit shelves (6 months, IIRC). But I do remember catching wind that Michael Howard and David LeBlanc at Microsoft were working on a book, and agonizing over why it was taking so long for ours to ship :-). In the end, we still won that battle, but only by a couple of months. Both books did well for tech books in those days, but Mike and David’s book made a huge impact on Bill Gates, who transformed Microsoft’s engineering practices as a result, which ended up having a huge impact on the industry in many ways.

Over the next few years, both Gary and I went on to write a few more books (oddly, while the first one sold incredibly well, the OpenSSL book was by far my best seller, because it was also widely used by networking people, not just developers). I even wrote a couple of books with Mike and Dave.

Our first book definitely ended up inspiring a lot of people at the time. One gentleman working at Charles Schwab reached out to me after he read it, saying he was going to start a non-profit around Application Security, and wanted me to join the initial advisory board. That’s how I met Mark Curphey (now my co-founder), who was the original founder of OWASP. That advisory board also included Greg Hoglund and Chris Wysopal; that’s how we originally met, and those relationships are now old enough for alcohol, even in the US :)

The book led to a swarm of consulting offers… before it was even published! I ended up starting a company called Secure Software with a few friends in early 2001.

Analysis Tools

At our crappy startup, the consulting was paying the bills, along with a DARPA grant to work on static analysis. While working on “good” static analysis again, we produced RATS, a proper open source multi-language security analysis tool à la ITS4, which was widely used in code audits for a long time, more so than the commercial tools, thanks to its broader language support, its ease of use, and its being free. David Wheeler also ended up producing a similar OSS tool called Flawfinder, and wrote his own book on software security somewhere in there.

At Secure Software, our commercial product was much more efficient than what we’d done at RST, while being vastly less noisy than anything else out there (including my own stuff).  We also licensed a commercial front end, helping make it possible to analyze real software in C/C++ and Java.

Sometime in 2002, while we were working on the commercial product, Ted Schlein, then at the VC firm Kleiner Perkins (one of the top names of that era), reached out because of the book. He came to our office on the East Coast with Roger Thornton. He said he had been inspired by the book and strongly believed static analysis was the right approach to security, so he had founded Fortify along with Roger. They had one developer prototyping at that point, but wanted to see if we’d join up… especially once they found out we were already building a static analysis tool.

We ended up turning them down, because moving to California was an explicit prerequisite, and that wasn’t something we were willing to do. Instead, they went to Cigital next, and, I’m told, ended up licensing ITS4 from them. Later on, in 2006 (well after I’d left Secure Software), Fortify ended up buying Secure Software, with hopes of upgrading their analysis engine… because the false positives from ITS4 turned out to be an issue after all :). By then, Secure Software had basically lost, in part because the friction of integrating with the build process was too much, in those early days, for something that was a “nice to have”.

Plenty of other people were working on both static and dynamic analysis in this same window. Commercially, Ounce Labs was a late entrant on the static side (eventually sold to IBM, around the time HP bought Fortify). SPI Dynamics led the pack on the dynamic side as the first GREAT tool in the space, and is how I first met Caleb Sima (now CISO at Robinhood). They were primarily competing against a company called Sanctum.

Dynamic tools were easier to deploy, and better equipped to handle the explosion of newly discovered exploitation techniques, as so many of them were web exploitation techniques (starting with cross-site scripting). Those companies may have started showing up a bit later than the static analysis companies, but they sure did take off a lot faster.

Academia was also writing papers on security analysis (particularly static analysis), though the literature was generally behind what had already been done in the existing commercial products, underscoring the poor feedback loop between academia and industry across large swaths of the security industry (cryptography is the big exception there). Still, one group in academia was doing good work on making techniques practical, and spun out a company called Coverity, which, after a few years, ended up being the best all-around solution in the early days of SAST.

Meanwhile, Slint didn’t die, but it took a long time to gestate. After Symantec acquired @stake (which had absorbed the l0pht), the technology made great leaps, and eventually Symantec decided that, instead of trying to build its own business in AppSec, it would spin the technology out into its own company. That became Veracode, which was not only the first commercial binary security analysis product, but also the first to take a SaaS-based approach to the problem. They rewrote the rules for the SAST space early and forced everyone to adapt, and it’s no surprise in retrospect that they were the most successful and longest-lived of the early companies in the space.

Other Early Activity

While I was at Secure Software, Microsoft started heavily pushing their SDL, but I was getting a MASSIVE amount of feedback from people that it was too heavyweight for anyone other than Microsoft. So I built CLASP in response. The basic idea was to document process best practices, including pros and cons, and then let people decide based on their needs. I donated it to OWASP, and my Secure Software co-founder, Pravir Chandra, went on to work with Gary to evolve the idea by actually measuring what real companies were doing, which became BSIMM, and that was still going strong the last I looked.

The US government was already interested in outside validation of security, and by 2002, under the Navy-Marine Corps Intranet project, they were spending a lot of energy not just auditing critical software assets, but also trying to put together an automated assurance pipeline around the burgeoning tool space. Their vision was essentially billed to me as a “Nutrition Facts label” for software. They were too early on the vision, but the government doesn’t seem to have ever lost sight of the goal, as I see echoes of it in the way the government currently talks about SBOMs (Software Bills of Materials).

Also in the early 2000s, a lot of the great work that had been done on mitigations was becoming more mainstream. In particular, it seemed that Crispin Cowan’s 1998 work on stack canaries (and other related things, like AppArmor) had inspired a bunch of work from the OS and hardware vendors, leading to ASLR (I first saw it in PaX in 2001) and DEP (2003). Meanwhile, in 2001, the NSA released SELinux, and their push to get it into the kernel resulted in the LSM (Linux Security Modules) framework.

OpenBSD, though, deserves the credit for being the first modern operating system to take security seriously, dating back to the mid-’90s.

Also in the early ’00s, I did the AES-GCM algorithm with David McGrew (of Cisco), which isn’t quite software security (if I were to expand the tent to cryptography, this post would instead be a book). Still, it’s interesting to me that it has probably had a bigger impact on the industry than ANYTHING else in the software security space from the ’00s (it’s the default TLS cipher suite, and is responsible for 70%+ of all encrypted traffic). Why was it a success? Because it was open, easy to use, and effective. Nothing in SWSec from that first decade came close to ticking all three boxes, and it’s a lesson the industry hasn’t absorbed very well.

At the beginning of 2006, well before the Secure Software exit, I left the company to take an exec role at McAfee, back before it was embarrassing to say I worked at McAfee :). Mark Curphey was there at the time; he had suggested me to their CTO, and encouraged me to take the job. It turned out he did all that so he could offload ownership of Product Security :) I focused a lot on metrics and prioritized code reviews, when everyone else was focused on developer training (I was already convinced that, for 95+% of engineers, training was mostly an expensive waste of time). I also hired some of the best code auditors of the day… people like Mark Dowd, Brandon Edwards (drraid), and Stephen Ridley (sa7ori). I remember being forced to let the NSA audit a big, important product… I had Mark Dowd audit it first, and learned that the NSA was only capable of finding a small fraction of what Mark could :)

As an interesting footnote, I ended up hiring Ryan Permeh to run product security for me.  After McAfee, he went on to found Cylance, and is now a partner at Syn Ventures… who’ve invested in Mark’s and my current company (Crash Override).

By mid-2008, I had totally walked away from Application Security. I almost completely stopped paying attention to it, and am only now coming back to the space. I was jaded when I left, because WAY too many people in our industry felt like:

  1. Security was far more important than the rest of the business, and businesses should incur any cost required to be as secure as possible.

  2. Developers should be able to be trained to the point where they’d be great at security (never mind the fact that it’s a massive, complex subject), and should even be held accountable for security issues in their code.

And, at the same time, VERY few developers cared much; they were still fighting tooth and nail NOT to fix things that people in the industry knew were exploitable.

In short, many people in AppSec were VASTLY out of touch with the realities of the broader tech community, and pretty much NOBODY wanted to hear me disagreeing with the “secure all the things” mantra.

Thankfully, the industry has gotten a little more pragmatic after a decade and a half (though I still hear echoes of those old sentiments in a few places). So even though I totally ignored the field for years, I’m having a lot of fun wading back in!

My apologies for any of the good work from the industry’s early days that I overlooked. I’ll invoke my advancing age as an excuse for my forgetfulness.