Q&A with Chris Wysopal (Weld Pond)

by Mick Bauer

One of the most interesting, accomplished and productive hacking organizations of the mid- to late-1990s was L0pht Heavy Industries, a loose affiliation of “gray-hat” (i.e., mostly benevolent) hackers. During those years, the L0pht earned worldwide notoriety, plus the ire of Microsoft, for discovering and publicizing a number of software vulnerabilities, especially in Windows. Combined with the success of their password-auditing tool, L0phtCrack (which, besides exposing poorly chosen passwords, also demonstrated inherent weaknesses in early Windows NT authentication implementations), the L0pht's relentless exposure of poor security programming played a significant role in Microsoft's slow but pronounced improvement in addressing security flaws in its products.


The L0pht's fame and popularity culminated in seven of their core members being invited to offer expert testimony on internet security to the US Senate in 1998. One of those members was Chris Wysopal, aka Weld Pond, a veteran computer security engineer, researcher and programmer. Chris, along with many of his former L0pht colleagues, now works for the consulting firm @stake, with which L0pht Heavy Industries merged in January 2000.

Chris graciously interrupted his busy schedule as @stake's director of research and development to submit to a Paranoid Penguin interrogation. True to the L0pht's old form, his answers were frank, extremely well informed and thoughtful.

Mick Many of our readers are familiar with your work with the L0pht, but you've been in the public eye a bit less lately. Could you describe your current job at @stake and how it's different from what you were doing before?

Chris I am the director of research and development at @stake. From a management standpoint, I oversee the different research and tools projects that the consultants and developers are undertaking. Areas of research are forensics, attack simulation, wireless and applications. Personally, I have been most involved in the area of application security.

There are actually similarities with the L0pht. Each person has their own area of expertise and is given the opportunity to work on technology that interests them, whether it be tools development or vulnerability research. The difference with @stake is all the research and tools we build have a business need. Most are born out of problems we see working with our customers or grow out of the need to automate security tests we do manually as part of our consulting practice.

Mick What are some of the technologies you've worked with lately?

Chris Recently, I've been working on the problem of application security. How do you design products securely from the start? How do you implement them using secure coding techniques? How do you test that they are secure? This is a difficult problem to solve because you have to fit it into the way software is built in the real world: with a limited budget and extreme time pressure.

The solution we've come up with is to build security into the different stages of the development process. We have come up with techniques for diagramming the threat paths of an application to use during the design process. This allows us to find design flaws efficiently and, at the same time, make sure we have the whole design covered to a certain depth. The next step is building tools to do this.

For the implementation process, we've built tools that model an application's behavior by analyzing the source code or even the binary. This is semantic analysis and not the simple lexical analysis we did previously with SLINT [a code-auditing tool developed by the L0pht]. It allows automated detection of bad code that will cause buffer overflows or script injection, for example. Dildog, also from the L0pht, and Tim Newsham deserve the credit for this tour de force tool.
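[A quick illustration of the distinction Chris draws: lexical analysis simply pattern-matches on the source text, while semantic analysis models what the code actually does with its data. Neither SLINT nor @stake's tool is publicly available, so the short Python sketch below is only a hypothetical example of the simpler lexical approach, scanning C source for library calls that commonly lead to buffer overflows.]

import re
import sys

# Calls that copy or format data without bounds checking.
RISKY_CALLS = ["strcpy", "strcat", "sprintf", "gets", "scanf"]
PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def scan(path):
    """Report each line of a C source file that calls a risky function."""
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            match = PATTERN.search(line)
            if match:
                print(f"{path}:{lineno}: possible unsafe call to {match.group(1)}()")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)

[A semantic analyzer goes much further, tracking how attacker-controlled data flows into such calls so it can distinguish an exploitable strcpy() from a harmless one.]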

To enable automated security testing, we're working on application penetration test tools that can fuzz (send random, problematic data to) arbitrary application protocols such as HTTP. We can set up an application in a lab environment and launch automated attacks. These tools are great for finding buffer overflow, format string, canonicalization and script injection problems. Other tools are shims and proxies that manipulate an application's data as it crosses the wire, a system call or an RPC call. I hope these types of tools become part of the standard quality-assurance process that people follow before releasing software.
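[The fuzzing idea is easy to picture in miniature. The Python sketch below is purely illustrative and is not one of @stake's tools; the target host, port and payloads are made up, and it should only ever be pointed at a lab machine you own. It sends a handful of classically problematic inputs at an HTTP server and reports whether the server still answers.]

import socket

TARGET_HOST = "192.0.2.10"   # lab machine only -- never a production host
TARGET_PORT = 80

# Classic problem inputs: long strings (overflows), format specifiers
# (format string bugs), traversal sequences (canonicalization) and script tags.
PAYLOADS = [
    "A" * 4096,
    "%s%n" * 64,
    "../" * 128 + "etc/passwd",
    "<script>alert(1)</script>",
]

def probe(payload):
    """Send one malformed request and report whether the server answered."""
    request = f"GET /{payload} HTTP/1.0\r\n\r\n".encode("latin-1")
    try:
        with socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5) as s:
            s.sendall(request)
            reply = s.recv(256)
            print(f"payload {payload[:20]!r}... -> {len(reply)} bytes back")
    except OSError as err:
        # No answer at all is interesting: the server may have crashed or hung.
        print(f"payload {payload[:20]!r}... -> no response ({err})")

if __name__ == "__main__":
    for p in PAYLOADS:
        probe(p)

[Real fuzzers generate far larger and smarter input sets and monitor the target process itself rather than just the network reply, but the loop is the same: malformed input in, observe what breaks.]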

Mick Indeed. Speaking of software, do you still find much time for coding? Anything in the works you care to discuss?

Chris I haven't found time to do any substantial coding lately. Mostly I'm creating proof-of-concept code or scripts to try out a particular application attack. If I had to mention one cool thing to look out for, it is definitely our source and binary semantic security analysis tool. This is going to bring a revolution in the ability to detect security problems before (and after) a piece of software is released.

Mick Do you see any improvement in the software industry at large in making security a design goal of programming projects rather than an afterthought?

Chris Yes, definitely. The secure software development techniques and tools we have been working on have been well received by our software customers. A lot of this is due to being beaten up about insecure products over the last few (many?) years. Sophisticated technology customers simply are not accepting it anymore. It is becoming part of the purchasing decision.

Another reason things are changing is companies are learning that it's very expensive to patch vulnerabilities after the fact, not to mention the PR nightmare. They are finally realizing that there are people out there actually downloading the trial versions, breaking them in their labs and publishing what they find. [Software companies] just can't hide their shoddy security anymore. Plus, it is cheaper to build in security upfront. We've built a “return on security investment” model using the vulnerabilities and the cost to fix them from the data of 45 customer engagements. The numbers crunch down to a 21% savings by starting out building a secure application rather than trying to bolt security on after shipping.

Mick The Open Source world has had its share of security crises in the past few months, with a string of vulnerabilities in Secure Shell, Squid, SNMP and zlib, to name a few. Yet some of these affected packages, particularly OpenSSH, are maintained by the OSS community's best and brightest. Is this trend simply bad luck? Is software becoming unsecurably complex, or do you see some other explanation?

Chris Yes, some of it is very complex. SSH took a big leap in complexity going up to version 2.0. I think the code auditing that has gone on has eliminated much of the low-hanging fruit vulnerabilities from important applications. The closed-source vendors are playing catch-up here. But I don't think this eliminates nearly all the problems.

There needs to be an effort to do more than code auditing. There is a need for threat modeling of the designs and application penetration testing such as fuzzing. Some problems, like the ASN.1 problems that plagued SNMP, are difficult to find through auditing. The data paths are becoming very complex. I also think vulnerability researchers are getting better, and there are more people doing it.

Mick Your @stake compadre, Dr. Mudge, has been speaking lately about risk management and other less-technical approaches to IS security. What sort of progress do you see companies and organizations making toward demystifying IS security in this way and institutionalizing good security policies and practices?

Chris The business managers in a company need to understand the risks of not having adequate security. This understanding should not reside in IS alone. Once security-incident costs are quantified for the people in charge of profit and loss, they start to see the value of security and are willing to pay for it. Then people running the business will have a much larger budget to allocate toward security products and services. Once executives in a company are educated to the risks, there is a much better chance that security practices and policies will be adopted company-wide.

Mick What are some approaches that seem to work in getting organizations to adopt better security policies and practices, especially in selling these concepts to nontechnical managers?

Chris Demonstrations work wonders. It's one thing to tell a nontechnical manager that his upstream internet connection can be sniffed and that his company is sending sensitive information in the clear. It's another to show him the finance department's latest salary updates that they just sent via e-mail to an outsourced payroll company. It hits home when you get the data. Nontechnical people have a problem understanding what could be done with a vulnerability; it's too hypothetical.

Mick That's certainly my experience too. By the way, it occurs to me that your @stake colleague Frank Heidt will be performing exactly that kind of demonstration on the Linux Lunacy Cruise this October (not that I'm shilling this fine Linux Journal-sponsored event or anything). Here's a question that is of the utmost importance to our readers: what is your own experience with Linux?

Chris Heh, I am an old-timer. I set up the first Linux box we had at the L0pht in 1994. I think it was the 0.99.pl14 kernel running on a 486. I configured it to be our internet gateway, routing our class C over a 28.8. I was running NCSA web server and sendmail. For a trip down memory lane, check out the L0pht web site as it was running on that box: web.archive.org/web/19961109005607/http://l0pht.com.

We used DESLogin because these were the pre-SSH days. Linux was my first experience with UNIX programming. I had access to a SunOS 2.4 system, but it didn't have the development tools that Linux did. Linux excelled as a learning environment then, as it does now.

Mick What do you see as being some of Linux's strengths and weaknesses from a security standpoint?

Chris Linux has a simpler security model and configuration than many other OSes, although things have been growing in complexity over time. If you aren't doing anything too complex, this simplicity is a big plus. Most of the complexity of other systems ends up shooting programmers and administrators in the foot. Fewer things that need to be run as root, the ability to run almost nothing SUID root, and text configuration files make it easy to lock down a system. Linux has been very virus-free, even when tasked with everyday dangerous chores like mail reading and browsing. This is a testament to basic good security design.
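[Chris's point about running almost nothing SUID root is easy to check on your own system. The Python sketch below is a minimal, illustrative audit: it walks a directory tree and lists regular files that are owned by root and carry the set-UID bit, i.e., programs that run with root privileges no matter who executes them.]

import os
import stat

def find_suid_root(top="/"):
    """Yield paths of root-owned regular files with the set-UID bit set."""
    for dirpath, _dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                info = os.lstat(path)
            except OSError:
                continue  # unreadable or vanished; skip it
            mode = info.st_mode
            if stat.S_ISREG(mode) and info.st_uid == 0 and mode & stat.S_ISUID:
                yield path

if __name__ == "__main__":
    for suid_path in find_suid_root("/usr"):
        print(suid_path)

[The shorter that list, the smaller the attack surface a single buggy program can expose.]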

On the other hand, with Linux everyone is a programmer, and let's face it, not everyone knows secure coding. I don't always want to have to audit code when I install a package that is exposed to the Internet in some way. There are even some packages out there where the author explicitly states that the program is insecure and to not bother contacting him or her. This is unacceptable. The strengths of Linux security can be undone by one poorly coded application. But of course, that is true of closed-source systems too.

Mick Bauer (mick@visi.com) is a network security consultant for Upstream Solutions, Inc., based in Minneapolis, Minnesota. He is the author of the upcoming O'Reilly book Building Secure Servers With Linux, composer of the “Network Engineering Polka” and a proud parent (of children).
