Posts

Showing posts from 2016

NYC Area Security Folks – Come to SOS!

Every year the NYU School of Engineering hosts Cyber Security Awareness Week (CSAW) – the largest student-run security event in the country. This year, we're trying something new that combines two of my favorite things: security and open source. The inaugural Security: Open Source (SOS) workshop, held this November 10 at NYU Tandon, will feature the creators of some really cool new security tools talking about their projects. It's happening the day before one of the best CTF challenges out there, so we're expecting an audience that's not afraid of technical detail :)

What will you hear about at SOS? Here are some of the cool speakers and topics:

Félix Cloutier will tell us about his open-source decompiler, fcd. This is a great example of incorporating cutting-edge academic research into an open-source tool that anyone can use. Félix is also a former CSAW CTF competitor.

Mike Arpaia, co-founder of Kolide, will talk about osquery, a new open-source operating

The LAVA Synthetic Bug Corpora

I'm planning a longer post discussing how we evaluated the LAVA bug injection system, but since we've gotten approval to release the test corpora, I wanted to make them available right away. The corpora described in the paper, LAVA-1 and LAVA-M, can be downloaded here:

http://panda.moyix.net/~moyix/lava_corpus.tar.xz (101M)

Quoting from the included README:

This distribution contains the automatically generated bug corpora used in the paper, "LAVA: Large-scale Automated Vulnerability Addition".

LAVA-1 is a corpus consisting of 69 versions of the "file" utility, each of which has had a single bug injected into it. Each bug is a named branch in a git repository. The triggering input can be found in the file named CRASH_INPUT. To run the validation, you can use validate.sh, which builds each buggy version of file and evaluates it on the corresponding triggering input.

LAVA-M is a corpus consisting of four GNU coreutils programs (base64, md5sum,
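To give a flavor of what each buggy branch in the corpus contains, here is a minimal sketch (my own illustration, not code taken from the corpora) of the general shape of a LAVA-injected bug: a few attacker-controlled input bytes are copied aside where the input is parsed, then later used to perturb a pointer, but only when they match a four-byte magic value, so ordinary inputs behave normally:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical global used to stash the attacker-controlled bytes
 * (the "DUA" in LAVA terminology). */
static uint32_t lava_val;

/* Injected where input bytes are available: siphon off four
 * attacker-controlled bytes from the parsed input. */
void lava_source(const unsigned char *buf) {
    memcpy(&lava_val, buf, sizeof(lava_val));
}

/* Injected at the attack point: the stashed bytes offset a pointer,
 * but multiplying by the equality test makes the offset nonzero only
 * when the bytes equal the magic trigger value. */
void lava_attack_point(char *p) {
    p += lava_val * (lava_val == 0x6c617661);  /* four-byte magic trigger */
    *p = 0;  /* out-of-bounds write when the trigger fires */
}
```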

Fuzzing with AFL is an Art

Using one of the test cases from the previous post, I examine what affects AFL's ability to find a bug placed by LAVA in a program. Along the way, I found what's probably a harmless bug in AFL, and some interesting factors that affect its performance. Although its interface is admirably simple, AFL can still require some tuning, and unexpected things can determine its success or failure on a bug.

American Fuzzy Lop, or AFL for short, is a powerful coverage-guided fuzzer developed by Michal Zalewski (lcamtuf) at Google. Since its release in 2013, it has racked up an impressive set of trophies in the form of security vulnerabilities in high-profile software. Given its phenomenal success on real-world programs, I was curious to explore in detail how it worked on an automatically generated bug.

I started off with the toy program we looked at in the previous post, with a single bug added. The bug added by LAVA will trigger whenever the first four bytes of a float-type fil
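To see why a bug guarded by a four-byte magic value can be hard for a coverage-guided fuzzer, consider the contrast below (a toy illustration of my own, assuming a little-endian machine, not code from the post): a monolithic 32-bit comparison gives AFL no partial credit, while an equivalent byte-at-a-time check produces new coverage for every correct byte, which AFL can latch onto:

```c
#include <stdint.h>
#include <string.h>

/* Hard case: one all-or-nothing 32-bit comparison. A random mutation
 * has roughly a 1-in-2^32 chance of matching, and AFL sees no new
 * coverage for "almost right" inputs. */
int check_monolithic(const unsigned char *buf) {
    uint32_t v;
    memcpy(&v, buf, sizeof(v));
    return v == 0x6c617661;
}

/* Easier case: the same check split byte by byte (byte values shown
 * for a little-endian layout of 0x6c617661). Each matched byte takes
 * a new branch, so AFL keeps the partially matching input as
 * interesting and can solve the comparison incrementally. */
int check_bytewise(const unsigned char *buf) {
    if (buf[0] != 0x61) return 0;
    if (buf[1] != 0x76) return 0;
    if (buf[2] != 0x61) return 0;
    if (buf[3] != 0x6c) return 0;
    return 1;
}
```

Whether a comparison ends up in one form or the other is exactly the kind of unexpected factor, mentioned above, that can determine AFL's success or failure on a bug.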

The Mechanics of Bug Injection with LAVA

This is the second in a series of posts about evaluating and improving bug detection software by automatically injecting bugs into programs. Part one, which discussed the setting and motivation, is available here.

Now that we understand why we might want to automatically add bugs to programs, let's look at how we can actually do it. We'll first investigate an existing approach (mutation testing), show why it doesn't work very well in our scenario, and then develop a more sophisticated injection technique that tells us exactly how to modify the program to insert bugs that meet the goals we laid out in the introductory post.

A Mutant Strawman that Doesn't Work

One way of approaching the problem of bug injection is to just pick parts of the program that we think are currently correct and then mutate them somehow. This, essentially, is the idea behind mutation testing: you use some predefined mutation operators that mangle the program somehow and then declare tha
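As a concrete toy example (mine, not from the post) of what a mutation operator does, the classic "relational operator replacement" operator might turn a correct bounds check into an off-by-one:

```c
#include <stddef.h>

#define LEN 8

/* Original, correct code: writes stay inside buf. */
void store(char *buf, size_t i, char c) {
    if (i < LEN)        /* correct bounds check */
        buf[i] = c;
}

/* Mutant: the operator rewrites '<' to '<=', so an index equal to LEN
 * slips through and writes one byte past the end of buf. */
void store_mutant(char *buf, size_t i, char c) {
    if (i <= LEN)       /* off-by-one introduced by the mutation */
        buf[i] = c;
}
```

Note that the mutation is blind: nothing about this process hands you an input that actually reaches and triggers the resulting bug, which is one of the goals laid out in the introductory post.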

How to add a million bugs to a program (and why you might want to)

This is the first in a series of posts about evaluating and improving bug detection software by automatically injecting bugs into programs. You can find part two, with technical details of our bug injection technique, here.

In this series of posts, I'm going to describe how to automatically put bugs in programs, a topic on which we just published a paper at Oakland, one of the top academic security conferences. The system we developed, LAVA, can put millions of bugs into real-world programs. Why would anyone want to do this? Are my coauthors and I sociopaths who just want to watch the world burn? No, but to see why we need such a system requires a little bit of background, which is what I hope to provide in this first post.

I am sure this will come as a shock to most, but programs written by humans have bugs. Finding and fixing them is immensely time consuming; just how much of a developer's time is spent debugging is hard to pin down, but estimates range between 40%

PANDA Plugin Documentation

It's been a very long time coming, but over the holiday break I went through and created basic documentation for all 54 currently available PANDA plugins. Each plugin now includes a manpage-style document named USAGE.md in its plugin directory. You can find a master list of each plugin and a link to its man page here:

https://github.com/moyix/panda/blob/master/docs/Plugins.md

Hopefully this will help people get started using PANDA to do some cool reverse engineering!