Thursday, July 21, 2016

Fuzzing with AFL is an Art

Using one of the test cases from the previous post, I examine what affects AFL's ability to find a bug placed by LAVA in a program. Along the way, I find what's probably a harmless bug in AFL, along with some interesting factors that affect its performance. Although its interface is admirably simple, AFL can still require some tuning, and unexpected things can determine its success or failure on a given bug.

American Fuzzy Lop, or AFL for short, is a powerful coverage-guided fuzzer developed by Michal Zalewski (lcamtuf) at Google. Since its release in 2013, it has racked up an impressive set of trophies in the form of security vulnerabilities in high-profile software. Given its phenomenal success on real world programs, I was curious to explore in detail how it worked on an automatically generated bug.

I started off with the toy program we looked at in the previous post, this time with a single bug injected by LAVA. The bug will trigger whenever the first four bytes of a float-type file_entry are set to 0x6c6175de or 0xde75616c, and will cause printf to be called with an invalid format string, crashing the program.
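
For reference, the trigger has roughly this shape (a hand-written illustration with made-up names, not the code LAVA actually emits):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustration only: the first four bytes of a float-type file_entry, read
 * as a 32-bit value, are compared against the magic constant in both byte
 * orders; on a match, the buggy path hands attacker-controlled bytes to
 * printf as its format string, and the program crashes. */
static void check_trigger(const uint8_t *entry_bytes) {
    uint32_t magic;
    memcpy(&magic, entry_bytes, sizeof magic);
    if (magic == 0x6c6175de || magic == 0xde75616c) {
        printf((const char *)entry_bytes);   /* invalid format string */
    }
}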

After verifying that the bug could be triggered reliably, I compiled it with afl-gcc and started a fuzzing run. To get things started, I used a well-formed input file for the program that contained both int and float file_entry types:


Because I'm lucky enough to have a 24-core server sitting around, I gave it 24 cores (one using -M and the rest using -S) and let it run for about four and a half days, fully expecting that it would find the input in that time.

This did not turn out so well.

Around 20 billion executions later, AFL had found zilch.

At this point, I turned to Twitter, where John Regehr suggested that I look into what coverage AFL was achieving. I realized that I actually had no idea how AFL's instrumentation worked, and that this would be a great opportunity to find out.


Diving Into AFL's Instrumentation


The basic afl-gcc and afl-clang tools are actually very simple. They wrap gcc and clang, respectively, and modify the compile process to emit an intermediate assembly code file (using the -S option). They then do some simple string matching (in C, ew) on that assembly to figure out where to insert calls to AFL's coverage logging functions. You can get AFL to save the assembly code it generates by setting the AFL_KEEP_ASSEMBLY environment variable, and see exactly what it's doing. (There's also a newer instrumentation method, recently added as an LLVM pass; more on that later.)
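
Conceptually, each injected logging call updates a shared coverage map keyed by the pair of previous and current locations. Paraphrasing the pseudocode in AFL's technical documentation (the types and names below are illustrative):

#include <stdint.h>

#define MAP_SIZE 65536                      /* AFL's default bitmap size */
static uint8_t  shared_mem[MAP_SIZE];       /* coverage bitmap shared with afl-fuzz */
static uint16_t prev_location;

/* Each instrumented location gets a compile-time random ID; hashing it with
 * the previous location records edge (branch-to-branch) coverage. */
static inline void afl_log(uint16_t cur_location) {
    shared_mem[cur_location ^ prev_location]++;
    prev_location = cur_location >> 1;
}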



Left, the original assembly code. Right, the same code after AFL's instrumentation has been added.


After looking at the generated assembly, I noticed that the code corresponding to the buggy branch of the if statement wasn't getting instrumented. This seemed like it could be a problem, since AFL can't try to use coverage to reach a part of the program if there's no logging to tell it that an input has caused it to reach that point.

Looking into the source code of afl-as, the program that instruments the assembly code, I noticed a curious bit of code:

AFL skips labels following p2align directives in the assembly code.

According to the comment, this should only affect programs compiled under OpenBSD. However, the branch I wanted instrumented was being affected by this even though I was running under Linux, not OpenBSD, and there were no jump tables present in the program.

The .L18 block should be instrumented by AFL, but won't be because it's right after an alignment statement.

Since I'm not on OpenBSD, I just commented out this if statement. As an alternate workaround, you can also add "-fno-align-labels -fno-align-loops -fno-align-jumps" to the compile command (at the cost of potentially slower binaries). After making the change I restarted, once again confident AFL would soon find my bug.

Alas, it was not to be. Another 17 hours of fuzzing on 24 cores yielded nothing, and so I went back to the drawing board. I am still fairly sure I found a real bug in AFL, but fixing it didn't help find the bug I was interested in. (Note: it's possible that if I had waited four days again it would have found my bug. On the other hand, AFL's cycle counter had turned green, indicating that it thought there was little benefit in continuing to fuzz.)


5.2 billion executions, no crashes :(


“Unrolling” Constants


Thinking about what AFL would have to do to find the bug, I realized that its chances of hitting our failing test case were pretty low. AFL will only prioritize a test case if it has seen that it leads to new coverage. In the case of our toy program, it would have to guess one of the two exact 32-bit trigger values at exactly the right place in the file, and the odds of that happening are pretty slim.

At this point I remembered a post by lcamtuf that described how AFL managed to figure out that an XML file could contain CDATA tags even though its original test cases didn't contain any examples that used CDATA. He also calls out our bug as exactly the kind of thing AFL is not designed to find:

What seemed perfectly clear, though, is that the algorithm wouldn't be able to get past "atomic", large-search-space checks such as:
if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;
...or:
if (header.magic_value == 0x12345678) goto terminate_now;

So how was AFL able to generate a CDATA tag out of thin air? It turns out that libxml2 has a set of macros that expand out some string comparisons into character-by-character comparisons that use simple if statements. This allows AFL to discover valid strings character by character, since each correct character will add new coverage, and cause further fuzzing to be done with that input.

We can also apply this to our test program. Rather than checking for the fixed constant 0x6c6175de, we can compare each byte individually. This should allow AFL to identify the trigger value one byte at a time. The new code looks like this:

The monolithic if statement has been replaced by 4 individual branches.
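
A sketch of what the unrolled check looks like (illustrative names, not the exact code from the screenshot):

#include <stdint.h>

/* Check 0x6c6175de one byte at a time. Each correct byte is now its own
 * conditional branch, so every byte AFL guesses right shows up as new
 * coverage. (The byte-swapped constant 0xde75616c is handled the same way.) */
static int is_trigger(uint32_t m) {
    if ((m >> 24) == 0x6c)
        if (((m >> 16) & 0xff) == 0x61)
            if (((m >> 8) & 0xff) == 0x75)
                if ((m & 0xff) == 0xde)
                    return 1;
    return 0;
}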

Once we make this change and compile with afl-gcc, AFL finds a crash in just 3 minutes on a single CPU!

AFL has found the bug!

This also makes me wonder if it might be worthwhile to implement a compiler pass that breaks down large integer comparisons into byte-sized chunks that AFL can deal with more easily. For string comparisons, one can already substitute in an inline implementation of strcmp/memcmp; an example is available in the AFL source.

A Hidden Coverage Pitfall


While investigating the coverage issues, I noticed that AFL has a new compiler: afl-clang-fast. This module, contributed by László Szekeres, performs instrumentation as an LLVM pass rather than by modifying the generated assembly code. As a result, it should be less brittle and allow for more instrumentation options; from what I can tell it's slated to become the default compiler for AFL at some point.

However, I discovered that its instrumentation is not identical to the instrumentation done by afl-as. Whereas afl-as instruments each x86 assembly conditional branch (that is, any of the instructions starting with "j" aside from "jmp"), afl-clang-fast works at the level of LLVM basic blocks, which are closer to the blocks of code found in the original source. And since by default AFL adds -O3 to the compile command, multiple conditional checks may end up getting merged into a single basic block.

As a result, even though we have added multiple if statements to our source, the generated LLVM IR looks more like our original monolithic statement – the AFL instrumentation is only placed in the innermost if body, and so AFL is once again forced to try to guess the entire 32-bit trigger all at once.

Using the LLVM instrumentation mode, AFL is no longer able to find our bug.

We can tell AFL not to enable the compiler optimizations, however, by setting the AFL_DONT_OPTIMIZE environment variable. If we do that and recompile with afl-clang-fast, the if statements do not get merged, and AFL is able to find the trigger for the bug in about 7 minutes.

So this is something to keep in mind when using afl-clang-fast: the instrumentation does not work in quite the same way as the traditional afl-gcc mode, and in some special cases you may need to use AFL_DONT_OPTIMIZE in order to get the coverage instrumentation that you want.

Making AFL Smarter with a Dictionary


Although it's great that we were able to get AFL to generate the triggering input that reveals the bug by tweaking the program, it would be nice if we could somehow get it to find the bugs in our original programs.

AFL is having trouble with our bugs because they require it to guess a 32-bit input all at once. The search space for this is pretty large: even supposing that it starts systematically flipping bits in the right part of the file, it's going to take an average of 2 billion executions to find the right value. And of course, unless it has some reason to believe that working on that part of the file will get improved coverage, it won't be focusing on the right file position, making it even less likely it will find the right input.

However, we can give AFL a leg up by allowing it to pick inputs that aren't completely random. One of AFL's features is that it supports using a dictionary of values when fuzzing. This is basically just a set of tokens that it can use when mutating a file instead of picking values at random. So one classic trick is to take all of the constants and strings found in the program binary and add them to the dictionary. Here's a quick and dirty script that extracts the constants and strings from a binary for use with AFL:
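
A sketch of such a script (not the original; it shells out to objdump and strings and emits AFL dictionary entries of the form token="bytes"; the thresholds are made up):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static void emit(const char *prefix, int idx, const uint8_t *buf, size_t len) {
    printf("%s_%d=\"", prefix, idx);
    for (size_t i = 0; i < len; i++)
        printf("\\x%02x", buf[i]);
    printf("\"\n");
}

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
    char cmd[512], line[4096];
    int n = 0;
    FILE *p;

    /* Immediate operands show up in objdump's disassembly as "$0x...". */
    snprintf(cmd, sizeof cmd, "objdump -d '%s'", argv[1]);
    if (!(p = popen(cmd, "r"))) return 1;
    while (fgets(line, sizeof line, p)) {
        char *s = strstr(line, "$0x");
        if (!s) continue;
        uint32_t v = (uint32_t)strtoul(s + 3, NULL, 16);
        if (v < 256) continue;                      /* skip tiny constants */
        uint8_t le[4] = { v & 0xff, (v >> 8) & 0xff, (v >> 16) & 0xff, (v >> 24) & 0xff };
        uint8_t be[4] = { (v >> 24) & 0xff, (v >> 16) & 0xff, (v >> 8) & 0xff, v & 0xff };
        emit("const", n++, le, 4);                  /* both byte orders, to be safe */
        emit("const", n++, be, 4);
    }
    pclose(p);

    /* Printable strings, straight from strings(1). */
    snprintf(cmd, sizeof cmd, "strings '%s'", argv[1]);
    if (!(p = popen(cmd, "r"))) return 1;
    while (fgets(line, sizeof line, p)) {
        line[strcspn(line, "\n")] = '\0';
        size_t len = strlen(line);
        if (len < 3 || len > 32) continue;          /* keep tokens short */
        emit("str", n++, (const uint8_t *)line, len);
    }
    pclose(p);
    return 0;
}

Redirect its output to a file and pass that file to afl-fuzz with -x.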

Once we give AFL a dictionary, it finds 94% of our bugs (149/159) within 15 minutes!

Now, does this mean that LAVA's bugs are too easy to find? At the moment, probably yes. In the real world, the triggering conditions will not always be something you can just extract with objdump and strings. The key improvement needed in LAVA is a wider variety of triggering mechanisms, which is something we're working on.


Conclusion


By looking in detail at a bug we already knew was there, we found out some very interesting facts about AFL:

  • Its ability to find bugs is strongly related to the quality of its coverage instrumentation, and that instrumentation can vary due both to bugs in AFL and to inherent differences in the various compile-time passes AFL supports.
  • The structure of the code also heavily influences AFL's behavior: seemingly small differences (making 4 one-byte comparisons vs one 4-byte comparison) can have a huge effect.
  • Seeding AFL with even a naïve dictionary can be devastatingly effective.

In the end, this is precisely what we hoped to accomplish with LAVA. By carefully examining cases where current bug-finding tools have trouble on our synthetic bugs, we can better understand how they work and figure out how to make them better at finding real bugs as well.

Thanks


Thanks to Josh Hofing, Kevin Chung, and Ryan Stortz for helpful feedback and comments on this post, and of course Michal Zalewski for making AFL.

Monday, July 11, 2016

The Mechanics of Bug Injection with LAVA

This is the second in a series of posts about evaluating and improving bug detection software by automatically injecting bugs into programs. Part one, which discussed the setting and motivation, is available here.

Now that we understand why we might want to automatically add bugs to programs, let's look at how we can actually do it. We'll first investigate an existing approach (mutation testing), show why it doesn't work very well in our scenario, and then develop a more sophisticated injection technique that tells us exactly how to modify the program to insert bugs that meet the goals we laid out in the introductory post.

A Mutant Strawman that Doesn't Work


One way of approaching the problem of bug injection is to just pick parts of the program that we think are currently correct and then mutate them somehow. This, essentially, is the idea behind mutation testing: you use some predefined mutation operators that mangle the program somehow and then declare that it is now buggy.

For example, we could take every instance of strncpy and change it to strcpy. Presumably, this would add lots of potential buffer overflows to a program that previously had none.

Unfortunately, this method has a couple of problems. First, it is likely that many such changes will break the program on every input, which would make the bug trivial to find. The following program will always fail if strncpy is changed to strcpy:
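
The snippet originally shown here isn't reproduced, but a minimal example of the same kind would be:

#include <stdio.h>
#include <string.h>

/* The strncpy below is always safe, but mutating it into strcpy overflows
 * buf on every run, so the injected "bug" is trivial to find. */
int main(void) {
    const char *banner =
        "a banner string that is much longer than the destination buffer";
    char buf[16];
    strncpy(buf, banner, sizeof(buf) - 1);   /* the mutation: strcpy(buf, banner) */
    buf[sizeof(buf) - 1] = '\0';
    printf("%s\n", buf);
    return 0;
}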

We also face the opposite problem: if the bug doesn't trigger every time, we won't necessarily know how to trigger it when we want to. This will make it hard to prove that there really is a bug, and violates one of the requirements we described last time: each bug must come with a triggering input that proves the bug exists. If we wanted to find the triggering input for a given mutation, we'd have to find an input that reaches our mutant, which is actually a large part of what makes finding bugs hard!

Dead, Uncomplicated and Available Data


Instead of doing random, local mutations, LAVA first tries to characterize the program's behavior on some concrete input. We'll run the program on an input file, and then try to see where that input data reaches in the program. This solves the triggering problem because we will know a concrete path through the program, and the input needed to traverse that path. Now, if we can place bugs in code along that path, we will be able to reach them using the concrete input we know about.

We need a couple of other properties as well. Because we want to create bugs that are triggered only for certain values, we will want the ability to manipulate the program's input. However, doing so might cause the program to take a different path, and the input data may get transformed along the way, making it difficult to predict what value it will have when we actually want to use it to trigger our bug.

To resolve this, we will try to find parts of the program's input data that are:

  • Dead: not currently used much in the program (i.e., we can set them to arbitrary values)
  • Uncomplicated: not altered very much (i.e., we can predict their value throughout the program's lifetime)
  • Available in some program variables

We'll call data that satisfies these three properties a DUA. DUAs try to capture the notion of attacker-controlled data: a DUA is something that can be set to an arbitrary value without changing the program's control flow, is available somewhere along the program path we're interested in, and whose value is predictable.

Measuring Liveness and Complication with Dynamic Taint Analysis


Having defined these properties, we need some way to measure them. We'll do that using a technique called dynamic taint analysis1. You can think of dynamic taint analysis like a PET scan or a barium swallow, where a radionuclide is introduced into a patient, allowed to propagate throughout the body, and then a scan checks to see where it ends up. Similarly, with taint analysis, we can mark some data, allow it to propagate through the program, and later query to see where it ended up. This is an extremely useful feature in all sorts of reverse engineering and security tasks.

Like a PET scan, dynamic taint analysis works by seeing where marked input ends up in your program.

To find out where input data is available, we can taint the input data to the program – essentially assigning a unique label to each byte of the program's input. Then, as the program runs, we'll propagate those labels as data is copied around the program, and query any variables in scope as the program runs to see if they are derived from some portion of the input data, and if so, from precisely which bytes.

Next, we want to figure out what data is currently unused. To do so, we'll extend simple dynamic taint analysis by checking, every time there's a branch in the program, whether the data used to decide it was tainted, and if so, which input bytes were used to make the decision. At the end, we'll know exactly how many branches in the program each byte of the input was used to decide. This measure is known as liveness.

Liveness measures how many branches use each input byte.

Finally, we want some measure of how complicated the data in each tainted program variable is. We can do this with another addition to the taint analysis. In standard taint analysis, whenever data is copied or computed in the program, the taint system checks if the source operands are tainted and if so propagates the taint labels to the destination. If we want to measure how complicated a piece of data is – that is, how much it has been changed since it was first introduced to the program – we can simply add a new rule that increments a counter whenever an arithmetic operation on tainted data occurs. That is, if you have something like c = a + b; then the taint compute number (TCN) of c is tcn(c) = max(tcn(a),tcn(b)) + 1.

TCN measures how much computation has been done on a variable at a given point in the program.
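
A minimal model of these measurements in code (illustrative only; PANDA's real taint system tracks label sets per byte and is considerably more involved):

#include <stdint.h>

#define INPUT_SIZE 88                    /* e.g., the 88-byte toy input used later */

typedef struct {
    uint64_t labels;                     /* which input bytes this value derives from (bitmask, simplified) */
    uint32_t tcn;                        /* taint compute number */
} shadow_t;

static uint32_t liveness[INPUT_SIZE];    /* per input byte: how many branches it has decided */

/* Copies move labels and TCN along unchanged -- no computation happened. */
static void taint_copy(shadow_t *dst, const shadow_t *src) { *dst = *src; }

/* Arithmetic (e.g. c = a + b): union the labels and bump the TCN. */
static void taint_compute(shadow_t *c, const shadow_t *a, const shadow_t *b) {
    uint32_t m = a->tcn > b->tcn ? a->tcn : b->tcn;
    c->labels = a->labels | b->labels;
    c->tcn = c->labels ? m + 1 : 0;
}

/* A branch decided by tainted data makes every contributing input byte "more live". */
static void taint_branch(const shadow_t *cond) {
    for (int i = 0; i < INPUT_SIZE && i < 64; i++)
        if (cond->labels & (1ULL << i))
            liveness[i]++;
}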

On the implementation side, all this is done using PANDA, our platform for dynamic analysis. PANDA's taint system allows us to taint an input file with unique byte labels. To query the state of program variables, we use a clang tool that modifies the original program source code2 to add code that asks PANDA to query and log the taint information about a particular program variable. When we run the program under PANDA, we'll get a log telling us exactly which program variables were tainted, how complicated the data was, and how live each byte of input is.

PANDA's taint system allows us to find DUAs in the program.

After running PANDA, we can pick out the variables that are uncomplicated and derived from input bytes with low liveness. These are our DUAs, approximations of attacker controlled data that can be used to create bugs.

Finding Attack Points


With some DUAs in hand, we now have the raw material we need to create our bugs. The last missing piece is finding some code we want to have an effect on – places where we can use the data from a DUA to trigger some buggy behavior, which we call attack points (ATPs). In our current implementation, we look for places in the program where pointers are passed into functions. We can then use the DUA to modify the pointer, which will hopefully cause the program to perform an out-of-bounds read or write – a classic memory safety violation.

Because we want the bug to trigger only under certain conditions, we will also add code at the attack point that checks if the data from the DUA has a specific value or is in a specific range of values. This gives us some control over how much of the input space triggers the bug. The current implementation can produce both specific-value triggers (DUA == magic_value) and range-based triggers of varying sizes (x < DUA < y).

Each LAVA bug, then, is just a pair (DUA, ATP) where the attack point occurs in the program trace after the DUA. If there are many DUAs and many attack points, then we will be able to inject a number of bugs roughly proportional to the product of the two. In large programs like Wireshark, this adds up to hundreds of thousands of potential bugs for a single input file! In our tests, multiple files increased the number of bugs roughly linearly, in proportion to the amount of coverage achieved by the extra input. Thus, with just a handful of input files on a complex program you can easily reach millions of bugs.

Our "formula" for injecting a bug. Any (DUA, ATP) pair where the DUA occurs before the attack point is a potential bug we can inject.

Modifying the Source Code


The last step is to modify the source code to add our bug. We will insert code in two places:
  1. At the DUA site, to save a copy of the input data to a global variable.
  2. At the attack point, to retrieve the DUA's data, check if it satisfies the trigger condition, and use it to corrupt the pointer.
By doing so, we create a new data flow between the place where our attacker-controlled data is available and the place where we want to manifest the bug.
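
A rough sketch of what the two insertions amount to, using names from the toy example below (head.reserved, ent, consume_record); the code LAVA actually generates differs in its details:

#include <stdint.h>

static uint32_t lava_val;                /* the new global data flow */

/* (1) Inserted at the DUA site, e.g. right after the file header is parsed:
 *         lava_set(head.reserved);                                          */
static void lava_set(uint32_t dua) { lava_val = dua; }

/* (2) Inserted at the attack point: if the saved value matches the trigger
 *     (in either byte order), it is added to a pointer that is about to be
 *     passed into a function, e.g.
 *         consume_record(ent + lava_offset());                              */
static uint32_t lava_offset(void) {
    return (lava_val == 0x6c6175de || lava_val == 0xde75616c) ? lava_val : 0;
}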

A Toy Example


To see LAVA in action, let's step through a full example. Have a look at this small program, which parses and prints information about a very simple binary file format:


We start by instrumenting the source code to add taint queries. The queries will be inserted to check taint on program variables, and, for aggregate data structures, the members inside each structure. The result is a bit too long to include inline, since it quadruples the size of the original program, but you can see it in this gist.

When we compile and run that program on some input inside of PANDA with taint tracking enabled, we get information about taint compute numbers and the liveness of each byte of the input. For example, here's the liveness map for a small (88 byte) input:

Liveness map for the input to our toy program. The bytes with a white background are completely dead – they can be set to arbitrary values without affecting the behavior of the program.

LAVA's analysis finds 82 DUAs and 8 attack points, for a total of 407 potential bugs. Not all of these bugs will be viable: because we want to measure the effect of liveness and taint compute number, the current implementation does not impose limits on how live or complicated the DUAs used in bugs are.

To make sure that an injected bug really is a bug, we do two tests. First, we run the modified program on a non-triggering input, and verify that it runs correctly. This ensures that we didn't accidentally break the program in a way we weren't expecting. Second, we run it on the triggering input and check that it causes a crash (a segfault or bus error). If it passes both tests we deem it a valid bug. This could miss some valid bugs, of course – not all memory corruptions will cause the program to crash – but we're interested mainly in bugs that we can easily prove are real. Another approach might be to run the buggy program under Address Sanitizer and check to see if it flags any memory errors. After validation, we find that LAVA is able to inject 159 bugs into the toy program, for a yield of around 39%.

Let's look at an example bug (I've cleaned up the source a little bit by hand to make it easier to read; programmatically generated code is not pretty):


On lines 6–15, after parsing the file header, we add code that saves off the value of the reserved field3, which our analysis correctly told us was dead, uncomplicated, and available in head.reserved. Then, on line 20, we retrieve the value and conditionally add it to the pointer ent that is being passed to consume_record (checking the value in both possible byte orders, because endianness is hard). When consume_record tries to access fields inside the file_entry, it crashes. In this case, the DUA and attack point were in the same function, and so the use of a global variable was not actually necessary, but in a larger program the DUA and attack point could be in different functions or even different compilation units.

If you like, you can download all 407 buggy program versions, along with the original source code and triggering inputs. Note that the current implementation does not make any attempt to hide the bugs from human eyes, so you will very easily be able to spot them by looking at the source code.

Next Time


Having developed a bug injection system, we would like to know how well it performs. In the next post, we'll examine questions of evaluation: how many bugs can we inject, and how do the liveness and taint compute measures influence the number of viable bugs? How realistic are the bugs (a question much more complicated than it may first appear)? And how effective are some common bug-finding techniques, like symbolic execution and fuzzing, at finding them? We'll explore all these and more.




1 Having worked with dynamic program analysis for so long, I sometimes forget how ridiculous the term "dynamic taint analysis" is. If you're looking for another way to say the same thing, you can use "information flow" but dynamic taint analysis is the name that seems to have stuck.

2 Getting taint information by instrumenting the source works, but has a few drawbacks. Most notably, it causes a huge increase in the size of the source program, and slows it down dramatically. We're currently finishing up a new method, pri_taint, which can do the taint queries on uninstrumented programs as long as they have debug symbols. This should allow LAVA to scale to larger programs like Firefox.

3 The slightly weird ({ }) construct is a non-standard extension to C called a statement expression. It allows multiple statements to be executed in a block with control over what the block as a whole evaluates to. It's a nice feature to have available for automatically generated code, as it allows you to insert arbitrary statements in the middle of an expression without worrying about messing up the evaluation.

Tuesday, June 7, 2016

How to add a million bugs to a program (and why you might want to)

This is the first in a series of posts about evaluating and improving bug detection software by automatically injecting bugs into programs. You can find part two, with technical details of our bug injection technique, here.

In this series of posts, I'm going to describe how to automatically put bugs in programs, a topic on which we just published a paper at Oakland, one of the top academic security conferences. The system we developed, LAVA, can put millions of bugs into real-world programs. Why would anyone want to do this? Are my coauthors and I sociopaths who just want to watch the world burn? No, but to see why we need such a system requires a little bit of background, which is what I hope to provide in this first post.

I am sure this will come as a shock to most, but programs written by humans have bugs. Finding and fixing them is immensely time consuming; just how much of a developer's time is spent debugging is hard to pin down, but estimates range between 40% and 75%. And of course these errors can be not only costly for developers but catastrophic for users: attackers can exploit software bugs to run their own code, install malware, set your computer on fire, etc.

Weekly World News has known about this problem for years.


It should come as little surprise, then, that immense effort has been expended in finding ways to locate and fix bugs automatically. On the academic side, techniques such as fuzzing, symbolic execution, model checking, abstract interpretation, and creative combinations of those techniques, have been proposed and refined for the past 25 years. Nor has industry been idle: companies like Coverity, Fortify, Veracode, Klocwork, GrammaTech, and many more will happily sell (or rent) you a product that automatically finds bugs in your program.

Great, so by now we must surely have solved the problem, right? Well, not so fast. We should probably check to see how well these tools and techniques work. Since they're detectors, the usual way would be to measure the false positive and false negative rates. To measure false positives, we can just run one of these tools on our program, go through the output, and decide whether we think each bug it found is real.

The same strategy does not work for measuring false negatives. If a bug finder reports finding 42 bugs in a program, we have no way of knowing whether that's 99% or 1% of the total. And this seems like the piece of information we'd most like to have!

Heartbleed: detectable with static analysis tools, but only after the fact.


To measure false negatives we need a source of bugs so that we can tell how many of them our bug-finder detects. One strategy might be to look at historical bug databases and see how many of those bugs are detected. Unfortunately, these sorts of corpora are fixed in size – there are only so many bugs out there, and analysis tools will, over time, be capable of detecting most of them. We can see how this dynamic played out with Heartbleed: shortly after the bug was found, Coverity and GrammaTech quickly found ways to improve their software so that it could find Heartbleed.

Let me be clear – it's a good thing that vendors can use test cases like these to improve their products! But it's bad when these test cases are in short supply, leaving users with no good way of evaluating false negatives and bug finders with no clear path to improving their techniques.

This is where LAVA enters the picture. If we can find a way to automatically add realistic bugs to pre-existing programs, we can both measure how well current bug finding tools are doing, and provide an endless stream of examples that bug-finding tools can use to get better.

LAVA: Large-scale Automated Vulnerability Addition

Goals for Automated Bug Corpora


So what do we want out of our bug injection? In our paper, we defined five goals for automated bug injection, requiring that injected bugs
  1. Be cheap and plentiful
  2. Span the execution lifetime of a program
  3. Be embedded in representative control and data flow
  4. Come with a triggering input that proves the bug exists
  5. Manifest for a very small fraction of possible inputs
The first goal we've already discussed – if we want to evaluate tools and enable "hill climbing" by bug finders, we will want a lot of bugs. If it's too expensive to add a bug, or if we can only add a handful per program, then we don't gain much by doing it automatically – expensive humans can already add small numbers of bugs to programs by hand.

The next two relate to whether our (necessarily artificial) bugs are reasonable proxies for real bugs. This is a tricky and contentious point, which we'll return to in part three. For now, I'll note that the two things called out here – occurring throughout the program and being embedded in "normal" control and data flow – are intended to capture the idea that program analyses will need to do essentially the same reasoning about program behavior to find them as they would for any other bugs. In other words, they're intended to help ensure that getting better at finding LAVA bugs will make tools better at understanding programs generally.

The fourth is important because it allows us to demonstrate, conclusively, that the bugs we inject are real problems. Concretely, with LAVA we can demonstrate an input for each bug we inject that causes the program to crash with a segfault or bus error.

The final property is critical but not immediately obvious. We don't want the bugs we inject to be too easy to find. In particular, if a bug manifests on most inputs, then it's trivial to find it – just run the program and wait for the crash. We might even want this to be a tunable parameter, so that we could specify what fraction of the input space of a program causes a crash and dial the difficulty of finding the right input up or down.

Ethics of Bug Injection


A common worry about bug injection is that it could be misused to add backdoors into legitimate software. I think these worries are, for the most part, misplaced. To see why, consider the goals of a would-be attacker trying to sneak a backdoor into some program. They want:
  1. A way to get the program to do something bad on some secret input.
  2. Not to get caught (i.e., to be stealthy, and for the bugs to be deniable).
Looking at (1), it's clear that one bug suffices to achieve the goal; there's no need to add millions of bugs to a program. Indeed, adding millions of bugs harms goal (2) – it would require lots of changes to the program source, which would be very difficult to hide.

An attempted Linux kernel backdoor from 2003. Can you spot the bugdoor?

In other words, the benefit that LAVA provides is in adding lots of bugs at scale. An attacker that wants to add a backdoor can easily do it by hand – they only need to add one, and even if it takes a lot of effort to understand the program, that effort will be rewarded with extra stealth and deniability. Although the bugs that LAVA injects are realistic in many ways, they do not look like mistakes a programmer would have naturally made, which means that manual code review would be very likely to spot them.

(There is one area where LAVA might help a would-be attacker – the analysis we do to locate portions of the program that have access to attacker controlled input could conceivably speed up the process of inserting a backdoor by hand. But this analysis is quite general, and is useful for far more than just adding bugs to programs.)

The Road Ahead


The next post will discuss the actual mechanics of automated bug injection. We'll see how, using some new taint analyses in PANDA, we can analyze a program to find small modifications that cause attacker-controlled input to reach sensitive points in the program and selectively trigger memory safety errors when the input is just right.

Once we understand how LAVA works, the final post will be about evaluation: how can we tell if LAVA succeeded in its goals of injecting massive numbers of realistic bugs? And how well do current bug-finders fare at finding LAVA bugs?

Credits


The idea for LAVA originated with Tim Leek of MIT Lincoln Laboratory. Our paper lists authors alphabetically, because designing, implementing and testing it truly was a group effort. I am honored to share a byline with Patrick Hulin, Engin Kirda, Tim Leek, Andrea Mambretti, Wil Robertson, Frederick Ulrich, and Ryan Whelan.

Tuesday, January 5, 2016

PANDA Plugin Documentation

It's been a very long time coming, but over the holiday break I went through and created basic documentation for all 54 currently-available PANDA plugins. Each plugin now includes a manpage-style document named USAGE.md in its plugin directory.

You can find a master list of each plugin and a link to its man page here:
https://github.com/moyix/panda/blob/master/docs/Plugins.md

Hopefully this will help people get started using PANDA to do some cool reverse engineering!

Friday, October 2, 2015

PANDA VM Update October 2015

The PANDA virtual machine has once again been updated, and you can download it from:


Notable changes:

  • We fixed a record/replay bug that was preventing Debian Wheezy and above from replaying properly.
  • The QEMU GDB stub now works during replay, so you can break, step, etc. at various points during the replay to figure out what's going on. We still haven't implemented reverse-step though – hopefully in a future release.
  • Thanks to Manolis Stamatogiannakis, the Linux OS Introspection code can now resolve file descriptors to actual filenames. Tim Leek then extended the file_taint plugin to use this information, so file-based tainting should be more accurate now, even if things like dup() are used.
  • We have added support for more versions of Windows in the syscalls2 code.
Enjoy!

Thursday, August 27, 2015

(Sys)Call Me Maybe: Exploring Malware Syscalls with PANDA

System calls are of great interest to researchers studying malware, because they are the only way that malware can have any effect on the world – writing files to the hard drive, manipulating the registry, sending network packets, and so on all must be done by making a call into the kernel.

In Windows, the system call interface is not publicly documented, but there have been lots of good reverse engineering efforts, and we now have full tables of the names of each system call; in addition, by using the Windows debug symbols, we can figure out how many arguments each system call takes (though not yet their actual types).

I recently ran 24,389 malware replays under PANDA and recorded all the system calls made, along with their arguments (just the top-level argument, without trying to descend into pointer types or dereference handle types). So for each replay, we now have a log file that looks like:

3f9b2340 NtGdiFlush
3f9b2340 NtUserGetMessage 0175feac 00000000 00000000 00000000
3f9b2120 NtCreateEvent 0058f8d8 001f0003 00000000 00000000 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 0058f83c 00000001 00000000 00000000
3f9b2120 NtSetEvent 000002ec 00000000
3f9b2120 NtWaitForSingleObject 000002f0 00000000 0058f89c
3f9b2120 NtReleaseWorkerFactoryWorker 00000050
3f9b2120 NtReleaseMutant 00000098 00000000
3f9b2120 NtWaitForSingleObject 000005a4 00000000 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf49c 00000001 00000000 00000000
3f9b2120 NtReleaseMutant 00000098 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf4a8 00000001 00000000 00dbf4c8
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf49c 00000001 00000000 00000000
3f9b2120 NtClearEvent 000002ec
3f9b2120 NtReleaseMutant 00000098 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf49c 00000001 00000000 00000000
3f9b2120 NtReleaseMutant 000001e8 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf3b8 00000001 00000000 00000000
3f9b2120 NtReleaseMutant 00000158 00000000
3f9b2120 NtCreateEvent 00dbeed4 001f0003 00000000 00000000 00000000
3f9b2120 NtDuplicateObject ffffffff fffffffe ffffffff 002edf50 00000000 00000000 00000002
3f9b2120 NtTestAlert
...

The first column identifies the process that made the call, using its address space as a unique identifier. The second gives the name of the call, and the remaining columns show the arguments passed to the function.

As usual, this data can be freely downloaded; the data set is 38GB. Each log file is compressed; you can use the showsc program (included in the tarball) to display an individual log file:

$ ./showsc 32 32bit/008d065f-7f5d-4a86-9995-970509ff3999_syscalls.dat.gz

You can download the data set here:

Interesting Malware System Calls

As a first pass, we can look at what the least commonly used system calls are. These may be interesting because rarely used system calls are more likely to contain bugs; in the context of malware, invoking a vulnerable system call can be a way to achieve privilege escalation.

Here are a few that came out from sorting the list of system calls in the malrec dataset and then searching Google for some of the least common:
  • NtUserMagControl (1 occurrence) One of many functions found by j00ru to cause crashes due to invalid pointer dereferences when called from the context of the CSRSS process
  • NtSetLdtEntries (2 occurrences) Used as an anti-debug trick by some malware
  • NtUserInitTask (3 occurrences) Used as part of an exploit for CVE-2012-2553
  • NtGdiGetNearestPaletteIndex (3 occurrences) Used in an exploit for MS07-017
  • NtQueueApcThreadEx (5 occurrences) Mentioned as a way to get attacker-controlled code into the kernel, allowing one to bypass SMEP
  • NtUserConvertMemHandle (5 occurrences) Used to replace a freed kernel object with attacker data in an exploit for CVE-2015-0058
  • NtGdiEnableEudc (9 occurrences) Used in a privilege escalation exploit where NtGdiEnableEudc assumes a certain registry key is of type REG_SZ without checking, allowing an attacker to overflow a stack buffer (I was unable to find anything about whether this has been patched – Update: Mark Wodrich points out that this is CVE-2010-4398 and it was patched in MS11-011)
  • NtAllocateReserveObject (11 occurrences) Used for a kernel pool spray
  • NtVdmControl (55 occurrences) Used for the famous CVE-2010-0232 bug; Tavis Ormandy won the Pwnie for Best Privilege Escalation Bug in 2010 for this.
Of course, we can't say for sure that the replays that execute these calls actually contain exploitation attempts. After all, there are benign ways to use each of the calls, or they wouldn't be in Windows in the first place :) But these are a few that may reward closer examination; if they are in fact exploit attempts, you can then use PANDA's record and replay facility to step through the exploit in as much detail as you like. You can even use PANDA's recently-fixed QEMU gdb stub to go through the exploit instruction by instruction.

You can peruse the full list of system calls and their frequencies here: 32-bit, 64-bit. Let me know if you find any other interesting calls in there :)

Updates 8/28/2015

If you want to know which log files have which system calls without processing all of them, I have created an index that lists the unique calls for each replay:
Also, Reddit user trevlix wondered whether the lack of pointer dereferencing was inherent to PANDA or something I'd just left out. My response:

Yes, it is possible to do that. I just wasn't able to because I didn't have access to full system call prototypes. E.g., to follow pointers for something like NtCreateFile, you need to know that its full prototype is
NTSTATUS NtCreateFile(
  _Out_    PHANDLE            FileHandle,
  _In_     ACCESS_MASK        DesiredAccess,
  _In_     POBJECT_ATTRIBUTES ObjectAttributes,
  _Out_    PIO_STATUS_BLOCK   IoStatusBlock,
  _In_opt_ PLARGE_INTEGER     AllocationSize,
  _In_     ULONG              FileAttributes,
  _In_     ULONG              ShareAccess,
  _In_     ULONG              CreateDisposition,
  _In_     ULONG              CreateOptions,
  _In_     PVOID              EaBuffer,
  _In_     ULONG              EaLength
);
You furthermore have to know how big an OBJECT_ATTRIBUTES struct is, so that when you dereference the pointer you know how many bytes to read and store in the log.
If you wanted to collect extra information about any of the logs posted, it's possible since they are full-system traces and can be replayed :) Supposing you have a syscall trace file like 0a1a1a77-d4f1-43e0-bc14-4f34f7d96820_syscalls.dat.gz, you can use the UUID to find it on malrec and download the log file:
Then you'd just unpack that log (scripts/rrunpack.py in the PANDA directory) and replay it with a PANDA plugin that understands how to dereference the various pointers involved. For reference, you can see the PANDA plugin I originally used to gather the syscall traces:
And you can see on lines 108 and 119 where you'd have to add in code to read the dereferenced values.

Monday, August 24, 2015

One Weird Trick to Shrink Your PANDA Malware Logs by 84%

When I wrote about some of the lessons learned from PANDA Malrec's first 100 days of operation, one of the things I mentioned was that the storage requirements for the system were extremely high. In the four months since, the storage problem only got worse: as of last week, we were storing 24,000 recordings of malware, coming in at a whopping 2.4 terabytes of storage.

The amount of data involved poses problems not just for our own storage but also for others wanting to make use of the recordings for research. 2.4 terabytes is a lot, especially when it's spread out over 24,000 HTTP requests. If we want our data to be useful to researchers, it would be great if we could find better ways of compressing the recording logs.

As it turns out, we can! The key is to look closely at what makes up a PANDA recording:
  • The log of non-deterministic events (the -rr-nondet.log files)
  • The initial QEMU snapshot (the -rr-snp files)
The first of these is highly redundant and actually compresses quite well already – the xz compression used by PANDA's rrpack.py usually manages to get around a 5-6X reduction for the nondet log. The snapshots also compress pretty well, at around 4X.

So where can we find further savings? The trick is to notice that for the malware recordings, each run is started by first reverting the virtual machine to the same state. That means that the initial snapshot files for our recordings are almost all identical! In fact, if we do a byte-by-byte diff, the vast majority differ by only a few bytes – most likely a timer value that increments in the short time between when we revert to the snapshot and begin our recording.

With this observation in hand, we can instead store the malware recordings in a new format. The nondet log will still be compressed with xz, but the snapshot for each recording will instead be stored as a binary diff with respect to a reference snapshot. Because we have two separate recording platforms and have changed the initial environment used by Malrec a few times, the total number of reference snapshots we need is 8 – but this is a huge improvement over storing 24,000 snapshots! The binary diff for each recording then requires only a handful of bytes to specify.
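
A minimal sketch of the byte-level diff idea (illustrative only; it assumes the two snapshots are the same size and is not the actual tool or format used for the archive):

#include <stdio.h>

/* Compare a recording's snapshot against the shared reference snapshot and
 * print the offsets where they differ; those (offset, old, new) triples are
 * all that needs to be stored per recording. */
int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <reference_snp> <recording_snp>\n", argv[0]);
        return 1;
    }
    FILE *ref = fopen(argv[1], "rb"), *snp = fopen(argv[2], "rb");
    if (!ref || !snp) { perror("fopen"); return 1; }

    long offset = 0;
    int a, b;
    while ((a = fgetc(ref)) != EOF && (b = fgetc(snp)) != EOF) {
        if (a != b)
            printf("%ld: %02x -> %02x\n", offset, a, b);
        offset++;
    }
    fclose(ref);
    fclose(snp);
    return 0;
}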

The upshot of all of this is that a dataset of 24,189 PANDA malware recordings now takes up just 387 GB, a savings of 84%. This is pretty astonishing – the recordings in the archive contain 476 trillion instructions' worth of execution, meaning our storage rate is 1147.5 instructions per byte! As a point of comparison, one recent published instruction trace compression scheme achieved 2 bits per instruction; our compression is 0.007 bits per instruction – though this comparison is somewhat unfair since that paper can't assume a shared starting point.

You can download this data set as a single file from our MIT mirror; please share and mirror this as widely as you like! There is a README included in the archive that contains instructions for extracting and replaying any of the recordings. Click the link below to download:


Stay tuned, too – there's more cool stuff on the way. Next time, I'll be writing about one of the things you can do with a full-trace recording dataset like this: extracting system call traces with arguments. And of course that means I'll have a syscall dataset to share then as well :)