Friday, October 2, 2015

PANDA VM Update October 2015

The PANDA virtual machine has once again been updated, and you can download it from:

Notable changes:

  • We fixed a record/replay bug that was preventing Debian Wheezy and above from replaying properly.
  • The QEMU GDB stub now works during replay, so you can break, step, etc. at various points during the replay to figure out what's going on. We still haven't implemented reverse-step though – hopefully in a future release.
  • Thanks to Manolis Stamatogiannakis, the Linux OS Introspection code can now resolve file descriptors to actual filenames. Tim Leek then extended the file_taint plugin to use this information, so file-based tainting should be more accurate now, even if things like dup() are used.
  • We have added support for more versions of Windows in the syscalls2 code.

Thursday, August 27, 2015

(Sys)Call Me Maybe: Exploring Malware Syscalls with PANDA

System calls are of great interest to researchers studying malware, because they are the only way that malware can have any effect on the world – writing files to the hard drive, manipulating the registry, sending network packets, and so on all must be done by making a call into the kernel.

In Windows, the system call interface is not publicly documented, but there have been lots of good reverse engineering efforts, and we now have full tables of the names of each system call; in addition, by using the Windows debug symbols, we can figure out how many arguments each system call takes (though not yet their actual types).
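
To make that a bit more concrete, the bookkeeping boils down to a per-version lookup table. Here's a minimal Python sketch (the service numbers and table contents below are made up for illustration – the real numbers differ between Windows versions, which is why each version needs its own table):

# Hypothetical excerpt of a per-version syscall table:
# service number -> (name, argument count). The numbers here are
# placeholders; argument counts come from the debug symbols.
SYSCALL_TABLE = {
    0x0042: ("NtCreateFile", 11),
    0x00b3: ("NtOpenFile", 6),
    0x018c: ("NtWriteFile", 9),
}

def describe_syscall(service_num, arg_words):
    # Map a raw service number plus the words on the user stack to a
    # (name, args) pair, keeping only as many args as the call takes.
    name, nargs = SYSCALL_TABLE.get(service_num, ("Unknown", 0))
    return name, arg_words[:nargs]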

I recently ran 24,389 malware replays under PANDA and recorded all the system calls made, along with their arguments (just the top-level arguments, without trying to descend into pointer types or dereference handle types). So for each replay, we now have a log file that looks like this:

3f9b2340 NtGdiFlush
3f9b2340 NtUserGetMessage 0175feac 00000000 00000000 00000000
3f9b2120 NtCreateEvent 0058f8d8 001f0003 00000000 00000000 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 0058f83c 00000001 00000000 00000000
3f9b2120 NtSetEvent 000002ec 00000000
3f9b2120 NtWaitForSingleObject 000002f0 00000000 0058f89c
3f9b2120 NtReleaseWorkerFactoryWorker 00000050
3f9b2120 NtReleaseMutant 00000098 00000000
3f9b2120 NtWaitForSingleObject 000005a4 00000000 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf49c 00000001 00000000 00000000
3f9b2120 NtReleaseMutant 00000098 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf4a8 00000001 00000000 00dbf4c8
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf49c 00000001 00000000 00000000
3f9b2120 NtClearEvent 000002ec
3f9b2120 NtReleaseMutant 00000098 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf49c 00000001 00000000 00000000
3f9b2120 NtReleaseMutant 000001e8 00000000
3f9b2120 NtWaitForMultipleObjects 00000002 00dbf3b8 00000001 00000000 00000000
3f9b2120 NtReleaseMutant 00000158 00000000
3f9b2120 NtCreateEvent 00dbeed4 001f0003 00000000 00000000 00000000
3f9b2120 NtDuplicateObject ffffffff fffffffe ffffffff 002edf50 00000000 00000000 00000002
3f9b2120 NtTestAlert

The first column identifies the process that made the call, using its address space as a unique identifier. The second gives the name of the call, and the remaining columns show the arguments passed to the function.
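
If you want to slice the logs yourself, the text format is easy to consume. Here's a minimal Python sketch (assuming lines in exactly the format shown above, i.e. showsc's output piped to stdin – count_calls.py is a hypothetical name) that parses each line and tallies how often each call appears:

#!/usr/bin/env python
# count_calls.py: tally system call frequencies from showsc-style
# text output. Each line is: <asid> <callname> [args...] (all hex).
import sys
from collections import Counter

counts = Counter()
for line in sys.stdin:
    fields = line.split()
    if len(fields) < 2:
        continue  # skip blank or malformed lines
    asid, name = fields[0], fields[1]
    counts[name] += 1

for name, n in counts.most_common():
    print("%8d %s" % (n, name))

For example, ./showsc 32 somelog.dat.gz | python count_calls.py gives you a frequency-sorted list of calls for a single replay.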

As usual, this data can be freely downloaded; the data set is 38GB. Each log file is compressed; you can use the showsc program (included in the tarball) to display an individual log file:

$ ./showsc 32 32bit/008d065f-7f5d-4a86-9995-970509ff3999_syscalls.dat.gz

You can download the data set here:

Interesting Malware System Calls

As a first pass, we can look at what the least commonly used system calls are. These may be interesting because rarely used system calls are more likely to contain bugs; in the context of malware, invoking a vulnerable system call can be a way to achieve privilege escalation.

Here are a few that came out from sorting the list of system calls in the malrec dataset and then searching Google for some of the least common:
  • NtUserMagControl (1 occurrence): One of many functions found by j00ru to cause crashes due to invalid pointer dereferences when called from the context of the CSRSS process.
  • NtSetLdtEntries (2 occurrences): Used as an anti-debug trick by some malware.
  • NtUserInitTask (3 occurrences): Used as part of an exploit for CVE-2012-2553.
  • NtGdiGetNearestPaletteIndex (3 occurrences): Used in an exploit for MS07-017.
  • NtQueueApcThreadEx (5 occurrences): Mentioned as a way to get attacker-controlled code into the kernel, allowing one to bypass SMEP.
  • NtUserConvertMemHandle (5 occurrences): Used to replace a freed kernel object with attacker data in an exploit for CVE-2015-0058.
  • NtGdiEnableEudc (9 occurrences): Used in a privilege escalation exploit where NtGdiEnableEudc assumes a certain registry key is of type REG_SZ without checking, allowing an attacker to overflow a stack buffer. (I was unable to find anything about whether this has been patched – Update: Mark Wodrich points out that this is CVE-2010-4398 and it was patched in MS11-011.)
  • NtAllocateReserveObject (11 occurrences): Used for a kernel pool spray.
  • NtVdmControl (55 occurrences): Used for the famous CVE-2010-0232 bug; Tavis Ormandy won the Pwnie for Best Privilege Escalation Bug in 2010 for this.
Of course, we can't say for sure that the replays that execute these calls actually contain exploitation attempts. After all, there are benign ways to use each of these calls, or they wouldn't be in Windows in the first place :) But these are a few that may reward closer examination; if they are in fact exploit attempts, you can then use PANDA's record and replay facility to step through the exploit in as much detail as you like. You can even use PANDA's recently fixed QEMU GDB stub to go through the exploit instruction by instruction.

You can peruse the full list of system calls and their frequencies here: 32-bit, 64-bit. Let me know if you find any other interesting calls in there :)

Updates 8/28/2015

If you want to know which log files have which system calls without processing all of them, I have created an index that lists the unique calls for each replay:
Also, Reddit user trevlix wondered whether the lack of pointer dereferencing was inherent to PANDA or something I'd just left out. My response:

Yes, it is possible to do that. I just wasn't able to because I didn't have access to full system call prototypes. E.g., to follow pointers for something like NtCreateFile, you need to know that its full prototype is:
NTSTATUS NtCreateFile(
  _Out_    PHANDLE            FileHandle,
  _In_     ACCESS_MASK        DesiredAccess,
  _In_     POBJECT_ATTRIBUTES ObjectAttributes,
  _Out_    PIO_STATUS_BLOCK   IoStatusBlock,
  _In_opt_ PLARGE_INTEGER     AllocationSize,
  _In_     ULONG              FileAttributes,
  _In_     ULONG              ShareAccess,
  _In_     ULONG              CreateDisposition,
  _In_     ULONG              CreateOptions,
  _In_     PVOID              EaBuffer,
  _In_     ULONG              EaLength
);
You furthermore have to know how big an OBJECT_ATTRIBUTES struct is, so that when you dereference the pointer you know how many bytes to read and store in the log.
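
For instance, OBJECT_ATTRIBUTES on 32-bit Windows is six 4-byte fields (24 bytes), so once you've read those bytes out of guest memory you could decode them with something like this (a sketch – how you read the guest memory is up to your plugin):

import struct

# OBJECT_ATTRIBUTES on 32-bit Windows – six 4-byte fields, 24 bytes:
#   ULONG Length; HANDLE RootDirectory; PUNICODE_STRING ObjectName;
#   ULONG Attributes; PVOID SecurityDescriptor;
#   PVOID SecurityQualityOfService;
OBJECT_ATTRIBUTES_SIZE = 24

def parse_object_attributes(buf):
    # buf holds OBJECT_ATTRIBUTES_SIZE bytes read from the guest at
    # the ObjectAttributes pointer; all fields are little-endian.
    (length, root_dir, obj_name_ptr,
     attrs, sec_desc, sec_qos) = struct.unpack("<6I", buf[:OBJECT_ATTRIBUTES_SIZE])
    # obj_name_ptr is itself a pointer (to a UNICODE_STRING), so a full
    # implementation would recurse one more level to recover the name.
    return {"Length": length, "RootDirectory": root_dir,
            "ObjectName": obj_name_ptr, "Attributes": attrs,
            "SecurityDescriptor": sec_desc,
            "SecurityQualityOfService": sec_qos}
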
If you wanted to collect extra information about any of the logs posted, it's possible since they are full-system traces and can be replayed :) Supposing you have a syscall trace file like 0a1a1a77-d4f1-43e0-bc14-4f34f7d96820_syscalls.dat.gz, you can use the UUID to find it on malrec and download the log file:
Then you'd just unpack that log (with the scripts in the scripts/ directory of the PANDA source) and replay it with a PANDA plugin that understands how to dereference the various pointers involved. For reference, you can see the PANDA plugin I originally used to gather the syscall traces:
And you can see on lines 108 and 119 where you'd have to add in code to read the dereferenced values.

Monday, August 24, 2015

One Weird Trick to Shrink Your PANDA Malware Logs by 84%

When I wrote about some of the lessons learned from PANDA Malrec's first 100 days of operation, one of the things I mentioned was that the storage requirements for the system were extremely high. In the four months since, the storage problem only got worse: as of last week, we were storing 24,000 recordings of malware, coming in at a whopping 2.4 terabytes of storage.

The amount of data involved poses problems not just for our own storage but also for others wanting to make use of the recordings for research. 2.4 terabytes is a lot, especially when it's spread out over 24,000 HTTP requests. If we want our data to be useful to researchers, it would be great if we could find better ways of compressing the recording logs.

As it turns out, we can! The key is to look closely at what makes up a PANDA recording:
  • The log of non-deterministic events (the -rr-nondet.log files)
  • The initial QEMU snapshot (the -rr-snp files)
The first of these is highly redundant and actually compresses quite well already – the xz compression used by PANDA usually manages around a 5-6x reduction on the nondet log. The snapshots also compress pretty well, at around 4x.

So where can we find further savings? The trick is to notice that for the malware recordings, each run is started by first reverting the virtual machine to the same state. That means that the initial snapshot files for our recordings are almost all identical! In fact, if we do a byte-by-byte diff, the vast majority differ by only a few bytes – most likely a timer value that increments in the short time between when we revert to the snapshot and begin our recording.

With this observation in hand, we can store the malware recordings in a new format. The nondet log is still compressed with xz, but the snapshot for each recording is now stored as a binary diff against a reference snapshot. Because we have two separate recording platforms and have changed the initial environment used by Malrec a few times, we need a total of 8 reference snapshots – a huge improvement over storing 24,000 full snapshots! The binary diff for each recording then requires only a handful of bytes to specify.
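
To get a feel for why this is such a win, here's a toy Python sketch of the diff-and-restore idea (assuming, as holds for our snapshots, that the reference and the new snapshot are the same size; the real archive uses a proper binary diff format):

def byte_diff(ref, snp):
    # Return (offset, new_byte) pairs where the snapshot differs from
    # the reference. For near-identical snapshots this list is tiny.
    assert len(ref) == len(snp)  # assumed: same-size snapshots
    return [(i, snp[i]) for i in range(len(ref)) if ref[i] != snp[i]]

def apply_diff(ref, patches):
    # Rebuild the original snapshot from the reference plus patches.
    out = bytearray(ref)
    for offset, b in patches:
        out[offset] = b
    return bytes(out)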

The upshot of all of this is that a dataset of 24,189 PANDA malware recordings now takes up just 387 GB, a savings of 84%. This is pretty astonishing – the recordings in the archive contain 476 trillion instructions' worth of execution, meaning our storage rate is 1147.5 instructions per byte! As a point of comparison, one recent published instruction trace compression scheme achieved 2 bits per instruction; our compression is 0.007 bits per instruction – though this comparison is somewhat unfair since that paper can't assume a shared starting point.

You can download this data set as a single file from our MIT mirror; please share and mirror this as widely as you like! There is a README included in the archive that contains instructions for extracting and replaying any of the recordings. Click the link below to download:

Stay tuned, too – there's more cool stuff on the way. Next time, I'll be writing about one of the things you can do with a full-trace recording dataset like this: extracting system call traces with arguments. And of course that means I'll have a syscall dataset to share then as well :)

Monday, April 13, 2015

PANDA VM Update April 2015

The PANDA virtual machine has been updated to the latest version of PANDA, which corresponds to commit ce866e1508719282b970da4d8a2222f29f959dcd. You can download it here:

  • The taint system has been rewritten and is now available as the taint2 plugin. It is at least 10x faster, and uses much less memory. You can check out an example of how to use it in the recently updated tainted instructions tutorial.
  • Since taint is now usable, I have increased the amount of memory in the VM to 4GB, which is reasonable for most tasks that use taint.
  • PANDA now understands system calls and their arguments on Linux (x86 and ARM) and Windows 7 (x86). This is available in the syscalls2 plugin, and even has some documentation; there's an example invocation just after this list.
  • There is now a generic logging format for PANDA, which uses Protocol Buffers. Check out the pandalog documentation for more details.
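
For instance, running a replay through the new syscalls2 plugin looks roughly like this (the replay name is a placeholder, and the exact profile names depend on your build – check the plugin's documentation):

$ ./i386-softmmu/qemu-system-i386 -replay myrecording -panda syscalls2:profile=windows7_x86
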
There's lots more that has changed, and I'll try to write up a more detailed post soon about all the cool stuff PANDA can do!

Tuesday, March 24, 2015

100 Days of Malware

It's now been a little over 100 days since I started running malware samples in PANDA and making the executions publicly available. In that time, we've analyzed 10,794 pieces of malware, which generated:
  • 10,794 record/replay logs, representing 226,163,195,948,195 instructions executed
  • 10,794 packet captures, totaling 26GB of data and 33,968,944 packets
  • 10,794 movies, which are interesting enough that I'll give them their own section
  • 10,794 VirusTotal reports, indicating what level of detection they had when they were run by malrec
  • 107 torrents, containing downloads of the above
I've been pleased by the interest malrec has generated. We've had visitors from over 6000 unique IPs, in 89 different countries:

The Movies

There's a lot of great stuff in these ~10K movies. An easy way to get an idea of what's in there is to sort by filesize; because of the way MP4 encoding works, larger files in general mean that there's more going on on-screen (though only up to a point – the largest ones seemed to be mostly command prompts scrolling, which wasn't very interesting). I took it upon myself to watch the ones between ~300KB and 1MB, and found some fun videos:

Several games:


Anime-like game

"Wild Spirit"

Some fake antivirus:

Extortion attempts were really popular:

Though they didn't always invest in fancy art design:

Extortion (low-rent)

Download managers and trojaned installers were very popular:

Broken trojaned installer

Download manager

Some used legitimate-looking documents to disguise nefarious intentions:

Some maritime history

German Britfilms

Chinese newspaper

Finally, there's the weird and random:

Trust Me, I'm a Doctor


Not pictured: there was even one sample that played a porn video (NSFW) while it infected you; I guess it was intended as a distraction?

Antivirus Results

After malrec runs a sample in the PANDA sandbox, it checks to see what VirusTotal thinks about the file, and saves the result in VirusTotal's JSON format. From this, we can find out what the most popular families in our corpus are. For example, here's what McAfee thought about our samples:

And Symantec:

In the graphs above, "None" includes both cases where the AV didn't detect anything and cases where the sample hadn't been submitted to VirusTotal yet, so you probably shouldn't use this data to draw conclusions about the relative efficacy of Symantec vs. McAfee. I'd like to go back and see how the detection rate has changed, but unfortunately my VirusTotal API key is currently rate-limited, so re-checking all 10,794 samples would be a pain.
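
If you'd like to reproduce this kind of tally from the saved reports, here's a rough Python sketch. It assumes the VirusTotal v2 JSON layout, where each report has a "scans" dict mapping engine names to their results; the reports/ path is hypothetical:

import glob
import json
from collections import Counter

counts = Counter()
for path in glob.glob("reports/*.json"):
    with open(path) as f:
        report = json.load(f)
    # Each engine entry looks like {"detected": bool, "result": str|null}
    result = report.get("scans", {}).get("McAfee", {}).get("result")
    counts[result if result else "None"] += 1

for label, n in counts.most_common(10):
    print("%6d %s" % (n, label))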

Bugs in PANDA

Making recordings on such a large scale has exposed bugs in PANDA, some of which we have fixed, others of which need further investigation:
  • One sample, 5309206b-e76f-417a-a27e-05e7f20c3c9d, ran a tight loop of rdtsc queries without interrupts. Because PANDA's replay queue would continue filling until it saw the next interrupt, this meant that the queue could grow very large and exhaust physical memory. This was fixed by limiting the queue to 65,536 entries.
  • Our support for 64-bit Windows is much less reliable than I would like. Of the 153 64-bit samples, 45 failed to replay (29.4%). We clearly need to do better here!
  • We see some sporadic replay failures in 32-bit recordings as well, but they are much rarer: of the 10,641 32-bit recordings we have, only 14 failed to replay. I suspect some of these are due to a known bug involving the recording of port I/O.
  • One sample (MD5 1285d1893937c3be98dcbc15b88d9433) we have not even been able to record, because it causes QEMU to run out of memory; if you'd like to play with it, you can download it here.


With our current disk usage (1.2TB used out of 2TB), I'm anticipating that we'll be able to run for another 50 days or so; hopefully by then I'll be able to arrange to get more storage.

Meanwhile, I've started to do some deeper research on the corpus, including visualization and mining memory for printable strings. I'm looking forward to getting a clearer picture of what's in our corpus as it grows!

Tuesday, December 9, 2014

Reproducible Malware Analyses for All

Summary: With help from GTISC, I have begun running 100 malware samples per day and posting the PANDA record & replay logs online. The goal is to lower the barriers to entry for doing dynamic malware research, and to make such research reproducible.

Today, I spoke at the ACSAC Malware Memory Forensics workshop in New Orleans about a problem that I think has been largely ignored in existing dynamic malware analysis research: reproducibility.

To make results reproducible, a computer science researcher typically needs to do three things:
  1. Carefully and precisely describe their methods.
  2. Release the code they wrote for their system or analysis.
  3. Release the data the analysis was performed on.
Of course, even research published at top conferences may fail some of these criteria; a recent study by Collberg et al. attempted to obtain the code associated with 613 recent papers from ACM conferences, and the authors were able to obtain, build, and run the code for only 102. (I'm eliding a lot of important detail here; please do read the original study!)

Rather than discuss sharing of code today, however, I'd like to talk about sharing data, and particularly sharing data in malware analysis.

For static analysis of malware, sharing the malware executable is usually sufficient to satisfy the requirement for releasing data; anyone can then go and look at the same static code and reach the same conclusions by following the author's description. A number of sites exist to provide access to such malware samples, such as VirusShare, OpenMalware, and Contagio.

The data associated with a dynamic analysis is more difficult to share. Software execution is by nature ephemeral: each run of a program may be slightly different based on things like timings, the availability of network servers, the versions of software installed on the machine, and more. This problem is especially apparent with malware, which typically has a short "shelf life". Many malware samples need to contact their command and control servers to operate, and these C&C servers often disappear within days or weeks after a piece of malware is released. Malware may even be designed to "self-destruct" after a certain date, exiting immediately if it is run too long after its creation.

Thus, a researcher who tries to reproduce a dynamic malware analysis by running a sample from last year will almost certainly discover that the malware no longer has the behavior originally seen. As a result, most dynamic analyses of malware are currently not reproducible in any meaningful sense.

Record and replay provides a solution. As I have discussed in the past, record and replay allows one to reproduce a whole-system dynamic execution by creating a compact log of the nondeterministic inputs to a system. These logs can be shared and then replayed in PANDA, allowing anyone to re-run the exact execution and be assured that every instruction will be executed exactly the same way.

To put my malware where my mouth is, I've set up a site where, every day, 100 new malware record/replay logs and associated PCAPs will be posted. This is currently something of a trial run, so there may be some changes as I shake out the bugs; in particular, I hope to give it a nicer interface than just a brute listing of all the MD5s. Check it out:

Here are some ideas for what to do with this data:
  1. Create movies of all the malware executions and watch them to see if there's anything interesting. For example, here's a hilarious extortion attempt from last night:
  2. Use something like TZB to find all printable strings accessed in memory throughout the entire execution, and build a search engine that indexes all of these strings, so you could search for "bitcoin" and find all the bitcoin stealing samples in the corpus.
  3. Create system call traces and then use them to automatically apply behavioral labels to the corpus.
  4. Go apply your expertise in machine learning to do something really cool that I haven't even thought of because I'm bad at machine learning, without having to set up your own malware analysis platform.
I'm really excited to see what we can accomplish.
The malware recordings are graciously hosted by the Georgia Tech Information Security Center, who are also providing me with access to malware samples. Thanks in particular to Paul Royal and Adam Allred for helping me make this a reality after I pitched it at CSAW THREADS.

Wednesday, November 26, 2014

Replaying Regin in PANDA

Regin, a piece of state-sponsored malware that may have been used to attack telecoms and cryptographers, has recently come to light. There are several good writeups out there, and I encourage you to check them out.

Getting access to samples in cases like this is often a challenge. Luckily, both The Intercept and VXShare (warning: both links contain live malware) have released samples thought to be associated with Regin, so that others can perform independent analysis. So far, it appears that the samples are all of the "stage1" component of the malware, rather than the initial "stage0" infector or the later stages.

In order to allow others to do dynamic analysis of this malware, I built a very small malware sandbox setup using PANDA. The sandbox essentially just executes a sample for five minutes, recording it using PANDA's record and replay facility. The process is slightly complicated by the fact that most of the stage1 samples are kernel-mode components; to (hopefully) deal with this I use the sc utility to create and start a service with the malware sample.

So, for normal executables:

start sample.exe

And for the kernel mode components:

sc create sample binPath= sample.exe type= kernel
sc start sample

So, without further ado, here are the recordings, associated PCAPs, and videos of the samples being executed:

The index.txt file shows the mapping between the original sample names and the auto-generated names used by the malware sandbox, along with the MD5s of each sample. Note that I have not tried to ensure that these samples really are Regin, and at least one (sample ID 26ed64ef-fcde-4171-99aa-e1e46301315d, MD5 0e783c9ea50c4341313d7b6b4037245b) seems to in fact be a QQ info stealer. There are also a few duplicates due to overlaps in the samples provided by The Intercept and VXShare; I have kept both in case a differential analysis between two runs turns out to be useful.

Happy malware analysis! And if you have more samples, please get in touch on Twitter (@moyix) or email me!