A couple ideas that went nowhere
I suspect a lot of people in academia accumulate ideas and projects that went nowhere for any number of reasons – maybe there were insurmountable technical challenges, maybe the right person to work on them never materialized, or maybe they just got crowded out by other projects and never picked back up. Here are a couple of mine. For each I'll try to indicate why it fell by the wayside, and whether I think it could be resurrected (if you're interested in doing some idea necromancy, let me know! :)).
Detecting Flush+Flush
Among the flurry of microarchitectural side-channel attacks that eventually culminated in the devastating Spectre and Meltdown attacks was one that has received relatively little attention: Flush+Flush. The basis of the attack is the observation that clflush takes a different amount of time depending on whether the address to be flushed was already in the cache or not. Gruss et al. had a nice paper on this variant of the attack at DIMVA 2016.
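For concreteness, here is a minimal sketch (GCC/Clang on x86; not code from the project) of the timing primitive the attack builds on. A single measurement like this is noisy, and the threshold separating "was cached" from "was not cached" is machine-specific, so treat it as an illustration rather than a working attack.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

/* Time a single clflush of `addr`. On the Intel parts studied in the
 * Flush+Flush paper, the flush is measurably slower when the line is
 * currently cached, so thresholding this value reveals whether someone
 * touched the line since the last flush. */
static uint64_t timed_clflush(const void *addr)
{
    unsigned aux;
    _mm_mfence();                    /* serialize earlier memory ops */
    uint64_t start = __rdtscp(&aux);
    _mm_clflush(addr);
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    static char target[64];
    target[0] = 1;                   /* bring the line into the cache */
    printf("flush (cached):   %llu cycles\n",
           (unsigned long long)timed_clflush(target));
    printf("flush (uncached): %llu cycles\n",  /* line was flushed by the call above */
           (unsigned long long)timed_clflush(target));
    return 0;
}
```

In practice you would average many measurements and calibrate the threshold per machine.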
The interesting thing about Flush+Flush to me is not its speed (which I believe has since been surpassed) but the fact that it is stealthy: unlike Flush+Reload, which causes an unusually large number of cache misses in the attacking process, Flush+Flush only causes cache misses in the victim process. So you can tell that an attack is happening, but you can't tell which process is actually carrying it out – which might be useful if you want to stop the attack without taking down the whole machine.
The key idea of the project was that even if you have only a global detector of whether an attack is going on system-wide, you can convert this into something that detects which process is attacking with a simple trick. Assuming you have control of the OS scheduler, just pause half the processes on the system and see if the attack stops. If it does, then the attacker must have been in the half you paused; you can now repeat this procedure on those processes and find the attacker via binary search. There are some wrinkles here: what if the attacker is carrying out the attack from multiple cooperating processes? What if the attacker probabilistically chooses whether or not to continue the attack at each time step?
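Here is a rough sketch of the non-adversarial version of that search, under assumptions we never got to validate: the global detector is available as a callback (say, something built on hardware performance counters), pausing is done with SIGSTOP/SIGCONT rather than from inside the scheduler, and the one-second settling delay is a placeholder.

```c
#include <signal.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Wolf-fence search for a single, deterministic attacker: repeatedly
 * pause half of the candidate processes, ask the global detector
 * whether the attack is still visible, resume them, and recurse into
 * the half that must contain the attacker. */
pid_t find_attacker(pid_t *pids, size_t n, bool (*attack_ongoing)(void))
{
    while (n > 1) {
        size_t half = n / 2;
        for (size_t i = 0; i < half; i++)
            kill(pids[i], SIGSTOP);     /* pause the first half */
        sleep(1);                       /* let the detector settle */
        bool persists = attack_ongoing();
        for (size_t i = 0; i < half; i++)
            kill(pids[i], SIGCONT);     /* resume them */
        if (persists) {                 /* attack kept going: attacker was not paused */
            pids += half;
            n -= half;
        } else {                        /* attack stopped: attacker was among the paused */
            n = half;
        }
    }
    return pids[0];
}
```

With a single deterministic attacker this takes a logarithmic number of rounds; the wrinkles above (cooperating or probabilistic attackers) are exactly what breaks this clean version and would force repeated or randomized passes.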
It's a pretty simple idea, but I think it may have applications beyond detecting Flush+Flush. For example, I recently complained on Twitter that it's currently hard to tell what's causing my Mac's fan to spin. I have a detector (the fan is on); if I can systematically freeze and unfreeze processes on the system, I can convert that into something that tells me which process is responsible. It's also very similar to the well-known ideas of code bisection and (my favorite name) Wolf Fence Debugging. So if this defense is ever implemented, I hope it becomes known as the wolf fence defense :)
This project failed primarily due to personnel issues: a couple of different students took it on as a semester or internship project, but didn't manage to finish it before they graduated or moved on to other things. It also turned out to be more difficult than expected to reliably carry out Flush+Flush attacks; at the time we started the project, a lot less was known about the Intel microarchitecture, and we were tripped up by features like hardware prefetching. We were also hampered by the lack of widely available implementations of detection based on hardware performance counters, and our own implementation was prone to false positives (at least in this we are not alone: a recent SoK paper has pointed out that hardware performance counters are not very reliable for security purposes).
I think this could still make a nice little paper, particularly if some of the more complex scenarios and advanced attackers were taken into account. When I discussed this with a more theoretically minded friend, he thought there might be connections to compressed sensing; meanwhile, a game-theoretically oriented colleague I spoke to thought it sounded like an instance of a Multi-Armed Bandit problem. So there's still plenty of meat on this problem for an interested researcher.
Reverse Engineering Currency Detection Code
It is by now well-known that a wide variety of software and hardware, from cameras to photo-editing software such as Photoshop, implement currency detection in order to help combat counterfeiting. But surprisingly little is known about these algorithms: Steven Murdoch did some research on the topic using a mix of black-box techniques and reverse engineering all the way back in 2004, but as far as I know he never published a paper on it.
Reverse engineering the precise detection technique could have a lot of benefits. First, as a matter of principle, I think defensive techniques must be attacked if we are to rely on them; the fact that this has been used in the wild for more than 15 years across a wide range of devices without any rigorous public analysis is really surprising to me. Second, there is a fun application if we can pin down precisely what makes the currency detector fire: we could create something that placed a currency watermark on arbitrary documents, making them impossible to scan or edit on devices that implement the currency detector! We could even, perhaps, imagine making T-shirts that trigger the detector when photographed :) I believe this idea has been floated before with the EURion constellation, but based on Murdoch's research we know that the EURion is not the only feature used (and it may not be used at all by modern detectors).
Our technical approach to this problem was to use dynamic taint analysis and measure the amount of computation Photoshop performed on each pixel of the input image. I now think this was a mistake: many image analyses compute global features (such as color histograms) over the whole image, and taint analysis on these will only tell you that every pixel contributes to the decision [1]. A more profitable approach might be something like differential slicing, which pins down the causal differences between two closely related execution traces; this would hopefully help isolate the code responsible for the detection for further manual analysis.
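As a toy illustration of the histogram problem (hand-rolled bookkeeping, not a real taint engine): if you propagate per-pixel taint labels through a global color histogram, any decision computed from the histogram ends up tainted by essentially the whole image.

```c
#include <stdint.h>
#include <stdio.h>

#define W 8
#define H 8
#define BUCKETS 4

int main(void)
{
    uint8_t img[H][W];
    for (int y = 0; y < H; y++)              /* a simple gradient image */
        for (int x = 0; x < W; x++)
            img[y][x] = (uint8_t)((x + y) * 16);

    int hist[BUCKETS] = {0};
    uint64_t taint[BUCKETS] = {0};           /* bitmask of pixels feeding each bucket */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int b = img[y][x] / (256 / BUCKETS);
            hist[b]++;
            taint[b] |= 1ULL << (y * W + x); /* this pixel taints bucket b */
        }
    }

    /* a decision computed over the full histogram inherits the union
     * of all bucket taints, i.e. essentially every pixel */
    uint64_t decision_taint = 0;
    for (int b = 0; b < BUCKETS; b++)
        decision_taint |= taint[b];

    int tainted = 0;
    for (int i = 0; i < W * H; i++)
        tainted += (int)((decision_taint >> i) & 1);
    printf("%d of %d pixels taint the decision\n", tainted, W * H);
    return 0;
}
```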
As with the Flush+Flush detection above, this project could still be viable if approached by someone with strength in binary analysis and some knowledge of image processing and watermarking algorithms.
1. This also made me start thinking about variants of taint analysis that would be better suited to image analyses. One possibility is something based on quantitative information flow; Stephen McCamant et al. had some success using this to analyze the security of image redactions. Even with traditional dynamic taint analysis, though, I think it might be possible to taint the results of intermediate stages of the computation (e.g., in the histogram example, one could try tainting each bucket and seeing how it contributed to the eventual decision, or for an FFT one could apply the taint analysis after the transformation into the frequency domain).
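Sticking with the same toy setup (and a made-up detector that just thresholds one bucket), here is what tainting the intermediate stage might buy: applying taint at the bucket level narrows the relevant input down to just the pixels feeding the bucket the decision actually depends on.

```c
#include <stdint.h>
#include <stdio.h>

#define W 8
#define H 8
#define BUCKETS 4

int main(void)
{
    uint8_t img[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] = (uint8_t)((x + y) * 16);

    int hist[BUCKETS] = {0};
    uint64_t pixels_in_bucket[BUCKETS] = {0};
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int b = img[y][x] / (256 / BUCKETS);
            hist[b]++;
            pixels_in_bucket[b] |= 1ULL << (y * W + x);
        }
    }

    /* hypothetical detector: fires iff enough bright pixels are present */
    int decision = hist[BUCKETS - 1] > 4;

    /* bucket-level taint: only the thresholded bucket reaches the
     * decision, so only its pixels are relevant */
    uint64_t relevant = pixels_in_bucket[BUCKETS - 1];
    int count = 0;
    for (int i = 0; i < W * H; i++)
        count += (int)((relevant >> i) & 1);
    printf("decision=%d, depends on %d of %d pixels\n", decision, count, W * H);
    return 0;
}
```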