
DARPA's Cyber Grand Challenge: Can it be done? And should it?

DARPA recently announced a new Grand Challenge in cyber security. They're challenging competitors to develop self-healing software that can patch security vulnerabilities automatically, and there's a $2 million prize at stake.

It's only open to US organisations, so Trust IV aren't allowed to enter.

Still, it's fun to speculate about what kinds of technologies could be used. I've written a couple of "toy" applications that demonstrate some of the ideas competitors might use. They're hosted on my GitHub page.

It might get a little technical in here. If it all gets a bit much for you, you can skip ahead to the conclusions, which I've deliberately kept as non-technical as possible.

Bytecode Patching

One of the motivations behind the challenge is that a lot of the work is already done. Security researchers already use automated tools to investigate vulnerabilities, so maybe it's possible for these tools to patch them as well.

The competition rules say that competitors will be given a variety of insecure binaries, and have to find and patch the vulnerabilities in them. The rules don't say what kind of binaries, but for simplicity I'm going to assume they'll be compiled Java applications. There are already tools out there that can find a variety of bugs in compiled Java code (FindBugs is probably the best known). Maybe fixing them isn't a huge step?

One particularly common type of security vulnerability is SQL Injection. FindBugs can already spot SQL Injection vulnerabilities, and they're easily fixed, but many organisations still have vulnerable code (much to the delight of hacker groups like LulzSec). So, if it's that easy, let's have a go at fixing them automatically.

There's a simple proof-of-concept on my GitHub page. It finds unsafe uses of Statement.execute, and attempts to replace them with PreparedStatement. It's totally unsafe for production use (it's really just a find-and-replace on JVM bytecodes), but it's enough to fix the (deliberate) bug in my fake vulnerable app. A more robust implementation would operate on a tree representation of the bytecode, and would use data flow analysis to ensure that its modifications were safe. You could also plug it into Java's instrumentation framework, so that classes are patched at load time. It needs ASM4, and VulnerableApp needs HSQLDB.
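To see what the patcher is aiming for, here's a minimal sketch of the same transformation at the source level. The method name and payload are illustrative, not taken from the proof-of-concept; the point is why concatenated SQL is exploitable and what the PreparedStatement shape looks like.

```java
// Illustrates why executing a concatenated string is unsafe, and the
// PreparedStatement shape a patcher would rewrite it into.
class InjectionDemo {
    // Vulnerable: user input is spliced straight into the SQL text
    static String unsafeQuery(String name) {
        return "SELECT * FROM users WHERE name = '" + name + "'";
    }

    public static void main(String[] args) {
        // A classic injection payload breaks out of the quoted literal...
        String q = unsafeQuery("' OR '1'='1");
        System.out.println(q);
        // ...yielding: SELECT * FROM users WHERE name = '' OR '1'='1'
        // which matches every row in the table.

        // The safe shape keeps data out of the query text entirely:
        //   PreparedStatement ps =
        //       conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, name);   // the driver binds the value, never concatenates
        //   ps.executeQuery();
    }
}
```

The bytecode patch has to effect exactly this change: swap the call site, hoist the query template, and route the user-supplied value through a bind parameter instead of string concatenation.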

But is it a good idea? It's really a tourniquet. It stops the bleeding, but you still need a doctor to fix the root cause.

First-aid certainly sounds good, as it gives us more time to apply a cure.

But prevention is even better, and this kind of issue is already preventable. Remember, we already have tools that can find SQL Injection vulnerabilities before the system goes live. In this case prevention is a much better strategy than cure.

Smart Proxies

Maybe we're not supposed to go messing around with the binaries. After all, if someone gives you binaries, rather than source code, it's usually a hint that they don't want you poking around in there. So what can we do without touching the binaries?

We can certainly put a firewall around the application. But maybe we can go a little further. A number of common vulnerabilities are caused by misplaced trust (CSRF and XSS, for example - both have been exploited in-the-wild). HTTP has a messed-up trust model - it's too restrictive in many respects and too permissive in others - so a lot of applications get it wrong. If an application gets it wrong, maybe we can stick it in a wrapper that gets it right.

The easiest target here is CSRF. There are some widely-used design patterns, like double-submit-cookie, which make applications immune to it. You can't fix it with a firewall, as firewalls avoid changing the functionality of the application. Instead, let's put a wrapper around it. The wrapper will do all the required checks at the front end, but it will make the calls to the insecure system itself, so the system doesn't know that it's being used in this way.
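The core check in the double-submit-cookie pattern is tiny: the wrapper issues a random token as a cookie, and requires every state-changing form to echo it back in a hidden field. A cross-site attacker can make the victim's browser *send* the cookie, but can't *read* it to fill in the field. A minimal sketch of that check (the class and method names are my own, not from the GitHub example):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// The validation step of the double-submit-cookie pattern: a request is
// only trusted if the token echoed in the form matches the token in the
// cookie. Token generation (a random value set by the wrapper) is omitted.
class CsrfCheck {
    static boolean tokensMatch(String cookieToken, String formToken) {
        if (cookieToken == null || formToken == null) {
            return false;  // missing either half means reject
        }
        // Constant-time comparison, so timing doesn't leak token prefixes
        return MessageDigest.isEqual(
                cookieToken.getBytes(StandardCharsets.UTF_8),
                formToken.getBytes(StandardCharsets.UTF_8));
    }
}
```

The wrapper would run this on every POST before forwarding the request to the insecure back-end, and reject the request (rather than pass it through) on a mismatch.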

There's a simple example of such a "smart proxy" on GitHub. You can try it out by pointing it at a web server you'd like to wrap and browsing to http://localhost:8001. You can simulate an unauthorised request by navigating to a page with a form on it, deleting your cookies, and then trying to submit. It doesn't work with Chrome, for reasons I don't understand but which are probably related to this Stack Overflow question. It undoubtedly has many other bugs too, which probably aren't worth fixing.

But again, is this a good idea?

Like the SQL injection fix, it's first-aid rather than cure. And it's a fix for a known issue, so it should be preventable. Mature web application frameworks typically have built-in CSRF and XSS protection, so you only need to worry about these issues if you're rolling your own framework. If you're rolling your own framework, I'd guess that you're either smart enough to do it right, or you're too foolish to realise your mistake.

Novel Issues

Maybe the problem is that we're looking at known issues. One of the aims of the challenge is for systems to respond to novel threats. By definition, this is something no-one's seen before, so what do we do?

Well, the simplest option is to just bolt all the doors. If you've got an intrusion detection system, you can give it a panic button which severs all connection to the outside world when it sees an intrusion. This is certainly doable with current technology.
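The panic button itself is almost trivially simple - the hard part is the detection, not the response. A sketch of the response side, with illustrative names of my own (no real IDS is assumed):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal "panic button": the intrusion detector flips a flag, and the
// connection acceptor consults it before admitting any new traffic.
class PanicButton {
    private final AtomicBoolean lockedDown = new AtomicBoolean(false);

    void trip() {              // wired to the IDS alert
        lockedDown.set(true);
    }

    boolean admit() {          // checked once per incoming connection
        return !lockedDown.get();
    }
}
```

In a real deployment the accept loop would consult admit() before handling each socket, and trip() would be the IDS's alert callback; everything after the flag flips is refused.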

Or maybe you just bolt some of the doors. If you've got some clues about how the intrusion happened, you might be able to close some bulkheads around the issue. Maybe your intrusion detection system spotted some dynamic-looking SQL? Lock down the system that produces that SQL. Maybe the intrusion used a particular system user account? Lock out that account. Maybe the intrusion threw an exception that showed up in one of your logs? Lock down the classes that look most relevant.

To get an even better idea where to look, rather than just sitting back and waiting for someone to attack our system, let's attack it ourselves. One of the "areas of excellence" that DARPA will judge competitors on is Autonomous Vulnerability Scanning. There are already mature vulnerability scanners out there, and the information from the vulnerability scanner could be used to provide further clues about which areas to fortify.

But this brings up an issue we've seen before. These vulnerability scanners can only identify known vulnerabilities. The thing about known vulnerabilities is that they (usually) have known fixes. If an issue has a known fix, it can be prevented before it needs curing.

There could still be some mileage here. Some recent exploits have combined existing low-value vulnerabilities into one high-value exploit. Perhaps a vulnerability scanner that attempted to chain vulnerabilities could identify issues that others wouldn't. And if you know the chain it used, you can target the weak links in that chain.

Unfortunately, I don't have an example of this to show off. It's loosely based on something that a friend of mine is working on, but I wouldn't want to publicise his idea before it's finished.

Skynet

Another of my friends was deeply concerned by the possibility that this competition could produce Skynet. It seems a little far-fetched, but it's worth thinking about just how intelligent this thing could be.

The ideal system would be able to analyse a totally novel threat, identify the root cause, and formulate a patch. If I close my eyes, I can maybe imagine that there's a security team somewhere who've analysed and fixed so many vulnerabilities that they've automated almost all of the process, and it's just a question of wiring the steps together.

But if I open my eyes and look at some actual zero-day exploits, it's hard to see how that could work. Many zero-day exploits are genuinely clever. You need to understand how the system is supposed to work, how it actually works, and how to exploit the difference.

It's the first point that's the toughest. If we could actually create a meta-system that understood how systems were supposed to work, then that meta-system could create those systems for us, so we wouldn't have to rely on the bug-ridden system we'd create ourselves, and this whole challenge would be unnecessary.

I've seen many companies peddling products that are supposed to be able to do just that, but my experience is that none of their silver bullets actually kill werewolves.

So I think the Skynet scenario is probably off the table.

Conclusion

The difficulty with all of these approaches is that they're merely first-aid. The root cause still needs a cure, and in most cases it could have been prevented.

And in many cases, it's far easier to prevent the problem than to cure it. There are already tools out there that can detect or prevent most common security vulnerabilities, including pretty much all of the vulnerabilities I've mentioned in this blog post. It's well documented that the earlier you diagnose a problem the cheaper it is to fix, so these measures seem like no-brainers.

Systems continue to have security vulnerabilities, and sometimes it's more important to stop the bleeding than to cure the disease, but prevention is better than cure - and usually cheaper too.
