Dan Bernstein on sandboxing

Some thoughts on sandboxing, found via Aaron Swartz’s weblog, which in turn points to a writeup of Dan Bernstein’s 2005 research activities (the relevant part is at the end):

The software installed on my newest home computer contains 78 million lines of C and C++ source code. Presumably there are hundreds of thousands of bugs in this code. Removing all of those bugs would be quite expensive. Fortunately, there’s a less expensive way to eliminate most security problems.

Consider, for example, a compressed music track. The UNIX ogg123 program and underlying libraries contain 50000 lines of code whose sole purpose is to read a compressed music track in Ogg Vorbis format and write an uncompressed music track in wave format.

UNIX makes it possible (though unnecessarily difficult) to build a safeogg program that does the same conversion and that has no other power over the system. Bugs in the 50000 lines of code are then irrelevant to security: if the input is from an attacker who seizes control of safeogg, the most the attacker can do is write arbitrary output, which is what the input source was authorized to do anyway.

I’m sharing this here because much of the reasoning behind Landlock traces its roots back to the same train of thought (and also because Dan Bernstein lays out the reasoning very nicely).
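To make the safeogg idea concrete with a present-day mechanism (my illustration, not Bernstein’s original setup): a filter that only needs to read stdin and write stdout can renounce every other system call using seccomp’s strict mode, which is available to unprivileged processes. A minimal sketch – all memory has to be set up before entering the sandbox, because even brk() and mmap() are forbidden afterwards:

    /* safeogg-style filter skeleton: read stdin, write stdout, nothing else. */
    #define _GNU_SOURCE
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[1 << 16];  /* pre-allocated: no malloc() once sandboxed */

        /* After this call, only read(), write(), exit() and sigreturn() are
         * permitted; any other system call kills the process. */
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

        /* Stand-in for the decoder: copy stdin to stdout. A real safeogg
         * would run its 50000 lines of decoding code here; even if an
         * attacker takes it over, all they can do is write output. */
        for (;;) {
            ssize_t n = read(0, buf, sizeof(buf));
            if (n <= 0)
                syscall(SYS_exit, n < 0 ? 1 : 0);  /* raw exit(); glibc's _exit()
                                                      uses exit_group(), which
                                                      strict mode forbids */
            write(1, buf, (size_t)n);
        }
    }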

Choice of sandboxing mechanisms

This is 2005, so the mechanisms he suggests for sandboxing are the classic UNIX ones available at the time.

(Unfortunately, some of these require higher privileges than what normal programs run as – a property that these mechanisms share with classic Linux Security Modules like AppArmor and SELinux.)

One thing where Landlock’s approach differs slightly is that we believe sandboxing mechanisms should be available to unprivileged processes, so that they become maximally useful: authors of general-purpose software generally cannot expect their software to run with high privileges, and neither should they – it is absurd if a process needs higher privileges in order to drop privileges. (Seccomp was an attempt at a sandboxing mechanism that did not require higher privileges, but it had other practical issues.)
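For illustration, here is roughly what that looks like today with Landlock: an unprivileged process declares which filesystem accesses should be restricted, optionally allows some of them back for specific directories, and then enforces the ruleset on itself. This is a minimal sketch with error handling omitted, assuming a kernel with Landlock enabled (Linux 5.13 or newer); the “/usr read-only” rule is just an example policy:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Declare the filesystem accesses that this ruleset restricts. */
        struct landlock_ruleset_attr ruleset_attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE |
                                 LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_READ_DIR |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE |
                                 LANDLOCK_ACCESS_FS_MAKE_REG |
                                 LANDLOCK_ACCESS_FS_MAKE_DIR |
                                 LANDLOCK_ACCESS_FS_REMOVE_FILE |
                                 LANDLOCK_ACCESS_FS_REMOVE_DIR,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                                 &ruleset_attr, sizeof(ruleset_attr), 0);

        /* Example rule: read-only access beneath /usr; everything else
         * that the ruleset handles stays denied. */
        struct landlock_path_beneath_attr path_beneath = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                              LANDLOCK_ACCESS_FS_READ_DIR,
            .parent_fd = open("/usr", O_PATH | O_CLOEXEC),
        };
        syscall(SYS_landlock_add_rule, ruleset_fd,
                LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
        close(path_beneath.parent_fd);

        /* No special privileges needed: forbid gaining new privileges,
         * then enforce the ruleset on this process and its children. */
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        syscall(SYS_landlock_restrict_self, ruleset_fd, 0);
        close(ruleset_fd);

        /* From here on, writes anywhere and reads outside /usr fail with
         * EACCES, and there is no way to undo the restriction. */
        return 0;
    }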

Sandboxing should be a developer task

But the overlap with Landlock’s approach is very large – especially when it comes to the idea that the use of the sandboxing mechanism should be integrated into a program’s design: it becomes the programmer’s responsibility to design the sandbox, and this results in narrower sandboxes:

I should emphasize here that there’s a fundamental difference between this project and typical sandboxing projects described in the literature. The goal of a typical sandboxing project is to apply as many restrictions as possible to a program while receiving no help from the programmer; the problem is that this doesn’t stop all attacks. [1] In contrast, I insist on an extreme sandbox guaranteeing complete security, and I then ask how the programmer’s time can be minimized. As in Section 1, programs are not static objects; the programmer cooperates with the sandboxing tools.

Networking

Related to that, Dan Bernstein has also proposed in the past that it should be possible to disable the network with a dedicated disablenetwork(void) function.

In 2009–2010, Michael Stone sent a Linux patch set for that. That patch set unfortunately did not make it in, but it will hopefully eventually be possible with Landlock: Mikhail Ivanov’s “Support socket access-control” patch set for Landlock should make it possible to forbid new network connections in most cases. (There are tiny exceptions; for the details, see my presentation on it, starting around minute 34.)

I continue to be very excited for this feature to land. It’ll let users define reasonably detailed policies for which kinds of networking they still want to allow, but one of the largest use cases remains the one where a program does not want to do any networking at all.
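For TCP, the Landlock network support that landed in Linux 6.7 already allows exactly that: handle the TCP bind and connect access rights and add no rules, and the process has cut itself off from TCP entirely. A rough sketch, with error handling omitted and a hypothetical forbid_tcp() helper name; restricting other kinds of sockets is what the socket access-control patch set is meant to cover:

    #define _GNU_SOURCE
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Deny all TCP networking for the calling process: the two TCP access
     * rights are handled, and no rule allows them back. */
    static void forbid_tcp(void)
    {
        struct landlock_ruleset_attr ruleset_attr = {
            .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
                                  LANDLOCK_ACCESS_NET_CONNECT_TCP,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                                 &ruleset_attr, sizeof(ruleset_attr), 0);

        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        syscall(SYS_landlock_restrict_self, ruleset_fd, 0);
        close(ruleset_fd);

        /* bind() and connect() on TCP sockets now fail with EACCES. */
    }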

Conclusion

I expect this strategy to produce invulnerable computer systems: restructure programs to put almost all code into extreme sandboxes; eliminate bugs in the small volume of remaining code. I won’t be satisfied until I’ve put the entire security industry out of work.

We’re getting there. 🚀

Again, the full writeup from 2005 is available at https://cr.yp.to/cv/activities-20050107.pdf and worth a read.


[1] https://www.usenix.org/publications/library/proceedings/sec96/goldberg.html, for example, described a sandboxing tool that applied some restrictions to Netscape’s “DNS helper” program. The subsequently discovered libresolv bug was a security hole in that program despite the sandbox. Imposing heavier restrictions would have meant changing the program.
