The Lazy Programmer’s Guide to Secure Computing

>> SAMUEL: Hi everybody, I’m Mike Samuel from the Applied Security team. Today, to talk about the Lazy Programmer’s Guide to Secure Computing, is Marc Stiegler from HP Research. He’s going to talk about a bunch of techniques to get towards decomposable security, and he’s a great person to talk about this because he’s been trying to, and, you know, with a lot of success, get that onto systems as complex as the Windows operating system as part of the Polaris project, which you guys should definitely check out if you’re interested in computer security. And so, Marc.
>>STIEGLER: Thank you. Of course, the key today is to talk not so much about security but to talk about laziness. It is generally well-known by everyone in the world that security requires vigilance, that the only solution for security is vigilance, and that you must combine that vigilance with a variety of special-purpose exotic APIs with which you write special-purpose code that has no purpose whatsoever except to apply security vigilance. Here, we are going to consider an alternative. We’re going to consider focusing on laziness rather than on vigilance. For one thing, laziness has a very striking advantage over vigilance, which is that you’ve always got it available to you 24/7. So even at 3 o’clock in the morning when the caffeine has run out, you can still be lazy even though you can’t be vigilant. In fact, the foundation of our laziness is going to be simply using the object-oriented techniques that you use every day, all the time. So let’s go on. We’re going to divide this talk into three sections.
We’re going to show how laziness, appropriately applied, actually solves security problems
at three different levels of abstraction. We’re going to talk about security in the single address space: the sort of security that you need when you’re running a mash-up with several different pieces of software from several different third-party vendors
who you do not know how much you should trust. We’re then going to look at lazy security in the distributed systems domain, including client-server systems, service-to-service systems, and peer-to-peer systems. Finally, we will look at security in the medium-to-large application and see how these same techniques help make your larger applications more robust in the face of serious professional attackers.
We have one paradigm that we use to cover all of these things. And we’re going to introduce
four rules of laziness along the way. Now these are not the only rules of laziness.
Perfect laziness is a goal to strive for, and we will never achieve perfection in the field of laziness. But we’re going to take some important steps in that direction.
So the first laziness rule is pretty obvious. Even programmers of very modest success in achieving laziness know this rule implicitly. Here we have a function, area, that receives an X and a Y and returns the area of that rectangle. And now we’re going to look at a hard worker who is going to use that function. Okay, this guy is not lazy.
And what he does is, first of all, understand that this guy has a password and a Z and an X and a Y. So what this guy is going to do is invoke the area function, and he’s going to pass the X and the Y and the Z and the password. Now this will work, it works perfectly fine, but it’s not at all lazy. Why is he typing all those extra keys? Why is he passing all those extra parameters? That’s working too hard. So any sensible person would, of course, just simply pass the X and the Y, because that’s all the function needs. Now, the decision not to pass the argument Z is simply a matter of laziness. But the decision not to pass the password is not only lazy, it’s actually a security decision. We have decided not to cast the password to the four winds. Okay?
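The lazy caller's choice looks something like this (a minimal sketch; the variable names are made up for illustration):

```javascript
// The area function needs only x and y to do its job.
function area(x, y) {
  return x * y;
}

const x = 3, y = 4;
const z = 99;               // irrelevant to area
const password = "s3cret";  // definitely irrelevant to area

// The hard worker would call something like area(x, y, z, password):
// it works, but it broadcasts the password to code that never needed it.
// The lazy (and secure) caller passes only what the function needs:
console.log(area(x, y));    // 12 -- z and password never leave our hands
```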
So here, laziness has served the purpose of being more secure. And, I mean, it’s obvious; you know, we have a rule for this in programming, and it’s simply: don’t give an object something that it doesn’t need to do the job you wanted it to do for you, right? Don’t give it to them if they don’t need it. Now that rule is so special and so powerful and so important in this security
world that it has a special name. It is called variously the Principle of Least Privilege or the Principle of Least Authority. We, who practice proper laziness, call it POLA because
we use this concept so much on a daily basis that we are too lazy to use more than two
syllables to describe it. Okay, so POLA. Now, the idea of POLA is very simple. Again, you give the object or the person everything that they need to do the job that you want them to do, and nothing else. So really, POLA is the tip of a razor’s edge, wherein, to the left of the razor’s tip, you have the area where there’s not enough
authority: you haven’t given the guy enough stuff to get his work done. And on the other side, we have handed the guy more power than he needs to get the work done.
Now handing the guy just a little bit of extra power is not a big problem. You know, handing
the Z value to that area function would not have been an issue. But as you move farther
and farther to the right here, you are increasing the amount of abusable power and now you’re
in trouble. Meanwhile, over here on the left-hand side, you can’t get your work done unless you do something radical like engaging in credential sharing, and at that point you suddenly leap from the left-hand side, where you can’t get your work done, all the way to the far right-hand side, where you’re getting your work done but you’ve maximized the potential of the other
party to abuse you. So, an example that was recently pointed out to me over at HP: there’s a business achieving some considerable success on the web. I am told this business gets the data from all of your financial services, aggregates it, and shows you beautiful views of your financial situation. And this sounds really very nice and I would like it myself, except that in order to do this for you, you have to give them all of your credentials for all of your financial services. So they wind up with full authority: not only can they show you how well you are doing, but they can change it for you. Okay. Now, neither I nor anyone who pays attention to my advice is ever going to use this service, because of the gross amount of abusable power that has to be granted to them in order to give you this service. So there’s another name
for POLA and that name is Maximum Business Opportunity. So now we have the same principle
playing out with slightly different words to each of several different audiences. The
security people call it POLA, the marketing people call it Maximum Business Opportunity,
we, the programmers, just simply call it “Don’t give them anything they don’t need.” Okay, let’s look at another example. Here on the left-hand side we’re going to use a value out of the table. So what we’re going to do is send that function the table and the index into the table, so that it can extract that value and use it. While it’s there, of course, since we passed the whole table, it is going to grab up the password, and why not? The password is one of the elements in the table. Okay. Now, an ordinary OO programmer would never have this problem, because the ordinary OO programmer knows that, for modularity’s sake, in this circumstance you would not pass the index and the table; you would just simply pass the element. And so here we are, receiving the element. And here we are, just simply shipping the particular element that we wanted them to have. So far, so good. Again, very obvious.
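The contrast can be sketched like this (hypothetical names; the careless version is shown only to illustrate what the callee could get at):

```javascript
// The table holds both ordinary data and the password.
const table = ["alpha", "beta", "s3cret-password"];

// Careless version: hand over the whole table and an index.
// Nothing stops the callee from reading every element, password included.
function useValueCareless(tbl, i) {
  return tbl[i].toUpperCase();
}

// Lazy, modular version: hand over only the one element it needs.
function useValue(element) {
  return element.toUpperCase();
}

console.log(useValue(table[1])); // BETA -- the callee never sees the table
```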
An implementation of POLA for the purpose of laziness and modularity. Okay. Now I’m
going to surprise some of you. I know this is going to come as a shock but laziness is
not always identical to security. Okay. And so here is an example; this is something that I myself have done. Now suppose we have this table, but this time, instead of using a value out of the table, we expect this other function to actually update a value in the table.
And so now what we’re going to do is we’re going to send in the table and the index so
that the guy can write the new value into the table. Of course, he’s still going to
grab up the password while he’s there. And he’s also going to corrupt our copy of the password so that he can sell it back to us for a modest fee later in the day. Okay, no problem. Well, no serious problem. But this is definitely lazy, okay? So at this
point, we have to introduce a new distinction, and that is the distinction between the professionally lazy programmer, the guy who takes his laziness very seriously, and the amateur lazy programmer. The amateur lazy programmer has a lot of trouble telling the difference between what is lazy, what is careless, what is sloppy, what is thoughtless: okay, what will get him into trouble the following day. The professionally lazy programmer knows that, yes, most of the work that he could do today would probably be wasted by the time tomorrow came
around. But every once in a while, he’ll look at something and say, “I know that if I don’t do this right the first time, I can either spend an extra half hour now or spend a week of agony tomorrow. Oh, I don’t want to go there; I’m going to go professional, I’m going to up my game in laziness, and I’m going to spend just a little time now.” Okay. So now, let’s go back to
that example where we’re updating that table. And suppose that we already know (professionally lazy programmers that we are, and I see a lot of laziness out in this audience, so I’m very excited to have you all here) that the data structure is going to be modified in the course of the upcoming weeks. It’s probably going to eventually
wind up being an update to a relational database, but we don’t really know for sure. And so what we want to do is minimize the amount of code we’re going to have to modify when this maintenance phase comes around and bites us. Well, in that case, we might want to consider doing something different. Instead of passing in the index and the table, what we might want to do, what we would surely want to do in this case, is pass a function that accepts a value and updates the table here locally, so that when the modification to the data structure comes rolling around, I don’t have to modify the function that is doing the editing; I only have to modify this one line of code. Okay. Now at this point, once again we have
achieved the Principle of Least Authority: the function that is going to be doing the editing only has the authority to update the specific element that it is supposed to update, and it does not have the authority to manipulate our password field. So this is the professional version for making your system more maintainable.
And now here’s the interesting thing, which is that this version isn’t any longer than
the first version. So maybe it was just as lazy all along. Okay. But the problem gets harder, okay? Now, I’m going to bring in a subcontractor, and the subcontractor is going to do our editing function. But there are some important rules, okay?
He’s got to modify that element only one time. And when he modifies it that one time, he must notify us. Well, we could just simply document it for him and pray to the gods of programming that he pays attention to the documentation. We actually have three different choices here. We could just give him the power to manipulate everything, which is, in security terms, the moral equivalent of giving him our user ID and password so that he can do one little function; the same idea as having to give him all of my financial credentials so that he can read the data. We could give him two little functions like
the ones that we just saw, the updater and the notifier; but he could still accidentally forget to notify, or he could update twice. Or we could give him one reliably correct function that does exactly what we need him to do and nothing else. This is an example of becoming meta-lazy, because at this point we are rising up, as professionally lazy programmers, to understand that that other guy is lazy too, and in particular he is too lazy to read the documentation, okay. So let’s take a look at how we might go about solving
this problem. This time, we’re going to hand him an edit function again. Notice that even
with these new requirements we did not have to change the contract for the editing function,
okay. The contract remains the same, okay? Thereby simplifying the maintenance problem once more. The red line here is the boundary of the reliance set. We in programming would refer to this as the reliance set, wherein the correctness of our table depends on all of the things that are below the line but should not be dependent on the things that are above the line. In the security world, this would be referred to as the trust boundary. Okay. So we’re going to come down here and, again, we’re going to manufacture an edit function that specifies a particular element in the table and a
particular listener who will be receiving the notification. Here’s the guy who’s receiving the notification, and here’s the function that manufactures the function that we’re going to send. So what happens here is: this function maker receives the index and the table and
the observer. It loads the table into another variable, and it manufactures a function which, when it gets the new value, updates the table, then sets the variable to null so that the next time the guy tries to do an update, it will throw a null pointer exception. And then, finally, it notifies the observer, okay. So it’s all packed in here.
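A sketch of that function maker (hypothetical names; the "null pointer exception" here shows up as JavaScript's TypeError on a nulled-out reference):

```javascript
// makeEditor manufactures a one-shot edit function: it updates the
// element exactly once, then revokes itself and notifies the observer.
function makeEditor(table, index, observer) {
  let target = table;            // nulled out after the first use
  return function edit(newValue) {
    target[index] = newValue;    // throws TypeError once target is null
    target = null;               // a second call will fail
    observer(index, newValue);   // the subcontractor can't forget to notify
  };
}

const table = ["alpha", "beta", "s3cret-password"];
const notifications = [];
const edit = makeEditor(table, 1, (i, v) => notifications.push([i, v]));

edit("gamma");                   // first use: updates and notifies
try { edit("delta"); } catch (e) { console.log("second edit refused"); }
```

The subcontractor gets one reliably correct function: he cannot update twice, cannot skip the notification, and cannot reach the password.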
Now, this time, you’ll notice that I am sending the index and the table to a separate function,
which was something that I avoided earlier. This time, I’m making a different set of tradeoffs. One of the things that has changed is that, implicitly, earlier I was assuming that the reliance set was within the boundaries of the individual function. This time the reliance set and the trust boundary are in a different location, and I’ve extracted the function maker so that if I have other functions down here that are also going to have to send values up to this guy, they can simply reuse it. So we’re getting more reusability down here inside
our trust domain.
So here we are: in each one of these cases, we have implemented POLA. But each time, we’ve implemented POLA for a different reason that has nothing to do with security. The first time, we implemented it strictly for the purpose of being lazy. The second time, we implemented it for the sake of maintaining object-oriented modularity. The third time, we implemented it for the sake of enhancing our maintainability. And last, we implemented it to improve our reliability in the face of multiple people and multiple organizations. There are so many different reasons for implementing POLA, in addition to the security properties that we got each time protecting our password and our other special data, that one has to ask the question: is POLA really a security principle, or is it just darn good engineering practice? So let’s stand back and talk as if we were
security people for a moment. You know, security people, they’re using names and passwords. Where did all the authentication go? We never set an ACL; we never passed an object ID and a password to prove that my object was authenticated as being the right object to send a message to your object. Where did all the authentication go? And the answer, of course, is that we weren’t doing the kind of authentication that security people are famous for. What
we were doing was what we refer to as authorization-based access control as opposed to authentication-based
access control. With authorization-based access control, you do not go around authenticating
the guy every 15 minutes, okay. Instead, you authenticate him once when you hand him the
authority. And once you’ve authenticated him and handed him the authority, you don’t have
to authenticate him anymore. He uses the authority; it’s all cool, it’s good. And in that sense it is very much like the object-oriented, professionally lazy programmer. It’s not merely simpler; it’s more composable and it’s more POLA, again, because it’s simply object-oriented programming done with a little bit of extra sincerity. So now, we’re going to talk about the remarkable
properties of the ordinary object-oriented message send. But before we do that, I’m going
to talk just briefly about the letter envelope, okay. So here’s the letter envelope; it’s been under evolution for over 2,000 years, okay. I would like to see us achieve the same quality of security as the letter envelope in less than 2,000 years; you’re welcome to join me in that hope. So I’m going to look at this envelope from the point of view of my mother-in-law and from the point of view of a security guru. If you ask my mother-in-law why the envelope is opaque, well, she says, “So that the mailman doesn’t get confused by all of the writing that would get in the way of him reading the address,” okay. The security guy says, “Oh, that’s your encryption.” Okay. You ask my mother-in-law, so why do we have this flap that seals down on the back? She says, “So the letter doesn’t fall out.” The security guy says, “That’s my tamper-resistant, tamper-detection system.” On the front, in the top
left-hand corner, you ask my mother-in-law, well, what is that stamp for? And she says, “So that the post office can get paid.” The security guy says, “Oh, that’s my spam filter.”
And finally, you ask my mother-in-law what the point of this little clear panel is in the middle of the envelope; and she says, “So that the mail guy can read the address.” The security guy says, “That’s my Principle of Least Authority grant of just enough authority to the routing system so that they can deliver the mail.” Here is the very interesting thing about all of this: all of the security in the letter envelope is serving some other function in addition to the security purpose. And that is what I have been showing you all the
way through here at every step, the same thing that you had to do for security was something
that you had to do for some other reason. And if you do this properly, then your security disappears into the background, and people are not annoyed by it anymore, and people don’t have to do credential sharing anymore, and everybody is less vulnerable. So now, we’re
going to look with our new eyes at the object-oriented reference, which is also very interesting from a security perspective. So here we’re looking at the object Alice passing the foo message to Bob, carrying the argument Carol; we can see it down here, basically, Alice is saying “bob.foo(carol)”. Okay. So what’s interesting here? Well, first of all, when Alice sends foo to Bob, have you ever worried that some other object in the middle of the system was going to eavesdrop on that message being sent and either read the Carol
reference or modify the Carol reference and maybe slip a Dave in there? Of course not. That can’t happen in object-oriented programming; well, in memory-safe object-oriented programming. So that is giving us the moral equivalent of what a security guy achieves by encryption. When Alice sends the message foo to Bob, is she afraid, are you afraid, that the message will accidentally be rerouted to Mike? Is there some sort of centralized naming service, like DNS, whose cache can be poisoned in the middle of your object-oriented program and cause the message that you’re sending to Bob to go off to Mike? No. Okay, so that is what a security guy would achieve by authenticating the recipient. And the only way Alice can possibly have a reference to Bob is if somebody else who already had a reference to Bob explicitly and voluntarily decided to give Alice a reference to Bob, which is to say, Alice is authorized to send messages to Bob and manipulate Bob. Okay. So we have all the essential characteristics of authorization-based
access control embedded right in the object-oriented paradigm. You don’t need anything
else. Alas, it hardly works at all, and the reason is that object-oriented programming languages, all the popular ones, uniformly supply access mechanisms that break the object-oriented paradigm, and in doing so they discard all of the cool security properties that you acquired for free in embracing OO in the first place. So, JavaScript: we have a
lot of fun with this. I did a review of this presentation with some of the people who have practiced hacking JavaScript systems, and there was an argument about which of the ways of breaking the examples I’ve shown you already would be the most
fun way of presenting how breakable JavaScript is. This is one of them. The function area, which is receiving only the X and the Y, throws an error, looks at the stack, extracts all the arguments of the lazy user, and grabs the password, all right. So it seemed real good for a while there, but I have really good news for you. Google has been the leader in developing a rewriter, the Caja rewriter for JavaScript, that fixes all of these violations of the object-oriented principles in JavaScript. So if you feed JavaScript code into the Caja rewriter and it comes out the other side, rather than being rejected, you now have JavaScript that is doing two things: one is that it’s enforcing the security principles, but the other is that the way it’s enforcing the security principles is by simply enforcing the object-oriented engineering best practices that you depend on anyway. So we have more
than just Caja. There are actually a lot of verifiers and rewriters floating around in the world, used for different reasons. The Caja rewriter was developed by Google. The largest user of it at the moment is actually Yahoo; they use Caja to confine all the widgets on all of their pages. The AdSafe verifier was also written for JavaScript; it was actually developed at Yahoo. Joe-E, developed at UC Berkeley, is a verifier that enforces the same capability discipline for Java. And there’s a couple of exotic ones; there’s a verifier for OCaml. The reason that one is very interesting is that OCaml runs at approximately C++ speeds. So if you need to write code that has these kinds of security properties and runs as fast as C++, you could investigate using OCaml. Well, okay, so let’s step back from laziness for a moment
and ask another question, and that is, so you guys, you know, you’ve been working for
weeks adding functionality to the software, and it’s getting to be a grind. And you’re looking over at the security guys, and the security guys have these really big APIs and lots of documentation that they’re reading, and you’re thinking, you know, that looks like a lot of fun. And you’d like to do some security work for a change of pace rather than just simply implementing more functionality for the users. So is there any way of having fun doing security anymore when you move to this approach? The answer is, you’re not going to have as much fun, but you are going to have some fun. One of the things that you will need to do that’s not quite lazy is learn a little bit about the various patterns of secure cooperation that have been developed over the course of the last couple of decades for authorization-based access control. Here is one. So here we have a Slot maker.
A Slot is simply an object with a value inside. You can set
the value, you can get the value. And here’s the thing that makes Slots. Once you’ve made a Slot, it is an authority, right? You have read and write authority on the Slot once you have a reference to the object. And one of the things that you’ve got here is “Object.freeze”, okay. We were talking earlier about the fact that JavaScript actually doesn’t supply these properties but Caja does; “Object.freeze” is a method supplied by the Caja rewriter. And if you freeze an object when you construct it, then you can’t do any of the really funky things to that object that would bust it open and steal its private variables. So that’s a very important piece of making this all work right in JavaScript.
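A sketch of such a Slot maker; note that modern JavaScript now has Object.freeze built in, so this runs as-is (at the time of the talk, Caja supplied the guarantee):

```javascript
// A Slot is just a value in a box: you can get it and you can set it.
// Object.freeze keeps a caller from busting the object open and
// replacing its methods to steal what's inside the closure.
function makeSlot(initial) {
  let value = initial;
  return Object.freeze({
    get: () => value,
    set: (v) => { value = v; }
  });
}

const slot = makeSlot(42);
slot.set(7);
console.log(slot.get()); // 7
```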
Anyway, so here’s the Slot. Now, suppose you want to hand somebody a revocable authority to that Slot. Here’s a general-purpose revocable Slot maker. Hand this guy a Slot, and it returns a frozen object that has a set method and a get method that simply forward to the inner
Slot, and it includes at the end a revoke method. When you invoke the revoke method, it sets the Slot to null and, thereafter, any attempt to do a get or a set will throw a null pointer exception. Okay. So then you can make a Revocable Slot simply by making a Slot and invoking “makeRevocable” on it, and now you’ve got a Revocable Slot.
You can mix and match these things. If I hand you a revocable access to my Slot and you
want to turn around and hand a separately revocable access to my Slot to somebody else,
you just simply wrap your Revocable Slot with another revoker, okay, hand it off to the
next guy and now, you can revoke his Slot separately from revoking your own Slot access.
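A sketch of the revocable wrapper and the mix-and-match chaining just described (hypothetical names; makeSlot is repeated so the sketch stands alone):

```javascript
function makeSlot(initial) {
  let value = initial;
  return Object.freeze({
    get: () => value,
    set: (v) => { value = v; }
  });
}

// Wrap any Slot-shaped object in a revocable forwarder. After revoke(),
// get and set throw, because the inner reference has been nulled out.
function makeRevocable(slot) {
  let inner = slot;
  return Object.freeze({
    get: () => inner.get(),
    set: (v) => inner.set(v),
    revoke: () => { inner = null; }
  });
}

const slot = makeSlot("hello");
const r1 = makeRevocable(slot); // my own revocable access
const r2 = makeRevocable(r1);   // access I pass on, separately revocable

r2.set("world");                // works: forwards through r1 to the slot
r2.revoke();                    // cut off the next guy...
r1.set("still mine");           // ...while my own access keeps working
```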
Okay. Onward; let’s see how this works. So when we go distributed, the first thing we’re going to do to implement the same principles in a distributed context is reproduce the virtues of the object-oriented reference in the distributed environment. You’ll remember that the marvel of object-oriented references was that there was no man in the middle: the recipient was authenticated and the sender was authorized. So here we have two examples of webkeys. Let’s see, a couple of things that the two kinds of webkeys have in common: they’re both HTTPS, so they’re encrypted, so there’s no man-in-the-middle attack. The other thing is that the actual resource has an unguessable name. And so the only way you can find an access to that resource is if somebody who already has access to the resource explicitly hands you a reference to it, okay, just like in the object-oriented shared-memory case. And so we know that anybody who can reach the resource has been authorized. The thing that separates them is
that, with the simple one, you use some standard domain name and you depend on a certificate authority to assure that, if somebody goes there and there’s some DNS cache poisoning going on, the certificate authority check will throw up the exception dialog box that the user can then click okay on. So anyway, that’s one version of the webkey. In the other version, you actually embed the fingerprint of the public/private key pair for the service that hosts this authority right there in the URL. I refer to this as the professional version, and the professional version here is for people who are too lazy to coordinate their activities with a certificate authority. Okay. The user of the professional version can use self-signed certificates. In one of the applications that I built that used this, we were actually building a system where we were putting servers on every individual person’s machine. Indeed, we were putting individual servers with individual certificates on every machine for every user. So being able to use self-signed certificates in that system was not merely a matter of being lazy; it was a matter of being able to survive. Okay. So in any event, in
either of these cases, either the certificate authority or the fingerprint is ensuring that
the recipient is authenticated. And now, you’ve got all the properties of an object-oriented
message send. The professional version of the webkey is implemented on the Waterken open-source platform. If you’re interested in playing with these things, go check out that site, or ask me, or ask Tyler Close. Yes?
>> Sorry, to go back: what is the first part of the domain? It’s a fingerprint?
>>STIEGLER: Yes. It’s the fingerprint for the public/private key pair that is being held by the server, by the service that holds the resource. So what happens is, you use the fingerprint to challenge the guy to prove that he holds the private key. Only if he proves to you that he holds the private key do you reveal the name of the resource.
So here are some examples of Java code in the Waterken system; the Waterken platform is Java-based. And we have reproduced the Slot example that we had done earlier in JavaScript. Over here on the left-hand side, we have a Slot with a get and a set. On the far side, we have a Revocable Slot that is both of the type Slot and of the type Revoker. So it has the get and the set method and the revoke method, and it works approximately the same way. In the middle, we have the ReadOnly version of the Slot, which, when given a Slot, forwards get requests to the inner Slot but throws an exception when you try to do a set, okay, and so this is a read-only version. And, again, you can mix and match these. You can build arbitrarily sophisticated, dynamically composable security policies out of these simple building blocks. So let’s take a look at this
for a slightly larger example. This is digital money. Okay, it’s secure digital money that fits on a PowerPoint slide. I’d like to see other implementations of digital money that fit on a single PowerPoint slide. Okay.
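A rough JavaScript translation of slide-sized digital money (the talk's version is Java on the Waterken platform; here a WeakSet membership check stands in for Java's concrete-type cast, and all names are hypothetical):

```javascript
// A minimal purse: getBalance, withdraw (makes a new purse holding the
// withdrawn money), deposit (drains another genuine purse into this one).
// The amount checks keep the program correct; as a side effect, they
// also keep attackers out.
function makeMint() {
  const purses = new WeakSet(); // only genuine purses may be deposited

  function makePurse(balance) {
    const purse = {
      getBalance: () => balance,
      withdraw: (amount) => {
        if (!(amount > 0 && amount <= balance)) throw new Error("bad amount");
        balance -= amount;
        return makePurse(amount);
      },
      deposit: (other) => {
        if (!purses.has(other)) throw new Error("forged purse");
        const amount = other.getBalance();
        if (amount > 0) other.withdraw(amount); // zero out the source purse
        balance += amount;
      }
    };
    purses.add(purse);
    return purse;
  }
  return makePurse;
}

const makePurse = makeMint();
const mine = makePurse(100);
const payment = mine.withdraw(30); // hand this purse to you
const yours = makePurse(0);
yours.deposit(payment);            // now it's your money

console.log(mine.getBalance(), yours.getBalance()); // 70 30
```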
And we come down here. This is based on the purse protocol. The purse protocol was designed for authorization-based access control systems something like 10 years ago. And the basic idea is that I have a purse and I can withdraw money. If I withdraw money from my purse, I will create a new purse and put some of my money into that purse, and then I can hand that second purse to you. And then you have a purse, and you can deposit the money from the purse that I handed to you into your purse, and now it’s your money. Okay. So that’s the purse protocol for doing digital money. And so we have a purse with a balance, and
we come down here, and so we’ve got the three methods. We’ve got “getBalance”, which just
simply returns the balance. This is all done, again, in a Waterken system. We’re returning a promise for an integer for reasons having to do with the way a Waterken server returns values over the wire; it has nothing to do with our security discussion. Let’s see. And we’ve got the withdraw method, which manufactures a new purse with some amount of money in it, returns the new purse, and deducts the appropriate amount from our balance. Finally, we have the deposit method, which receives a purse. We deduct all the money from the purse that we receive, we set that purse to zero balance, and then we add that amount of money to our purse. So where’s the security in here?
Well, you know, I looked at this and I thought, “Well, there are a couple of things that look suspiciously like security.” One is these assert statements. I’ve got this assert to make sure that the amount of money is less than the balance and more than zero. A security guy would say, “Well, that’s to protect you from the malicious attacker who is going to try to corrupt your money supply.” But, you know what? I look at that and say, “Well, I’m just trying to make sure that the program runs correctly.” Okay. Similarly, down here, we do another assert. This assert is to make sure that we didn’t do an integer rollover.
We’re using pure integers here. So if we manage to build the system that had enough money
put into a single purse so that it rolled over, you’d have a problem. Again, an attacker
might try to attack that, but I need to fix it so that it’s correct. The closest real
thing to a security item is when I take the purse that I received in the deposit and cast it to a concrete type rather than an interface type before manipulating it. Now, the security guy says, "The reason you're doing that cast is so that you know that it's a real purse, not a forged purse." Okay. But the other reason for doing that cast is I was too lazy to put a "setBalance" method on the purse so that I could manually set the balance through the purse interface type. And this actually takes less code than doing it the other way. So, yeah, it's security, but it's also lazy because it's less code. So let's
take a look at what this stuff looks like when it's brought out to the user, okay? Now at this point, we're into the sing-along part. Most of you received, I handed out authority at the beginning of this session, okay? Looks like Mike has one left. So, in any event, this is the sing-along part. I've given you two authorities, and we'll be looking at both of them right here in a moment. So, I'm going to come up here into my secure bookmarks and I'm going to pop open one of my webkeys. I'm going to pop open this first one. Okay. Did you catch me typing in my user name and password, which I had forgotten,
okay? So, I'm going to withdraw some money from my purse. So, I'm going to withdraw 14 credits. And here's the purse with 14 credits in it. And I'm going to show you it has 14 credits in it, because I'm going to open it in a new window. Here it is. And it has 14 credits in it. And now, I'm going to withdraw seven credits from that, okay? And I'm going to deposit that into the first purse, okay? Any questions about how that works? Everybody
understand why that's actually a secure transaction? Okay, so anyway. Let me move the rest of the money back up here. Every time I run this demo, I run the risk of leaving some money on the table. Okay, and so in any event, that's the first demo, very lightweight digital cash. Yes?
>>[INDISTINCT]
>>STIEGLER: What?
>>[INDISTINCT]
>>STIEGLER: What's that?
>>You tried to deposit the purse into the same purse.
>>STIEGLER: Oh, did I? Okay, and it didn't work, and it didn't work to [INDISTINCT]. That's fine. Oh, that was a sweaty moment. Okay. Oh, let's see, what's next. Okay now,
here's my authority. I've given you all an authority, which we'll look at in a moment, on one of my servers. This is a Share Shell. This is an approximation of a remote bash command line. It serves approximately the same purpose as SSH. It's not actually quite SSH but, you know, it serves much the same purpose. And I'm going to come down here, and you see that I have this field where I can type in a shell command, and it'll come out and it'll pop all that stuff up for me, okay. So, now, one of the things that I have here in my shell, in addition to a place where I can type in an arbitrary command, is a list of quick commands. So, I can come up here and just simply run a command. There we have it. And so far, so good. Now,
let's take a look at how I manage this. This is the attenuated authorities window. This is my window for creating and managing attenuated authorities for other people on my HP server. So, here we have two of these things. One is an authority I handed to Allen, who's the guy I work with. I hardly trust him at all. He's sitting right here in the audience. And so I gave him very limited authorities. Far less than I'm giving you guys. And here's the authority that I handed to you guys. So anyway, those of you who have the CD can just simply click on the link and bring this puppy right up. And now, you can come over here and you can do a listing on my home directory. Anybody see anything interesting here? "root password.txt". That looks like fun stuff. I have a $20 bill for the guy who can tell me my root password, okay? So yeah, you can see it, but I seem to have forgotten to give you the cat command on root password.txt. Even though I did give you a cat command on my log file, which we can sort of see down here. Okay,
so now, suppose that you want to turn around and attenuate this even further for somebody else. You can say, "I already did this once, but I'm going to do it again." So, I'm going to delegate "super limited." I'm going to make a delegate, and it gives me this nice new authority down here. And what I'm going to do is I'm going to come up here and I'm just simply going to delete a bunch of the commands. Okay. Now, I can hand this authority to somebody else, and the only commands they can run are these three commands. I can come back up here, and I'll always have clear visibility into what authorities I've handed out. If I were doing this for real, I would have said in my note about this who I was giving it to as well as how limited it was. And of course, I can revoke it simply by going in there, enabling revocation, and pressing the revoke button.
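The attenuate-and-revoke behavior being demonstrated is essentially a forwarder that filters requests and can be cut off. A minimal sketch, with entirely hypothetical names (this is not Waterken's actual API):

```java
import java.util.Set;

// Hypothetical sketch of attenuation plus revocation behind the Share Shell
// demo. The Shell interface and names are illustrative, not Waterken's API.
interface Shell {
    String run(String command);
}

final class AttenuatedShell implements Shell {
    private Shell target;              // set to null once revoked
    private final Set<String> allowed; // the "quick commands" list

    AttenuatedShell(Shell target, Set<String> allowed) {
        this.target = target;
        this.allowed = allowed;
    }

    // Forwards only whitelisted commands, and nothing after revocation.
    public String run(String command) {
        if (target == null) return "revoked";
        if (!allowed.contains(command)) return "not permitted";
        return target.run(command);
    }

    // Kept by the grantor; never handed out with the attenuated reference.
    void revoke() { target = null; }
}
```

Further delegation, as in the "super limited" demo, is just wrapping an attenuated shell in another one with a smaller command set.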
Okay. So anyway, that is our attenuated shell. That's what it looks like when you use this kind of stuff at the user level. I've been able to use this. One of the more amusing things I use this for: I have another server that goes down occasionally. So, I was able to build an attenuated shell for my administrative assistant that allowed her to re-launch the server. You actually have that authority there in your version, the ability to re-launch my server. So if you press that by accident, you may be re-launching a server that I needed to bring back up anyway, so thank you. And so, she can do that, but she can't bring it down, and neither can you. Okay. Okay, let's
go on to how this works in larger applications. This time, we're going to talk about another interesting characteristic of POLA. Okay. People like to talk about defense in depth. POLA is the ultimate poster child for defense in depth. Through these examples that we've been running here, we have been achieving POLA at the object level of granularity, okay? Man, are you getting depth of defense when you use POLA at that level. So, this is a very rude and crude picture of a system that I've built, SCoopFS, the Secure Cooperative File Sharing system. It's a peer-to-peer file sharing system, and the closest thing similar to it in the world is Microsoft Live Mesh. The difference is that you need a giant server farm to store all the data in Live Mesh so that the government can come and read it all. In SCoopFS, it's all peer-to-peer. There's no central server, so you don't have either the expense, or the central point of failure, or the central point of vulnerability.
Let's see. And for some of the pieces here, we use a mailbox metaphor, actually. One of the little things that you get when you set up a SCoopFS system is a secure mail-like system. We're not using SMTP. It uses the mail metaphor and it's secure. Basically, you send an attachment in an email. And once the guy saves that attachment someplace, whenever either one of you modifies that file and saves your copy on your machine, it automatically updates the other guy's. I don't care how many firewalls are sitting in between you. And
so, here we have an application built on top of the Waterken framework. So, here we've got the application framework kernel, Waterken, which is running at full user power. And it invokes the part of SCoopFS that launches at the beginning, or you can think of it as the main. That is granted all of the authorities that the SCoopFS system needs, and then that main starts handing out only the authorities appropriate to each one of the things that it invokes. So, the PALs manager has references to all the PALs, which is a pretty powerful authority. But the PALs manager does not have authority to reach the file dialog box, which is able to confer, with a user action, the authority to actually read or write a file at some location the user chooses in the user's namespace. And while the PALs manager has references to all the PALs, the individual PAL doesn't have a reference to much of anything except the Out Channel and the In Channel for the particular person
who is a PAL. The only part of this system that's actually exposed to the outside world with a webkey, pretty much, is the In Channel. So that makes it a natural attack vector. The only two real points of exposure for this system are the PAL In Channel and, you know, basically a lower-level attack on the application framework, the operating system, and the TCP/IP stack. That's a possible attack, but these are all parts that have been worked on very carefully. They host a large variety of applications, and the Waterken server in particular went through a week-long, immersive review by a team led by David Wagner. So, it's pretty secure as such things go. On the other hand, the entire SCoopFS system was written by me, all by myself. No one's ever reviewed it, okay? You expect this to be the vulnerable part. So, in any event, the problem here is that this being the vulnerable part means that you've got to go tracing through here in order to be able to get any power out of it. The only thing that the In Channel has the authority to do is to light off the notifier for the email manager with the notification
that mail has arrived. Okay, now let's look at the way software normally works. And I'm going to be using models that are so simplified that they make my point too well. And so after having made my point way too well, I'm going to back off and tell you the truth. Okay. So, this is a picture of the way most software normally works. This is the normal, simple model of the way software works, which is that all of the objects in a software application are loaded up with enormous amounts of authority, okay? They all have great excesses of abusable authority. And so basically, breaching any one part is as good as breaching any other part. And that is the basic model that Ross Anderson used to prove that security is impossible for economic reasons, okay? So you've got this series of pieces that you can reach, more or less, that are running through here. And if you've got a 20% risk of the guy finding a breach in each one of these major pieces, by the time you've OR'd them all together, you've got a 63% probability of a breach, okay? Basically, you're screwed. Now on the other hand, let's take a look at this,
when we're running it under POLA, defense in depth. Now, all the really hot authorities are back here in the application framework kernel. All right, even breaching the main doesn't buy you a lot of exciting authorities. And so, what you've got to do is breach this guy to the point where he will breach this guy, to the point where he will breach this guy, breach this guy, and finally reach this guy. In the last picture, we OR'd all of those risks together. This time, we get to AND them together. And the consequence is, if you AND all those 20% risks together, you get a 0.008% chance of a breach. Now like I said, it's not actually this good. However, here are some things that
I'm starting to be able to say with more confidence, having built it. SCoopFS is now the largest system I know of that has been built basically following these principles. It's got about 15,000 lines of code. I've done some informal assessments. And it looks like, the way I wrote it, about 60% of the code just simply does not have enough authority to be dangerous. And you don't ever have to look at it. And by just changing a few things about my coding style, I could have lifted that up to about 80%. At the point where 80% of the code simply does not have enough authority to be interesting, you're looking at getting a factor-of-five increase in the value of the dollar that you put into security review, because you only have to review 20% of the code. We have experience pointing in this direction going back to the beginning of the century, when David Wagner did a security review of an earlier system. And you can read his review, in which he talks about this interesting characteristic. The next interesting thing is under maintenance,
you get a reliable improvement in security as you move forward. The reason is that in
general under maintenance, you’re adding lots of new software out along the edge. And along
the edge, it turns out that the software generally needs very little authority. So you’re adding
a lot of code that doesn’t have enough authority to be dangerous while meanwhile you’re fixing
the security vulnerabilities in the inner core. And so you’re reliably moving forward.
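The OR-versus-AND arithmetic from the breach-probability slides can be sketched as follows. The 20% per-component risk and the five-component chain are my illustrative assumptions, so the printed numbers come out near, but not exactly at, the 63% and 0.008% figures quoted from the slides, which presumably used a different component count.

```java
// Back-of-the-envelope version of the OR-vs-AND risk arithmetic from the
// slides. Per-component risk (20%) and component count (5) are illustrative
// assumptions, not the slide's exact inputs.
final class RiskMath {
    // Chance of breach when compromising ANY one of n components suffices.
    static double orRisk(double p, int n) {
        return 1.0 - Math.pow(1.0 - p, n);
    }

    // Chance of breach when ALL n components must be compromised in turn.
    static double andRisk(double p, int n) {
        return Math.pow(p, n);
    }

    public static void main(String[] args) {
        System.out.printf("OR'd:  %.1f%%%n", 100 * orRisk(0.20, 5));  // ~67.2%
        System.out.printf("AND'd: %.3f%%%n", 100 * andRisk(0.20, 5)); // ~0.032%
    }
}
```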
It actually does get better. Finally, okay, even though you're not going to get the kind of whopping improvement to a 0.008% risk of security breach that I was talking about earlier, you're getting a big improvement. This is no longer low-hanging fruit. And last but not least, even though it's not as big as a 0.008% risk, it is a fundamental change, a fundamental change in the rules governing the relative advantage of the defender and the attacker. So here's rule number four. Do not play by the enemy's rules. You know,
I had seven delightful years during which, every year, for some different reason, some different organization would ask me to write viruses. And it was satisfying work. You could always tell that you were making progress because it was so easy. But every once in a while, I'd think, "So, what would I do to make it easier for me to attack a system?" You know, what sort of propaganda would I try to brainwash programmers with to make my job easy? And I came up with a number of very interesting strategies. The first one would be to tell people to use C and C++ for their code. Okay, just delightfully hazardous languages, a very satisfying opportunity for the attacker. I'd particularly make sure that they understand that automated garbage collection always causes terrible performance problems and causes your program to pause for seconds at a time in the middle of the most important operation, and that it is consequently most important to use C and C++ anytime you're writing code that has root privilege, okay? Okay, these
are really important rules to help the attacker. I would, as an attacker, embrace complicated security control kits. Sell APIs with lots and lots of features. I would join the committees and help them add new options that are structurally different from each other, to maximize the confusion in the mind of the programmer, giving me more opportunity. And of course, I would make sure that the security software is something that you write separately from the main software that does the work, and encourage the programmer that since it's separate, they can write it last and slide it in as a module. I would not mention that the place where you slide it in, as a module, is right between the quality module and the reliability module. I wouldn't mention that part. When I looked at those rules, I thought, "Wow, we're following them. Are we the victims of a conspiracy? Did we do this to ourselves? Whoa." So we really need to take a page from Buffy Summers and James T. Kirk. Okay, James T. Kirk, Kobayashi Maru, okay? What he did was he changed the rules.
So that somebody could win the unwinnable game. With Buffy Summers, in season three, her mother makes the criticism that every security person needs to pay attention to. Okay, at one point in season three, Joyce says to Buffy, "But what's your plan? You go out every day. You kill a bunch of bad guys. And then the next day, there are more bad guys. You can't win this way. You need a plan." And finally, Buffy, in the last episode of the last season, comes up with a plan. She changes the fundamental physics of her universe to permanently favor the defender. What could be lazier than forcing the other guy to play by your rules? Okay, laziness. It's not just a good idea. It's a requirement, because laziness will stand
by you. It will be your friend no matter what the circumstance is. But to make laziness really work for you, to really rise to the top of your game in the laziness field, you need to be using the right tools. And in particular, you need to be using tools that fully support laziness. Notably, tools that turn our best practices into enforced security. Tools like Caja and Joe-E and the Waterken webkey. You need to learn a little bit about the patterns of secure cooperation. And there are documents that you can read, or you can ask me about these patterns. And you have a number of people now here at Google who are experts in these patterns as well. And above all, you have to make them play by your rules. Don't play their game anymore. It is time to stop working so hard; let us work securely instead. Thank you. Questions? I see one hand.
>>Hi. Yeah, I have two questions.
>>STIEGLER: Okay.>>The first one, if you go back to the slide
where you're talking about ANDing the risks together.
>>STIEGLER: Yeah.>>Are you assuming that somebody can’t attack
the framework kernel directly because otherwise how does your risk get below 5%?
>>STIEGLER: Oh, the reason why the framework kernel got a lower risk factor on this slide…
>>No, no. I’m sorry. On the next slide…>>STIEGLER: Oh.
>>…where you get the 0.008% chance of breach. Why isn't that a 5% chance of breach, because
I can just attack the kernel directly.
>>STIEGLER: Oh, so there are two ways that you can attack the kernel. One is by doing a direct attack on it externally, okay, which it's pretty resistant to, because of the security reviews. That's one kind of attack you can do. The application that's running on top of it has ways of interacting with it that are different from what you can do from the outside. Your ability to just hammer it from the outside is really very limited, right? I mean, all you can do is submit bad webkeys and see if you can expose some sort of a failure.
But if you can actually cause the main to make calls of your choosing on the framework
then you’re in a better position to try to exploit something. So yes, there’s an attack
vector for attacking the framework directly from the outside. And I’m not talking about
that part.>>Okay.
>>STIEGLER: I’m talking about, you know, the extra vulnerability that you get by coming
down through the very poorly reviewed application running on top of it.
>>Okay. The second question I had was if you go back to your secure purse examples.
>>STIEGLER: Yeah.
>>Is it possible to corrupt the integrity of the purses by subclassing the purse class and passing in something with a negative balance that you've constructed?
>>STIEGLER: The…
>>Like, let's say I call deposit. I can pass in my own subclass of purse. It's got a negative balance.
>>STIEGLER: Right. Yeah, one of the things
that I didn't mention, although I did put it on the slide, is that this particular implementation, which fits on a PowerPoint slide, is only secure if you run it standalone on its own Waterken server and its own JVM. And so nobody is in a position to manufacture a new subclass of the purse. If you're going to run a purse in a shared-memory environment with other things going on, you need to do a fancier version of this. You can go with the original document on the purse protocol, published in Financial Cryptography 2000 or 2001.
>>So the basic idea of what you're saying
is that that line 34 where you’re casting to the purse says, “No, this is really a purse.
It isn’t a subtype or something masquerading as a purse.”
>>STIEGLER: Yes.>>Okay.
>>STIEGLER: Yeah. I know that because of the rules that I set up for this particular
example.>>Okay. Thanks.
>>STIEGLER: And again, there is a solution that does the job more generally, but it didn't fit on the slide. Yeah.
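The cast-to-concrete-type check from this exchange can be sketched like this. The names are mine, and as he says above, the check is only sufficient under the talk's standalone-server assumption, where no attacker code shares the JVM:

```java
// Sketch of the cast-based forgery check discussed above. Names are
// hypothetical; the point is that casting an incoming object to the one
// trusted concrete class rejects look-alikes from other classes.
interface PurseLike {
    int getBalance();
}

final class RealPurse implements PurseLike {  // final: no subclasses at all
    int balance;

    RealPurse(int balance) { this.balance = balance; }
    public int getBalance() { return balance; }

    void deposit(PurseLike incoming) {
        // The cast: a forged PurseLike from some other class fails here with
        // a ClassCastException before it can lie about its balance.
        RealPurse p = (RealPurse) incoming;
        balance += p.balance;
        p.balance = 0;
    }
}
```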
>>Hi Marc, one of the hobgoblins that you didn’t directly address is the hobgoblin of
singletons and more generally mutable static state.
>>STIEGLER: Yes, mutable static state is a bad thing. I know that Joe-E, the Java verifier that ensures capability security, will reject programs that have mutable static state. I would hope that you can tell me what Caja does with this.
>>Well, it's not really a question of what Caja does. But I'm trying to address your issue about the laziness. So generally, it is fairly true that programs written with mutable static state can be far lazier to write than programs without.
>>STIEGLER: Yeah, it's lazier in the amateur sense. It not merely violates security principles. It also violates…
>>The design.
>>STIEGLER: …fundamental object-oriented design principles, which actually protect professional levels of laziness.
>>Right. The problem that we get, though, is that the argument one makes to people is to say, "Things will be better in the long term if you adopt this good practice." Now the history of human enterprise over the millennia is that people do not look out for their long-term interest. They rush in foolishly and deal with the miseries later. I'm just presenting this as one of the foundational problems that the things you are talking about are up against.
>>STIEGLER: That's true, that's true.
>>It’s something that comes up again and again. And I think it has more to do with
human behavior.
>>STIEGLER: Yeah, yeah. Well, all I can hope for at the moment is to see this group of people, who look like they're pretty lazy to me, you know, raise their game to professional levels of laziness, okay, and leave their amateur status behind and go for it. Okay.
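The mutable-static-state hazard raised in that final exchange can be illustrated with a contrast (my own example, not from the talk): a static field is an ambient authority reachable by any code in the process, so no amount of careful reference-passing can attenuate it, whereas an instance field is visible only to code that was explicitly handed a reference.

```java
// Illustration of why mutable static state defeats POLA. The static field
// is reachable by ANY code loaded into the process; the instance version
// is reachable only through an explicitly granted reference.
final class AmbientConfig {
    static String adminUrl = "https://example.invalid/admin"; // ambient authority
}

final class GrantedConfig {
    private final String adminUrl;

    GrantedConfig(String adminUrl) { this.adminUrl = adminUrl; }

    // Only code holding a reference to this instance can read the URL,
    // and nothing outside the constructor's caller can change it.
    String adminUrl() { return adminUrl; }
}
```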
