Securing the Software Supply Chain (Cloud Next ’18)

[MUSIC PLAYING] JIANING GUO: Hello, everybody. Welcome to our session. I’m Sandra from Google Security, and this is Jonathan, a security architect from Shopify. Together, we’ll be telling you guys about the new container security product that we just announced today, binary authorization, and how you can use it to secure your software supply chain. So I’ll be going over an overview of the product and how you can use it in your production environment, and Jonathan here will give you a live demo after my presentation. Then we’ll take questions at the end. All right, let’s get started. So what is one of
the top questions that DevOps and security
stakeholders at enterprises have on their mind? What is running? Enterprises run thousands of services in their production clusters, across multiple environments, and oftentimes it is very difficult to keep track of exactly what is running, let alone how to apply centralized, consistent control over this software. Data leaks, security incidents, and data breaches are on the rise, and many of those are caused by bad code running in a trusted environment, with access to super sensitive data. So code is the weak link in many enterprises’ security story. I’ve seen a lot of users that invest a lot of energy and effort in configuring the perfect access control policies, to make sure only trusted accounts can access sensitive data, as well as push production code. This is good, but not enough, because as organizations grow, as the number of applications grows and the number of employees grows, it becomes increasingly difficult to keep track of what can deploy in the production environment. What do I mean by that? So take an example. This user has a bucket
of customer credit card information in production– super sensitive– and he has configured his access control policy to make sure only his production applications, and a couple of trusted admins, have access to this bucket of critical information. But because his application is VM based, it is very difficult to build new versions and push new releases of this application. So it ended up running for a long time. And while it’s running, different roles from different parts of the production environment, or from the organization, need to do things to this running application. For example, security engineers may come and patch it, apply updates, when things become outdated. Admins may have to come and restart the software when it gets stuck. Software engineers may have to come and manually make adjustments, because when new requirements get published, things change. Before you know it,
you have a number of people, and a
number of paths, that can apply changes
to production software. With this fairly broad attack surface, an attacker, or a rogue employee, could come by and slip in a piece of bad code through one of the changes to the production software. Now, voila, you have
a security incident. So you may think
that, well, that is legacy software for you. I plan to update all of my
production infrastructure to use containers. I am good, right? Take a look. So, this user has a container
running in production. It is true that containers refine the software supply chain. You don’t have a number of people applying changes to the software directly anymore. Instead, developers have to rebuild and repush a container in order to apply any changes to it, because containers are immutable. So now organizations have a centralized chokepoint, where all changes to production deployments are applied, and a lot of them take advantage of that to bake in streamlined security controls and tests to make sure that all the production changes are up to snuff. Things like making sure it’s built from trusted sources, making sure it passes all the unit tests, making sure it clears vulnerability scans, making sure static analysis doesn’t find any fault with the code, and that it has been manually OK’d by a quality control engineer. It is all good. But it still does
not fundamentally change how deployments
are controlled. It is still account-based deployment control, which means that if you have an employee, or an attacker who stole the credentials of an employee, with access to push code to production, they could still just bypass all of those streamlined CI/CD controls and push an untrusted piece of code to production directly. Now, that is where binary authorization comes into play. Binary authorization makes sure that only properly signed containers are deployed to Kubernetes Engine. It gives you a tool to define policies around what can be deployed, in addition to who can deploy, addressing the weakness in an account-based deployment control mechanism. So let’s take a look
at how it works. The user has a piece of code and a CI/CD pipeline set up that runs the code through a set of required tests and controls before pushing to production. Binary authorization integrates with the CI/CD pipeline by having it produce a signature as the image passes through the individual stages in the pipeline. So a builder would sign a container to say, I’m the trusted builder. I am the one who built the container. The tester will sign the container to say this container passed my test pipeline. The vulnerability scanner signs the container when it doesn’t find any known existing vulnerabilities on it. Static analysis and QA engineers likewise put their respective signatures on this image. So by the time the image comes to be deployed to production, binary authorization is integrated into the GKE deploy API, which will look at the signatures produced by the different stages in the CI/CD pipeline, compare them against customer-defined policies on what can deploy in production, and then make a deploy decision. So now, if an employee with access to push code to production comes along and pushes untrusted code, it will fail because the image will not have the required signatures on it. Now you have code-based deployment controls.
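As a rough sketch of what producing one of those signatures can look like on GCP, assuming a hypothetical project, attestor, and image (and noting that exact gcloud flags vary by release):

```
# Hypothetical image digest and attestor; substitute your own.
IMAGE="gcr.io/example-project/app@sha256:0123abcd..."
ATTESTOR="projects/example-project/attestors/qa-signoff"

# Generate the canonical payload that describes this exact image digest.
gcloud beta container binauthz create-signature-payload \
    --artifact-url="$IMAGE" > payload.json

# Sign the payload with the stage's private key (PGP in this sketch).
gpg --armor --local-user qa@example.com --output payload.sig --sign payload.json

# Record the signature as an attestation in the Container Analysis API.
gcloud beta container binauthz attestations create \
    --artifact-url="$IMAGE" \
    --attestor="$ATTESTOR" \
    --signature-file=payload.sig \
    --public-key-id="$(gpg --with-colons --fingerprint qa@example.com | awk -F: '/^fpr:/ {print $10; exit}')"
```

At deploy time, binary authorization only needs the attestation plus the matching public key in the policy; the private key stays with the CI/CD stage that owns it.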
So, at a high level, this is how it works, and we’ve worked with design partners, such as Shopify, to make sure that it can properly integrate into your existing production setup. We have a number of features that I’m going to go through with you today, to see how it works with your existing infrastructure. So first of all, if you use containers, you probably already have a CI/CD pipeline set up. You want to make sure that your process can be reused across a large number of production deployments. You don’t want a separate repository and a separate pipeline for each environment and application that you build. Binary authorization accommodates that by defining policies at the runtime environment level. What do I mean by that? Let’s say you have two clusters
on GKE, a prod and a dev. For the production environment, you want to make sure that an image has to be signed off by all the required CI/CD stages before it can run, because it has access to live customer data. But for the dev environment, you want to maintain developer velocity. So as long as an image has been built properly and has passed the unit tests, it’s good to go. There are differences in deploy-time policy for these environments, but they still use the same CI/CD setup. For a production deployment, an image has to go through the entire CI/CD pipeline and clear all stages before it can be deployed, but for the dev environment, you have the option of building it, testing it, taking it out of the pipeline, and deploying it then and there. Or, if after running in the dev environment for a while you decide this container is good, I’m going to promote it to production, you don’t have to duplicate any of your CI/CD setup. You can just push it through the remaining required tests, and now it has all the required signatures. The same image that was deployed to dev can now be deployed to production. So you can maintain a centralized, shared CI/CD process across all [INAUDIBLE] environments, which is easier to manage and more secure.
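To make that concrete, here is a minimal sketch of such a per-environment policy. The YAML keys follow the Binary Authorization policy format, but every project, cluster, and attestor name is invented for illustration:

```
# policy.yaml -- hypothetical example: strict prod cluster, lighter dev cluster.
cat > policy.yaml <<'EOF'
admissionWhitelistPatterns:
- namePattern: gcr.io/google_containers/*          # system images the cluster itself needs
defaultAdmissionRule:                               # anything not matched below
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/example-project/attestors/built-by-ci
clusterAdmissionRules:
  us-central1-a.prod-cluster:                       # prod: full CI/CD sign-off required
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/example-project/attestors/built-by-ci
    - projects/example-project/attestors/unit-tested
    - projects/example-project/attestors/vuln-scanned
    - projects/example-project/attestors/qa-signoff
  us-central1-a.dev-cluster:                        # dev: build and unit tests are enough
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/example-project/attestors/built-by-ci
    - projects/example-project/attestors/unit-tested
EOF
gcloud beta container binauthz policy import policy.yaml
```

Both clusters share the same pipeline; only the set of required attestors differs per cluster.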
Binary authorization integrates with the centralized metadata store hosted by Google Container Registry. It’s called the Container Analysis API. We’ve also announced this product today. The Container Analysis API is designed to be a one-stop shop for all metadata associated with a container, so that a centralized stakeholder, if he has questions about a particular container that is being deployed or that is running, can go in and check to see: what is all the information, what are all the things I know about this container? So as a container goes through its build process, for example, the builder would generate a build record indicating: what are the sources used for this container? What are the packages included? When was it built? Who built it? When the container goes through testing, the tester can write standardized metadata about the test results. When it goes through the scanner, there’s standardized metadata around vulnerability findings for the image. So by the time the image comes to deploy, controls such as binary authorization can take a look at all the metadata that has been produced in the Container Analysis API, and apply policies, apply controls, to determine what can deploy in production.
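If you want to poke at that metadata yourself for a GCR-hosted image, something along these lines works; the flags are from memory and the image path is a placeholder, so treat it as a sketch:

```
IMAGE="gcr.io/example-project/app@sha256:0123abcd..."

# Vulnerability occurrences recorded by the registry scanner.
gcloud beta container images describe "$IMAGE" --show-package-vulnerability

# Build provenance and any other occurrences attached to the image.
gcloud beta container images describe "$IMAGE" --show-all-metadata
```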
So binary authorization’s signature format is just one of the types of metadata that the Container Analysis API supports. Google engineers have also published an open source standard called Grafeas for this container metadata API. It uses the same metadata format and the same API format, so that you can produce and collect this information for your on-prem environment. The binary authorization team has also published the Kritis open source project, which is an open source implementation of deploy-time enforcement for Kubernetes– very similar to binary authorization– so that, combined with Grafeas, you have the building blocks to implement a similar enforcement flow for your on-prem deployments, similar to what we have here for Kubernetes Engine. If you use GCP tools,
such as Google Container Builder and the Google Container Registry Vulnerability Scanner, there is a set of metadata that’s already produced for you. So if your code is built by Google Container Builder, it will already produce a verifiable build record for you, for every container that it builds, and today that is happening. It is stored in the Container Analysis API. Similarly, when you push an image into Google Container Registry, it gets scanned by the vulnerability scanner, which was also announced today, and it will publish vulnerability findings.
Now, one of the most common use cases that we hear from customers is: I want to be able to gate deployment based on build and vulnerability information. The tricky thing with that is, every organization has a different definition of what is acceptable in a build and in vulnerability findings. That is why we give you a tool, an open source signer, that you can use to define what your organization’s requirements are to sign a build and a vulnerability record. So the way it works is, the user would take the signer and apply their own custom configuration that says things like: in a build record, I will only sign a container as OK to deploy if the source comes from these following three repositories. Or, I will only sign a container as ready to deploy if it does not contain critical vulnerability findings. And when that happens, the signer issues the attestation, using a customer-provided key. By the time the container reaches production, binary authorization will look up that attestation– because the user would put that in the policy to say, I will require this attestation– and enforce it, to make sure only images with proper build and vulnerability finding records can deploy to the production environment.
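As a heavily simplified sketch of that kind of sign-only-if-clean gate, gluing the scanner output to the signing flow shown earlier (the severity check is a crude grep; a real signer such as Voucher parses the occurrences properly, and all names are placeholders):

```
IMAGE="gcr.io/example-project/app@sha256:0123abcd..."

# Refuse to sign if the registry scanner reported anything CRITICAL.
if gcloud beta container images describe "$IMAGE" \
     --show-package-vulnerability | grep -q "CRITICAL"; then
  echo "not signing: critical vulnerability findings present" >&2
  exit 1
fi

# Otherwise sign and attest, as in the earlier sketch.
gcloud beta container binauthz create-signature-payload --artifact-url="$IMAGE" > payload.json
gpg --armor --local-user vuln-signer@example.com --output payload.sig --sign payload.json
gcloud beta container binauthz attestations create \
    --artifact-url="$IMAGE" \
    --attestor="projects/example-project/attestors/vuln-scanned" \
    --signature-file=payload.sig \
    --public-key-id="$(gpg --with-colons --fingerprint vuln-signer@example.com | awk -F: '/^fpr:/ {print $10; exit}')"
```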
We also understand– so far we’ve told you about how only properly signed images that are built in-house can be deployed to your environment. But we also recognize that not all images people run in their production environments are built in-house. There are popular third party images, such as Nginx and Redis, that a lot of customers deploy. But that’s also a common source of vulnerabilities in many enterprise environments. So to address that issue, we support image whitelists in the binary authorization policy. So a security stakeholder could say, I want all of the Nginx instances deployed in my organization to be the up-to-date, vulnerability-free one that I deemed OK. So he would put that in the policy. And when an employee comes and tries to push the latest secure Nginx image, the digest will match, and the deployment will go through. But if somebody tries to push the outdated, vulnerable version of the same third party image, it will fail and generate an audit log, so the security team can review these incidents later on, and therefore secure your deployment environment.
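In the policy itself, that whitelist is just a list of name patterns. A hedged fragment, with placeholder repository paths, might look like this alongside the policy sketch from earlier:

```
# Fragment of the Binary Authorization policy: only images published to an
# internal, security-curated mirror of third-party software are exempted.
# (Repository paths are placeholders.)
cat > whitelist-fragment.yaml <<'EOF'
admissionWhitelistPatterns:
- namePattern: gcr.io/example-project/approved-third-party/nginx*
- namePattern: gcr.io/example-project/approved-third-party/redis*
EOF
```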
Binary authorization integrates with the Kubernetes master to make sure it applies verification, applies the policies, to every single deployment that goes to your production project. So when a deployer comes along– the deployer could be a human or an automated pipeline, it does not matter– it sends a request to the
Kubernetes master that says, I am deploying digest
foo to cluster bar, which then would forward this
request to the admission controller, which is part of the Kubernetes master. If this project or this cluster has binary authorization turned on, the admission controller would forward this request to binary authorization, which then would go look up all the signatures associated with it, come back, verify those signatures using the public keys defined in the policy, and return a verification decision. Once the Kubernetes master receives a decision, it can then go ahead and either block or allow the deployment.
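Enforcement only applies to clusters that have the feature turned on; during the beta that was a cluster-level flag, roughly as below. The flag name may well have changed since, so check the docs, and the cluster name and zone are placeholders:

```
# Create a GKE cluster with Binary Authorization enforcement enabled.
gcloud beta container clusters create demo-cluster \
    --zone us-central1-a \
    --enable-binauthz

# Existing clusters can be updated in place the same way.
gcloud beta container clusters update demo-cluster \
    --zone us-central1-a \
    --enable-binauthz
```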
But we understand that production emergencies do happen. What if things are on fire and I want to push a nonconformant change? We allow that. It’s called break glass. So again, the deployer comes along
and says, production’s on fire. I have this quick fix here. Let me just push it through. I will specify the break glass flag, and the request will reach Kubernetes as usual and go through the whole verification process. Binary authorization says, no, this is not conformant. But the Kubernetes master will say, I’m overriding it because the break glass flag is turned on. And this will generate an audit log so the break glass deployment event can be reviewed later on.
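Mechanically, the override rides along on the workload itself. Here is a sketch of what that looked like in the beta, assuming the break-glass annotation key used by the image policy webhook (verify the exact key against current documentation):

```
# emergency-fix.yaml: a pod carrying the break-glass annotation so the
# admission decision is overridden -- and audit-logged -- at deploy time.
cat > emergency-fix.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emergency-fix
  annotations:
    alpha.image-policy.k8s.io/break-glass: "true"   # assumed annotation key
spec:
  containers:
  - name: hotfix
    image: gcr.io/example-project/hotfix@sha256:0123abcd...
EOF
kubectl apply -f emergency-fix.yaml
```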
So this is binary authorization, coming soon to beta. We just announced the beta today. It will be
available shortly. Hopefully, I’ve convinced you
that code-based deployment control is more secure than
account-based deployment control, and binary
authorization would provide that for
Kubernetes Engine. We will support
policies associated with a runtime environment, at
both project and cluster level, so you can define enforcement
at different granularities. We’ll have integration with GCR
Vulnerability Scanner, as well as the Cloud
Container Builder, so that you can apply deploy
policies based on vulnerability findings and build information. We’ll support whitelisting
trusted third party images so you can have standards
across your organization on what third party
images to run. In case of emergency,
break glass. We’ll integrate it with
IAM and Audit Logging so that you can review
failed attempts and break glass deployment events later on, after the fact. We also want to make it really
easy for you to integrate this into your CI/CD pipeline
and make it easier for you to write
binary authorization signatures for an image. So we’ll have partner support
from popular security tools such as Twistlock,
popular CI/CD tools, such as CloudBees and Jenkins,
to make it easy for you. Last, but not least, we
have open source reference implementations in Grafeas and Kritis, so you have the building blocks
to implement similar security controls for your
on-prem deployment. All right, so that
concludes the overview for binary authorization,
and hopefully I gave you a taste of how to
secure your software supply chain using these new
products that we’ve announced. Next to come, we’ll hand
the floor to Jonathan. Jonathan will show you guys a
demo of binary authorization in action, combined
with other security controls to secure Shopify’s
production environment. Jonathan? JONATHAN PULSIFER:
Thanks, Sandra. [APPLAUSE] Cool. Look at that, it works. Demo’s done. Thanks. Yeah, I wish. So, hey, everyone. My name is Jonathan Pulsifer,
a production security engineer at Shopify. Before I get started
on this, just want to go over what our
production infrastructure looks like at a pretty
high level here. So if you are a
developer at Shopify, and you have a repository
that contains code, and you want to turn
that into a service, you interact with this tool
called ServicesDB, which is where all the robots live. So we’re going to create
your production identities, and these sorts of things, your
Kubernetes namespaces, and all that. And the automation, like
a Buildkite pipeline, will also be added for you. So Buildkite is not too unlike
Google Cloud Container Builder, and we’re going to run through
what that looks like today. After the image is pushed up to GCR, we’re going to deploy it with a tool called Shipit and a gem, which
we’ve open sourced, called Kubernetes Deploy,
so please go check that out. Once it hits Kubernetes, there’s
a lot more automation and magic that happens there
with our cloud buddies. Our cloud buddies are the
name for our custom Kubernetes controllers that
help keep our cloud fluffy at Shopify,
and further to that, we use a number of
other GCP services to help make our cloud go. So the demo that I have for you
today is a PCI-compliant demo. As a security person, moving our
compliance environment into GKE has been a large project
that we’ve been working on, and so we’ve had the need to
create these policies, which help us remain compliant. So Google Cloud
Container Builder comes PCI-certified
out of the box, so we don’t need to
worry about that, and we get some
added stuff for free, like Sandra had mentioned,
like the verifiable build records and
vulnerability findings. So given that, we’ve built
a tool called Voucher. So this slide is very
similar to the ones that you’ve seen earlier, where
we’ve taken that open source signer and, well, we’ve built something, so I’d like to show you what that signer looks like– hopefully in real time. So what we’re going to do is run
through two demo attestations, or signatures, that
are created, that are going through
these integrations that Sandra talked about
earlier, containing no vulnerabilities and actually
verifying the build record. So I’m going to flip over
to my build triggers. What I’ve done here
is, I have two source repositories in Google Cloud. One is called Bad and
one is called Good, and so we’re going
to run through those, build some containers. What they’re doing is, they’re
triggering on each new git tag that’s pushed. So we’re going to change
the Dockerfiles up. We’re going to push
some tags, and we’re going to see what Voucher
has in store for us today. Maybe. So because live demos are super fun, I actually haven’t
recorded any of this. So I’m going to try to
make it, and look at that, it’s super quick. That’s awesome. So I’m going to explain sort
of the lay of the land here. The bottom left
pane, we’re going to– and the bottom right. The bottom panes are the
containers and the source repositories that
I talked about. On the left hand side, we
have our good container. On the right hand
side, we have our bad. Voucher has been built, and
is listening in the top pane. So we’re going to go through
and show you these Dockerfiles for a good container. We’re going to make some
changes to it, just for funsies. So instead of echoing hello, we’re going to echo hello2. I just want you to note the FROM directive that we’re using here. So Alpine, as hard as it may be to work with in production, actually does not come stocked with any vulnerabilities. Fingers crossed that nothing’s
changed since the last dry run. So what I’ve done is, I’ve just changed the Dockerfile. I’m going to add the Dockerfile. I’m going to write a little commit message that says, hi next. I’m going to tag this with next demo, and I’m going to git push, and git push tags.
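For reference, the loop being described is roughly the following; the Dockerfile contents, tag, and commit message are reconstructed from the talk rather than copied from the actual demo:

```
# good/Dockerfile -- Alpine base, as described in the talk.
cat > Dockerfile <<'EOF'
FROM alpine
CMD ["echo", "hello2"]
EOF

git add Dockerfile
git commit -m "hi next"
git tag next-demo          # each new tag fires the build trigger
git push && git push --tags
```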
Now, hopefully, when this is up, what’s going to happen is, every time that Container
Builder finishes a build and puts it into GCR, it’s going
to create an event on Pub/Sub, and I’m listening to that build notification. So Voucher should, fingers crossed, pick that up whenever that build’s done and do its thing.
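That notification channel is the standard Container Builder Pub/Sub topic, and a listener like Voucher just subscribes to it, along these lines (the subscription name is made up):

```
# Container Builder publishes build status messages to the "cloud-builds" topic;
# create a subscription for the signer to consume.
gcloud pubsub subscriptions create voucher-builds --topic=cloud-builds

# Handy while testing: pull a few messages by hand.
gcloud pubsub subscriptions pull voucher-builds --auto-ack --limit=5
```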
So while that’s happening, I’m going to show you the second Dockerfile. I just want you to notice the FROM line. Don’t mean to trash Ubuntu,
but, like, come on, right? Fun. So we’re just going
to do the same thing. We’re going to change
our Dockerfile. We’re going to add it up. We’re going to write
a commit message, and then we’re going to git push, and git push tags. Oh, it didn’t tag it. Darn. Eh, good enough for
government work, anyway. So, hopefully,
that’s all done now. So I’m going to walk
you through this output that we have from Voucher. I tried to make it as pretty
as possible for the demo, so here’s what happened. So you can see that
there is a build event, that we’ve inserted this new container, this new image with the sha. And what it’s doing, it’s actually going to go through and poll the Container Analysis API to see if it’s noticed
see if it’s going to run through any analysis. And we can see that it’s
found new discoveries, so it’s going to
wait for some, and it found that the
Vulnerability Scanner is going to go through
and do its analysis. That actually finished,
and no vulnerabilities were found in this container. At the same time, it’s going
to run the Provenance check. The Provenance
check is verifying that, for the image in the verifiable build record, its checksum actually matches the image that we want to deploy or
just making sure that the artifact has
maintained its checksums all through the pipeline. Snake Oil is the
name of the check that we made that
ensures that containers contain no vulnerabilities. So we can see that this
passed because Alpine is pretty cool with that. It actually worked as expected. I just quit Voucher here because
we’re done with it for now. But we see the second build
event, for our bad container, and we see it’s
doing the same thing. It’s waiting for analysis,
and, oh, look, it found 32 vulnerabilities
in this container. And just for fun, we
decided to surface one of the critical
vulnerabilities. So we have a critical
vulnerability in glibc 2.27, and there’s the CVE
that’s associated with it. So this would make sec
ops a little bit easier, when we have to determine
which vulnerability is actually present in an image. So we can see further, like
after all those checks have been done, we can see the
attestations are being created. So in the good container, we
have two attestations created– two signatures for Provenance
and for Snake Oil– and we see that the Provenance attestation was created for the bad container, but the Snake Oil one actually failed because the container contained vulnerabilities, and we don’t want any
vulnerabilities in the images that we deploy today. So earlier we talked
about this policy. So we’re going to take
a look at the policy that we have set up for the
clusters in our project. So this policy, for
binary authorization, contains some whitelisting,
because if we didn’t whitelist these three repositories,
my Kubernetes cluster wouldn’t start on GKE. So what I’ve just done
is just globbed every– every image inside of those GCR repositories has been whitelisted for the purposes of the demo. Please do not do this at home. OK, so I’m going to run
through the rest of this. We have some cluster
admission rules. So earlier, we said that we can
specify a policy on a project level, and on a
cluster level, so we’re going to try to demo
both of those today. So in the cluster
admission rules, I have, running in Canada, or northamerica-northeast1, a test cluster with an evaluation mode of require attestation. So what this means is that, for any image that deploys, binary authorization is going to match these attestations. And you can see that they’re
running in this project, we only want to
make sure that it’s built by a trusted identity,
and that the image actually was validated. So, moment of truth, we’re going
to try to deploy some images. We’re going to grab our good sha that passed, and we’re going to kubectl run good times, and we’re going to try to run the thing. So we’re going to do also– come on. Come on. Hey, look at that. Then we’re also going to grab the bad sha, the one that failed, and we’re going to kubectl run bad times. We’re going to deploy those.
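In kubectl terms, the two deploys are roughly the following; the digests are abbreviated placeholders, not the ones from the demo:

```
# Deploy the attested image and the rejected one side by side.
kubectl run good-times --image=gcr.io/example-project/good@sha256:0123ab...
kubectl run bad-times  --image=gcr.io/example-project/bad@sha256:4567cd...

# Watch which pods actually come up.
kubectl get pods
```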
So what do we all expect would happen? Well, I would certainly
expect, based on my policy, to have one good times
deployment, and no bad times deployment. So let’s check it out,
and see what happened. Starting to get some
pods, and sure enough, only one good times
pod is running, because both
attestations have been created based on the policy
that we created for our project, and in this specific case,
our cluster admission rules. So, cool, but what does
that mean for a developer? How can we get any sort
of actionable feedback? Where does this live? How did that fail? I’m a command line
person, so I’m going to get the replica sets
that have been created here. A deployment in Kubernetes
is like the highest level abstraction, which
creates replica sets, and further to
that, creates pods. So we’ve created the
deployment object and said, hey, cool, we want
this container to run. And a created replica
set said, hey, cool, we’re going to try to
start this pod for you. You can see the bad times
has one container desired, but none are ready. So we’re going to see
what happened with that. Boom. Talk about error messaging. For those who don’t know, Kubernetes controllers are– TL;DR– a for loop, so it’s just going to keep trying to do this thing over, and over, and over again. And every time that it tries, we have an error creating the pod. It’s forbidden because an image policy webhook backend denied one or more images, right? So every time that it gets deployed, there are some shenanigans and webhooks that fire around that help this do its thing. It was denied by the attestation authority. We cannot find any Snake Oil attestations for this image, such that it failed. Neat.
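The walkthrough above is plain kubectl. The names below match the hypothetical sketch earlier, and the quoted denial text is paraphrased from the talk:

```
# The deployment created a replica set; bad-times shows 1 desired, 0 ready.
kubectl get replicasets

# The replica set's events carry the denial reason, along the lines of
# "image policy webhook backend denied one or more images ... no attestations found".
kubectl describe replicaset -l run=bad-times
kubectl get events --field-selector reason=FailedCreate
```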
What about on all the other clusters? Just to prove that
the policy works, I created a yellow
cluster in here. And what I’m going to do is,
I’m going to try to run the bad times image– the same one
that was attested earlier with Voucher– and, fingers crossed, it should
deploy to my yellow cluster. Looks like it. Then we have one bad
times pod running. So this sort of proves that the
policy is working as intended. Again, I’m just going to bring
up that policy to show you what that looks like. Everything’s YAML these
days, such as the policy, and so it’s pretty easy to read. It should be easy to read
for, not just security people, but for other engineers and
operations people alike. It’s important that
technologies like this are human-readable and easy to digest, or else we can’t
implement them properly. So further into that, BinAuth is
not just a command line suite. It does come with a fancy GUI
now, thanks to our UX folks over here, who made this
the way it is today. So I put some human readable
descriptions on our signatures, or our attestors, just to say
what this is actually doing, such that you know if I
name something willy-nilly like Snake Oil, that doesn’t
mean anything to anybody, but having a human
readable description there, in a nice
pretty GUI, makes it really easy for
those other folks to understand, and
see what’s going on. So that’s it for my demo. Thank you very much. What I’m going to do is continue
presenting a couple of slides. So we hit beta today,
which is super exciting, and now our website’s
live, so you can see it at cloud.google.com/binary-authorization. We are going to be in the
demo booth on the ground floor by Moscone South later,
so if you want to come hangout with us, that would be awesome. And we are running some
code labs for this. Sandra, I think there
are code labs for this? JIANING GUO: Yeah, there is a code lab in Moscone. There’s a code lab
area in Moscone South where you can go and get– it has machines set up. You can get some
hands on experience working through a code lab. JONATHAN PULSIFER: So because
we’re a security talk, and we like our
security people, there’s a lot more in container
security happening here at Next. It’s been the number one
sort of topic these days, which is super exciting for me. But the one that
we did really want to highlight is, preventing
the next major security breach, down at the bottom
right corner, tomorrow at 1:15, and that is the talk about
that GCR Vulnerability Scanner. So if you want to learn more
about how that works, and talk to the folks who
made that happen, tomorrow afternoon is
where that’s going to be. JIANING GUO: All right. JONATHAN PULSIFER: Cool. So we’re open for questions. JIANING GUO: Thank
you very much. JONATHAN PULSIFER: Thank you. [APPLAUSE] If there are any questions,
there are mics, actually, on either side. So if you’d like to be recorded,
such that we can all hear you, too, that would be sick. We got one over
here on the left. Wait, left. Other left. Stage left. Is that a thing? Eeny, meeny, miney, mo. Rock, paper, scissors? AUDIENCE: Hi. Hey. I guess I’ll jump in. The binary authorization
output that you were getting on the screen
from the command line there. Is that available
in Stackdriver? Like, can we push that to our
Stackdriver Logging, as well? JIANING GUO: So
binary authorization logs all the deployment events
to Audit Log, which pushes to Stackdriver, I believe. So you should be able to
get that in Stackdriver without having to
configure anything. AUDIENCE: OK, thanks. JONATHAN PULSIFER: There are
two different pieces to this, too, right? You have the actual
Kubernetes replica set– JIANING GUO: Right. JONATHAN PULSIFER:
–error, as well, which we just found in
regular Kubernetes engine logs, or other event logs,
whereas binary authorization creates its own distinct audit
logs in these circumstances. So if, like, the
image was blocked, you’re going to have your
regular k8s events that are saying, oh, we can’t
start this pod because reasons. You can find that information
there, or in the audit log, as well. JIANING GUO: Yeah, the
audit log is– sorry. The audit log is
basically the same place where GKE surfaces its audit logs. So if you have that set up,
you should be all good. AUDIENCE: All right, I’ll try. So is this something
we can bring in, say, if we’re not using
GKE, and we’ve got our own Kubernetes set up
on some other cloud provider? Is this something we
can bring in to use? JIANING GUO: So we do have
an open source reference
love to hear your needs, and get your contribution. AUDIENCE: OK. JIANING GUO: But as is,
yes, it is something that you can set up to work
with the open source Kubernetes. It’s probably going to take a
little bit of configuration, because it’s not a
hosted solution. You have to– AUDIENCE: Of course. JIANING GUO: –piece
the things together. AUDIENCE: OK. JIANING GUO: But
yeah, we do have that. AUDIENCE: I thought so,
I just wanted to confirm. Thank you. JONATHAN PULSIFER:
github.com/grafeas/grafeas, github.com/grafeas/kritis
for both their reference implementations. JIANING GUO: Thank you. AUDIENCE: Hi. So I have two questions. The first one is, so there’s
no extra steps that you have to take, like at deploy time? For example, could you use
an orchestrator like Helm and still do this? JIANING GUO: So every deployment
that goes through GKE API– so you should just be
able to deploy, as usual. It shouldn’t be any– the deployment process for the
vast majority of deployments shouldn’t change, unless
you’re deploying something that does not meet policy. Yeah, there is some integration work that you have to do
with your CI/CD pipeline to produce a signature. AUDIENCE: OK. And then, how would
you apply the policy? Is it just a gcloud command? JIANING GUO: Yeah, we have a gcloud command and a UI that you can use to define the policy.
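For reference, a sketch of that flow with the gcloud surface as it existed around the beta; verify command names against current documentation:

```
# Dump the current policy, edit it, and push it back.
gcloud beta container binauthz policy export > policy.yaml
# ... edit policy.yaml ...
gcloud beta container binauthz policy import policy.yaml
```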
AUDIENCE: OK, thank you. AUDIENCE: I’m curious about where the private signing keys live, especially if you’re
using the open source stuff, and can’t use the integrated
Google Cloud cryptography? JIANING GUO: Yeah, so,
unfortunately, you’ll have to manage your
own private key for the open sourced
version, as well as for the hosted version for now. We are looking at
different integrations. I would love to hear
your requirements on where we should take this, and make it easier to use for you. JONATHAN PULSIFER: In the case
of the demo, it’s just PGP keys, and we have a tool called EJSON at Shopify that we use to encrypt them and keep them in the same source repository. GPG and a dream.
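For the hosted product, registering such a key against an attestor looked roughly like this at the time. The attestor and note names are placeholders, the referenced Container Analysis note generally has to exist first, and flags may have changed since:

```
# Generate a signing key and export its public half.
gpg --quick-generate-key "QA Signoff <qa@example.com>"
gpg --armor --export qa@example.com > qa-pub.asc

# Create an attestor backed by a Container Analysis note, then attach the key.
gcloud beta container binauthz attestors create qa-signoff \
    --attestation-authority-note=qa-signoff-note \
    --attestation-authority-note-project=example-project

gcloud beta container binauthz attestors public-keys add \
    --attestor=qa-signoff \
    --pgp-public-key-file=qa-pub.asc
```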
AUDIENCE: Are you planning to offer a hosted Voucher signing service, or is that something
that we have to implement? JIANING GUO: That’s
definitely something that we’re thinking of. Come talk to me
afterward and I’d like to get your
requirements on that. AUDIENCE: How does
this compare to– they call it container
signing, image signing– with a product like Twistlock? Is this basically
the same or are there differences that haven’t
necessarily been called out? JIANING GUO: No, we actually
integrate with TwistLock. So if you are a TwistLock
user, and you have your scanner set up, and you have your
scanner policy set up– so when your image passes
the TwistLock scanner, TwistLock would sign the
image, which can then be enforced at deploy time on GKE. There are other
types of products out there, maybe like
Docker Content Trust, things like that, that also involve image signing, but that’s more around software distribution protection versus provenance enforcement. So the difference there is, when software is distributed in an open internet environment, it may get altered as it’s moved from repository to repository, and signatures, such as Docker Content Trust, make sure that the software has not been altered– that it comes from a trusted source. But Binary Authorization takes a slightly different approach, focusing on largely collaborative environments, such as an enterprise organization, where you want to make sure
that the software you deploy has passed certain
tests that are hosted by you– that
you have control over. AUDIENCE: OK. JIANING GUO: Does that
answer the question? All right, great. No more questions. As John mentioned, we’re
going to be at the demo booth around 2 o’clock. So if you want to come and check it
out, and hang out with us, you’re welcome. And thank you all for coming. JONATHAN PULSIFER: Thanks. [MUSIC PLAYING]
