Edge TPU live demo: Coral Dev Board & Microcontrollers (TF Dev Summit ’19)

[MUSIC PLAYING]

PETE WARDEN: So thanks so much, Raziel. And I'm really excited to be here to talk about a new project that I think is pretty cool. So TensorFlow Lite for microcontrollers– what's that all about?

This all comes back to when I first joined Google, back in 2014. As you can imagine, there were a whole bunch of internal projects that I didn't know about as a member of the public, and some of them blew my mind. But one in particular came about when I spoke to Raziel for the first time. He was on the speech team at the time, working with Alex, who you just saw, and he explained that they used neural network models of only 13 kilobytes in size. At that time I only really had experience with image networks, and the very smallest of them was still multiple megabytes, so the idea of a 13-kilobyte model was just amazing to me.

What amazed me even more was when he told me why these models had to be so small. They needed to run on the DSPs and other embedded chips in smartphones, so that Android could listen out for wake words, like "Hey, Google," while the main CPU was powered off to save the battery. These microcontrollers often had only tens of kilobytes of RAM and flash storage, so they simply couldn't fit anything larger. They also couldn't rely on cloud connectivity, because the power drained just keeping a radio connection alive to send data over would have been prohibitive.

That conversation, and the continued work we did with the speech team, really struck me, because they had so much experience with all sorts of different approaches to speech. They'd spent a lot of time and a lot of energy experimenting, and even within the tough constraints of these embedded devices, neural networks were better than any of the traditional methods they used. So I was left wondering whether they'd be useful for other embedded sensor applications as well, and it left me really wanting to see if we could build support for these kinds of devices into TensorFlow itself, so that more people could get access. At the time, only people in the speech community really knew about the groundbreaking work being done, so I really wanted to share it a lot more widely.
So [LAUGHS] today, I'm pleased to announce that we are releasing the first experimental support for embedded platforms in TensorFlow Lite. And to show you what I mean, here is a demonstration board that I actually have in my pocket. This is a prototype of a development board built by SparkFun, and it has a Cortex-M4 processor with 384 kilobytes of RAM and a whole megabyte of flash storage. It was built by Ambiq to be extremely low power, drawing less than one milliwatt in a lot of cases, so it's able to run on a single coin battery like this for many days, potentially.

And I'm actually going to take my life in my hands now by trying a live demo. [LAUGHS] So let us see if this is actually– it's going to be extremely hard to see unless we dim the lights. There we go. What I'm going to do here is say a particular word and see if it actually lights up the little yellow light. You can see the blue LED flashing; that's just telling me that it's running [INAUDIBLE]. So if I try saying, yes. Yes. [LAUGHS] Yes. [LAUGHS] I knew I was taking my life into my hands here. [LAUGHTER] Yes. There we go. [LAUGHS] [APPLAUSE] So I'm going to quickly move that out of the spotlight. [LAUGHS] As you can see, it's still far from perfect, [LAUGHS] but it is managing to do a job of recognizing when I say the word, and not lighting up when there's unrelated conversation.

So why is this useful? Well first, this is running entirely locally on the embedded chip, so we don't need any internet connection, which makes it a good, useful first component of a voice interface system.
And the model itself isn't quite 13 kilobytes, but it is down to 20 kilobytes, so it only takes up 20 kilobytes of flash storage on this device. The footprint of the TensorFlow Lite code for microcontrollers is only another 25 kilobytes, and it only needs about 30 kilobytes of RAM to operate. So it's within the capabilities of a lot of different embedded devices.
grab the code yourself and build it yourself. And you can modify it. I’m showing you here on
this particular platform, but it actually works
on a whole bunch of different embedded chips. And we really want to
see lots more supported, so we’re keen to work
with the community on collaborating to get
more devices supported. You can also train
your own model. Just something that recognizes
yes isn’t all that useful. But the key thing is that
this comes with a [INAUDIBLE] that you can use to actually
train your own models. And it also comes
with a data set of 100,000 utterances
of about 20 common words that you use as
your training set. And that first link there,
the aiyprojects one, if you could actually
go to that link and contribute your voice
to the open data set, it should actually
increase the size and the quality of the data
set that we can actually make available. So that would be awesome. And you can actually
use the same approach to do a lot of different
audio recognition to recognize different
kinds of sounds, and even start to use it for
similar signal processing problems, like things like
predictive maintenance. So how can you try
So how can you try this out for yourself? If you're in the audience here, at the end of today you will find that you get a gift box, and you actually have one of these in there. [APPLAUSE] [LAUGHS] All you should need to do is remove the little tab between the battery, and it should automatically boot up, pre-flashed, with this yes example. [LAUGHTER] So you can try it out for yourself, and let me know how it goes. Just say yes to TensorFlow Lite is the– [LAUGHTER] We also include all the cables, so you should be able to program it yourself through the serial port.

Now, these are the first 700 boards ever built, so there is a wiring issue: it will drain the battery, and it will last more like hours than days. But that will, knock on wood, be fixed in the final product that's shipping, and you should be able to develop with these in the exact same way that you will with the final shipping product. And if you're watching at home, you can pre-order one of these from SparkFun right now for, I think, $15. You'll also find lots of other instructions for other platforms in the documentation. We are trying to support as many of the modern microcontrollers that people are using as possible, and we welcome collaboration with everybody across the community to help unlock all of the creativity that I know is out there. I'm really hoping that I'm going to be spending a lot of my time over the next few months reviewing pull requests.

And finally, this is my first hardware project, so I needed a lot of help from a lot of people to bring this prototype together, including the TF Lite team, especially Raziel, Rocky, Dan, Tim, and Andy. Alister, Nathan, Owen, and Jim at SparkFun were lifesavers; we literally got these in our hands in the middle of the day yesterday, [LAUGHTER] so the fact that they managed to pull it together is a massive tribute. Also Scott, Steve, Arpit, and Andre at Ambiq, who actually designed this processor and helped us get the software going. And a lot of people at Arm as well, including a big shout-out to Neil and Zach.

So this is still a very early experiment, but I really can't wait to see what people build with this. And one final note: I will be around to talk about MCUs with anybody who's interested at the breakout session on day two. So I'm really looking forward to chatting with everyone. Thank you. [APPLAUSE]

RAZIEL ALVAREZ: Thanks, Pete. We really hope that you try this. I mean, it's the early stages, but you can see the huge effort that went into making this happen, and we think it will be really impactful for everybody. Now before we go again– and I promise this is the last thing you hear from me– I want to welcome June, who's going to talk about how, by using TensorFlow Lite with the Edge TPU Delegate, we are able to train these teachable machines.
[MUSIC PLAYING] [APPLAUSE]

JUNE TATE-GANS: Thanks, Raziel. Hi, my name is June Tate-Gans. I'm one of the lead software engineers inside of Google's new Coral Group, and I've been asked to give a talk about the Edge TPU-based teachable machine demo.

So first, I should tell you what Coral is. Coral is a platform for products with on-device machine learning using TensorFlow and TF Lite. Our first two products are a single-board computer and a USB stick.

So what is the Edge TPU? It's a Google-designed ASIC that accelerates inference directly on the device that it's embedded in. It's very fast, it localizes data to the edge rather than the cloud, and it doesn't require a network connection to run. This allows for a whole new range of applications of machine learning.
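To sketch how that plugs into TensorFlow Lite: an Edge TPU-compiled model runs through an ordinary TFLite interpreter, with the Edge TPU registered as a custom op and the opened device attached as an external context. The following is loosely based on the libedgetpu C++ API; exact headers and calls may differ between releases, and the file name is illustrative.

```cpp
// Illustrative sketch of dispatching a TFLite model to the Edge TPU,
// loosely following the libedgetpu C++ API; details may differ by release.
#include <memory>

#include "edgetpu.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

std::unique_ptr<tflite::Interpreter> BuildEdgeTpuInterpreter(
    const tflite::FlatBufferModel& model, edgetpu::EdgeTpuContext* context) {
  // Register the Edge TPU custom op so the compiled parts of the graph
  // run on the accelerator instead of the CPU.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());

  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(model, resolver)(&interpreter);

  // Bind the opened Edge TPU device to this interpreter.
  interpreter->SetExternalContext(kTfLiteEdgeTpuContext, context);
  interpreter->AllocateTensors();
  return interpreter;
}

// Usage (names illustrative):
//   auto model = tflite::FlatBufferModel::BuildFromFile("model_edgetpu.tflite");
//   auto tpu = edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
//   auto interpreter = BuildEdgeTpuInterpreter(*model, tpu.get());
```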
So the first product we built is the Coral Dev Board. This is a single-board computer with a removable SOM. It runs Linux and Android, and the SOM itself has a gigabyte of RAM, a quad-core A53 SoC, Wi-Fi and Bluetooth, and of course the Edge TPU. And the second is our Coral accelerator board. This board is just the Edge TPU connected via USB-C to whatever development system you need, be it a Raspberry Pi or a Linux workstation.

Now, this teachable machine shows off a form of edge training. Traditionally, there are three ways to do edge training: k-nearest neighbors, weight imprinting, and last-layer retraining. But for this demo, we're actually using the k-nearest neighbors approach.
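To sketch the idea (this is not the actual demo code, just the shape of the k-nearest neighbors approach): the Edge TPU model acts as a fixed feature extractor, each button press stores the current frame's embedding under that button's label, and classification is a nearest-neighbor search over the stored embeddings. All names here are illustrative.

```cpp
// Illustrative k-nearest-neighbors classifier over embedding vectors,
// the approach behind the teachable machine demo (k = 1 for simplicity).
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

struct Example {
  std::vector<float> embedding;  // output of the on-TPU embedding model
  int label;                     // which button was pressed
};

static float SquaredDistance(const std::vector<float>& a,
                             const std::vector<float>& b) {
  float sum = 0.0f;
  for (size_t i = 0; i < a.size(); ++i) {
    const float d = a[i] - b[i];
    sum += d * d;
  }
  return sum;
}

// "Training" is just remembering a labeled embedding.
void AddExample(std::vector<Example>& store, std::vector<float> embedding,
                int label) {
  store.push_back({std::move(embedding), label});
}

// Classification returns the label of the closest stored example.
int Classify(const std::vector<Example>& store,
             const std::vector<float>& query) {
  int best_label = -1;
  float best = std::numeric_limits<float>::max();
  for (const Example& e : store) {
    const float d = SquaredDistance(e.embedding, query);
    if (d < best) {
      best = d;
      best_label = e.label;
    }
  }
  return best_label;
}
```

Because nothing is retrained, adding an example is effectively instant, which is why the buttons in the demo respond immediately.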
So in this animated GIF, you can see that the TPU enables very high classification rates. The frame rate you see here is actually the rate at which the TPU is classifying the images that I'm showing it. In this case, you can see that we're getting about 30 frames per second. It's essentially real-time classification.

And with that, I actually have one of our teachable machine demos here. So if we can turn this on. There we go. OK. On this board, we have our Edge TPU development board assembled with a camera and a series of buttons. Each button corresponds to a class and lights up when the model identifies an object from the camera. First, we have to plug this in. Now, every time I take a picture by pressing one of these buttons, it associates that picture with that particular class, and because it's running inference on the Edge TPU, it lights up immediately.

So once it's finished booting, the first thing I have to do is train it on the background. I'll press this blue button, and you can see it immediately turns on. This is because, again, it's doing inference in real time. Now, if I train one of the other buttons using something like a tangerine, press it a few times. OK, so now you can see it can classify between this tangerine and the background. And further, I can even grab other objects, such as this TF Lite sticker– it looks very similar, right? It's the same color. Let's see. What was the class I used? Yellow, OK. Sorry. [LAUGHTER] So now, even though it's a similar color, it can still discern the TensorFlow Lite logo from the tangerine. Oh, sorry. Tangerine, there we go. [LAUGHTER] [APPLAUSE]

So you can imagine, in a manufacturing context, your operators, with no knowledge of machine learning or training in machine learning, can adapt your system easily and quickly using this exact technique.

So that's about it for the demo. But before I go, I should grab the clicker, and I should also say: we're also giving away some Edge TPU accelerators. For those of you here today, we'll have one available for you as well. And for those of you on the livestream, you can purchase one at coral.withgoogle.com. [APPLAUSE] OK.

[MUSIC PLAYING]

29 thoughts to “Edge TPU live demo: Coral Dev Board & Microcontrollers (TF Dev Summit ’19)”

  1. Thanks much for this video. I help design scientific instruments at my job, so I was accepted into the beta program for the Coral devices. Surprisingly, my employer is reluctant to let me spend time creating a demo device with a Coral TPU. They just don't see a strong use case, mostly because training on the cloud is so easy. Maybe June's demo will help me convince them that the ability to train offline is worth investigating, even if we're not sure yet what we'd do with it.

  2. Hello! Great video, thank you for uploading! Those boards are super cool.

    It looks like the link to the boards might be damaged. I tried to preorder one.

  3. Edge TPU real-time image identification is pretty impressive! Wonder if two edge devices can work together and scale? Thanks for unleashing technology. 💔👾

  4. I'm looking forward to future releases of this product; it has incredible potential. But right now, failing to recognize perhaps the simplest word is really not acceptable.

  5. I've put a lot of effort into this. Take a look.

    Hi everyone, I'm a Software Engineering student graduating in Italy, and I love Machine Learning.

    How many times, while trying to approach Machine Learning, have you felt baffled and disoriented, without a real "path" to follow that would give you deep knowledge and the ability to apply it?

    This field is incredibly exciting, but because it moves fast and is still "new," it can be confusing to understand what each thing means and to find coherent naming across resources and tutorials.

    I recently landed my first internship for a Data Science position in a shiny ML startup. My boss asked me if it was possible to create a study path for me and other newcomers, and I've put a lot of effort into sharing my 4-5 years of walking around the internet collecting sources, projects, awesome tools, tutorials, links, and best practices in the ML field, and organizing them in a useful way.

    You will get your hands dirty and learn theory and practice in parallel (which is the only effective way to learn).

    The frameworks I've chosen are Scikit-Learn for generic ML tasks and TensorFlow for Deep Learning, and I'll update the document weekly.

    No prior knowledge is required, just time and will.

    Feel free to improve it and share it with everyone.

    Inb4: sorry for my English, it's not my native language 🙂

    https://github.com/clone95/Machine-Learning-Study-Path

  6. Whilst the core ideas behind this are on the coral.withgoogle.com website, there's also a cool tutorial on MagPi showing how to make the same thing from an Accelerator (with buttons, etc.).
    I've also posted a video of one of the demos from the Getting Started page (using an RPi) here: https://youtu.be/6uQlCiAGWzc
    They're pretty cool and surprisingly small!

  7. Someone that works for Google just said "Hey Google" in a video. My entire house responded. Shouldn't there be internal rules against that??

  8. Awesome! Edge training is very useful for prosthetics. Will this feature be available outside Coral, e.g., in Android?

  9. This is a step in the right direction. People like me have been struggling with a lack of good options for actually applying ML to real-world computer vision, in real time. Still a little underpowered; can't wait for future generations of these types of devices.

  10. What documentation about other microcontrollers and platforms is Pete speaking about at 8:40? Could anyone share the link?

  11. I think this should be able to run on any M4. STM has good dev boards, and they also have a dev environment now for ML applications.

  12. I'm very excited about TF 2.0 and TF Lite going to microcontrollers. But I have to say that the hardware demos in the video were a little bit underwhelming. Understandable, it's a work in progress, but there are already boards out there (for example, the Sipeed Maix Bit) that have a similar price and footprint but blow the SparkFun board out of the water on computing power and memory. As for the Coral Dev Board… I really want to test it and see how it performs against the Jetson Nano.

  13. Aluminum Case for the Coral – https://www.kksb-cases.us/collections/coral/products/kksb-google-coral-case-aluminum-grey-black
