What’s New in Google’s IoT Platform? Ubiquitous Computing at Google (Google I/O ’17)

[MUSIC PLAYING] WAYNE PIEKARSKI: Hi, everyone. Good morning. How’s everyone doing? I hope you enjoyed
the developer keynote and the earlier keynote this morning. It’s really exciting
to see you all here. So my name’s Wayne Piekarski,
and I’m a developer advocate here at Google. And today, I’m really
excited to talk to you about all the cool new
things we’ve been doing on the Internet of Things. And this enables what we refer
to as ubiquitous computing. So there have been a lot of new and interesting products and tools of interest to developers that I’m going to be highlighting in this talk. And some of them are being
released at I/O right now. So there’s been a whole
bunch of new announcements during the keynotes. Yesterday we had
Cloud IoT launching, so there’s all
kinds of new things. So we’re going to do sort
of a whirlwind tour of all of Google’s products that are
related to this so that you can find out more of what
you can do for yourself. So importantly,
here at I/O, we have a whole bunch of really
interesting engineers and PMs and leaders from
all the different teams that work on these products that
I’m going to be talking about. So make sure you take
advantage of that, and go and see them in the
sandboxes and so forth. So let’s get started
with our little tour. So as we know, computing
has evolved over the years. So we started from
mainframes, and then we moved to desktops and
laptops, and then phones. And so every decade or
so, we shrink electronics. And now we’re at wearables. The problem is that if we
get any smaller than what’s on a wearable, we run into the
problem that humans can’t interact with them
anymore, because we can’t make something
that’s smaller than this because our fingers
can’t touch it. So there’s kind of a
limit to what we can do with shrinking electronics. And what’s happening now
is that computers are being embedded into everyday devices. We use different
devices depending on what we’re trying to do. So we use a laptop to
write documents or emails, a TV for videos, a phone
for things when we’re away from home, and a wearable
when we’re running, and something when
we’re driving. So there are all these different computing experiences that are suitable for what
you’re doing at the time. And now we’re at
a point where we have thermostats and light
bulbs and basic electronics that are being internet enabled. But we can do a
lot more than that, and we’re moving to a
world where computing is going to be everywhere,
assisting users with their day and solving problems that we
didn’t even realize existed. And we’re going to
look back on this and go, wow, this really
changed everything. So just as phones were bigger than PCs, the Internet of Things
bigger than phones, and it’s going to be incredible. So we use the term “ubiquitous
computing” to describe this. It’s technology that’s
accessible to the user wherever they are
and whenever they need it, in whatever they’re trying to do. And so today we’re going to
talk about what platforms Google provides and what
developer tools Google provides to help support this. So in the past, we trained
users to install apps onto their phones
that had icons. So you would click
an icon for each app. And now we have devices that
don’t even have screens, so we need to think
differently about computing and what it means. We need to take the user’s
context into consideration. So what time is it? What’s the user’s location? What are they doing? And we need to make good
decisions for our products based on those kinds of things. And you need to think of
your platform as a view into your service now. So it’s no longer just a
single app or a single website. Your service needs
to be ubiquitous, it needs to be
everywhere, and it needs to work on all platforms. So we want a single,
seamless experience to keep our users happy. So with that in
mind, a little intro, let’s get started
talking about what consumer products are provided
that consumers have right now. And we’ll start
talking with Android. So with Android, as you
saw in the keynote today, they announced there are, like, 2 billion devices out there that are running Android. So that’s incredible. And we have Android
phones, we’ve got tablets, we’ve got wearables,
we’ve got TV. And they all involve
the same infrastructure, so they will work
on Google Play. You build your apps
using Android Studio, and the apps run on the device. So we do things like that
right now, but many of you may have only written
a phone up before. So one thing you
might want to consider is also extending your app
to run on Wear or Android TV. So last year, I gave
a talk at Google I/O that talked about how easy it is
to bring your apps to Android TV. There are a few tweaks you’ve got to make in your manifest, you change the UI a
little bit, and then it runs on those
devices as well. And so these are
new opportunities to have different kinds of users
looking at your application, because there are times when
looking at something on a TV is better, and sometimes
when a wearable is better, and you really need to take
advantage of those things. Android in general does all
kinds of interesting things. They do apps, games, they
can do sensor processing, 3D graphics, audio. So there are all kinds of really interesting capabilities that we’re going to get to talking about later on. I’m just going to highlight
Android Auto as well. Android Auto is a
little bit different in that it’s a templated user interface where you can build messaging and media apps. So if you’re a company that
makes media or messaging apps, you simply add some extensions
to your Android app, and then it will work when
the user plugs their phone into a vehicle, or
when they install it into one of those cars that you
can see out in the demo area. So we have this big
ecosystem of Android devices that solve different needs. Now, there’s no one device
that’s perfect for everything. Some are better than
others, depending on the context that
the user is in, but they all solve different
problems depending on the time and where the user is, whether at home or in their living room. So that’s Android. Now, you may have also heard
at the keynote about Google Home and the Google Assistant. And so these are also
platforms that we’re going to cover today
and explain how that fits into our vision
of ubiquitous computing. We announced today that the
Assistant is now available. It was available on
Google Home, but now we’re also announcing that
it’s going to be available on a lot of
Android phones out there. And also it’s coming soon
on Wear, TV, and Auto. So we’re bringing
the Assistant to all of these different
platforms, and the Assistant allows the user to have
a conversation with it to get things done. And so the basic
Google Assistant is you talking to
Google, but there’s a third party library
that allows you to build your own extensions as well. And we’ll get to
talking about that soon. The Google Assistant
is interesting because it supports both text
and voice-based audio inputs and outputs. And once again, that
depends on the context of what the user is doing. If they’re in a movie theater,
they can type on their phone. If they’re driving,
they can speak. If they’re standing in the
kitchen with their hands dirty, they can speak as well. So you use a different
input and output modality depending on what
the user is doing. As we can see,
context is important, and that’s going to help us
with all these platforms. So these were the
consumer platforms. Now we’re going to talk about
the possibilities for you as a developer, and how
you can extend them. Consumers have these
devices already. There’s billions of phones
and TVs and wearables and so forth out there. You want to be able to
take advantage of them, and how can you do that? Because there’s a
bunch of new options that you’re going to
find very interesting. So the first thing
that’s really cool is that we announce
recently the developer preview for Android Things. And so Android Things
is Google’s new platform to support the development
of IoT devices, and it’s based on Android. And one of the really
interesting things about Android Things
is the use of what we call a SOM, which you can
see on the slides over there. Now, a SOM is a
little tiny board that’s got a CPU,
memory, wireless, and all of the important
things that make the device do what it does. And they’re all on a
single common board with a common set of software. Now, these SOMs are
made in bulk quantities so that you can buy
them in small quantities and pay reasonable
prices for them. And you plug them into your
own custom developer board to test them, but you can build
your own custom development boards depending on your needs
for the kind of application you’re building,
because you don’t want to have to embed a full
developer board into a product that you’re selling. And so the SOM contains
hard-to-design components. When you’re doing
electronics manufacture, you start to realize that there
are high-speed electronics and low-speed electronics. High-speed electronics require
really skilled design people who really know how to build
things that run at 1 gigahertz, and there’s a lot
of design rules you’ve got to take into account. Whereas low-speed
electronics are the kind of things that
we can build at home. So I’ve actually
got a talk tomorrow where we’re going
to go through how to go about making your own
boards for an Android Things device that you can
solder at home, basically. So we’re going to talk
about that tomorrow. But the nice thing
about SOMs is that it’s our initiative to democratize
hardware development. And we’re making it
fun for prototyping so anyone can take an
Android Things board and build something, but
there is a road to production. You don’t have to switch
hardware and switch your software platform. You can use the same
hardware and software for both your prototype
and the commercial product that you can sell in quantities. And that’s one of the really
important things about Android Things and our
architecture based on SOMs. So the way you get started
with Android Things is you start with one of
these developer boards. So I’ve got some samples
on the slide here, and you can see that the
developer board is quite large. The SOM is plugged into it. The SOM is actually the
really small component on the bottom left, and it’s on
the top left of the other one as well. So the SOM just plugs in,
but the developer board has breakout pins
for every interface you would need as a hacker
or a maker making prototypes. So there’s USB ports,
audio, GPIO pins, I2C buses, HDMI outputs, things like that. Now, you build
your prototype, you put a breadboard next to it,
you can run jumper wires across. And then when you finish, you
can then take that custom board and shrink it down. But it’s nice because you can
test both and use the same SOM. So in tomorrow’s
talk, we’re actually going to go through an example. So my colleague Dave actually
built this LED candle project. So we wanted to test
how hard it would be to make your own boards
and then solder them in our own workshop. So you can see here
on the left that Dave built a prototype which had
a large board with the LEDs and resistors and wires on
it plugged into the Edison debugging board,
and we tested it. And once we were happy with
it, then we used PCB software, we designed a
schematic, and then we had the board fabricated very cheaply. There are lots of
places nowadays that will fab boards for you
really quickly and easily. And so you can see
on the right that we have our LED prototype
sitting in its plastic shell, and then there’s a lid
we put on top of it. And the board we built is
almost the size of the SOM. And that’s an example of how
you can build a cheap board, build it in with a
SOM, and then you can embed it into a
component and sell it. So I was quite
excited that we were able to hand solder this
and build these prototypes. So it helps to shorten the turnaround time for prototyping,
because you don’t want to have to wait a month every
time you want to iterate on something,
because electronics can be a little hard to
do compared to software. So that’s one of the nice things about Android Things. And we have a bunch
of talks at Google I/O specifically about
Android Things, so definitely attend those
if you’re interested. But Android Things uses
standard Android APIs to make IoT devices. It’s a variant of Android in
the same way that Wear is, the same way that TV is, and
the same way that tablets and phones are
different variants. And so they will use the
same Android code base to implement their features,
and the same APIs are available. Use Android Studio
to write your code. You can use your existing
code-based development tools. And you can hire any
Android programmer to build IoT software for
you now, whereas in the past, you would have to
hire someone who’s familiar with microcontrollers
and assembly language and low-level hardware. It’s now a lot
easier, and that’s what we’re really
trying to do here. The other thing is that
it’s a real Android device. So it includes
Google Play services with all of its features. So we’ve built a version
of Google Play services that’s reduced in size so it
uses less memory because it doesn’t have as many features
as a phone, for example. You’re not making phone calls. But we have a special version
of Google Play services that offers things like the Fused Location Provider and Firebase support, and that’s available to
you on your device as well. It really enables a lot of
interesting functionality that we’ll get into soon. So as we said before, it’s
based on a SOM architecture, but it’s more than just the
hardware modules itself. It’s also the software. So Google is providing managed
software updates for those SOMs in the form of what’s called
a BSP, or a Board Support Package. So Google provides the
Linux kernel, the drivers, and the libraries that
are necessary to run Android on that SOM. And Google pushes updates
to those devices itself. So say you’re a dog
feeder manufacturer making little prototypes. You don’t want to have to worry
about security and updates and looking for kernel flaws. You simply get your
updates from Google, you update your software
module that runs on the device, and then you’re done. So the same Android
security team that makes patches
for Android will be providing those same patches
to Android Things devices as well. So every month or whatever
the release cycle is, we’ll make new binary
updates for our SOM modules and push them out
to public devices. And there will be
a developer console that you’ll use
to maintain those and keep your
devices up to date, and also monitor how your
fleet of IoT devices is doing. So we’re providing a lot of
really nice infrastructure that helps make
automatic updates and keeps devices secure,
because security with IoT devices is very important. And as we’re seeing,
lots of devices out there that are not built correctly are having problems, so we’re trying to do our bit to make sure there’s an ecosystem
of secure devices out there that are
based on Android. So one of the things I find
really exciting about Android Things is it’s not just about
light bulbs and thermostats, but it’s about all
the really crazy ideas that we’re going to be able
to build in the future. These devices are very powerful. They’re not microcontrollers. They’re very high-end
processors, similar to what you had in a phone two
or three years ago. And so there’s going to
be a lot of possibilities for computer vision,
for audio processing, and machine learning. And the computing
power in these devices keeps increasing every
year, and the costs keep decreasing as well. So we’re going to see a lot
of really interesting things, and we’re enabling you
to build those devices. And so one of the really
nice examples is TensorFlow. So we’ve ported
TensorFlow to Android, and therefore it also works
on Android Things as well. It works on both
ARM and x86 chips, so you can support any kind of
Android Things device that’s out there. And we’ve built some
samples, so there’s a URL up there to a GitHub project
if you want to have a look. We did a blog post
about this recently where we took a dog
recognition sample and we made it work
on Android Things. And so it’s really
cool because you can take photos of
dogs and people, and it’ll classify them for you. And TensorFlow has already provided a model called the Google Inception Model. This is a model that Google has trained using millions of images, and you can use it to recognize images.
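As a rough sketch of what the classification call looks like with the TensorFlow Android inference library of that era — the model file, tensor names, and sizes below are placeholders that depend on the graph you actually bundle:

```java
import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class DogClassifier {
    // These values are placeholders; the I/O sample ships its own graph and labels.
    private static final String MODEL_FILE = "file:///android_asset/tensorflow_inception_graph.pb";
    private static final String INPUT_NODE = "input";
    private static final String OUTPUT_NODE = "output";
    private static final int INPUT_SIZE = 224;   // Inception-style models expect 224x224 RGB
    private static final int NUM_CLASSES = 1008;

    private final TensorFlowInferenceInterface inference;

    public DogClassifier(AssetManager assets) {
        inference = new TensorFlowInferenceInterface(assets, MODEL_FILE);
    }

    /** Runs one forward pass over a preprocessed image and returns the raw class scores. */
    public float[] classify(float[] normalizedPixels) {
        float[] scores = new float[NUM_CLASSES];
        inference.feed(INPUT_NODE, normalizedPixels, 1, INPUT_SIZE, INPUT_SIZE, 3);
        inference.run(new String[] { OUTPUT_NODE });
        inference.fetch(OUTPUT_NODE, scores);
        // Look up the highest score in your labels file to get the breed name.
        return scores;
    }
}
```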
And so in the example, you could show a photo of a dog, and it tells you what it is. And so in this example
here, I’ve got my dog, Mila, and you point the camera
at her, and then it goes, wow, she’s a
Staffordshire Terrier. So it’s really cool
that it actually can identify the breed of
the dog based on the photo. And so I think my next project– I like doing IoT projects
at home for my dog, so I built a pet
feeder last year. But I think my next
one is going to be some kind of camera-based
thing that detects when they’re on the couch and then
scares them away from it or something like that. So I’ve got to work on that. If someone wants to build
that, I’ll definitely buy it. So TensorFlow is
really cool, and it offers all kinds of
amazing possibilities of doing machine vision, and it
makes it really easy for you. And by using something
like Google Inception, you don’t even need to
build a training set for it. You can just get
started straight away. So check out the
sample for that. So we talked about the
Google Assistant before. So there is a third
party API that we’ve provided called Actions
on Google, which is what you can use to extend
the Google Assistant to support conversations with your users. And we call those
Assistant Apps that you can build as a developer. And so we provide
something called a conversation API that takes
the Google Home’s speech recognition. So when the user
speaks something, it recognizes the text,
generates a string, and it then passes it
to your cloud service, and it gives it to you so
that you can do something with that string. And then you can
generate a reply, and then the reply will
be spoken back to the user by the assistant. So it’s just strings
going back and forth. So it makes it really easy to
integrate your own actions. And there’s all kinds
of demos out here that you can see in
the sandbox areas to try them out
for yourself if you haven’t played with it before. And we have codelabs
that show you how to build your own
actions as well, which is quite interesting. So in the example
on the slide here, we have actually ported
the Google I/O app to the Google Assistant. So you can actually ask
it questions like, tell me what kind of topics
you want to know, or when’s the next Android talk. So you can ask these questions
and have a conversation or chat with it to find out
what you want to know. And so we really
want to encourage people to build
conversations that are not “press 1 for sales, and
press 2 for support,” but they’re much more intuitive. And we have a whole
bunch of talks here about how to build
really good voice user interfaces, or VUIs. And we have some of the best
experts in the field giving these presentations to give
you key insights as to how to build really good VUIs. And I remember going to one of
the talks, and I was like, wow, I didn’t realize that
there was so much thought that’s been put into this. So there’s a lot
of research that’s been going into making really
good voice conversations, and you can build them now, too. Now, the one thing about
working with Actions on Google is that the basic API is
strings going back and forth. But trying to build those
complex conversations yourself is not easy. If you had to write code
that actually processed those strings
separately, you would have a hard time doing it. So we’ve actually got
a tool called api.ai that we use to help build really
nice conversation actions that run on the Google Assistant. So let’s talk a little
bit about api.ai. Api.ai has this really
simple user interface. You don’t even need to be
a programmer to use it. And what you do is you give
it example sentences of things that you want the
system to process. So there’s a video I
made a couple of months ago where we built this
personal chef example. And so you could say things
to it like, I’m hungry, and I want a hot soup
because it’s cold outside, and chicken sounds good to me,
and I’d like a cold dessert, and I want it right now. You could give these
really complex sentences, and api.ai was able to extract out the key phrases of those sentences and pass them to your webhook so that you could do something with them.
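Here’s a rough sketch of what such a webhook can look like, assuming the api.ai v1 webhook format of the time: api.ai POSTs you the parsed request, and you return JSON with the text for the Assistant to speak back. The parameter names come from a hypothetical personal-chef agent, and org.json is just one JSON library you could use.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import org.json.JSONObject;   // any JSON library works; org.json is shown here
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class PersonalChefWebhook {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/webhook", PersonalChefWebhook::handle);
        server.start();
    }

    static void handle(HttpExchange exchange) throws IOException {
        // api.ai POSTs the parsed request; "result.parameters" holds the extracted entities.
        String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
        JSONObject result = new JSONObject(body).getJSONObject("result");
        JSONObject params = result.getJSONObject("parameters");
        // Parameter names like "temperature" and "ingredient" are whatever you defined in api.ai.
        String temperature = params.optString("temperature", "hot");
        String ingredient = params.optString("ingredient", "chicken");

        String reply = "How about a " + temperature + " " + ingredient + " soup?";
        JSONObject response = new JSONObject()
                .put("speech", reply)        // spoken back to the user
                .put("displayText", reply);  // shown on devices with a screen

        byte[] out = response.toString().getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "application/json");
        exchange.sendResponseHeaders(200, out.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(out);
        }
    }
}
```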
Trying to write a regular expression to process something like this is really
complicated, and api.ai uses machine learning techniques
that you program by typing in these example sentences. It then learns from you. And then as users
use your action, it then learns
from that as well. And if there are mistakes,
you can correct it, and it trains this
model to understand the different phrases that
people are going to give you. So it’s a really neat way
of building conversations, and it makes it really easy. And so I’ve had plenty of
people on our design team who’ve never written
code in their life. And they come into
api.ai, and they start building their
own assistant app, and it’s really cool
what you can do with it. So we’ve got a bunch
of great samples. If you go to
developers.google.com/actions, we have a bunch of great
samples on how to do that. And so if you’re a beginner,
you should start with that. And so finally, we
mentioned in the keynote today that the Assistant
runs on phones, and it runs on the
Google Home device, and we have other
platforms coming soon. But the key thing
to remember, I’ve had a few people ask
me this, is like, well, if it runs on the phone, how
do I build an APK around that? And the answer is that all of your Actions on Google are hosted in the cloud somewhere. You can use whatever cloud provider you want, but the action
lives in the cloud somewhere because it’s available
to every device– Google Home, phones, portable
devices, and so forth. Not every device has the
ability to install apps onto the device itself. So have a look at the
documentation to learn more, but it’s definitely something to consider if you’re interested in building a conversational interaction with your service, because
people are driving, they’re walking,
they’re doing things. They’re not always at a
phone or at a computer, and this is definitely a way
of enabling more capabilities for your service or platform. So one thing we’re announcing
at Google I/O this year is a new contest for
Actions on Google. And so we want to see what kind
of amazing, cool Assistant Apps that people like you build. And so we’re launching
this contest here at I/O to encourage that. So we’ve got over
20 different prizes, and one of the biggest prizes is
a trip to Google I/O next year. So you don’t have to wait
in line for a ticket, and that’s one of the prizes. So we really want
to encourage you all to go out there and
build Assistant actions, submit them, try them out,
and see how that goes. And there’s a link,
g.co/actionschallenge, that you can use to
learn more about this. So next, we also have announced
the Google Assistant SDK, and this allows you
to embed the Google Assistant into custom devices
that you build yourself. So you’re no longer limited to
just a phone or a Google Home. You can now make your
own crazy device. And so if you look outside
in the sandbox area, we have Deeplocal,
who’s a partner of ours. They made a drink-making
machine that you can walk up to, press a button,
tell it what you want, and you can have a
conversation with it. You can say things like, oh,
I’m feeling a bit thirsty, I really feel like a
whatever the drink is. And with the power of api.ai, it helps to understand what you’re trying to ask for and what you want. And so it’s really
amazing what they were able to do by embedding
the Google Assistant SDK into their device. And they built it
really quickly, too. So recently, we announced
in the MagPi magazine a kit called the AIY
Projects kit, which is that cardboard box that’s
shown at the top there. So it’s a really
nice kit because it’s a cardboard box that
has a button, a speaker, a microphone, and it all
wraps around a Raspberry Pi. And we gave these kits away to
tens of thousands of people, and every one of them has the
Google Assistant built into it. So you push the
button, talk to it, and you can do stuff with it. So I’m really curious
to see what people are going to do with
this kind of stuff because the possibilities
are endless. You can talk to your
cocktail machine or whatever. So it’s quite fun. So how can you go about
embedding the Google Assistant SDK to work in your project? So we have a dedicated
session at I/O about embedding the
Google Assistant in. It has a lot of detail. The engineers on
the team are going to go through code samples
in a lot of detail. They’re going to have demos. But I’m just going to give
a quick little summary here. So if you look in the
box of the AIY Projects kit, it just has a microphone,
a speaker, a few cables, and some buttons. So it’s very simple. And we run on any Linux or
Android Things-based device. So we have support
on both platforms. And the nice thing
is that the software, the API it uses
to talk to Google, is based on something called
gRPC, which is an open source, portable library that’s
designed for doing this kind of interaction. And within like, a
week of launching this, we had all these people popping
up with contributed projects, porting it to all kinds
of different platforms. So I was quite excited to see
the huge interest in this, and the porting to many
different platforms is really easy. And so you can run it
on pretty much anything. The next thing is that
you can use a button press to activate it,
because you don’t always want to use hotword. But we do provide a library
that adds hotword support too so that you can trigger it
just like we do on our consumer Google Home devices. So you can say, “Hey Google,” talk to it, and it does the
recognition, and then it does everything just like
you do on a Google Home. So basically, you show up
with your hardware, speaker, and microphone, and then
you hook it all together. And then you use
Actions on Google to implement the rest
of your functionality. And the Actions on
Google that you build work not only on the device you’ve made, but on any device that the user is logged into. So there are a lot of
exciting possibilities. So as you saw from
the cocktail machine, that’s a pretty crazy idea. And I’m very interested to
see what other crazy ideas people come up with, and
see it online, and so forth. So that was a little tour of
some of the developer platform extensions that we provide
for Android, the Google Assistant, Actions on Google,
and the Assistant SDK. But you might want
to do even more than that as part of building
a ubiquitous computing experience for your users. So you might want to
connect up your devices and coordinate
them, and now we’re going to talk about how
we go about doing that. So today at Google I/O, we
announced the Smart Home initiative, which is our
support for home automation. So we’ve supported consumer
devices for a while. But now we’re opening this
up to third party developers so that anyone can
build a device that integrates with our Smart Home. So if you go over to
the tent over there, you can see a variety
of demos that we’ve got where you can play with
commercial devices that are available, but now
you can build your own. And you can integrate them into
what we call the Home Graph. So the Home Graph is
a diagram of sorts that knows the state
of all of the devices that are in your house. Your light bulbs, your
thermostats, your doors. It remembers how
bright your lights are at any particular time
and what color they’re set to. And with the Home Graph
storing this information, you then have the ability to
speak to the Google Assistant and to make changes to it. So you can say things
like, make it warmer, or turn off all the
lights, or dim the lights just a little bit. Now, “dim the lights
just a little bit” requires knowledge of the
current brightness and then what “a little bit” means. And so Google handles all of
the parsing of the language to extract out what
you’re trying to do. It goes to the Home
Graph, makes the changes, and then it sends commands
to all of the devices to change their state
so that it matches what the user desires here. And so device makers can
now build these things really easily, whereas
before it required you to have a separate
app running on your phone that you would have to press. But now you can just be
sitting back on your couch, and you can just speak to
your Google Home device, and it just does
everything for you. So it’s kind of cool. So how do you do this? I’m going to quickly
go through an example. There is a talk that goes
through this in complete detail where they’re going
to show demos and go through the example
line by line. But basically what happens
is we’ve provided a sample, which is on developers.google.com/actions/smarthome. And there’s a Node.js sample that we’ve provided. So what you do is you spin up the Node.js example on the cloud server of your choice, and then you submit an action
package that tells the Google Home that that demo exists. The next step is you
go to your phone, and you have to register
the devices that you own. So you click on your
smart home integration, you type in your
username and password for that particular
OEM’s service, and then it does an auth login,
and it returns auth credentials to the Google Home– that is, to the Assistant. So the Assistant now
has an auth login token that it can use to
control your devices. So when it comes time for the
user, when they say something like “dim the lights
a little bit,” now everything
springs into action. So “dim the lights a little
bit” goes to the Home Graph. The Home Graph then decides how much to lower the brightness of that light bulb by. And then the light bulb is sent a command– that’s through the API that we show you– and then the command arrives at your cloud service. And then you can send the command to your light bulbs, however it’s implemented, and change the light bulb state.
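The sample we provide is Node.js, but as a sketch of the shape of that cloud service, here is the same idea in Java. The intent names follow the smart home documentation of the time, and the helper methods are placeholders for your own code that talks to the actual bulbs.

```java
import org.json.JSONArray;
import org.json.JSONObject;

/** Skeleton of a smart home fulfillment service: dispatch on the intent name. */
public class SmartHomeFulfillment {

    /** Handles the JSON body that the Assistant POSTs to your endpoint. */
    public JSONObject handleRequest(JSONObject request) {
        String requestId = request.getString("requestId");
        JSONObject input = request.getJSONArray("inputs").getJSONObject(0);

        switch (input.getString("intent")) {
            case "action.devices.SYNC":
                // Tell the Home Graph which devices this user owns.
                return syncResponse(requestId);
            case "action.devices.QUERY":
                // Report current state (on/off, brightness, ...) for the requested devices.
                return queryResponse(requestId, input.getJSONObject("payload"));
            case "action.devices.EXECUTE":
                // Apply the change the Home Graph decided on, e.g. a new brightness value.
                JSONArray commands = input.getJSONObject("payload").getJSONArray("commands");
                for (int i = 0; i < commands.length(); i++) {
                    JSONObject execution = commands.getJSONObject(i)
                            .getJSONArray("execution").getJSONObject(0);
                    applyToLightBulb(execution.getString("command"),
                            execution.getJSONObject("params"));
                }
                return executeResponse(requestId);
            default:
                throw new IllegalArgumentException("Unknown intent");
        }
    }

    // Placeholders for your own cloud code and response building (see the docs).
    private void applyToLightBulb(String command, JSONObject params) { /* talk to the bulb */ }
    private JSONObject syncResponse(String requestId) { return new JSONObject(); }
    private JSONObject queryResponse(String requestId, JSONObject payload) { return new JSONObject(); }
    private JSONObject executeResponse(String requestId) { return new JSONObject(); }
}
```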
So it’s really designed to integrate with the existing
lots of companies have different kinds of
light bulbs out there, and we want to make
it really easy. So it hooks into all their
existing cloud infrastructure really easily. So you write a really
small cloud service and hook it all in,
and it just works. Now, one of the really nice
things about this initiative is that we integrate devices
across all different OEMs. So you don’t need to mention
the name of the manufacturer when you want to
change a light bulb. You just say turn
all the lights, or turn all the
lights off, and it tells all of them that are
connected to the Home Graph. The other thing you can do
is you can embed the Google Assistant SDK into a
device like a light bulb, and then you can talk to
the light bulb itself. So when you speak to the light
bulb, you can say, “Turn off.” The light bulb will turn off. But you can also say, “Turn all
the lights on in the house,” and then it’ll send
the command everywhere. So you have the ability
to embed the assistant into the devices
itself, and there’s all kinds of combinations
you can do here. So the cocktail
machine example, that’s also an example of a device
that has the Assistant built in, and it’s a device of
its own that hooks in to the ecosystem of Things. So that’s the really nice thing
about the Google Assistant SDK, is it brings both embedding
devices and the Smart Home initiative together. So definitely go check that
out in the sandbox area. There’s a bunch of
really nice demos. And we have examples that
you can actually play with to sort of see how they work. There are a few other developer products, so let’s go through them quickly. We also have Firebase. So Firebase is an
SDK that many are familiar with for
writing Android apps. It works on the
web, Android, iOS, and it’s a very portable
library that runs anywhere. Now, Firebase Cloud
Messaging is a part of that, and it allows you to deliver
quick, reliable messages to devices to wake them up. Now, normally when
you build IoT devices, the quick and dirty
way is to do polling where you poll once every
five minutes to find out what your latest
command is going to be. But that’s not very
efficient, because if someone says change the lights,
it’ll take five minutes. So Firebase Cloud
Messaging is a great way of delivering messages to your products, or between devices, if that’s something
you want to do. And it provides very fast updates, and it’s currently available everywhere. And because Android Things supports Google Play services, it then supports all of the cool features of Firebase as well.
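On the device side, receiving one of those messages is only a few lines. A minimal sketch, assuming a data message with a key you define yourself (the service also has to be declared in your manifest):

```java
import com.google.firebase.messaging.FirebaseMessagingService;
import com.google.firebase.messaging.RemoteMessage;

// Declare this service in AndroidManifest.xml with the
// com.google.firebase.MESSAGING_EVENT intent filter.
public class CommandService extends FirebaseMessagingService {
    @Override
    public void onMessageReceived(RemoteMessage message) {
        // A data message pushed from your server wakes the device immediately,
        // instead of waiting for the next polling interval.
        String command = message.getData().get("command"); // "command" is your own key
        if ("lights_on".equals(command)) {
            // Toggle a GPIO, update the UI, fetch new state from your backend, etc.
        }
    }
}
```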
Firebase also has a real-time database capability. So it’s cloud hosted, it’s based on NoSQL, and it lets you store and sync data between your devices in real time as well. So maybe the Smart Home
thing is not for you, and maybe you want to do your
own kind of synchronizing between devices. You can use that here as well. And once again, because
Android Things supports all of Firebase
out of the box, you can use this stuff for
free without having to do any extra work.
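For example, here’s a minimal sketch of syncing a single value between devices with the Realtime Database — the database path and the pet-feeder-style value are made up for illustration:

```java
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class FeederSync {
    // "feeder/portionSize" is a made-up path; use whatever structure fits your devices.
    private final DatabaseReference portion =
            FirebaseDatabase.getInstance().getReference("feeder/portionSize");

    /** Any device (phone app, Android Things board) can write a new value... */
    public void setPortionSize(int grams) {
        portion.setValue(grams);
    }

    /** ...and every other connected device sees the change almost immediately. */
    public void watchPortionSize() {
        portion.addValueEventListener(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                Integer grams = snapshot.getValue(Integer.class);
                // Update the feeder hardware or the app UI with the new value.
            }

            @Override
            public void onCancelled(DatabaseError error) {
                // Handle permission errors or disconnects here.
            }
        });
    }
}
```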
And actually, one of our engineers on my team made a sample where he actually
ported Firebase to work on Arduino devices as well. So there’s a URL up there
if you want to do that. That’s useful for really,
really cheap devices if you want to embed
Firebase into that. The nice thing about
Firebase, the database, is that you don’t even need
to provide your own server. If you just want to synchronize
data between devices, you don’t need to have
any cloud hosting, and we handle all of the
provisioning and scaling and so forth. And so we’ve provided a whole
bunch of samples for that. So go check that out
if you’re interested. And finally, for
Firebase, we also have the ability to store
larger user-generated content. So you might have a doorbell
where you press a button and it takes a
photo, and you want to synchronize that photo
over to a device that’s running an app that you’ve got. So this is great because
you can take that photo, it’ll store it in
Firebase for you, and then make it available to
all of the other devices that are part of your ecosystem. So you can store photos, videos, large files. It’s kind of nice for things like this.
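A minimal sketch of that doorbell upload with Firebase Storage might look like this — the storage path is arbitrary, and the success callback is where you would tell your other devices that a new photo exists:

```java
import android.net.Uri;
import com.google.firebase.storage.FirebaseStorage;
import com.google.firebase.storage.StorageReference;
import java.io.File;

public class DoorbellUploader {
    // The path is arbitrary; one folder per doorbell device keeps things tidy.
    private final StorageReference photos =
            FirebaseStorage.getInstance().getReference("doorbell-photos");

    /** Uploads a captured JPEG so phones and TVs in the same project can download it. */
    public void upload(File jpeg) {
        StorageReference dest = photos.child(jpeg.getName());
        dest.putFile(Uri.fromFile(jpeg))
            .addOnSuccessListener(snapshot -> {
                // Write the storage path into the Realtime Database so other
                // devices know a new photo is available.
            })
            .addOnFailureListener(e -> {
                // Retry or log; uploads from flaky IoT connections can fail.
            });
    }
}
```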
And once again, it works on Android Things. But it’s also something
that you might want to consider using on
Android Wear or Android TV as well. Because remember, they’re
full Android devices that have the same
capabilities as phones, so you can use it there as well. And then just yesterday
we released a blog post where we announced a new
initiative called Cloud IoT Core, which is a fully managed
service on the Google Cloud Platform that allows you to
securely connect millions of IoT devices to Google Cloud. Now, the reason this
is cool is because it scales automatically, and it
works anywhere in the world. We provide endpoints
everywhere that you don’t have to worry about. You just simply
upload your data, and it works everywhere
in the world. It handles millions of devices. And imagine a scenario where
you’ve got an enterprise where you’re building millions of
thermostats or power meters for the power
company or something. Now, every one of
those power meters is going to be logging
data every minute, and there’s millions of them. So imagine millions of devices
logging data every minute. You’re going to end up with
terabytes and terabytes of data really, really quickly. So Google Cloud is ideal
for this kind of thing because it’s built to run
really mega large applications. And so we support the ability to use standardized MQTT messages, so we support lots of different protocols like that.
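For instance, a device can publish its telemetry with an off-the-shelf MQTT client such as Eclipse Paho. This is only a rough sketch: the project, registry, and device IDs are placeholders, the JWT signing is left out, and the bridge address, client ID format, and password-as-JWT scheme follow the Cloud IoT Core documentation of the time.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import java.nio.charset.StandardCharsets;

public class PowerMeterPublisher {
    public static void main(String[] args) throws MqttException {
        // The client ID encodes the project, region, registry, and device; all placeholders here.
        String clientId = "projects/my-project/locations/us-central1/"
                + "registries/power-meters/devices/meter-001";
        MqttClient client = new MqttClient(
                "ssl://mqtt.googleapis.com:8883", clientId, new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        // Cloud IoT Core ignores the username and authenticates with a short-lived JWT
        // signed by the device's private key, passed as the password.
        options.setUserName("unused");
        options.setPassword(createJwt().toCharArray());
        client.connect(options);

        // Telemetry published to the device's /events topic flows into Google Cloud.
        String reading = "{\"watts\": 412, \"timestamp\": 1495000000}";
        client.publish("/devices/meter-001/events",
                new MqttMessage(reading.getBytes(StandardCharsets.UTF_8)));
        client.disconnect();
    }

    private static String createJwt() {
        // Sign a JWT (RS256 or ES256) with the key you registered for this device.
        return "...";
    }
}
```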
And you can access all your data very easily. And the nice thing is you
can use cloud services like BigQuery to
analyze that data and extract out really
useful information that you might not be able
to see with small data sets. So once again, we
have a bunch of talks about cloud IoT as well. We’ve got two of them. One is a combo session
about Android Things, and another one is just about
cloud IoT and all the concepts. So definitely check out those
two talks if you’re interested. They’ve got demos and
samples and so forth that you can learn more about. So we’re almost out of time. So that was a quick whirlwind
tour of some of the things that Google provides to
support ubiquitous computing and the Internet
of Things, and we think they’re really
useful for making amazing, powerful, and
secure products that scale to a worldwide audience,
because you can’t just build things small. We’ve got to build big
because there’s lots of people out there. Every single one of the
things I mentioned here has a dedicated deep
dive session at I/O, and so you should
definitely check that out. And also if you can’t
make it to those talks, they’re all being recorded. So don’t worry, you can
catch up on them later. The other point about Google
I/O that I want to highlight is that it’s all about
meeting people and talking to the engineers who
work on the products. So if you look at the top of
the map, there’s a codelab area. Not everyone’s been to
the codelabs before, but you can actually sit
down and work on a machine and try these
products out yourself. We’ve got devices plugged
in, and it’s a lot of fun. And we have the
engineers who work on those products
standing around to answer questions
and help you out. We have office hours
on the bottom right where you can show up. We’ve got three sessions
for Android Things where you can come and meet the
engineers and ask questions. And also we’re
going to be around. So if you see any of us walking
around, come and talk to us and tell us what your idea is,
if you’ve got any problems, or whatever. So we look forward
to hearing from you. So finally, what’s next? So if you want to learn more,
join our Google+ Communities. So we’ve got three of them for
IoT, Actions, and Assistant, and we do a lot of postings
there to keep people up to date. I also am a developer advocate
for IoT and Android Things, and also the Assistant,
so follow me on Google+ and Twitter if you want to keep
up to date with everything that I’m working on. So what can you
do with all this? So there are all kinds
of possibilities for IoT and the Internet of Things. And think about cool
ideas you can build, but also I want
everyone to think about what things
you could do to help people who are less fortunate. Think of people with
accessibility issues, and people who could
benefit from having more computing in their
lives to perform tasks they can’t currently perform. This technology is going
to change the world, and I look forward to
building it with you. Thank you very much. [MUSIC PLAYING]

12 thoughts to “What’s New in Google’s IoT Platform? Ubiquitous Computing at Google (Google I/O ’17)”

  1. if you remember ubiquitous computing talk at #io15, this is for you, how google thinks about being everywhere and anywhere 🙂

  2. As you're building devices, Web Bluetooth https://webbluetoothcg.github.io/web-bluetooth/ makes it easy to connect to them from the web, no need to install apps.

  3. I would like to know how to contact a Raspberry Pi with Android Things remotely without having a public IP. Can you explain it in a video (there are a lot of applications with this problem solved)?
