Engineering Science – March 15, 2018 – Dr. Xiaorong Zhang

[ Intro Music ]>>Good afternoon, everybody. For those of you who do not know
me, my name is Ali Kujoory. I’m an adjunct professor
in the Department of Engineering Science and
one of the organizers of this lecture series. I would like to thank Ms.
Shirom Arani [assumed spelling] and also Kate Lap [assumed
spelling], who both helped me in setting up the
lecture series. Now, before I introduce our
guest speaker, let me mention that Kate has already
ordered pizza and I hope that both our speaker and also
you have the chance to stay after 5:30 to basically
enjoy the pizza. It’s really good, I tell you. And also, our guest speaker for April 5th is
Dr. Sergio Canavati. He’s a professor at the School
of Business and Economics at Sonoma State University. And the topic of his talk
is “Engineering Knowledge, Industry Experience, and the
Recognition of Opportunities for Entrepreneurship
and Innovation.” Our guest speaker for today
is Dr. Xiaorong Zhang. She’s an assistant professor
of Computer Engineering at San Francisco
State University. And the title of her talk is “Toward the Next-Generation
Neural-Controlled Artificial Limbs.” I looked at — she sent me
her slides in fact last night, and they’re really good. I bet you’ll enjoy it,
because it’s very interesting. And since on her second slide, she’d already put
her excellent credentials, I’ll just let her use her slides. And that’s it. So let’s give her a hand then. [ Applause ]>>Thank you for
the introduction. Hi, everyone. So I am Xiaorong Zhang. I am an assistant professor
of Computer Engineering in the School of Engineering at
San Francisco State University. It is a great pleasure
to be here today to talk about my research. So I drove from San Francisco
State and it took like more than two hours to get here. But yeah, it is a great
pleasure to be here. So the title of my
research, of my talk, is “Toward the Next-Generation
Neural-Controlled Artificial Limbs.” So a little bit about myself. So I joined San Francisco
State in 2013. So currently I’m
an assistant professor in Computer Engineering. I also established
my research lab, called Intelligent Computing and
Embedded Systems Lab, ICE lab. So if you are interested in
knowing more about my research and what we’ve been doing, so this is the link
of our web page. And before joining San Francisco
State, I completed my PhD in computer engineering at the
University of Rhode Island. And I’m originally from China
and got my bachelor’s degree in China, and then
I came to the US for my master’s and
the PhD degrees. Okay, so this is
just a slide about — a screenshot of the web page
of my research lab, ICE lab. So what we’re doing in ICE lab,
basically we conduct research on the development of
neural machine interfaces. I will explain what is neural
machine interface very shortly. And specifically, one
of the applications for neural machine
interfaces is neural-controlled artificial limbs. And we also develop real-time
embedded systems using different computing technologies. And so besides doing research,
I also advise graduate and undergraduate
student researchers from San Francisco State. And we also have summer research
interns from community colleges for the past few years. Okay, so now my research. First of all, what is a
neural machine interface? So actually, does anyone
know about this device? It is called the Myo armband. Yes, that’s great. So I think I’ll just pass this around. So this is one of the
platforms that we use for our research projects. So what is neural
machine interface? So to answer this question, let’s
first look at a few examples in science fiction movies. So I believe most of you know
the famous “X-Men” series. So it is a science fiction
movie series about a group of superheroes with
special powers. So one of the characters,
played by Jennifer Lawrence, the Mystique girl,
has a special power of manipulating objects
using her telekinesis. And another movie, a relatively
old one called “Matilda,” is also about a
little girl, Matilda, who has this special power of moving objects
just with mind power. So the question is, can
we really control objects with mind power in real life? And let’s look at
another example. So this is a screenshot
of a movie, the famous movie
series, “Star Wars.” So the major character Luke
Skywalker lost his right hand in a duel with his father, and later his missing hand was
replaced by a prosthetic one. So this prosthetic hand looks
realistic and also functions as if it was Luke’s own hand. So is this also a
scene that only exists in science fiction movies? So now back to our
original question: So what is neural
machine interface? So basically neural machine
interface, abbreviated NMI, is a technology that makes
such science fiction a reality. So briefly speaking, NMI
is a communication pathway between the human neural
control system and machine. It utilizes neural activities to control different
external devices, like a powered wheelchair, a
computer game, or prosthetics. So this slide shows a few
types of neural signals that can be collected from the
human neural control system. This includes these
neural signals collected from the cortex of the
brain, peripheral nerves and neurons, and also muscles. So all these signals, they contain very
important information that can represent human
states such as emotion, intention, and motion. And basically this NMI
collects neural signals from human neural control system and then uses advanced
signal processing and machine learning
algorithms to interpret the data into human states —
that’s emotion, intention, and motion — and
then make decisions to control external devices. And typically the hardware
and software of the NMI have to be tightly integrated
into embedded systems, because usually they have
to be wearable and portable. And also NMI is a very typical
biomedical cyber-physical system that features the tight
combination of and coordination between the system’s cyber
elements, which is the computer, and physical elements, that’s
the human neural control system, and these external devices. So I think maybe most of you
know what is an embedded system, but just in case you’re not
familiar with this concept. So an embedded system basically
is still a computer system. So it is a computer
system, usually for some specific, dedicated
purpose, compared to, like, our PCs, which are general-purpose
computers that can do a lot of different things. And an embedded computer system
usually is very small and portable, just like this presenter here. It has a small computer
embedded inside and it has software
running on it. It has input/output
interfaces to interact with our physical world. So we have a lot of applications of embedded systems
in our daily life. In communications,
like your cellphone. In automotive, like the
remote control for your car. A lot of different applications
in many different areas. And we probably use hundreds
of embedded systems every day. So as we just mentioned
for this NMI, typically these neural
machine interfaces are also an application of embedded
systems because everything has to be integrated on a self-contained
portable, wearable device. Okay, so there are a lot
of applications of NMIs, but today I’m going to focus
on one of these applications, that’s neural machine
interface for the control of artificial limbs,
that’s prosthetic. So that’s upper limb
and lower limb. So this slide shows the
neural signals collected from muscles, okay. So these are called
electromyogram, EMG, signals. So EMG signals are
effective bioelectric signals for expressing movement intent. So EMG signals can be collected
using surface electrodes, just placed on the
skin over the muscles. So the device you were
just seeing is basically a device
that can collect EMG signals from the user’s forearm. So we can just easily
slide on this armband, and there are
eight sensors. If you look at the inside
of the armband, those metal, basically they’re electrodes. So it has to be placed —
it has to touch your skin. And when you perform
different gestures, basically your muscle
activities will be captured and measured by these
EMG signals. And that’s going to be sampled
by the analog-to-digital converters of the microcontroller that’s
built inside the Myo armband. And then we have
different algorithms to interpret this data to identify the user’s
intended motions or gestures. And this figure shows an example
of what an EMG signal looks like. So when you perform different
gestures, for example, hand open or hand closed, so
different gestures, different muscles will
have different activities, different contraction levels. So basically our purpose is
to use this signal processing with machine learning algorithms
to find out the patterns that you make for individual
gestures and then make decisions to identify the user’s
intended motions or gestures. And then we make decisions to
control, like a prosthetic hand or like virtual reality game,
these kinds of applications. But you can see that
the real EMG signals, they are actually very noisy. So there are several steps
to process EMG signals and then recognize
gestures from these signals.
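To make those processing steps concrete, here is a minimal sketch of the kind of sliding-window segmentation and time-domain feature extraction that is commonly applied to multi-channel EMG. This is an illustration only, not the code from our platform; the 200 ms window, the 50% overlap, and the particular features (mean absolute value, waveform length, zero crossings) are assumptions borrowed from common practice in the EMG literature.

```python
import numpy as np

def sliding_windows(emg, window_len=40, step=20):
    """Split a (samples x channels) EMG array into overlapping analysis windows.
    At the Myo armband's 200 Hz sampling rate, 40 samples is roughly 200 ms."""
    for start in range(0, emg.shape[0] - window_len + 1, step):
        yield emg[start:start + window_len, :]

def time_domain_features(window):
    """Common EMG time-domain features, computed per channel."""
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([mav, wl, zc])  # one feature vector per window

# Example with synthetic data: 2 seconds of 8-channel "EMG" at 200 Hz.
emg = np.random.randn(400, 8)
features = np.array([time_domain_features(w) for w in sliding_windows(emg)])
print(features.shape)  # (number of windows, 8 channels x 3 features)
```

A feature vector like this, one per window, is what the pattern recognition algorithms discussed later operate on, rather than the raw noisy samples.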
So for EMG-controlled upper limb prostheses, there are actually
already commercially available devices, okay. So there is a company
called Touch Bionics. So in 2007, they developed a
prosthetic hand called i-Limb Ultra, okay. So these two photos
basically are photos from their web page showing
this prosthetic hand developed by the company. So this prosthetic hand
basically utilizes two EMG sensors. Like one on this side. One on this side. So it uses a very simple but relatively reliable
signal processing method. I think it’s just like a
magnitude-based method to identify a few
different gestures, okay. And there are five
motors in these fingers. So each motor can
control like one finger. And in 2008, the company also
made these lifelike coverings for their prosthetics. So just from the pictures,
from the photos, can you tell which one is a real hand
and which one is a prosthesis? It’s very hard to tell. [inaudible comment]. I don’t know either. But for sure, one of
them is just a prosthesis. Another project that
is developed by DEKA — so it is funded by DARPA, with several research
institutions involved in this project. So this is a project
launched in 2007. So it has been more
than 10 years, and they have made continuous
progress on this project. So this project basically
developed an EMG-controlled bionic arm
with multi-degree-of-freedom control. And what is interesting is
that this arm is named Luke, after Luke Skywalker
in “Star Wars.” Meaning that science fiction
finally becomes reality. And so we just talked about
these upper limb controls, and they’re already
commercially available devices. But for lower limb control, so far there are already
computerized powered prosthetic knees and ankles developed
and commercialized, which allow amputees to negotiate different
terrains naturally and perform natural gait patterns,
like on level ground surfaces, and also do stair
ascent/descent and ramp ascent/descent. So these are just photos
from a company called Ossur. This is the prosthetic knee and
ankle developed by this company. And this picture shows
a prosthetic knee, a powered prosthetic knee, developed by University
of Rhode Island. That’s where I got my PhD. So we had a big research lab,
it’s kind of interdisciplinary. We have three professors. One from biomedical engineering. One from computer engineering. And one from computer science. And we work together. And this powered prosthetic
knee was developed by a postdoc in mechanical engineering
in our lab. So this video shows a testing
trial of this prosthetic knee on the amputee subject to
perform this treadmill walking, level ground walking,
and ramp ascent. So these computerized
powered prosthetic knees and ankles are already
available. But without the knowledge
of user intent — so when the user has to
switch the type of terrain — for example, from level ground
walking to stair ascent — the user has to manually
switch the control mode of the prosthesis. Because the walking pattern of
the prosthesis on level ground and on stairs
is very different. So the user has to
manually switch the mode by pressing some buttons
or using body motion, which is cumbersome and does not
allow smooth task transitions. So our idea, our intention, is that this neural machine
interface basically is to utilize these EMG signals
and this advanced algorithm that we developed to
identify the user’s intent and then automatically
switch the control mode of the prosthesis, so that the user can
control their prosthetic leg or arm naturally. But there are, of course,
challenges in this project. So this is a cyber-physical system that coordinates the cyber
elements, that’s the computer. We have hardware and
software in the computer. And also the physical systems, that’s the human neural control
system, and also the prostheses. So challenges exist
in the management of both physical resources and the cyber resources
of the system. So for the management
of physical resources, due to the muscle loss — so
patients with amputations, they usually have
limited muscles for EMG signal
acquisition. So basically we have to
develop these signal processing algorithms to accurately
identify the user’s intent from these limited
signal resources. And also accuracy
is very critical, especially for lower
limb control, okay. So for upper limb control,
of course, if a system is not perfect — that means
not 100% accurate — it’s going to make mistakes. So it makes decisions about what
is the user’s current gesture or motion or intent. And when it makes mistakes
— so for upper limb control,
the consequence might be, if
you want to, for example, grab a glass of water,
you’re just going to have a delay. You want to do that but it’s
just not working that way. But for lower limb control, this
accuracy is even more critical because any mistake
might cause the user to fall. Because if you’re
the prostheses is switched to, for example, stair ascent, so that’s very dangerous
to the user actually. So accuracy is very critical. And so the reliability
and the robustness of these neural controlled
artificial limbs can be further complicated by the environmental
uncertainty, like different kinds of noise, motion
artifacts, loose sensors, muscle fatigue,
all these kinds of factors. And the challenges
in the management of computational resources are
from the cyber element side. So as we mentioned, we need a
tight integration of hardware and software of the system
onto portable embedded systems. And usually they have to
operate on batteries as well. So these systems
have to be real-time. They have to respond fast. Because even if the system, even
the algorithm, is 100% accurate, if you want to grab
this glass of water and it takes five minutes
to react, it’s just useless. And it has to be
memory efficient. It has to be reliable, robust,
and also energy efficient. Because it usually has
to operate on batteries. So to address these challenges,
we developed different hardware and software technologies. So this slide shows the
overall architecture of one of the prototypes of our
neural machine interface for lower limb prostheses
control. So there is an embedded
computer system. It could be a microprocessor or
a microcontroller or an FPGA. So it collects EMG signals and also usually some
mechanical signals as well, from, like, an inertial measurement
unit or from load cells. So all these EMG and
mechanical signals are collected and then we have our
software, the signal processing and the machine learning
algorithms to decode the intended tasks
and transitions of the user. And then make decisions and
send commands to the prostheses to automatically
switch the control mode. So in order to identify the
user’s intended motions, so EMG signal processing
algorithms — there are many, many
different algorithms developed for this purpose. So today I’m just going to
introduce two basic categories. So one is just a very simple method
that’s based on EMG magnitude. So that basically
means we just look at the magnitude of the signal. Usually when you —
if you, for example, just make
a fist — depending on the
contraction level of the muscle, the magnitude of the EMG
signal will be different, okay. So this simple method
is just based on, like we said, a threshold. So whenever the magnitude of the
signal is above some threshold, a change of the
control mode is triggered. So usually for this method we
just use an individual EMG sensor to control, like,
one type of motion. So usually we
just use two sensors, so only a very limited
number of decisions can be made. But this is a relatively
reliable approach. So that is the approach
that is commonly used in commercially available
devices and products. But it is not appropriate for
controlling multi-functional and multi-degree-of-freedom prostheses.
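As a rough illustration of that magnitude/threshold idea (the smoothing window and the threshold values here are invented for the example, not taken from any commercial device):

```python
import numpy as np

def mean_absolute_value(emg_window):
    """Smoothed magnitude of one EMG channel over a short window."""
    return np.mean(np.abs(emg_window))

def threshold_controller(flexor_window, extensor_window,
                         close_threshold=0.3, open_threshold=0.3):
    """Two-sensor threshold rule: whichever muscle's magnitude crosses
    its threshold triggers the single action assigned to that sensor."""
    if mean_absolute_value(flexor_window) > close_threshold:
        return "close_hand"
    if mean_absolute_value(extensor_window) > open_threshold:
        return "open_hand"
    return "no_action"

# Example: a strong flexor contraction with a quiet extensor.
flexor = 0.6 * np.random.randn(40)
extensor = 0.05 * np.random.randn(40)
print(threshold_controller(flexor, extensor))  # most likely "close_hand"
```

The limitation mentioned above is visible in this sketch: each sensor is tied to one action, so the number of distinguishable commands only grows by adding dedicated sensors, which is why this approach does not scale to multi-degree-of-freedom prostheses.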
And the other, more advanced method is machine learning and pattern recognition
algorithms. Basically these can
extract more information with fewer monitored
EMG signals. So what is pattern recognition
or pattern classification? How many of you know the
concept of pattern recognition? Okay. So for those who might
not be familiar with the concept of pattern recognition, I will just give you a very
simple, basic example. It’s like a classic example from
pattern recognition textbooks. So consider this scenario. So assume we have a fish
processing plant and we want to automate the process of sorting incoming fish
according to species. We want to separate
salmon from sea bass, these two types of fish. So in this system here, we
have one conveyor belt for the incoming fish. So it’s mixed, so we
have salmon or sea bass, okay, they are coming in. And then we have these
two conveyor belts for the sorted fish. That is after the decision
has already been made. And we have one belt for salmon
and one belt for sea bass. And we have a robot arm. So this robot arm basically will
— so based on the decision made by this computer,
this robot arm’s going to pick the fish and, according
to its species, it’s just going to put it into one of
the conveyor belts. And then we also have a
camera here to take a picture of each of these fish. And then this picture
is the data to be fed into this computer, and the
software on the computer, which is this pattern
recognition algorithm, interprets the data and
then makes a decision, whether it is salmon
or sea bass. So to successfully
automate this process of sorting these incoming fish,
what are the necessary steps? So the key steps for
— typical key steps — for pattern recognition or pattern classification,
there are four steps. Data collection, preprocessing, feature extraction,
and classification. So data collection is
just to acquire the data. So in this example basically
it’s the camera that’s the device to capture the data. So for each incoming fish,
this camera captures one image as a new fish
enters the sorting area. So that’s the raw data, okay. And then the second step
is called preprocessing. So preprocessing
is a step that — for example, we have this
image of this fish, okay. So this contains the fish. And also you have
some like background, which is irrelevant information. So this preprocessing involves
procedures like noise removal, like just separate the
fish from the background and also segmentation. And like for these kind of
EMG, this time series signals, usually we do data
analysis on chunks of data instead of
each data point. So segmentation sometimes
is also required. And then the third step
is feature extraction. As we have seen one of
the previous slides, the raw EMG signals
are pretty noisy. You can’t tell anything
from the raw data. So this feature extraction
step basically is to calculate or extract these key features
that can characterize this data. So for this special example
here, so what can we do in the feature extraction stage? So right now we already have
the data of the fish, okay. So feature extraction,
for example, we can calculate the
length of the fish, the weight of the fish, and
the lightness of scales. So basically these are
features that might be helpful to separate different classes,
different types of fish, okay. But this step is
very, very important because not all features
are helpful. So it’s important
to just choose, just select the good features that can help separate
these different classes. Some features might get
even more confused, okay. So this is feature
extraction step. Then the last one
is classification. Classification typically
consists of two phases: training
and testing. So training is we need
to create a model based on a large number of samples. So we let it learn. We want the system to
learn, okay, from the data, and then find out
this model by itself. So again, take this fish
sorting system as an example. So basically these
trainings that we — like this is, for example, we
just plot two features, okay, that’s the length of the fish
and also the average lightness of scales of the fish. So these dots, these are — so the blue color means
the data of sea bass. And the red triangles
here, salmon data. So this training step is based on these large number
of samples. We already know the
labels, okay. We already know these are
sea bass, these are salmon. So the system’s going to try
to find out a decision boundary that can best separate
these two classes, okay. And in these example here, this decision boundary’s just a
simple linear decision boundary. But we actually have
more complex algorithms that can be also like nonlinear
decision boundary, okay. But the key is to find
out the best boundary that can separate these classes. But, of course, we can see that you cannot perfectly
does the job. There’s still some
confusion here, okay. So once in the training step,
we found this decision boundary, and then the second phase is
called testing, put it to work. So that is for now the
model is already created now for any incoming data. So we can just based on
this decision boundary of this new data, based on its
length and average lightness of scales, if this new
data is on the left side of the decision boundary,
then the system is going to make the decision
this is a sea bass, okay. If it is on the right
side, the system’s going to predict it is a salmon. So of course, it could be wrong. So we need to calculate. To evaluate the performance
of a classification algorithm, we calculate the classification
accuracy, that is, the number of correctly classified
samples divided by the total number of samples. But this is just the basic idea
of what pattern recognition is.
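As an illustration only (the numbers below are synthetic, not data from the talk), here is a minimal sketch of that training/testing/accuracy loop with a linear classifier, in the spirit of the fish example, using length and average lightness as the two features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "fish" data: two features per sample (length, average lightness).
sea_bass = rng.normal(loc=[70.0, 3.0], scale=[8.0, 0.8], size=(200, 2))
salmon = rng.normal(loc=[55.0, 6.0], scale=[6.0, 0.9], size=(200, 2))
X = np.vstack([sea_bass, salmon])
y = np.array([0] * 200 + [1] * 200)  # 0 = sea bass, 1 = salmon

# Training phase: learn a linear decision boundary from labeled samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
classifier = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Testing phase: predict unseen samples; accuracy is the number of correctly
# classified samples divided by the total number of samples.
predictions = classifier.predict(X_test)
print("classification accuracy:", accuracy_score(y_test, predictions))
```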
So for our neural machine interface for this upper or lower limb prosthesis
control, basically the classes
are a set of motions. Like for lower limb
control, the classes we want to identify is just
a set of locomotions. For example, level walking,
ascent, descent, stand, sit. So these are the predictions — these are the decisions we
want the system to make. So the next few slides I’m going
to show demonstrations of some of our research results. And so this picture
shows a prototype of — embedded system prototype — of our NMI for a
lower limb control. And this prototype
consists of an FPGA device and also a microcontroller. So this microcontroller
basically is used to just sample data from
these multiple EMG channels. Because a microcontroller
typically has less computational power. But some of the pattern
recognition algorithms are pretty complex. And as we mentioned, it’s
very important for the system to meet real-time requirements. So in this particular prototype,
we have this FPGA device, which is really good
at parallel computing. So FPGA is responsible for the
computational intensive tasks in the pattern recognition
algorithm. And the microcontroller
is basically responsible for input/output interfaces. And so we have our prototype
and we tested this prototype on several different subjects. So before the prototype
can be tested on actual amputee
subject, we usually test it on able body subjects first. Basically that’s ourselves,
just students in the lab. Because the subject has
to wear a prosthesis, so this is how we
conduct the experiment. So we make like an
adaptor, plastic adaptor, so that able body subject can
also wear a prosthetic limb. And we have these EMG
signals, because we have seven or eight EMG electrodes placed
on the subject’s leg muscles. So this is a video showing a
testing trial of our prototypes. So this is the embedded
prototype. And we also made a
very simple GUI just to display the decisions
made by this embedded system. So we can see the subject
start from standing. So this is a decision
now, is walking. So this is real-time decisions. And standing again. And switches to stair ascent. So this prosthetic leg is
still a passive device, it’s not a power prosthetic leg. So you can see in
the experiment, it’s kind of like one —
the step is like this, it’s not like step over step. Because, again, we
have to make sure that the system is accurate
enough and reliable enough to — before testing using powered
prostheses doing this step over step. Because for lower
limb prostheses, the experiment is very tricky. We have to make sure
the subjects are safe. So the next video
shows a testing trial on an amputee subject for
just identify his intended transitions between
sitting and standing. So these are two
simple and basic tasks for like normal people. But it’s so critical
in our daily life. So this is just a trial testing
the accuracy of the system for just doing sitting
and standing. It’s kind of — it’s
hard to see. But now this is standing. So it’s just longer and shorter. So during the experiment,
so although the subject kind of like shift his
weight during standing and moves his legs
during sitting, but the decisions are
still pretty accurate. And we have also developed like
other evaluation platforms. So instead of just
simply displaying the text of the decision,
we also developed like a virtual-reality
system to kind of just reconstruct
our lab environment in this virtual system. And this is video showing how
this neural machine interface is actually driving the
motion of the avatar in this virtual reality system. It’s a little bit too
dark, but now it’s from level ground
walking to stair descent and then level ground
walking again. So these are some
experiments on the testing of our embedded prototype
for neural machine interface for lower limb prosthesis
control. So what we are working
on right now is to develop different computing
technologies for NMIs. So as we mentioned, there is a
trade-off between the complexity of the algorithm — the accuracy
performance of this algorithm. Because to make such
system in practice, it has to confidently
meet all the requirements, including high accuracy
and real-time response and these memory and
power efficiency. But right now it’s
still very challenging because there are many
researchers developing and investigating different kinds of algorithms. Some algorithms are
better than others. But usually those more
accurate algorithms, they’re just more complex. So most of these algorithms
have been tested offline, like using MATLAB
or on a desktop PC. And it produced pretty
good accuracy, but it just cannot be
implemented in real-time, especially on embedded systems because of their
computational complexity. So our idea is to develop a
hierarchical computing framework that integrates different
computing technologies, including this edge computing
and also Cloud computing. So we have this hierarchical
framework that includes microcontrollers,
that’s the platforms used by most of the embedded systems,
and then microprocessors, which are a little bit more
powerful than microcontrollers. Like processors using
smartphones or a PC. And then Cloud basically
is a platform that has, like, unlimited computational
power and storage. But the disadvantage
of Cloud is that — so all the data have
to be transferred to the Cloud over the Internet. And this communication latency
sometimes is even longer than the computational time. So there is a tradeoff between
the communication latency and the processing
time of the computer. So our goal is basically to
develop this hierarchical system that can leverage the advantages
of all these platforms and then confidently achieve
this massive computing power and also real-time response —
massive computing power, massive storage, but also
minimal communication latency. So this is our goal.
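To make that tradeoff concrete, here is a small, purely hypothetical back-of-the-envelope model (the latency numbers are invented for illustration, not measurements from our system). The idea is simply that each analysis window should be processed on the tier where the estimated processing time plus communication time is smallest, while still meeting a real-time deadline:

```python
# Hypothetical offloading decision across the hierarchy. Every number below is
# a made-up placeholder, only meant to illustrate the latency tradeoff.
TIERS = {
    #                  processing_ms  round_trip_ms
    "microcontroller": (180.0,          0.0),   # on-board, no network hop
    "smartphone_edge": (35.0,          10.0),   # e.g., over a local wireless link
    "cloud":           (5.0,          120.0),   # huge compute, but Internet latency
}

def best_tier(tiers, deadline_ms=300.0):
    """Pick the tier with the lowest total latency that still meets a rough
    real-time deadline for prosthesis control (assumed here to be ~300 ms)."""
    totals = {name: proc + comm for name, (proc, comm) in tiers.items()}
    feasible = {name: t for name, t in totals.items() if t <= deadline_ms}
    return (min(feasible, key=feasible.get) if feasible else None), totals

choice, totals = best_tier(TIERS)
print(totals)   # total latency per tier under these invented numbers
print(choice)   # "smartphone_edge" wins in this particular example
```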
So based on that, right now we have been working on a platform called MyoHMI. So it is a low-cost and
flexible research platform. So the purpose of
developing this platform is to just provide a research
platform that can be used by many researchers
in this field to try out different algorithms, but also can achieve
this portability and real-time response. So because right now, most EMG-based NMI
research is still limited in lab environments. That is, we have to use desktop
computers and also we have to use very expensive EMG
data acquisition systems. Usually those systems could
cost like more than — usually it’s more than $10,000,
just for a few channels. So it is really not
practical yet to like do testing in
a home setting. And also, most open software
currently available is based on MATLAB. That needs a
license and also can only run on a PC; it cannot be portable. So this motivates us to
develop this low-cost system that utilizes this Myo
armband, which is just $200, and it has these
eight EMG channels to collect EMG signals
from the forearm. And also, we aim to develop
a platform that is portable but it has the capability of processing very complex
computational tasks. So what is Myo Armband? So this is just kind of
like a commercial video for this armband. So it is an armband developed by a company called Thalmic
Labs, a company in Canada. But now they also have
office in San Francisco. So it is a very neat
design. Normally, for the data acquisition systems
used in a lab environment, those electrodes we have to
use need conductive gel. So even just for the
experimental setup, it takes like at least 30
minutes to place all the sensors, and it has a lot of wires and
everything, a very clunky system. But this one is very —
is low price, low cost, and also very easy to use. But, of course, it
has its disadvantages. The sampling frequency of the
EMG signals is just 200 Hz, versus the 1000 Hz that’s
typically used by that expensive equipment
in a lab environment. So this 200 Hz, I think, is because of the bandwidth
limitation of this Bluetooth communication. But the original idea of this
Myo Armband is to have a device that can — so you can
remotely control a lot of different things just
using your gestures. It’s kind of like an
intuitive control concept. Originally they didn’t release
raw data from the armband. Basically it was just like
a toy and you could use it to control your presentations
and videos and these kinds of things. But later, there were so many
requests for the company to release raw data,
and they finally released it, so now we can get the raw data
and try our own algorithms to interpret the signals. Originally, there were only default
gestures that could be produced by the device, and that’s it. If
you are interested in some other customized
gestures, it’s not possible. But right now, because
the raw data are available, we can use our own
algorithms to customize gestures. So these are some details
about this Myo Armband. So it has an SDK that
allows the user to develop their own software
on the Windows operating system, Mac, iOS, and Android. And it has these
eight EMG sensors. Also, it has a nine-axis
inertial measurement unit, including a three-axis
accelerometer, a three-axis gyroscope, and
a three-axis magnetometer. And the data transfer
uses Bluetooth Low Energy. So it really has
a bandwidth limit. And these are the
default gestures provided by the Armband. So as we mentioned,
our objective for this MyoHMI
project is to produce a software platform
that provides this interface with the Myo Armband, and
also, it can be interfaced with other devices too. But right now, we have
only tested on Myo Armband. And the software basically is
highly modular and scalable. And different feature extraction and pattern classification
algorithms have been implemented. And the goal of this
project is to make it open to the public in a few years. So that the researchers can
add their own algorithms on this software and
test it with Myo Armband or other EMG acquisition
devices. So far, we have developed
two versions. So one is a PC-based version,
which is just called MyoHMI. We have a few papers published
based on this implementation. And it’s developed
using C and C++. It’s running on Windows
operating system. And a mobile version that is
developed in Android Studio and is Android-based portable
version of the MyoHMI. So both platforms integrate a sequence of
signal processing modules, including feature extraction
and pattern classification. And they also have interfaces
for data acquisition and for output control, to control different
evaluation platforms.
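As a sketch of what “highly modular” means in practice (the class and function names here are hypothetical illustrations, not the actual MyoHMI code), each stage such as data acquisition, feature extraction, pattern classification, and output control can be treated as a swappable component behind a small interface:

```python
from typing import Callable, Sequence

class HMIPipeline:
    """Hypothetical illustration of a modular HMI pipeline: every stage is a
    swappable callable, which is what makes it easy to plug in new algorithms."""
    def __init__(self,
                 acquire: Callable[[], Sequence[float]],
                 extract: Callable[[Sequence[float]], Sequence[float]],
                 classify: Callable[[Sequence[float]], str],
                 actuate: Callable[[str], None]):
        self.acquire, self.extract = acquire, extract
        self.classify, self.actuate = classify, actuate

    def step(self) -> str:
        window = self.acquire()              # data acquisition (e.g. one EMG window)
        features = self.extract(window)      # feature extraction module
        decision = self.classify(features)   # pattern classification module
        self.actuate(decision)               # output control (prosthetic hand, virtual arm, game)
        return decision

# Example wiring with trivial stand-in modules.
pipeline = HMIPipeline(
    acquire=lambda: [0.1, 0.4, 0.2],
    extract=lambda w: [sum(abs(x) for x in w) / len(w)],
    classify=lambda f: "fist" if f[0] > 0.2 else "rest",
    actuate=lambda d: print("command:", d),
)
pipeline.step()
```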
A special note about this project is that this research
project is integrated with an internship
program that’s collaborated by San Francisco State
and Cañada College, that’s a community college
also in the Bay Area. So it is called the COMETS
Internship Program. So every summer, there are
around like 20 students come to San Francisco State
and they’re going to join different research labs to do a 10-week internship
project. So I think since 2014, I have been advising
interns from Cañada College. And every time I have
four to six interns work in my lab in the summer. And they’re also mentored by
graduate student in our school. So in this 10 weeks, they
actually did a lot of things. Because they’re just community
college students, fresh — no, sophomore students. So the only requirement
for them is to have a — they have to take a
basic programming class, like C or Java, that’s it. And then after they join
the lab, they start to do like about two weeks
literature study and learn all these
new concepts. And then skills training,
like programming. And also to understand
the actual mathematics of these
machine learning and pattern recognition
algorithms. And also the actual
implementation of the MyoHMI platform. And at the end of the program, they have to do a
final presentation, a poster presentation
also, and write a report. They actually did pretty well. Every year, it was just
always beyond my expectations. They always did better than
what I originally expected. So these are just some photos of
the interns working in my lab. So this is just this PC version
of MyoHMI developed by — mainly developed by these
interns and mentored by one graduate student. So this platform we
can see has a GUI. And then you have
several modules. So all these modules, these
are highly modularized
changed if needed. And we have this
data acquisition and then feature extraction
classification training and testing. And then output control. So this output control, we
can make decisions to control. We have a few different
evaluation platform so far in our lab. One is a 3-D printed
prosthetic hand. It is also developed by just one
undergrad student in our school, majoring in mechanical
engineering. And we have a virtual
arm that’s developed by an undergrad student
in computer engineering. So this is developed
using Unity software. So this is just a
screenshot of the GUI of the PC version of MyoHMI. So we have several
different tabs to connect up to two Myo Armbands. And then we have the
real-time displaying of all these EMG signals. We have this pie chart
to show the magnitude of the EMG signals. Because this is like a
round armband, right. And also, this is like IMU,
so this is showing IMU data. And this is the pattern
classification part. So the next slide is just a
video, shows a demonstration of this MyoHMI platform. So the student is
wearing this Myo Armband. We have this virtual arm
and also this prosthesis. My video is a little bit weird. So this is making fist
and point and fist again. I think the lag is
because of the video, it’s not because of our system. But this prosthesis does have
more noticeable delay compared to the virtual arm. Because there is some issue
with the microcontroller that controls the
prosthetic hand. And that can be addressed.>>Excuse me. In this case, there are like,
what, eight sensors then?>>Yeah, just eight EMG
sensors around the forearm. So basically just the
muscles and also here.>>Now, let’s say that
somebody does not have the arm. Then you want to control
from the top part. Now, the point is that
you need to be able to [inaudible] five fingers and
so forth and then also the palm. In this case now, how many data, how many sensors are
you going to need?>>I mean, the number of
sensors is not a keeper. You can put as many
as you want actually. And so this is just one of our
research projects right now. We are also doing
— another part of our research is right now
using it’s called grip sensors. So instead of just like
this one array of sensor or like individual
channel placed on muscles, we use like a matrix of sensors. So even in a very small area, we have like maybe
hundreds of channels there. So that’s going to give
us richer information, neuromuscular information. But, of course, it can generate
more computational burden. So now that part
of the research is to investigate whether these
grip sensors can provide more information to do more
accurate gesture identification. But for your question
for the amputee that lost a whole forearm,
so there are some — there is a research project
in Johns Hopkins University. There also just using
these Myo Armband, but the armband is worn
here, it’s worn here. And also there’s a
surgery called targeted muscle reinnervation. And so like one of the very
early slides you had seen that patient basically
have lost their whole arm. But that patient can
still use his muscle to control the fingers
of prosthetic arm. So that patient —
there’s a surgery conducted for that patient that this
targeted muscle reinnervation. So basically that’s the
chest muscle is kind of rewired the nerves
here to the chest muscles. So he basically is
using his chest muscle to control the prosthesis. It will not be as intuitive as
like if you just lost a hand, you still have your
muscle of the arm. So he has to do some
training to learn like how I basically contract
my muscles to basically to produce different patterns
of the muscle activity. So that he can control — it can be associated with different motions
of the prosthesis. So yeah. So this is just a picture, a photo of their final
poster presentation — our presentation. And our group was the
winner for that year. And so another part of our ongoing project is we are
also developing virtual reality systems, like virtual-reality
games — for example, stroke
rehabilitation — that can also be interfaced with
these neural machine interface. So that’s also part
of our research. So basically this
virtual-reality system can give the user an immersive
environment that can basically
help the training to do the rehabilitation
and also even just for simple entertainment,
like just gaming purpose. So we also have — I
have a video showing. A virtual-reality system
developed by also our interns. So this is — at the beginning, this is just a GUI
of the MyoHMI. And one of the interns developed
a virtual reality game, it’s a first person shooter
game, a zombie shooting game. And we can see the subject
is wearing an Oculus goggle. Also Myo Armband app
on his right arm. And he just — because the
Oculus has this hand-tracking feature and basically he’s just
using his gestures to shoot, to reload weapon, to, for
example, turn on/turn off light, change the [inaudible]. So now he turned off the lights. So using gesture control,
we don’t have to use — because when you
wear the goggles, you cannot see the visual
world, cannot see the mouse and keyboard that’s
used to control, so now use your gestures. So that’s — this game was also
just developed in 10 weeks. And this slide shows the
mobile version of MyoHMI. So this is Android app that integrates all these
signal processing modules. So this is one at the top, that’s showing the real-time EMG
signals from the Myo Armband. So we have eight sensors. So when you tap one
of the sensors, it will just show the
signal for that channel. And this is
the feature extraction tab. So it also uses a pie chart to show the features
that’s calculated from each of these EMG channels. And now we have a
classification tab to do the pattern
classification. So the user can easily select
the pattern recognition algorithm and then do
the training and testing. And also, we have — so this
mobile version also has the access to the Cloud computing
platform that’s Amazon web services. So all this data can be
transferred to the Cloud and then also processed
in the Cloud if we want. So this slide just shows
the list of classifiers — classification algorithms
we implemented on the mobile version
of the MyoHMI. All right, so last slide
basically lists our ongoing work. So we’re adding more
classification algorithms and feature extraction
algorithms into our platform. And currently our
evaluation has only been done on EMG signal processing. But Myo Armband also has IMU
data, so we can integrate them to see if it will give
us better performance. And we’re also investigating
various edge computing and the Cloud computing
platforms. So besides like these
smartphones, tablets, and Cloud, we are also investigating
other — right now there’s a very
emerging concept is called fog computing. So that’s to bring the computing
power closer to the edge. So we can use, like, gateways,
like routers, and any computing nodes
in the environment to do the computational
intensive tasks. So we need to basically quantify
and also have a good balance of the computational delay and
the communication latencies of these different
computing layers. And as we mentioned, our
goal is in a few years, we’re going to make the whole
system open to the public so that everyone
can benefit from it. So I think that’s all for
my presentation today. And if you have any
questions, please. [ Inaudible Question ] I mean, no. As long as it touches
the muscle, it is fine. It doesn’t need a long time
to place the Myo Armband. [inaudible question]. Yeah, that’s a very
good question. So that’s the good thing about
pattern recognition algorithm. That the location of the
sensor doesn’t really matter. Because basically you’re just
using these multiple muscle — multiple sensors that
cover all these areas. It is not important to really
precisely locate a sensor that on the muscle. But the simple — I mean the
commercially available devices, the algorithm they use is
— or the older algorithm, they really need to carefully
place all these sensors according to where
the muscle is. But pattern recognition —
because the system learns — as long as when the user
performs different motions, it does have different patterns. The system can learn
that and create a model. So yeah. [ Inaudible Question ] It depends. I mean, you know,
even for amputees, it’s better for patients — patients with different
conditions, their muscle activities
are quite different. So it depends on the condition
of the patient. Yeah, it works better on
some patients but not so well on some other patients. Because for this pattern
recognition method, it conducts training first. So it will learn the
pattern from that individual. That’s better. But still, if the
patient just cannot — I mean, it depends on the
quality of the signals that can be collected. Yeah. If it’s very
difficult for the patients to produce this consistent
and different patterns for different motions, then
the performance will not be very good. So that’s why for some patients, they have to conduct
the surgery. [ Inaudible Question ] So for this particular project,
we haven’t done that yet. But we do have —
because I advise a lot of master’s students for
their master project. Yeah, many of them are
doing these kind of — basically is to collect
different physiological signals and other signals from
user and then yeah. [ Inaudible Question ] The pattern recognition
algorithm? [inaudible response]. Most of the algorithms are
already existing algorithms, so they have been applied
to different areas. So some of the algorithms is
just they have not been tested on these particular
problems yet. So we basically try out
different algorithms. But the algorithms are
existing algorithms. We didn’t develop
them from scratch. [ Inaudible Question ] So we’re currently using EC2 to just do the storage
and also computation. And we’re also right now for
this fog computing concept, we are investigating
AWS [inaudible]. So that is a service
that can basically — it’s kind of like
Cloud computing but it’s really not a Cloud. [inaudible.] Lambda,
yeah, Lambda.>>So it looks like
some people had to leave because they have classes.>>Oh, okay.>>Please do not
take it personally.>>Since it’s already
now [inaudible]. So let’s just stop now here. [inaudible]. Thank you very much.>>Thank you. [ Applause ] [ Music ]
