This AI Creates A Moving Digital Avatar Of You

Dear Fellow Scholars, this is Two Minute Papers
with Károly Zsolnai-Fehér. In this series, we talk about research on
all kinds of physics simulations, including fluids, collision physics, and we have even
ventured into hair simulations. We mostly talk about how the individual hair
strands should move and how they should look in terms of color and reflectance. Creating these beautiful videos requires getting many, many moving parts right, and before any of them comes the very first step: getting the 3D geometry of these hairstyles into our simulation system. In a previous episode, we saw an excellent
work that does this well for human hair. But what if we would like to model not human
hair, but something completely different? Well, hold on to your papers, because this
new work is so general that it can look at an input image or video and give us not only a model of human hair, but also of human skin, garments, and, of course, my favorite, smoke plumes, and more. But if you look here, this part raises the following
question – the input is an image, and the output also looks like an image, and we need
to make them similar – so what’s the big deal here? A copying machine can do that, no? Well, not really. Here’s why. On the output, we are working with something
that indeed looks like an image, but it is not an image. It is a three-dimensional cube in which we have to specify color and opacity values everywhere.
to specify color and opacity values everywhere. After that, we simulate rays of light passing
through this volume, which is a technique that we call ray marching, and this process
has to produce the same 2D image through ray marching as what was given as an input. That’s much, much harder than building a
copying machine. As you see here, normally, this does not work
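
To make this step more tangible, here is a minimal sketch of ray marching through such a color-and-opacity volume, in Python with NumPy. The function name, the nearest-neighbor lookup, and the front-to-back compositing rule are my own illustrative simplifications, not the authors' exact renderer.

```python
import numpy as np

def ray_march(volume_rgb, volume_alpha, origin, direction,
              num_steps=256, step=0.5):
    """March one ray through a voxel grid of colors and opacities,
    compositing the samples front to back into a single pixel color.

    volume_rgb:   (D, H, W, 3) colors in [0, 1]
    volume_alpha: (D, H, W)    per-sample opacities in [0, 1]
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    for _ in range(num_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 for i in idx) or \
           any(i >= s for i, s in zip(idx, volume_alpha.shape)):
            break  # the ray has left the volume
        a = float(volume_alpha[idx])
        color += transmittance * a * volume_rgb[idx]  # this sample's contribution
        transmittance *= 1.0 - a                      # light that survives past it
        if transmittance < 1e-3:
            break  # the ray is effectively blocked
        pos += step * direction
    return color  # one pixel of the rendered 2D image
```

The key point is that the rendered pixel is just an accumulation along the ray, so if this step is made differentiable, a neural network can adjust the colors and opacities until the rendered pixels match the input photographs.
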
As you see here, normally this does not work well at all because, for instance, a standard algorithm sees lights in the background and assumes that these are really bright and dense points. That is kind of true, but they are usually not even part of the data we would like to reconstruct. To solve this issue, the authors propose learning
to tell the foreground and background images apart, so they can be separated before we start the reconstruction of the human.
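
As a rough sketch of this idea (my own simplification with hypothetical names, not the paper's exact loss): render the foreground volume, composite the result over a learned background image for that camera, and compare against the real photo, so the network is no longer forced to explain background lights with dense voxels.

```python
import numpy as np

def composited_loss(fg_color, fg_alpha, learned_background, target_image):
    """fg_color (H, W, 3) and fg_alpha (H, W) are the ray-marched foreground
    color and accumulated opacity per pixel; learned_background (H, W, 3) is
    a trainable image for this camera. Wherever the volume is transparent,
    the learned background shows through, so bright background lights no
    longer have to be baked into the reconstructed volume.
    """
    predicted = fg_color + (1.0 - fg_alpha[..., None]) * learned_background
    return np.mean((predicted - target_image) ** 2)  # simple L2 reconstruction loss
```

In practice these arrays would live in an automatic-differentiation framework so that the background image and the volume are optimized jointly; the NumPy version above only spells out the compositing arithmetic.
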
And this is a good research paper, which means that if it contains multiple new techniques, each of them is tested separately to see how much it contributes to the final results. We get the previously seen, dreadful results
without the background separation step. Here are the results with the learned backgrounds: we can still see the lights due to the way the final image is constructed, but the fact that we have so little of this halo effect is really cool. Here you see the results with the true background
data where the background learning step is not present. Note that this is cheating, because this data
is not available for all cameras and backgrounds; however, it is a great way to test the quality
of this learning step. The comparison of the learned method against
this reveals that the two are very close, which is exactly what we are looking for. And finally, the input footage is also shown
for reference. This is ultimately what we are trying to achieve,
and as you see, the output is quite close to it. The final algorithm excels
at reconstructing volume data for toys, smoke plumes, and humans alike. And the coolest part is that it works not only for stationary inputs, but for animations as well. Wait, actually, there is something that is perhaps even cooler: with the magic of neural networks and latent spaces, we can even animate
this data. Here you see an example of that, where an avatar is animated in real time by moving around this magenta dot.
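
Here is a minimal sketch of what driving an avatar through a latent space could look like, with hypothetical names and assuming we already have a trained decoder network that maps a latent code to a renderable volume.

```python
import numpy as np

def animate(decoder, code_a, code_b, num_frames=30):
    """Slide between two latent codes and decode each blend into a volume.

    decoder: trained network mapping a latent vector to (colors, opacities);
    code_a, code_b: latent codes of two captured poses or expressions.
    Because the latent space is smooth, the in-between volumes look like
    plausible intermediate poses rather than cross-fades of two images.
    """
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        code = (1.0 - t) * code_a + t * code_b  # linear blend in latent space
        frames.append(decoder(code))            # render-ready volume per frame
    return frames
```

A 2D control like the magenta dot can work the same way: the control position selects or blends latent codes, and each decoded volume is then rendered with the ray marcher sketched earlier.
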
A limiting factor here is the resolution of this reconstruction: if you look closely, you see that some fine details are missing,
but you know the saying… given the rate of progress in machine learning research, two
more papers down the line, and this will likely be orders of magnitude better. If you feel that you always need to take your
daily dose of papers, my statistics show that many of you are subscribed but haven't clicked
the bell icon. If you click this bell icon, you will never
miss a future episode and can properly engage in your paper addiction. This episode has been supported by Lambda. If you’re a researcher or a startup looking
for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I’ve talked about Lambda’s GPU workstations
in other videos and am happy to tell you that they're offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to
93% accuracy for less than $19! Lambda’s web-based IDE lets you easily access your instance right
in your browser. And finally, hold on to your papers, because
the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com/papers and
sign up for one of their amazing GPU instances today. Our thanks to lambda for helping us make better
videos for you. Thanks for watching and for your generous
support, and I’ll see you next time!

87 thoughts to “This AI Creates A Moving Digital Avatar Of You”

  1. This might be the future of animation. An animator could speak with the computer and tell it what he wants the character to do, and done. Animation without this long and annoying animation process.

  2. Just imagine when this will be used to create your avatar in 3D, or swap it with another face, and while you're at it, swap your voice too. On top of all of this, put some fabulous hair on your head. Take that into your paper and smoke it!

  3. I have a feeling this is the future for the video game industry,
    to finally bridge the gap between films and interactive experiences.

  4. Oh boy! What a time to be alive. Also, it handles black hair because it's generic enough, so people don't have to actually learn the specifics of black hair, 500 in game right there!

  5. Guys, I'd like to hold on to and squeeze my paper(s), but I'm unable to find them on the web .-.
    Where should I look if I wanted to learn about cutting-edge AI and other science stuff?
    Please help me out and provide me with some sources :3

  6. Surely they can then apply style-transferred lighting directly from the game's HDR probe, sync the animation to the game engine (which is AI-driven, from previous papers we've seen) and boom, you'll have the first real-time synthesized avatar of yourself that can be functionally used.
    It would look shit, but damn, things are being pieced together fast.

  7. … Exactly what Facebook wants for Oculus.
    I'm down for awesome VR Avatars but it really annoys me that this wretched company is the one pushing it 🙄

  8. I will request this again, please: a simulated aerodynamic game engine merged with the physics engine.

    I'm really curious to see how well a complete aerodynamic simulation can run in a game if AI is used.

  9. It's astounding to see this kind of development after the slow, gradual progress of 3D rendering over the last 25 years.

  10. You should check out AI Dungeon 2! It is using natural language processing to make a text-based game with no limits on your actions. Some pretty cool stuff there! http://www.aidungeon.io/2019/12/aidungeon2-is-here.html

  11. Wait, have I just seen how the uncanny valley problem got solved? 'Cause these models are more alive than anything I've ever seen.

  12. 4:30 Well, I never click the bell icon, because I don't want to be notified and annoyed every single time something new is uploaded or a youtuber writes a comment saying "thanks for 100k subs!".
    Most of us are like this; we watch videos on our own time, by checking the sub feed.
    Ain't nobody got time to be beeped 150 times a day by all the channels we sub to.

  13. This reminds me of the movie The Congress (2014). In it, there's a studio in which it's possible to preserve yourself in a digital format: they scan you and your expressions, make a model, and place it in a virtual world. It's a great movie, and has Robin Wright both as herself and as a virtual character, which is a big plus for me!

  14. Since I look over my subscriptions first and then see what's recommended if I want, there's no reason to be notified unless it's a channel I want to comment on early or it's a livestream channel. The trending tab is a no-go zone for me.

  15. I never click the bell on any channel I'm subbed to because I check my Subscriptions tab every day. I look at it more than any other tab, actually.

  16. Looking for a Belgian machine learning enthusiast who has made useful applications of ML and would be open to teaching about this technology. If interested, please contact me!

  17. I highly appreciate you placing the ads at the end of your videos. Also, I like how you say "videos" like Pavel Chekov from Star Trek. Please say "Nuclear Vessels" just once. Thank you.

  18. Someone is going to use this technology to create virtual avatars for the next-gen game consoles. Unfortunately, the computing power of millions of consoles will be used to brute force the encryption keys safeguarding the US nuclear arsenal. Then some insane billionaire is going to board Air Force 1 and wipe all drug producing areas off the face of the earth. This is just one possible scenario. Researchers need to be wary of how this technology could be used for evil.

  19. Animals that can kill 100 humans with a single drop of poison. Click the link to watch the video.
    https://youtu.be/L_Y710LplQE
