# This AI Creates A Moving Digital Avatar Of You

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk about research on all kinds of physics simulations, including fluids, collision physics, and we have even ventured into hair simulations. We mostly talk about how the individual hair strands should move and how they should look in terms of color and reflectance. Creating these beautiful videos means getting many moving parts right, and interestingly, the very first step is not any of those simulation steps. First, we have to get the 3D geometry of these
hairstyles into our simulation system. In a previous episode, we saw an excellent work that does this well for human hair. But what if we would like to model not human hair, but something completely different? Well, hold on to your papers, because this
new work is so general that it can look at an input image or video and give us not only a model of the human hair, but also human skin, garments, and, of course, my favorite, smoke plumes, and more. But if you look here, this part raises the following question: the input is an image, and the output also looks like an image, and we need to make them similar. So what's the big deal here? A copying machine can do that, no? Well, not really. Here's why. On the output, we are working with something
that indeed looks like an image, but it is not an image. It is a 3-dimensional cube in which we have to specify color and opacity values everywhere. After that, we simulate rays of light passing through this volume, a technique we call ray marching, and this process has to produce the same 2D image as the one that was given as input. That's much, much harder than building a copying machine.
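To make the ray marching idea more concrete, here is a minimal sketch of front-to-back alpha compositing along a single ray – my own toy illustration in plain Python, not the paper's implementation, which additionally needs this compositing to be differentiable so the volume can be optimized to match the input:

```python
def ray_march(samples):
    """Composite one ray through a volume, front to back.

    samples: a list of ((r, g, b), alpha) pairs encountered along the ray,
             with alpha in [0, 1] giving the opacity of each step.
    Returns the accumulated (r, g, b) color the ray contributes to the image.
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet blocked by earlier samples
    for (r, g, b), alpha in samples:
        weight = transmittance * alpha
        color[0] += weight * r
        color[1] += weight * g
        color[2] += weight * b
        transmittance *= 1.0 - alpha  # everything behind gets occluded more
    return tuple(color)

# A fully opaque red sample hides the green sample behind it:
print(ray_march([((1.0, 0.0, 0.0), 1.0), ((0.0, 1.0, 0.0), 0.5)]))
# → (1.0, 0.0, 0.0)
```

Repeating this for one ray per output pixel yields the rendered 2D image that is compared against the input photograph.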
As you see here, normally this does not work well at all, because, for instance, a standard algorithm sees the lights in the background and assumes that these are really bright, dense points. That is kind of true, but they are usually not even part of the data we would like to reconstruct. To solve this issue, the authors propose learning to tell the foreground and the background apart, so they can be separated before we start the reconstruction of the human.
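In the paper, this separation is learned by a neural network; as a rough stand-in for the idea, here is a toy per-pixel separation against a known background estimate (a hypothetical function and values of my own, only to illustrate why masking out background lights before reconstruction helps):

```python
def separate_foreground(frame, background, threshold=0.1):
    """Toy foreground/background separation for one grayscale frame.

    frame, background: 2D lists of pixel intensities in [0, 1].
    Pixels that differ from the background estimate by more than
    `threshold` are treated as foreground.
    Returns (foreground, mask), where masked-out pixels are zeroed.
    """
    foreground, mask = [], []
    for row_f, row_b in zip(frame, background):
        fg_row, m_row = [], []
        for f, b in zip(row_f, row_b):
            is_fg = abs(f - b) > threshold
            m_row.append(1.0 if is_fg else 0.0)
            fg_row.append(f if is_fg else 0.0)
        foreground.append(fg_row)
        mask.append(m_row)
    return foreground, mask

# A bright background light (top-left pixel differs from the estimate):
frame      = [[0.9, 0.1], [0.1, 0.1]]
background = [[0.1, 0.1], [0.1, 0.1]]
fg, mask = separate_foreground(frame, background)
print(mask)  # → [[1.0, 0.0], [0.0, 0.0]]
```

Only the masked foreground pixels are then handed to the volume reconstruction, so the bright lights never get baked into the 3D cube as dense points.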
And this is a good research paper, which means that if it contains multiple new techniques, each of them is tested separately to see how much it contributes to the final results. Without the background separation step, we get the previously seen, dreadful results. Here are the results with the learned backgrounds: we can still see the lights due to the way the final image is constructed, but the fact that we have so little of this halo effect is really cool. Here you see the results with the true background
data, where the background learning step is not present. Note that this is cheating, because this data is not available for all cameras and backgrounds; however, it is a great way to test the quality of the learning step. Comparing the learned method against this reference reveals that the two are very close, which is exactly what we are looking for. And finally, the input footage is also shown
for reference. This is ultimately what we are trying to achieve, and as you see, the output is quite close to it. As you see here, the final algorithm excels at reconstructing volume data for toys, smoke plumes, and humans alike. And the coolest part is that it works not only for stationary inputs, but for animations as well. Wait, actually, there is something that is perhaps even cooler: with the magic of neural networks and latent spaces, we can even animate this data. Here you see an example of that, where an avatar is animated in real time by moving this magenta dot around.
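This kind of control works because each recorded state is encoded as a latent vector, and nearby vectors decode to plausible in-between states. A minimal sketch of steering such a code with a draggable control point (toy latent values of my own, not the paper's encoder):

```python
def lerp(a, b, t):
    """Linearly interpolate between two latent vectors."""
    return [x + t * (y - x) for x, y in zip(a, b)]

def latent_for_control(dot_x, code_left, code_right):
    """Map the horizontal position of a control dot in [0, 1]
    to a latent code between two keyframe codes; a decoder network
    would then turn this code back into a volume for rendering."""
    return lerp(code_left, code_right, dot_x)

# Two latent codes captured from real frames (hypothetical values):
code_left, code_right = [0.0, 2.0], [4.0, 0.0]
print(latent_for_control(0.5, code_left, code_right))  # → [2.0, 1.0]
```

Dragging the dot continuously sweeps the code through latent space, and decoding each intermediate code produces the smooth real-time animation shown in the video.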
A limiting factor here is the resolution of this reconstruction – if you look closely, you can see that some fine details are missing. But you know the saying: given the rate of progress in machine learning research, two more papers down the line, this will likely be orders of magnitude better. If you feel that you always need to take your
daily dose of papers, my statistics show that many of you are subscribed but haven't clicked the bell icon. If you click this bell icon, you will never miss a future episode and can properly engage in your paper addiction. This episode has been supported by Lambda. If you're a researcher or a startup looking
for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I’ve talked about Lambda’s GPU workstations
in other videos and am happy to tell you that they're offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19! Lambda's web-based IDE lets you easily access your instance right
in your browser. And finally, hold on to your papers, because
the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com/papers and
sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better
videos for you. Thanks for watching and for your generous
support, and I’ll see you next time!

## 87 thoughts to “This AI Creates A Moving Digital Avatar Of You”

1. Donald Trump says:

first…

2. VivaLaPanda says:

Some cool shit

3. Donald Trump says:

I am a geeky president. I see Two Minute Papers video, I click and watch.

4. Dustlotus says:

We are moving towards matrix level technology fast… Simulation hypothesis intensifies

5. Marek says:

The strangest cuddly toy I've ever seen.

6. John theux says:

Is it possible to replace it by a procedural character generator where the AI only adjust the parameters?

7. Nova Verse says:

Oh wow!, Can't wait to play with myself as the hero 😛

No pun intended.

8. DavidGX says:

DON'T TAKE THE CRAZY PILLS! THEY MAKE YOU SANE!

9. pangetmo2323 says:

Wtf

10. Pan Darius Kairos says:

Hold on to your papers!

11. D B L R says:

HOW can I use this? I’d throw my money at this so fast.

12. John theux says:

Ok, I definitively need this for the next vive base station (lighthouse).

13. superninja252 says:

AI is startng to scary me, they are doing stuffs more and more amazing

14. Samer Kadoura says:

Full body 3d scanners will be useless in the future. Great upload as usual.

15. La Astronave del androide JDD says:

This might be the future of animation. An animator can speak with the computer and tell it what do he want to the character do and done. An animation without this long and anoying animation process.

16. Sahil Vasava says:

Most of the time I click on your video just to hear you say your name. 😄

17. Ecci Ecci says:

I dont know if they realize that these kind of solutions are the most important steps for agi

18. IndigoXYZ18 says:

Porn will never be the same…

19. Someone Online says:

https://gizmodo.com/nvidia-taught-an-ai-to-instantly-generate-fully-texture-1840323132 pretty epic

20. Phillip Otey says:

I've always wanted to make something that does this. There are so many applications.

21. J . M says:

Just imagine when this will be used to create your avatar in 3D, or swap it with another face and while you're at it swap your voice also. On top of all of this put some fabulous hair on your head. Take that into you paper and smoke it!

22. Spicy Eggplant Milk says:

Awesome.

23. Sikanda says:

I have a feeling this is the future for the video game industry
To finally bridge the gap between films and interactive experiences.

24. Indonesia says:

Next : This A.I can personaly designate your best way to success just by seeing your eyes.

25. Jody Mitoma's Videos says:

👏😏 This is all fantastic for so much! Thank you, Two Minute Papers!

26. steveman1982 says:

I can write a program that simulates my hair in 0 seconds. In fast, you all can 😀

27. Anonymous says:

Games can now have a fully animated and voiced version of you

28. Neoshaman Fulgurant says:

Oh boy! What a time to be Alive. Also it handle black hair because it's generic enough, so people don't have to actually learn the specific of black hair, 500 in game right there!

29. b0ngo says:

bell-bois reporting in.

30. Ultron Say says:

Finally Dream Generator ready. just need a Neuralink device to merge memory of people we missed.

31. mahchymk93 says:

It's 8:17PM, what a time to be alive!!

32. Max Xu says:

This episode has been supported by Black Mirror. Here’s a cookie.

33. QuestForEnlightenment says:

Guys I‘d like to hold on and squeeze my paper(s), but I‘m unable to find them on the web .-.
Where should I look for if I wanted to learn about cutting edge ai and other science stuff?
Pls help me out and provide me with some sources :3

34. Grant Warwick says:

Surely they can then apply style transferred lighting direct from the games HDR lobe, sync the animation to the game engines (which is AI driven, from previous papers we've seen) and boom, you'll have a first, real time, synthesized avatar of yourself, that can be functionally used.
It would look shit but damn things are being pieced together fast

35. Filip Molin says:

All 3D character modelers are now nervously questioning their career choice.

36. Yukari Hakugi says:

notifications, bell and ads .. . man, I am paying for youtube

37. artman40 says:

Low resolution can already be solved by adding super-resolution algorithms to the mix.

38. Tim Solinski says:

… Exactly what Facebook wants for Oculus.
I'm down for awesome VR Avatars but it really annoys me that this wretched company is the one pushing it 🙄

39. wintdkyo says:

I am going to stick with copy machines

40. Lars Lien says:

Where do you find all the papers? Are there a place where all papers are released?

41. Elinzar says:

i will request this again please: simulated aerodynamic game engine merged along the physics engine

im really curious to see how well a complete aerodynamic simulation can run in a game if AI is used

42. ashish singh says:

Future – we are in simulation !!

43. Noah Hornberger says:

its astounding to see this kind of development after the slow gradual progress of 3d rendering for the last 25 years

44. Nói Kristjánsson says:

Imagine auditioning for a movie, not get the part and then see your face on a ad for a blockbuster

45. Jorge C. M. says:

This… this looks photorealistic, in 3d, and interactive…

HOLY MOTHER OF GOD!

46. Lucas Andres Costa says:

Imagine future VR technology with this kind of real Avatars, amazing

47. Blitztein beta says:

This is one of the best research channels I've seen out there on YouTube

48. Daniel Mayer says:

I want to try this with my own Pictures

49. sbdyson says:

You should check out Aidungeon 2! It is using natural language processing to make a text based game with no limits on your action. Some pretty cool stuff there! http://www.aidungeon.io/2019/12/aidungeon2-is-here.html

50. Allemand Instable says:

me after viewing 10 videos of your channel per day :
W H A T A T I M E T O B E A L I V E

51. thesoundofjesse says:

Why does the thumbnail guy look like Jon "Bones" Jones and Aljamain "Funk Master" Sterling had a baby?

52. MultiNeurons says:

This is really awesome! Would be nice to have never aging actors

53. Abe Dillon says:

Come on, ladies and gentlemen! Get with the program and click on that bell icon!

54. 8SH0CK8 says:

Chiral Network holograms prototype, i see…

55. Ambient Occlusion says:

Wait, have I just seen how uncanny valley problem got solved? 'Cause this models are more alive than anything I ever seen.

56. Sircher says:

Realtime photogrammetry?

57. Russelle John says:

Just imagine people sending nudes of themselves with this and just flirting in real time

58. Nikky U says:

what would u recommend to read if I wanted to reimplemt this

59. Baleur says:

4:30 well, i never click the bell icon, because i dont want to be notified and annoyed every single time something new is uploaded or a youtuber writes a comment saying "thanks for 100k subs!".
Most of us are like this, we watch videos on our own time, by checking the sub feed.
Aint nobody got time to be beeped 150 times a day from all channels we sub to.

60. Miguel Brandão says:

Proud bell subscriber for a while!

61. lightdark00 says:

That thumbnail was so scary I wasn't going to click. Please don't torment us like that again.

62. Fernando Rodrigues dos Santos says:

This reminds me of the movie The Congress(2014), in it there's a studio in which it's possible to preserve yourself in a digital format, they scan you and your expressions and make a model and place it in a virtual world. It's a great movie, an has Robin Wright both as herself and as a virtual character, which is a big plus for me!

63. lightdark00 says:

Since I look over my subscriptions first, then see what's recommended if I want. There's no reason to be notified unless it's a channel I want to comment early or it's a livestream channel. The trending tab is a no-go zone for me.

64. Michael Spence says:

I never click the bell on any channel I'm subbed to because I check my Subscription tab everyday. I look at it more than any other tab actually.

65. XetXetable says:

Why would I want an avatar of myself? I use VR specifically for the purpose of living out my fantasy as an anime cat girl.

66. ᙅYᙖᙓᖇ ᗪOᙅ™ says:

politely head-butted ALL Notifications on the bell when I subbed your chan last week. Love your work

67. Xobotix says:

Dude WOW – i look forward to working together

68. Robbe Demey says:

Looking for a Belgian machine learning enthusiast that has made useful applications of ML and would be open to teach about this technology. Interested, please contact me!

69. Objects in Motion says:

Next up: AI captures your soul and uploads it to the cloud using only 5 seconds of camera input!

70. Ani Balasubramaniam says:

Wow, this is really mindblowing! I'm really impressed with the recent graphics research coming out of Facebook!

71. Cristian Garcia says:

Differentiable ray tracing ("marching") ?????? :O

72. Arashi Mokuzai says:

I disable notifications, So i can enable the bell without it ever notifying me of a new video, FUcc notifications.

73. Tomás Seeber says:

2 minute papers, 3 minute ads…

74. Thomas Synths says:

I can't wait for video games to be animated using this stuff. It already looks better than high end games of today.

75. Matthew Cecil says:

I highly appreciate you placing the ads at the end of your videos. Also, I like how you say "videos" like Pavel Chekov from Star Trek. Please say "Nuclear Vessels" just once. Thank you.

76. Archina Void says:

I don't use the bell icon and yet I never miss a single episode.

77. LordOfTheCats says:

Someone is going to use this technology to create virtual avatars for the next-gen game consoles. Unfortunately, the computing power of millions of consoles will be used to brute force the encryption keys safeguarding the US nuclear arsenal. Then some insane billionaire is going to board Air Force 1 and wipe all drug producing areas off the face of the earth. This is just one possible scenario. Researchers need to be wary of how this technology could be used for evil.

78. Emmett H says:

Imagine using this technology for holograms…

79. avi12 says:

Some subscribers don't need to enable notifications for the channel – but instead visit YouTube on a daily basis

80. Димитър Клатуров says:

It would be cool if there is a GitHub project to every paper so we can test this out ourselves

81. Bread says:

They look so photorealistic

82. Projects Forever says:

This is already good enough for star wars style holograms. Now we just need a magic hologram generating hardware!

83. Leonardo SA says:

I guess we will leap through the uncanny valley

84. Daniel GN says:

Smoothest "click on the bell icon" ever

85. Info Stream says:

Animals that can kill 100 humans with a single drop of venom. Click the link to watch the video.
https://youtu.be/L_Y710LplQE

86. Scott Treppa says:

What needs to happen for us to use these techniques in games? Just a polished API?

87. Henock Tesfaye says:

Ready Player One