Photorealistic Real Time Computer Graphics with Javascript – Ivan Moreno – JSConf EU 2018

It’s a great honour for me to be here on this stage. My name is Ivan Moreno, I’m the co-founder of Plus 360 Degrees, based in Bremen here in Germany. In this talk, you’re going to see the relationship and the similarities between these two images; the correlations between art, science, and engineering and what they have in common during real-time projects; some tips on managing large amounts of data; and the downsides of the distribution landscape, how having so many options can overwhelm us as creatives, and how best to overcome that.

A little bit about myself: I studied fine art, design, and engineering. I did two specialisations, one in digital art and media, and another in digital media informatics. We have been building tools in our studio to help us create our projects and to help with the e-commerce tools of some of our clients. We have been working across different industries, and making some WebGL prototypes for the community.
Coming back to these two images, how do you think they are in any way similar? Any ideas? They both start with black canvases. The machine performs, 60 or 120 times a second, what it took a Renaissance, Mannerist, or Baroque painter three to six years to achieve. I like to frame it this way because it gives me a sense of how far we have come over these centuries. It reminds me that WebGL frameworks use, in fact, much of the mathematics developed since then in perspective and geometry. The vanishing point was used for many years before it was placed within formal mathematical systems. The introduction of the camera obscura and optics brought us further still; images like this were created thanks to the developments in optics at the time. Modelling appeared, inspired by Greek sculpture, and we have been using it in painting, images, and film since the Renaissance.

The art of creating volume by placing light against dark, also called light modelling, chiaroscuro, or simply shading, is where we have to look to understand computer graphics. You take charcoal or oil paint and make a stain on the white canvas. In the works of Da Vinci, Rubens, Velázquez, Rembrandt, and Caravaggio, among other painters, you will find these concepts. They had a deep understanding of the craft and of how to create that imagery: how to mix the colours, how light behaves, and what brushes to use depending on the material they needed to represent. Caravaggio managed to create art on big canvases, something that had not seemed possible with the technology of the time. This doesn’t mean the classical periods stopped developing their art; they continued until the industrial revolution, when photography and film were introduced into society and painters, stuck, transferred their art forms to different mediums. In the 1970s, a group of artists created hyper-realism to resist the mainstream of photography in the arts, and GPUs do precisely that with maths and physics.

Knowing what brush or paint texture to pick and use in your art will define your final imagery and your audience. But how do you do all this? A little explanation of how the work is produced might be relevant here.
There are two sides you need to take care of: the geometry buffer and the fragment buffer, or pixel buffer. As the names suggest, one handles the geometry and the other handles the fragments. Both are interconnected by uniforms and attributes that move data between them, and you can transform that data every single frame, in virtual space and in virtual time. The amount of data that is loaded into these buffers charges against your performance budget. You will have to be careful how you create your assets and display them in your application, keeping a fine balance between the two sides; otherwise the application risks getting blocked by the browser, or running into security or user-experience issues.
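To make that concrete, here is a minimal sketch, not from the talk, of the two sides in raw WebGL: a vertex shader for the geometry side and a fragment shader for the pixel side, connected by an attribute and two uniforms. The canvas id is my own assumption.

    // Minimal WebGL setup; assumes a <canvas id="scene"> element on the page.
    const gl = document.getElementById('scene').getContext('webgl');

    const vertexSrc = `
      attribute vec3 aPosition;   // per-vertex data: the geometry side
      uniform mat4 uModelView;    // uniform: moves the object in virtual space
      void main() {
        gl_Position = uModelView * vec4(aPosition, 1.0);
      }`;

    const fragmentSrc = `
      precision mediump float;
      uniform float uTime;        // uniform: virtual time, updated every frame
      void main() {
        gl_FragColor = vec4(vec3(abs(sin(uTime))), 1.0); // shade the pixel
      }`;

    function compile(type, src) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, src);
      gl.compileShader(shader);
      return shader;
    }

    const program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
    gl.linkProgram(program);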
How do we create those assets to achieve that balance in your application? First, I’m going to explain 3D modelling and capturing. There are multiple methods of creation, but one of the most popular is to surround your subject with multiple cameras, capture images of that object, then extract all the data points and put them into your model. This can be done with a hexagonal surround; complex environments, for racing games on video game consoles for example, are done this way, while cylindrical rigs are used for human bodies or more complex shapes. Another way is to laser-scan the object. This method is great for reverse engineering certain objects, and it’s used for the digitisation of cars or other objects from industrial production. The most classical way to create objects is volumetric modelling from blueprints. It produces less noise to correct, but it takes a lot of time to optimise, so it’s not cost-effective compared with the other two methods.
Once you have all that, you’re ready to remove the excess by running the 3D model through clean-up tools; some of it manually, where your team is required to rearrange misplaced geometry and place the normal vectors perpendicular to the triangles in order to have accurate lighting and reflections on the object.
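As an illustration of that last step, not from the talk itself, here is how a face normal can be computed in JavaScript: the cross product of two triangle edges, normalised, gives the vector perpendicular to the triangle that the lighting calculations rely on.

    // Compute the unit normal of a triangle from its three [x, y, z] vertices.
    function faceNormal(a, b, c) {
      const e1 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]]; // edge a -> b
      const e2 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]]; // edge a -> c
      const n = [
        e1[1] * e2[2] - e1[2] * e2[1],                    // cross product
        e1[2] * e2[0] - e1[0] * e2[2],
        e1[0] * e2[1] - e1[1] * e2[0],
      ];
      const len = Math.hypot(n[0], n[1], n[2]);
      return [n[0] / len, n[1] / len, n[2] / len];        // normalise
    }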
The creation of textures is also important; methods like photographic analysis are used here as well. You take a photograph of the texture that you want to replicate, bring it into an image-processing tool like Photoshop or something similar, adjust the values of the pixels, and prepare it so that it tiles and scales well. Once you have that texture, you can extract some information from its channels and then create other textures from it, like normal maps and so on. The way you arrange these textures in your program is going to transform the visual output, so you have to be careful how you arrange them; play with those textures and adjust them, and you will see how the final material is represented on your display.
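As a small illustration of pulling information out of a texture’s channels, not from the talk, here is a sketch using the 2D canvas API; the resulting grayscale data could feed the generation of other maps.

    // Extract one colour channel (0 = R, 1 = G, 2 = B) from a loaded image.
    function extractChannel(image, channel) {
      const canvas = document.createElement('canvas');
      canvas.width = image.width;
      canvas.height = image.height;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(image, 0, 0);
      const rgba = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
      const out = new Uint8Array(rgba.length / 4);
      for (let i = 0; i < out.length; i++) {
        out[i] = rgba[i * 4 + channel];  // keep only the chosen channel
      }
      return out;
    }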
For that, it’s necessary to have some knowledge of shader programming and of 3D renderers, whichever one you use. It’s important. Realistic 3D scenes are large applications, so you have to treat them like big applications in JavaScript. They require a lot of data, so the loading process can be long. There have been many presentations at this conference already where you can find information on how to build large applications. Generally speaking, the advice is the same: reduce the server calls when loading scripts and assets as much as possible; load scripts and styles first, then load textures. I like to initialise the fragment shaders right after I finish loading the textures, and create the materials; right after that, I load the geometry data, initialise the vertex shaders, and bind those geometries to the fragment shaders created previously. I like to follow this path because the declaration of both buffers is very heavy for the machine, so you can crash the application, and your experience can get broken at that point. Once all this is over, the rest of the journey is going to be very smooth.
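Here is a minimal sketch of that loading order; the asset URLs and helper functions are my own illustration, not from the talk.

    // Load order sketch: scripts/styles are assumed already loaded by the page.
    async function loadScene(gl) {
      // 1. Textures first, so fragment shaders and materials can be set up.
      const textures = await Promise.all(
        ['albedo.jpg', 'normal.jpg'].map((url) => loadTexture(gl, url))
      );
      const materials = createMaterials(gl, textures); // fragment shaders here

      // 2. Geometry afterwards: declaring both buffers at once is heavy and
      //    can break the experience, so the heavy steps are kept apart.
      const buffer = await fetch('model.bin').then((r) => r.arrayBuffer());
      return bindGeometry(gl, buffer, materials);      // vertex shaders here
    }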
Creating lighting is important, because it adds volume to the objects in the scene, and you need a lighting model that is going to be uniform across all the materials. As smart as it is, it’s not as simple as dragging and dropping and having AAA graphics out of the box. It does give you some advantages, but you have to play with the values of your materials until you find the right balance. You will see that with some materials, when the light gets too close, the material becomes over-illuminated and burns out, while other materials fall back and end up looking very flat. So play with those values, and you’re going to find your balance in there.
Another good thing about PBR is that it builds on earlier models like Lambert illumination; those illumination models give you visual characteristics to play with when your textures require them. Illumination precomputed into the textures is known as baked illumination. Usually we use that in level design, where the camera is not going to get too close to the environment or to the area being displayed, and it’s a great way to save resources that you can spend on other parts of your application, like the objects that need to look realistic.
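For reference, a minimal sketch of Lambert illumination in a fragment shader, written as a GLSL string for WebGL; the uniform and varying names are my own illustration, not from the talk.

    const lambertFrag = `
      precision mediump float;
      uniform vec3 uLightDir;   // direction towards the light, normalised
      uniform vec3 uAlbedo;     // base colour of the material
      varying vec3 vNormal;     // surface normal interpolated per fragment
      void main() {
        // Lambert's law: brightness follows the cosine of the angle between
        // the surface normal and the light direction.
        float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);
        gl_FragColor = vec4(uAlbedo * diffuse, 1.0);
      }`;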
In configurators, the lighting model is fixed, and it’s a little easier than walk-throughs in game and level design, for example, because the light is fixed and you can move your camera around the object all the time, so there’s no need to move the lights whenever the camera moves. Some engines bake both the geometry and the lights, others bake just the geometry, so depending on that, you choose which illumination to use. My recommendation is to target a minimum of three lights and a maximum of six lights, including its own light if you’re building … . Balance this with the material properties, and that will give you more performance headroom for the geometries and textures.
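A minimal sketch of how that cap can look in a shader, my own illustration rather than the talk’s code: a fixed-size uniform array keeps the fragment cost predictable.

    const lightingChunk = `
      #define MAX_LIGHTS 6
      uniform int uLightCount;                 // three to six in practice
      uniform vec3 uLightDirs[MAX_LIGHTS];     // normalised light directions
      uniform vec3 uLightColors[MAX_LIGHTS];
      vec3 accumulateLights(vec3 normal, vec3 albedo) {
        vec3 result = vec3(0.0);
        for (int i = 0; i < MAX_LIGHTS; i++) {
          if (i < uLightCount) {               // respect the runtime count
            float d = max(dot(normal, uLightDirs[i]), 0.0);
            result += albedo * uLightColors[i] * d;
          }
        }
        return result;
      }`;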
Post-processing is important. It’s the second layer of the pipeline, and it’s expensive for the machine, but it gives you another advantage, because you can have a layer of colour correction or illumination and adjust the pixels. At this point you can transform your image with fragment shaders, since it has already been rasterised by the renderer. It’s one single image, and you can apply any model to it, like sampling neighbouring pixels; you can transform it to make it brighter, shift things around, and create all the effects that developers like to do. Play with those values until you find your final imagery and you’re happy with the visual look you have.
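A minimal sketch of such a pass, my own illustration: the rendered frame arrives as a single texture, and the shader adjusts every pixel.

    const postFrag = `
      precision mediump float;
      uniform sampler2D uFrame;    // the rasterised image from the renderer
      uniform float uBrightness;   // simple colour-correction parameter
      varying vec2 vUv;            // screen-space texture coordinate
      void main() {
        vec4 color = texture2D(uFrame, vUv);
        gl_FragColor = vec4(color.rgb * uBrightness, color.a);
      }`;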
Now, the diversification of hardware has brought us great opportunities but also a kind of stillness, because finding the next path to take is hard. We are expected to deliver experiences through many mediums, and we’re a little confused about how to move around in the development process. There is an order of development that needs to be addressed. I recommend, as a first step, targeting mobile and desktop and, based on that, moving to other mediums. I’m going to speak specifically about augmented reality on mobile devices, because it’s what might come faster and reach the biggest audience. The reason I recommend this order is that if the application doesn’t perform well with your basic configuration, it’s not going to perform or look the same in your augmented reality application either.
All my previous comments about modelling, creating textures, and how to arrange your application are useful for all these mediums. Virtual reality is not as easy as just dividing the screen for each eye. That comes with a tax, and you have to weigh against your application’s requirements how high that tax can be, and make some compromises.
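To show where that tax comes from, here is a minimal sketch, my own illustration, of the naive screen-split: the scene is drawn twice per frame, once per eye.

    // drawScene, leftCamera, and rightCamera are assumed helpers.
    function renderStereo(gl, drawScene, leftCamera, rightCamera) {
      const w = gl.drawingBufferWidth / 2;
      const h = gl.drawingBufferHeight;

      gl.viewport(0, 0, w, h);   // left half of the canvas
      drawScene(leftCamera);

      gl.viewport(w, 0, w, h);   // right half, slightly offset camera
      drawScene(rightCamera);    // everything renders a second time
    }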
Mirrors, for example, are another complex part of virtual reality and augmented reality, because you have to have many cameras rendering, and the hardware is not going to support that. All the 3D rendering frameworks that exist come with controls for easy integration, or you can program those controls yourself easily enough. In the case of AR, reflections can be hard as well, because we can’t have access to all the dimensions of the environment once the user is at some specific geographical position. You don’t have access to that, so most of these applications have their reflections pre-baked into textures, and the visuals are not going to look as great as if the application had been developed for an indoor environment.
We cannot access the illumination that is happening in outdoor environments, whether it’s cloudy or not, or where the sun is at a specific time, and that affects the final output. That is something that can take some time to reach high fidelity, unlike indoor development, where you have control of the illumination and of how the environment looks, so the application looks right within it. This approach works, for example, with advertising billboards, artworks in art galleries, or dealership presentations in dealership stores, retail stores, and so on.
Be careful with the amount of data you’re going to use in your augmented reality application, because you have to allocate memory for the camera rendering, which is going to be in high definition, so there is some cost there as well; that memory you cannot use for your 3D data. On top of that, the geographical position tracking system will take some memory, and from that system we get the ground plane on which we are going to anchor the development of the scene.
As conclusions: chiaroscuro is deeply embedded in computer graphics, and art influences the colour and typography of our interfaces. … that opens space for more data to be computed, and more space for data to be computed means more room to improve your graphics. The way to learn is by getting information from other disciplines, whether through books, documentaries, presentations, scientific papers, et cetera, and applying those concepts with trial-and-error experimentation. Target mobile first, desktop after, and other mediums later.

What is next for photorealism? We’re following closely what video game consoles can do. We expect better ray-tracing methods and more accurate illumination models in the future; more virtual reality and augmented reality is happening, and it’s expected to increase. Use WebGL or hardware-accelerated APIs for 2D and 3D rendering as much as possible, because with that power at your disposal you can render or analyse more data. Don’t be afraid of it. Dig into it, and use that power to extend your design and artistic abilities.

I hope I brought some useful information for you today and cleared up some questions you might have. This is a long topic and hard to fit into a 25-minute talk. If you want to keep talking, drop me a line at my email address, or write to me on Twitter. We can talk about this, or about something I call net hyper-realism. Thank you so much for your attention, and for being here after the great party last night! [Applause]
