NVIDIA placed its bets on AI very early and is now reaping the benefits, as evidenced by its incredible growth over the past year. On the gaming side, it all began with Deep Learning Super Sampling (DLSS), a technique that accelerates game performance with the power of AI (specifically, a trained neural network). That's when NVIDIA started putting Tensor Cores in every GeForce graphics card from the RTX series onward; with the advent of real-time ray tracing, there was a strong need to recover as much performance as possible.
Over time, NVIDIA evolved DLSS. Version 2.0 delivered much higher image quality while maintaining its role as a performance accelerator; version 3.0 added Frame Generation, which unlocked new levels of performance, especially in CPU-bound games; and version 3.5 improved the quality of ray tracing under upscaling with the new Ray Reconstruction feature, which just debuted in Cyberpunk 2077 to widespread acclaim.
In the final segment of the recent 'AI Visuals' roundtable hosted by Digital Foundry, NVIDIA's VP of Applied Deep Learning Research, Bryan Catanzaro, said he believes a future release of DLSS, perhaps version 10, could handle every aspect of rendering in a fully neural, AI-based system.
Back in 2018 at the NeurIPS conference, we actually put together a really cool demo of a world that was being rendered by a neural network, like, completely, but it was being driven by a game engine. So, basically, what we were doing was using the game engine to generate information about where things are and then using that as an input to a neural network that would do all the rendering, so it was responsible for basically every part of the rendering process. Just getting that thing to run in real time in 2018 was kind of a visionary thing. The image quality we got from it certainly wasn't anything close to Cyberpunk 2077, but I think long term this is where the graphics industry is headed. We're going to be using generative AI more and more in the graphics process. Again, the reason for that is going to be the same as it is for every other application of AI: we're able to learn much more complicated functions by looking at huge data sets than we can by manually constructing algorithms bottom up.
I think we're going to have increased realism and also, hopefully, make it cheaper to make awesome AAA environments by moving to much, much more neural rendering. I think that's going to be a gradual process. The thing about the traditional 3D pipeline and game engines is that they're controllable: you can have teams of artists build things, and they have coherent stories, locations, everything. You can actually build a world with these tools.
We're going to need those tools for sure. I do not believe that AI is gonna build games in a way where you just write a paragraph about making a cyberpunk game and then, pop, out comes something as good as Cyberpunk 2077. I do think that, let's say, DLSS 10 in the far future is going to be a completely neural rendering system that interfaces with a game engine in different ways, and because of that, it's going to be more immersive and more beautiful.
Catanzaro is referring to the 'driving game' demo first showcased at the NeurIPS conference in Montreal, Canada, in December 2018. Needless to say, the quality wasn't great, but AI is capable of major improvements in a relatively brief amount of time.
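The pipeline Catanzaro describes, a game engine producing per-pixel scene information that a neural network turns into the final image, can be illustrated with a toy sketch. Everything below (the buffer layout, the tiny untrained network, the feature sizes) is a simplified assumption for illustration, not NVIDIA's actual demo architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical engine output for a 4x4 image: a per-pixel semantic
# class (one-hot) plus a depth value, i.e. "information about where
# things are" rather than shaded colors.
H, W, NUM_CLASSES = 4, 4, 8
semantics = np.eye(NUM_CLASSES)[rng.integers(0, NUM_CLASSES, size=(H, W))]
depth = rng.random((H, W, 1))
gbuffer = np.concatenate([semantics, depth], axis=-1)  # shape (4, 4, 9)

# A tiny, untrained per-pixel MLP standing in for the neural renderer.
W1 = rng.standard_normal((NUM_CLASSES + 1, 16)) * 0.1
W2 = rng.standard_normal((16, 3)) * 0.1

def neural_render(gbuf):
    """Map engine-supplied per-pixel features to RGB colors."""
    hidden = np.maximum(gbuf @ W1, 0.0)          # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid keeps RGB in [0, 1]

frame = neural_render(gbuffer)
print(frame.shape)  # (4, 4, 3): an RGB frame synthesized from engine data
```

In the real demo, the network was trained on driving footage so that these per-pixel inputs produced plausible imagery; the point of the sketch is only the division of labor, with the engine handling scene state and the network handling every step of image synthesis.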
It's not far-fetched at all to imagine that, in around ten years, DLSS could be capable of entirely replacing traditional rendering methods. NVIDIA is already working on more neural techniques, such as radiance caching and texture compression, which could be added to the DLSS suite as it expands to replace additional parts of the rendering process. If that turns out to be the direction, though, NVIDIA might have to greatly increase the number of Tensor Cores available in its GPUs.
We'll keep a close eye on new research papers, as they are the best indication of what's to come from NVIDIA in the field of neural rendering.