The Unreal Engine 5 demo footage released this afternoon by Epic Games is going to hog the attention for quite some time. Not only is it a demonstration of truly impressive next-gen visuals and the first footage confirmed to be running on Sony's PlayStation 5, but it also showcases some genuinely new rendering technology.
Luckily, Epic didn't just drop the demo and run off into hiding. Some of the company's key engineers, alongside founder Tim Sweeney, sat down with Eurogamer's Digital Foundry to discuss what was shown in great detail.
Let's begin with the question that will be on most of our readers' minds: what was the rendering resolution of the Unreal Engine 5 demo on PS5? According to Vice President of Engineering Nick Penwarden, it was 1440p 'most of the time'.
Interestingly, it does work very well with our dynamic resolution technique as well. So, when GPU load gets high we can lower the screen resolution a bit, and then we can adapt to that. In the Unreal Engine 5 demo we actually did use dynamic resolution, although it ends up rendering at about 1440p most of the time.
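Penwarden's description maps onto a feedback loop that many engines use. As a rough, hypothetical sketch (not Epic's actual implementation), a controller can nudge the render height toward a GPU frame-time budget:

```python
# Hypothetical dynamic-resolution controller (illustrative only, not Epic's code).
# Assumption: the engine reports GPU frame time each frame; we scale the render
# height to chase a target frame time, clamped between a floor and 1440p.

TARGET_MS = 16.7      # 60 fps budget (assumed)
NATIVE_H = 1440       # height cited in the interview
MIN_H = 1080          # assumed lower bound

def next_resolution(current_h, gpu_ms):
    # GPU cost scales roughly with pixel count, i.e. with height squared at a
    # fixed aspect ratio, so adjust height by the square root of the time
    # ratio, and damp the change to avoid visible oscillation.
    ideal_h = current_h * (TARGET_MS / gpu_ms) ** 0.5
    new_h = 0.8 * current_h + 0.2 * ideal_h
    return int(max(MIN_H, min(NATIVE_H, new_h)))

h = 1440
for gpu_ms in (15.0, 18.5, 20.0, 16.0, 14.0):   # sample frame times
    h = next_resolution(h, gpu_ms)
    print(h)
```

When GPU load spikes, the height drifts down; when headroom returns, it climbs back toward 1440p, which matches the 'most of the time' phrasing above.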
Well, PC Gamer received word from Epic's Chief Technical Officer that even an RTX 2070 Super could run the Unreal Engine 5 demo at 'pretty good performance'. Technically, NVIDIA's graphics card even sports an inferior nominal TFLOPS value compared to the PS5 (9 vs 10.28), so that's great news.
Tim Sweeney expanded on that in the following statement to Digital Foundry, pointing to PC SSDs being able to deliver 'awesome' performance, too (while HDDs are probably going the way of the dodo rather quickly).
A number of different components are required to render this level of detail, right? One is the GPU performance and GPU architecture to draw an incredible amount of geometry that you're talking about - a very large number of teraflops being required for this. The other is the ability to load and stream it efficiently. One of the big efforts that's been done and is ongoing in Unreal Engine 5 now is optimising for next generation storage to make loading faster by multiples of current performance. Not just a little bit faster but a lot faster, so that you can bring in this geometry and display it, despite it not all fitting in memory, you know, taking advantage of next generation SSD architectures and everything else... Sony is pioneering here with the PlayStation 5 architecture. It's got a God-tier storage system which is pretty far ahead of PCs. On a high-end PC with an SSD and especially with NVMe, you get awesome performance too.
In fact, Sweeney confirmed the key features will be available across all next-generation platforms. These are micro polygon geometry powered by the Nanite technology and real-time GI powered by Lumen.
You know, the philosophy behind it goes back to the 1980s with the idea of REYES: Render Everything Your Eye Sees. It's a funny acronym which means that given essentially infinite detail available, it's the engine's job to determine exactly what pixels need to be drawn in order to display it. It doesn't mean drawing all 10 billion polygons every frame because some of them are much, much smaller than the pixel. It means being able to render an approximation of it which misses none of the detail that you're able to perceive, and once you get to that point, you're done with geometry. There's nothing more you can do. And if you rendered more polygons, you wouldn't notice it because they just contribute infinitesimally to each pixel on the screen.
Over the course of the stunning Unreal Engine 5 demo, 'hundreds of billions of polygons' were displayed according to Epic. What's the secret to the Nanite technology, though? Here's again Nick Penwarden with a brief, yet strikingly clear explanation.
I suppose the secret is that what Nanite aims to do is render effectively one triangle per pixel, so once you get down to that level of detail, the sort of ongoing changes in LOD are imperceptible.
Essentially, game developers will be able to import incredibly high-quality assets, such as film-quality source art, into Unreal Engine 5 and then Nanite will take care of streaming and scaling everything in real time with no perceptible quality loss. It gets even better, as studios won't have to manually tweak the LOD (Level of Detail), manage polygon/draw count budgets or anything like that, thus saving valuable development time. The same assets won't even have to be remade if the game is to be ported down to current-generation platforms - everything will be managed by the engine.
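To make the 'one triangle per pixel' idea concrete, here is a toy illustration - emphatically not Nanite's real algorithm, which streams and rasterises clusters of geometry - of picking a level of detail so that an object's triangle count roughly matches the pixels it covers (the 33-million-triangle source asset and screen coverage below are just example figures):

```python
# Toy "one triangle per pixel" LOD picker (illustrative, not Nanite's algorithm).
# Assumption: each LOD level halves the triangle count, with level 0 = full detail.

import math

def pick_lod(source_triangles, projected_pixels):
    # Rendering finer than one triangle per pixel adds nothing perceptible,
    # so choose the coarsest level with triangles <= pixels covered.
    if source_triangles <= projected_pixels:
        return 0
    return math.ceil(math.log2(source_triangles / projected_pixels))

# Example: a 33-million-triangle sculpt covering about 500,000 pixels on screen.
level = pick_lod(33_000_000, 500_000)
print(level, 33_000_000 // 2 ** level)
```

The point of the quote above is that because the residual error at this density is sub-pixel, swapping between such levels as the camera moves is invisible - which is why artists no longer need to author LODs by hand.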
Epic CTO Kim Libreri explained that the secret here is really in the temporal accumulation component.
Temporal accumulation, you know - more than just normal temporal anti-aliasing - it's a huge part of how we're able to make things look as good as this. The global illumination, without a temporal intelligence, there's no way you could do it on hardware yet. We're actually doubling down on the understanding of how temporal can help us, and there's been so many huge improvements in quality because of having a temporal component. It's the way that we get close to movie rendering - without those samples (and they're not just necessarily pure screen-space samples, there's loads of things you can do to temporally accumulate), the GI would not work anywhere near as well as it does.
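Stripped of the reprojection, motion vectors and history rejection a production pipeline needs, temporal accumulation at its core is just blending noisy per-frame samples into a running history buffer so the estimate converges over time. A minimal, hypothetical sketch:

```python
# Minimal temporal accumulation sketch (a deliberate simplification; real
# engines also reproject history with motion vectors and reject stale data).
# One noisy lighting sample per frame converges toward the true value.

import random

def accumulate(history, sample, alpha=0.1):
    # Blend a small fraction of the new noisy sample into the history.
    return history + alpha * (sample - history)

random.seed(0)
true_radiance = 0.6
estimate = 0.0
for _ in range(200):
    noisy = true_radiance + random.uniform(-0.3, 0.3)  # one sample this frame
    estimate = accumulate(estimate, noisy)
print(round(estimate, 2))
```

This is why Libreri stresses the temporal component: a single frame's worth of GI samples is far too noisy, but amortised across frames the result approaches an offline-quality answer.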
Interestingly, while Epic did confirm to Digital Foundry that there will be hardware ray tracing support in the engine, it wasn't featured in any way in this debut Unreal Engine 5 demo.
Even without ray tracing, this certainly gave us a taste of next-generation graphics, though the application of these techniques in actual games may still be some time off. Nevertheless, we'll keep following and reporting on all the latest in graphics technology until then - stay tuned.