Stephan Richter, Hassan Abu AlHaija, and Vladlen Koltun of Intel Labs have written a paper outlining their new G-buffer encoder, which takes photorealism in games to a whole new level.
If you grew up with old computer games, you may feel nostalgia for them, but you knew your brain was filling in a lot of gaps. Programmers felt the same way, and even in the late 1970s the quest for photorealism was on. One obvious approach, given unlimited storage and bandwidth, would be to capture footage of every possible situation, but when a game has 60 hours of content, and even more with variations, that isn't realistic. Even in games with generated content like "The Division 2" you quickly notice you are hearing the same canned speech over and over. And that is just audio. Graphics can get tiresome too, but that has always been built into games.
Imagine an open-world game like Skyrim that looked like the real world.
Which one looks better, the original rendered frame or the enhanced version, is a matter of personal taste. Some older people swear film stock and vinyl records are better than digital, just as some wine experts claim they can identify the best wines, but that is truly subjective. Which one looks more realistic? That is clear.
The new technique isn't perfect, but it also isn't strictly academic. For their proof-of-concept they basically used AI/machine learning/neural networks (heck, throw in Internet of Things if you are reading this from 2015): a network takes the game's rendered frames, along with the intermediate G-buffer data the engine already produces, and reworks them to look more like real photographs.
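To make that a little more concrete, here is a minimal sketch of the general idea in PyTorch. This is not the authors' actual architecture, which is considerably more sophisticated; the class names, layer sizes, and channel counts below are made up for illustration. A small encoder turns the G-buffer channels into features, and an enhancement network uses those features plus the rendered frame to predict a gently adjusted image.

```python
# Illustrative sketch only -- not the architecture from the paper.
import torch
import torch.nn as nn

class GBufferEncoder(nn.Module):
    """Encodes stacked G-buffer channels (e.g. depth, normals, albedo) into features."""
    def __init__(self, gbuffer_channels: int = 10, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(gbuffer_channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, gbuffers: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffers)

class EnhancementNet(nn.Module):
    """Predicts a residual that is added to the rendered RGB frame."""
    def __init__(self, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, 3, 3, padding=1),
        )

    def forward(self, frame: torch.Tensor, gbuf_feats: torch.Tensor) -> torch.Tensor:
        # Concatenate the rendered frame with the G-buffer features and
        # predict a small correction rather than a whole new image.
        residual = self.net(torch.cat([frame, gbuf_feats], dim=1))
        return (frame + residual).clamp(0.0, 1.0)

# Usage: one 256x256 rendered frame plus 10 G-buffer channels (dummy data).
frame = torch.rand(1, 3, 256, 256)
gbuffers = torch.rand(1, 10, 256, 256)
enhanced = EnhancementNet()(frame, GBufferEncoder()(gbuffers))
print(enhanced.shape)  # torch.Size([1, 3, 256, 256])
```

Predicting a residual on top of the original frame, rather than generating an image from scratch, is one reason results like these still look like the same game, just more believable.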
They didn't invent HRNet or semantic segmentation, but they sure made them cooler. And it is a big step closer to what we were imagining back when we were playing Pong.