At Facebook Reality Labs, we are building a future where real and virtual worlds can mix freely, making our daily lives easier, more productive, and better connected. Power consumption is one of the challenges in getting to that future. To reach augmented reality and virtual reality (AR/VR) devices comfortable enough to wear for as long as you want, up to and including all day, VR headsets must become far more power-efficient, and AR glasses must consume less power still. As part of building AR/VR systems that get to that next level, we're developing graphics systems that dramatically decrease power consumption without compromising image quality.
DeepFovea is one of several neural network-based approaches we've developed to meet this challenge. It is a rendering system that applies generative adversarial networks (GANs), a recently invented AI technique, to mimic the way our peripheral vision perceives the world, leveraging that perceptually matched architecture to deliver unprecedented graphics efficiency. DeepFovea's neural rendering goes well beyond the traditional foveated rendering used in Oculus products today: it generates images that are perceptually indistinguishable from full-resolution images while requiring fewer than 10 percent of the pixels to be rendered. Existing foveated rendering methods still render about half as many pixels as the full-resolution image, so DeepFovea's order-of-magnitude reduction represents a new milestone in perceptual rendering.
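DeepFovea's actual pipeline reconstructs full video from a sparse rendering with a trained GAN; the sampling side of the idea, rendering densely at the gaze point and increasingly sparsely toward the periphery, can be sketched with a toy density map. Everything below (function name, falloff constants, the exponential decay itself) is illustrative, not taken from the paper:

```python
import numpy as np

def foveated_sample_mask(height, width, gaze, fovea_radius=0.1,
                         min_density=0.02, seed=0):
    """Toy sparse sampling mask whose density falls off with
    eccentricity (distance from the gaze point).

    gaze: (row, col) in normalized [0, 1] coordinates.
    Inside `fovea_radius` every pixel is rendered; outside it, the
    sampling probability decays exponentially toward `min_density`.
    """
    rng = np.random.default_rng(seed)
    rows = np.linspace(0.0, 1.0, height)[:, None]
    cols = np.linspace(0.0, 1.0, width)[None, :]
    ecc = np.hypot(rows - gaze[0], cols - gaze[1])  # eccentricity per pixel
    density = np.where(
        ecc <= fovea_radius,
        1.0,  # full resolution in the fovea
        np.maximum(min_density, np.exp(-(ecc - fovea_radius) * 8.0)),
    )
    # True = render this pixel; False = leave it for reconstruction.
    return rng.random((height, width)) < density

mask = foveated_sample_mask(400, 400, gaze=(0.5, 0.5))
print(f"fraction of pixels rendered: {mask.mean():.2%}")
```

With these made-up constants, only a small fraction of the frame is rendered; in the real system, a neural network trained adversarially then in-paints the unsampled pixels so the result looks full-resolution to peripheral vision.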
We first presented the DeepFovea method at SIGGRAPH Asia in November 2019. Today, we are publishing the complete demo to our DeepFovea repository to help the graphics research community deepen its exploration into state-of-the-art perceptual rendering.