Reality Labs

Inside Facebook Reality Labs: Research updates and the future of social connection

September 25, 2019

TL;DR: Previously in our “Inside Facebook Reality Labs” series, we explored groundbreaking research on Codec Avatars, ultra-realistic reconstructions of real-world spaces, and the future of social presence. Today, we explore new technological innovations from FRL and how we believe they’ll reinvent the way we connect and interact with each other in virtual reality — and beyond.

Virtual reality (VR) today lets us explore digitally created worlds and interact with friends, while augmented reality (AR) lets us create and share new experiences, primarily through our mobile phones. Facebook Reality Labs (FRL) researchers are pushing the state of the art forward in both VR and AR. This work will ultimately help us seamlessly overlay virtual objects on top of the real world and enhance our experience of daily life. We believe that future will be delivered in the form of lightweight, all-day wearable AR glasses.

Today at Oculus Connect 6, our annual conference bringing together the VR and AR community, we revealed the next wave of innovations from FRL, from mapping technologies to incredibly accurate full-body virtual avatars. These advancements from FRL bring the possibility of that AR-powered future one step closer to reality.

Enabling true social presence in VR

To start, we’re working to enable an even deeper sense of connection in VR than today’s 2D technologies provide. Our goal is to make virtual interactions feel as natural as in-person interactions. We call this “social presence.” It’s the 3D-enabled feeling that you’re physically sharing the same space with someone else, even though you may be miles apart — and that you can communicate your ideas and emotions seamlessly and effortlessly. To accomplish that in VR, you need lifelike avatars — virtual stand-ins that faithfully reproduce your facial expressions, gestures, and voice.

Introducing full-body Codec Avatars

Earlier this year, we shared our research on Codec Avatars — incredibly lifelike digital representations of the heads of real people. These avatars can be animated in real time, opening the way for effortless, live, unscripted interaction in VR. It’s a first: two people can communicate via their photorealistic avatars in VR as naturally as they would if they were in the same physical room.

In this video, sensors in Technical Program Manager Danielle Belko’s and Research Science Director Yaser Sheikh’s headsets measure their facial expressions. These measurements are then translated in real time into audio and visual signals that each person perceives as a picture-perfect representation of the other’s likeness, movements, and voice.
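
For readers curious about the mechanics, here is a minimal sketch of the encode, transmit, decode loop that gives Codec Avatars their name: the sender’s headset compresses sensor measurements into a compact code, only that code crosses the network, and the receiver’s device decodes it to drive the avatar. The dimensions and the tiny linear stand-in “networks” below are illustrative assumptions, not FRL’s actual models.

```python
# Minimal sketch of the encode -> transmit -> decode loop behind Codec Avatars.
# The tiny linear "networks" and dimensions here are illustrative assumptions,
# not FRL's actual models.
import numpy as np

rng = np.random.default_rng(0)

SENSOR_DIM = 512   # e.g. features extracted from headset-mounted cameras
CODE_DIM = 32      # compact expression code sent over the network
AVATAR_DIM = 4096  # e.g. parameters driving the rendered avatar

# Stand-ins for a learned encoder (runs on the sender's headset)
# and decoder (runs on the receiver's device).
W_enc = rng.standard_normal((CODE_DIM, SENSOR_DIM)) * 0.01
W_dec = rng.standard_normal((AVATAR_DIM, CODE_DIM)) * 0.01

def encode(sensor_features: np.ndarray) -> np.ndarray:
    """Compress per-frame headset sensor measurements into a small code."""
    return W_enc @ sensor_features

def decode(code: np.ndarray) -> np.ndarray:
    """Expand the received code into parameters that animate the avatar."""
    return W_dec @ code

# One simulated frame: only the 32-number code crosses the network,
# not raw sensor data or rendered pixels.
frame = rng.standard_normal(SENSOR_DIM)
code = encode(frame)
avatar_params = decode(code)
print(code.shape, avatar_params.shape)  # (32,) (4096,)
```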

However, enabling true social presence requires more than just heads. Body language is critical to our ability to communicate. That’s why today we’re introducing full-body Codec Avatars. While you won’t find this technology in a consumer product anytime soon, we imagine a future where people will be able to create ultra-realistic avatars of themselves with just a few quick snaps of their phone cameras and animate them via their headsets. And that future will usher in a new wave of fully immersive VR.

Shared spaces in VR

Personal spaces like our living rooms or office environments have meaning in our lives. We want to take this meaning into VR, making it possible for people to spend time together in the places that matter to them, even when they’re not able to physically be there. That’s why we’re investing in creating the most realistic 3D models of real-world spaces possible.

In June, we showed our latest ultra-realistic 3D models of real-world environments and released a data set of 18 spaces to the research community. These simulated environments capture subtle details of rooms, like mirror reflections and rug textures, which are necessary to make them virtually indistinguishable from the real thing. Through a combination of a high-accuracy depth capture system, state-of-the-art simultaneous localization and mapping (SLAM) technology, a cutting-edge camera rig, and a dense reconstruction system, we’re able to achieve a level of fidelity that’s unprecedented in VR.
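
As a rough illustration of the dense reconstruction step, the sketch below fuses a few synthetic depth frames into a single point cloud using the camera poses a SLAM system would provide. The numbers and pinhole camera model are toy assumptions; FRL’s capture rig and pipeline are far more sophisticated.

```python
# Minimal sketch of fusing per-frame depth captures and SLAM poses into one
# point cloud -- the core idea behind dense reconstruction. The data here is
# synthetic and purely illustrative.
import numpy as np

def back_project(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Turn a depth image into 3D points in the camera's own coordinate frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def fuse(frames, poses, intrinsics):
    """Transform each frame's points by its SLAM pose and accumulate them."""
    fx, fy, cx, cy = intrinsics
    world_points = []
    for depth, (R, t) in zip(frames, poses):
        pts_cam = back_project(depth, fx, fy, cx, cy)
        world_points.append(pts_cam @ R.T + t)  # camera -> world coordinates
    return np.concatenate(world_points, axis=0)

# Two synthetic 4x4 depth frames: identity pose, then a 1 m sideways step.
frames = [np.full((4, 4), 2.0), np.full((4, 4), 2.5)]
poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([1.0, 0.0, 0.0]))]
cloud = fuse(frames, poses, intrinsics=(2.0, 2.0, 1.5, 1.5))
print(cloud.shape)  # (32, 3)
```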

Today at OC6 we showed a demo developed by one of our tech teams using the 3D modeling techniques pioneered by FRL. In it, two people can be co-present in one of these 3D reconstructed spaces using Oculus Avatars, and watch a video or play a game together. This is among the first times that people have been able to have a shared experience in a personalized 3D environment, and it’s an exciting glimpse of what the future holds.

The reconstructions we’ve shown today, which were created using 3D positioning algorithms and special depth-sensing cameras, are proofs of concept in the research phase. And while this technology is still a ways off, we’re exploring how we can enable people to recreate their own personal spaces in the future.

Bringing technologies — and people — together

In the space sharing video above, you saw non-lifelike Oculus Avatars in a lifelike space. But things get really exciting when we put FRL’s ultra-realistic full-body Codec Avatars in ultra-realistic environments — and, some day, in personalized environments. This approaches true teleportation, where anyone can be with the important people in their lives and spend time in personally meaningful places via VR. Imagine joining an extended family get-together on the opposite side of the globe. VR also has the potential to enable remote work, giving you collaboration tools — like monitor-quality virtual screens, holograms, and whiteboards you could configure any way you wanted — that could never exist in the real world. It would also have a hugely positive effect on how we live, eliminating commutes and transforming the way we collaborate and create.

This research also has implications for AR glasses. In the years ahead, AR glasses will open up countless possibilities, including the ability to teleport while you’re on the go. That will allow you to stay present, connected, and deeply in tune with the people and places around you and at a distance — both physically and virtually. In fact, using your avatar, you’ll be able to teleport anywhere in the world. That will require the ability to virtualize spaces on a much larger scale. And our research teams at FRL have started to work on this challenge.

Introducing LiveMaps

At FRL, our research teams are starting to build the core infrastructure that will underpin tomorrow’s AR experiences. 

How does this work? Think of the 3D spaces we showed above, and now picture this capability at planet scale. Using machine vision alongside localization and mapping technology, LiveMaps will create a shared virtual map. To deliver this technology at that scale, we envision that LiveMaps will rely on crowd-sourced information captured by tomorrow’s smart devices. To populate the first generation of maps, our researchers are exploring mapping our own campuses and using geotagged public images to generate point clouds — a common technique in navigation mapping technology today.
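
As a rough illustration of that point-cloud technique, the sketch below triangulates a single 3D point from two overlapping photos, the basic building block of turning collections of images into a point cloud. The camera matrices and pixel coordinates are synthetic assumptions chosen for the example.

```python
# Minimal sketch of triangulating a 3D point from two overlapping views --
# the building block of generating point clouds from collections of photos.
# The cameras below are synthetic and purely illustrative.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Linear (DLT) triangulation: find the 3D point whose projections
    through cameras P1 and P2 best match the observed pixels x1 and x2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back from homogeneous coordinates

# Two simple cameras one metre apart along x, both looking down +z.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it from the pixels.
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # approximately [0.5, 0.2, 4.0]
```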

Rather than reconstructing your surroundings in real time, AR glasses will one day tap into these 3D maps. That means your glasses will need drastically less compute power, enabling them to run on a mobile chipset. With these 3D spaces, your avatar could teleport anywhere in the world.
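
Here is a minimal sketch of that idea: the heavy reconstruction lives in a shared map built ahead of time, and a lightweight client only matches what it currently sees against that map. The random descriptors below are stand-ins; a real system would use engineered or learned visual features and then solve for a full six-degree-of-freedom pose from the matched landmarks.

```python
# Minimal sketch of "tapping into a prebuilt map": the device matches a few
# observed features against a shared map instead of rebuilding the scene.
# Descriptors here are random stand-ins for real visual features.
import numpy as np

rng = np.random.default_rng(1)

# Shared map built offline: each landmark has a descriptor and a 3D position.
map_descriptors = rng.standard_normal((1000, 64))
map_positions = rng.uniform(-50, 50, size=(1000, 3))

def match_against_map(observed_descriptors: np.ndarray) -> np.ndarray:
    """For each observed feature, return the 3D position of the closest
    map landmark (brute-force nearest neighbour on descriptors)."""
    d = np.linalg.norm(
        observed_descriptors[:, None, :] - map_descriptors[None, :, :], axis=-1
    )
    nearest = d.argmin(axis=1)
    return map_positions[nearest]

# The "glasses" observe a handful of slightly noisy features from the camera feed.
observed = map_descriptors[[3, 42, 7]] + 0.01 * rng.standard_normal((3, 64))
landmarks_3d = match_against_map(observed)
print(landmarks_3d.shape)  # (3, 3): known 3D anchors to localize against
```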

In addition to teleportation, LiveMaps will one day allow you to search and share real-time information about the physical world. This will enable a powerful assistant to bring you personalized information tied to where you are, instantaneously. It will also give you an overlay that will allow you to anchor virtual content in the real world. Imagine getting showtimes just by looking at a movie theater’s marquee, or calendar reminders from a personal assistant that can also tell when you’ve left behind something you’ll need.

We’re still in the research phase, and we’re committed to doing that research out in the open, sharing our progress on the road to AR glasses. We’ll also continue this work with privacy in mind, engaging global experts for their feedback as we think through best practices and how people can retain control of their information. This technology could radically change the way that humans connect in the future, and we’re excited to see where it takes us.

From VR headsets to AR glasses

We often think of VR and AR as two distinct computing platforms, but they are actually two sides of the same coin. Virtual reality lets us immerse ourselves in digital content, giving us the power to defy distance and physics while inventing and inhabiting new worlds. Augmented reality takes virtual objects and adds them to our perception of the real world for an enhanced experience of the everyday. Taken together, these technologies stand to reinvent the way we connect with the world around us — and with each other.