“A great aspect of working at Oculus Research is the ability to build large, highly skilled virtual teams that span nearly every discipline,” adds Research Scientist Douglas Lanman, who joined the effort after identifying the algorithmic challenges: What’s the best way to split a 3D scene across six different displays? And how can you do it in a way that accounts for eye movements and the limitations of today’s graphics hardware? “This sort of computational imaging problem is one that I’ve worked on for more than a decade, and it’s just the type of interdisciplinary challenge my team typically tackles within Oculus Research,” Lanman says. “As a result, I jumped at the opportunity to get involved.”
At that time, Lanman had been looking for an opportunity to collaborate with McGill University Professor Derek Nowrouzezahrai. When Lanman picked up the phone, Nowrouzezahrai introduced him to Olivier Mercier, a PhD candidate in computer graphics—and the perfect scientist for the job, provided he’d accept.
“Not only did we need to make existing algorithms more than 1,000 times faster, but we also needed to integrate them into a complex optical testbed together with state-of-the-art eye tracking,” recalls Lanman. “So I had to convince Olivier to take a risk on diving into a new research topic for half a year, rather than the usual short summer internship.”
In the end, the work and the people behind it were enough to close the deal. “It was clear right away that this project was something special, and that the right team was assembled to successfully complete it,” says Mercier. “I was thrilled that my skills could help solve some of the remaining problems, so I gladly agreed to join the effort to bring the multifocal machine to life.”