Hacking Our Eyes for Better VR Headsets
(Inside Science) -- Virtual reality technology has come a long way since its wacky roots in the 80s and 90s. It is difficult to describe the joy today’s VR can provide, be it slicing glowing neon blocks to a K-pop song with light sabers, or dodging bullets while controlling the speed of time.
However, if your vision is 20/20, you may still be disappointed with the relatively low resolution of even the best VR headsets. Today, high-definition TVs and smartphones boast “retinal resolution,” where the pixels are smaller than what the human eye can discern. In comparison, the relatively grainy display inside VR headsets may seem lackluster -- so bad that you can see the gaps between individual pixels as if you are looking through a screen door.
But smaller and denser pixels aren’t the only solution to this problem. Engineers are looking into other ways to improve VR headset displays -- by taking advantage of the anatomy of our eyes.
Millions of little pixels
The iPhone 11's crisp, clear display packs a whopping 3.3 million pixels on a 5.8-inch screen. Although many VR displays contain even more pixels, they somehow look worse.
That's because when you look at your phone, the 3.3 million pixels only occupy a sliver in your entire field of view, while a VR headset's pixels stretch across a much larger area. To put it in numbers, a smartphone screen held at arm's length occupies roughly 20 degrees of your field of view, while a VR headset often needs to fill more than 90 degrees for each eye.
“If you want the same number of pixels per degree -- to have retinal resolution at that large a field of view, you’ll need about 60 million pixels per headset,” said Ed Tang, co-founder and CEO of Avegant, an augmented reality display technology company.
Even with today's state-of-the-art technology, it's hard to pack that many pixels into a headset without it getting impractically large.
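Tang's estimate is easy to sanity-check. If "retinal resolution" is taken as roughly 60 pixels per degree -- a common rule of thumb for 20/20 acuity, assumed here rather than quoted from Tang -- then a 90-by-90-degree field of view per eye lands right around his figure:

```python
# Back-of-the-envelope check of the "60 million pixels" estimate.
# 60 pixels per degree is an assumed rule-of-thumb for retinal resolution.
PPD = 60  # pixels per degree

def pixels_for_fov(h_deg, v_deg, ppd=PPD):
    """Pixels needed to cover a field of view at the given angular density."""
    return (h_deg * ppd) * (v_deg * ppd)

per_eye = pixels_for_fov(90, 90)  # 29,160,000 pixels per eye
headset = 2 * per_eye             # 58,320,000 -- "about 60 million"
```

The same function shows why a phone gets away with far fewer pixels: at roughly 20 degrees of the visual field, it only has to fill a small angular window.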
There are three competing challenges stemming from the same problem. “One is the field of view, another is the resolution, another is the compactness,” said Brian Wheelwright, an optical scientist from Facebook Reality Labs, which focuses on research in virtual and augmented reality technologies. Facebook acquired Oculus VR in 2014.
While packing twenty times more pixels into the same area may not be impossible, brute force carries another cost -- the computing power needed to drive all those pixels would also skyrocket.
Luckily, there is a short cut.
A way to cheat the system
When you reach the end of this sentence, focus your eyes on the period and see how many words you can read without taking your eyes off this period right here. How far did you get before the text became completely undecipherable?
“By the time you’re five, ten degrees from the center of your vision, you’ve already lost most of your resolution,” said Tang.
Out of the approximately 120 degrees of arc that make up the field of vision for each eye, we only see high resolution for the few degrees of arc near the center. That’s partially because our photoreceptors are not distributed evenly across the retina: the cones that provide sharp, detailed vision are packed into the fovea -- an area on the retina directly opposite the pupil.
“We have very high resolution in the fovea, but the resolution drops off very, very rapidly outside of the center. So, if we can design displays that are more compatible, or more inspired by how your eye actually works, we can get that efficiency that you naturally have in your eyes,” said Tang.
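One way to picture how steep that drop-off is: a common first-order model in the vision literature holds that the smallest resolvable detail grows roughly linearly with angle from the fovea. The constants below are illustrative assumptions, not figures from Tang:

```python
# Rough model: acuity halves every E2 degrees of eccentricity.
# E2 = 2.0 is an assumed, illustrative value; estimates vary.
E2 = 2.0

def relative_acuity(ecc_deg):
    """Acuity at `ecc_deg` degrees from the fovea, as a fraction of foveal acuity."""
    return 1.0 / (1.0 + ecc_deg / E2)

for e in (0, 2, 5, 10, 30):
    print(f"{e:>2} deg off-center: {relative_acuity(e):.0%} of foveal acuity")
```

Under this model, by 10 degrees from the center only about a sixth of foveal acuity remains -- consistent with Tang's point that most of our resolution is gone just a few degrees off-axis.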
And engineers are already working on prototypes known as foveated displays that take advantage of this.
“You can have a VR display with a main display, and a central part that’s a different display with a higher resolution,” said Wheelwright. He showcased his team's working prototype of this approach during a presentation at the Frontiers in Optics conference in Washington D.C. last month.
Instead of uniformly distributing pixels across an entire image, the Facebook team concentrates the pixels in the center. Normally this would make the center look bigger, but the team used lenses and mirrors to shrink the very detailed inset down and place it in the middle of the lower resolution VR display, like a high-resolution donut hole inside a low-resolution donut.
The prototype still has some major limitations. “It's not a very big inset and it's not tracked to the eye, so it doesn't move,” Wheelwright said. For their contraption to be practical, the donut hole will have to move with the viewer’s eyes.
Following the eye
Let’s do another experiment. This time you will need a mirror. Through the mirror, look at your left eye, then look at your right eye. Did you notice that you can’t see your eyes move?
“It’s called saccadic motion -- when your eyes jump around,” said Tang. Unless we are tracking a smoothly moving object, our eyes typically move in these jumpy saccadic motions, often several times a second.
“Our brains actually block out our vision right before the eye moves,” said Tang. “Your brain reduces your visual acuity, your eye moves, and then your brain kind of turns your vision back on, and this happens multiple times a second.”
You may have experienced the stopped-clock illusion -- glancing at a ticking clock and thinking the second hand stayed still for a little too long.
“When your brain sees the next image [after the saccadic movement], it literally goes back in time and fills in that timeline in your head. That means up to 50% of your waking hours is just completely made up by your brain,” Tang said.
Engineers may be able to take advantage of these quirks of our body when designing VR headsets, for example by using movable micromirrors and switchable lenses that can shift the donut hole around when we are temporarily “blind” during saccadic eye movements.
Earlier this year, HTC debuted the first commercially available VR headset with built-in eye tracking capability. The headset combines eye tracking data with software to enable foveated rendering, a solution that aims to make up for the lack of graphics processing power in current hardware. In foveated rendering, the headset allocates more computing resources to render the pixels the user is looking at in real time, and fewer resources to the pixels in the user’s peripheral vision.
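The core idea behind foveated rendering can be sketched as a gaze-dependent shading rate: the farther a pixel is from where the eye tracker says you're looking, the more coarsely it gets rendered. The region sizes and rates below are illustrative assumptions -- real systems use GPU variable-rate shading with vendor-tuned falloffs:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, pixels_per_degree=20.0):
    """Coarseness of shading for a pixel: 1 = full rate, 4 = one shade per 4x4 block."""
    # Angular distance from the tracked gaze point (thresholds are assumed).
    ecc_deg = math.hypot(px - gaze_x, py - gaze_y) / pixels_per_degree
    if ecc_deg < 5.0:    # foveal region: full resolution
        return 1
    if ecc_deg < 15.0:   # near periphery: a quarter of the shading work
        return 2
    return 4             # far periphery: one-sixteenth of the work
```

Because a fresh gaze point arrives from the eye tracker every frame, the full-rate region follows the viewer's fovea, while the periphery -- where the eye couldn't resolve the extra detail anyway -- is rendered cheaply.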
“Like any new technology, there definitely will be an adoption phase -- figuring out what are the killer applications,” said Tang. “I mean, 10 years ago, when the iPhone first came out, I knew very few people with a smartphone. Now everybody has one.”
The price of VR headsets has dropped noticeably over the past few years. The Oculus Quest, a standalone VR headset that doesn’t need to be hooked up to a computer, was launched earlier this year with a price tag of $399. And for those who already own a PlayStation 4, the point of entry is even lower, at around $250 for the PlayStation VR system.
If foveated displays with eye tracking prove to be a worthy improvement, we might see the technology incorporated into future headsets at a lower cost. For now, the HTC Vive Pro Eye -- the only headset on the market with eye-tracking capability -- will cost you an eye-popping $1,599.