
Light Fields and Micro-Lens Arrays for Waymo's Autonomous Vehicles?

Back in 2018, Google was rumored to have acquired the startup Lytro, which specialized in cameras built on light-field technology. It was later revealed that several Lytro employees had simply moved over to Google. Either way, Lytro is now defunct; its website redirects to the German camera company Raytrix, which continues to offer light-field cameras (also known as plenoptic cameras).

But what are light-field cameras? And why might Google be interested in them?


What Is a Light-Field Camera?


Light-field cameras differ from traditional cameras in that light from the main lens passes through a layer of micro-lenses before reaching the sensor. As a result, the sensor captures many images of the same scene from slightly different angles. This makes it possible to change the image’s focal point after the picture has been taken. It also makes it possible to determine the distance from the camera to any point in the image by calculating parallax.
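To make that concrete, here’s a minimal Python sketch of how the raw image behind a micro-lens array can be split into sub-aperture views, one per viewing angle. It assumes an idealized sensor in which each micro-lens covers an exact n x n block of pixels aligned to the pixel grid; a real camera would need calibration for lens pitch, rotation, and vignetting.

```python
import numpy as np

def subaperture_views(raw, n):
    """Split a plenoptic raw image into n*n sub-aperture views.

    Assumes an idealized sensor where each micro-lens covers an
    exact n x n block of pixels aligned to the pixel grid
    (real cameras need calibration for lens pitch and rotation).
    """
    h, w = raw.shape
    s, t = h // n, w // n                  # resolution of each view
    views = np.empty((n, n, s, t), dtype=raw.dtype)
    for u in range(n):                     # pixel row under each micro-lens
        for v in range(n):                 # pixel column under each micro-lens
            # Taking pixel (u, v) beneath every micro-lens gives one
            # image of the scene from one slice of the main aperture.
            views[u, v] = raw[u::n, v::n][:s, :t]
    return views

# Example: a synthetic 12x12-pixel sensor behind a 4x4 micro-lens grid
raw = np.arange(12 * 12, dtype=float).reshape(12, 12)
views = subaperture_views(raw, 3)
print(views.shape)  # (3, 3, 4, 4): nine 4x4 views from slightly different angles
```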


To get a more intuitive sense of how this works, here’s a diagram of how a traditional camera focuses on an image:



When the lines representing the edges of the image converge precisely on the sensor, the image is in focus. If the distance to the focal object changes, though, the camera will need to be refocused, because when the lines converge at a point in front of or behind the sensor, the image will blur.
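For the curious, the geometry here is just the thin-lens equation, 1/f = 1/d_object + 1/d_image. A small Python sketch (an idealized thin lens, not any particular camera) shows how rays that converge in front of or behind the sensor spread back out into a blur circle:

```python
def image_distance(f, d_object):
    """Thin-lens equation: 1/f = 1/d_object + 1/d_image."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

def blur_diameter(f, aperture, d_object, d_sensor):
    """Diameter of the blur circle when rays converge
    in front of or behind the sensor plane."""
    d_image = image_distance(f, d_object)
    # Similar triangles: the cone of light narrows from the aperture
    # width at the lens to a point at d_image, then widens again.
    return aperture * abs(d_sensor - d_image) / d_image

f, aperture = 0.05, 0.025            # 50 mm lens, 25 mm aperture (f/2)
d_sensor = image_distance(f, 2.0)    # sensor placed to focus at 2 m
print(blur_diameter(f, aperture, 2.0, d_sensor))  # 0.0: subject in focus
print(blur_diameter(f, aperture, 1.0, d_sensor))  # > 0: closer object blurs
```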


Now, here’s a diagram of how a plenoptic camera works:



As you can see, the sensor at the back of the camera captures light from all the micro-lenses in the array, so the camera can reconstruct a focused image no matter where the lines converge. The micro-lens array allows the camera to gather far more information about the scene being captured. (The “plen” in plenoptic comes from the Latin plenus, meaning “full.”) The camera can then computationally refocus an already captured picture, or determine the distance to any point in the image.
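Here’s a minimal sketch of that computational refocusing, using the standard shift-and-sum technique from the light-field literature (not Lytro’s actual pipeline): each sub-aperture view is shifted in proportion to its offset from the center of the aperture, and the shifted views are averaged. It reuses the views grid from the subaperture_views sketch above and rounds shifts to whole pixels for clarity.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum refocusing over an n x n grid of sub-aperture views.

    `alpha` picks the synthetic focal plane: each view is shifted in
    proportion to its offset from the aperture center, then all views
    are averaged. Objects at the chosen depth line up and sharpen;
    everything else lands in different places per view and blurs.
    Integer shifts only, for clarity (real pipelines interpolate).
    """
    n = views.shape[0]
    c = (n - 1) / 2.0                         # aperture center
    acc = np.zeros(views.shape[2:], dtype=float)
    for u in range(n):
        for v in range(n):
            du = int(round(alpha * (u - c)))  # vertical shift for this view
            dv = int(round(alpha * (v - c)))  # horizontal shift
            acc += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return acc / (n * n)

# refocus(views, 0.0) keeps the capture-time focus; sweeping alpha
# moves the focal plane nearer or farther after the fact.
```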


But what’s the advantage of using a light-field camera as opposed to a traditional camera?


Lytro’s (now Raytrix’s) cameras are used to create images with more of the captured scene in focus, a feature called Focus Spread. They’re also good for capturing focused images of rapidly changing scenes. Even the autofocus on a traditional camera tends to be too slow to capture a knock-out punch in boxing or a ball hitting a bat in baseball. With a light-field camera, you hit the button once and capture all the information about the scene you need; you can adjust the focus later in software.


Plenoptic cameras also allow for larger apertures, which means they can pull more detail out of low-light scenes. And of course they can produce 3D images using the same parallax principle that lets humans perceive depth with two eyes.

But why would Google be interested in this technology, especially considering that it never really caught on commercially?


Light-Field Cameras and Self-Driving Vehicles


Google’s parent company Alphabet also owns an autonomous driving company called Waymo, which already has vehicles on the roads in Arizona. These vehicles rely not on cameras but on lidar as their main sensing technology. Indeed, most engineers working on self-driving systems today are betting that lidar will deliver better results than cameras or other types of sensors.

But whether lidar will ultimately turn out to be the more effective option is far from certain. Tesla, perhaps the highest-profile company in the autonomous driving field, has famously staked its fortunes on cameras. And the fact is that both types of sensor have major advantages and drawbacks. The perception layer of self-driving technology remains an unsolved challenge.


One possibility, then, is that Waymo is looking beyond lidar, and that this is why the company is interested in plenoptic technology.


Lidar’s biggest advantages are that it measures the distance to objects in a vehicle’s path directly, and that the 3D point clouds it creates are precise. Establishing distance to an object with two cameras is possible, but it requires computing parallax, adding a layer of processing to the vehicle’s perception that could introduce latency and be susceptible to bugs. This is why most self-driving vehicle developers like lidar.
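The parallax arithmetic itself is simple; the hard (and bug-prone) part is matching features reliably between the two images. For two rectified cameras, the standard pinhole relation gives depth = focal length × baseline / disparity. A sketch with illustrative numbers, not any real calibration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance to a point seen by two rectified cameras.

    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point at infinity or mismatched feature")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 14 px between cameras 30 cm apart (1000 px focal):
print(depth_from_disparity(1000, 0.30, 14))  # ~21.4 m ahead
```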


But the light from a lidar unit travels in straight lines from the emitter. If a laser pulse bounces off a raindrop, the lidar detector has no way of knowing how small that raindrop is. Comparing the images from two cameras, by contrast, could in theory reveal just how minuscule the raindrop is, and it will quite likely give you some visibility into what’s going on behind it. Moreover, the more cameras you pull information from, the easier it becomes not only to establish distance but also to see around obstacles and to correct for distortions in any individual camera’s view.
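Here’s a minimal sketch of that see-around idea, sometimes called synthetic-aperture imaging: once many views are shifted so the background is registered (for example, with the refocusing shift above), a per-pixel median suppresses anything, like a raindrop, that covers a given background pixel in only a few of the views. The function below assumes that alignment has already been done:

```python
import numpy as np

def see_past_occluder(aligned_views):
    """Per-pixel median across views aligned on the background.

    A small occluder covers a given background pixel in only a few
    viewpoints, so the median over many views recovers what most
    cameras saw behind it. Assumes the views are already shifted
    so the background is registered across the stack.
    """
    stack = np.stack(aligned_views, axis=0)   # (num_views, H, W)
    return np.median(stack, axis=0)
```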


Some companies are applying this insight by developing software that melds information from entire arrays of cameras mounted on the vehicle. But another way to get data from multiple points of view is, of course, a light-field camera, whose micro-lens array can be made to serve the same function as a bank of separate cameras.


One of the reasons Waymo’s vehicles are operating in Arizona is that their sensors are easily tripped up by rain and snow. Most camera systems on today’s autonomous vehicles don’t fare much better in bad weather, but camera arrays or light-field cameras could change that.


It’s probably not a coincidence that the advantages of light-field cameras—Focus Spread, rapid focus adjustments, 3D image rendering including depth cues, and corrections for low light—happen to address some of the major challenges of perception faced by autonomous vehicle developers. Seeing around raindrops and snowflakes is just the beginning.


Light-field cameras have also been taken up by designers in the VR and AR fields. Combined with holographic technologies, they can recreate highly detailed images of an object in 3D. But, theoretically, it’s not just individual objects these cameras could capture, but entire scenes. This ability to create precise 3D maps could have applications for Google’s navigation tools, but it could also prove invaluable in the effort to reach full autonomy for its vehicles. This is because a complete, 3D recreation of a vehicle’s surroundings produced by light-field technology would deliver much more information to the self-driving systems than even the most precise point cloud created by lidar.


Whether Waymo will be outfitting its autonomous taxis with light-field cameras in the near future remains to be seen. But we can say one thing with a high level of certainty: someone, somewhere is working with light-field cameras to see if they can outperform other sensing systems on autonomous vehicles.

