Dennis J. Junk

Cameras vs. LiDAR in the Race for Fully Autonomous Vehicles

Updated: Sep 27, 2020

The human visual system is pretty amazing. What we rarely consider, though, is that much of its processing takes place not in the eyes themselves but in the brain.


Our depth perception, for instance, relies on several two-dimensional cues (which artists manipulate to give the illusion of distance), but the most powerful cue comes from the brain's assessment of how much the retinal images of our two eyes differ.


Two Cornell University engineers recently used a similar approach to gauging distance as part of the ongoing effort to develop self-driving vehicles. Positioning cameras high on opposite sides of the windshield, they used software that relies on the parallax effect to determine the distance to objects in the vehicle's path. With this configuration, the system scored 66% on a benchmark test on which the previous top score for a camera-based system was just 30%.
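
The geometry behind that parallax trick is straightforward: the nearer an object is, the more its position shifts between the left and right images, and distance falls out of that shift (the disparity), the focal length, and the spacing between the cameras. Here is a minimal sketch of the calculation; the numbers are invented for illustration and are not from the Cornell setup.

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Estimate distance from stereo disparity.

    The farther away an object is, the smaller the shift (disparity)
    between its positions in the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px


# Hypothetical numbers: a 1,400-pixel focal length, cameras 1.2 m apart,
# and an object shifted 42 pixels between the two images.
print(f"{stereo_depth(1400, 1.2, 42):.1f} m")  # -> 40.0 m
```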


There is a self-driving technology that scores higher, though. Using much of the same software, a configuration with no cameras at all scores around 86%. This higher-performing technology relies on LiDAR, which stands for light detection and ranging.


The easiest way to think of LiDAR is that it's like sonar, only with beams of light instead of waves of sound. The sensors beam light in the direction of travel, record when and where that light bounces back, and then use the interval between emitting a pulse and picking up its reflection as a gauge of distance.
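
The distance calculation itself is simple time-of-flight arithmetic: the pulse travels out and back at the speed of light, so the range is half the round-trip time multiplied by the speed of light. A rough sketch (the 200-nanosecond figure is just an illustrative value):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_range(round_trip_seconds):
    """Distance to the reflecting surface, given a pulse's round-trip time.

    The pulse travels out and back, hence the division by two.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2


# A pulse that returns after 200 nanoseconds bounced off something
# roughly 30 meters away.
print(f"{lidar_range(200e-9):.1f} m")  # -> 30.0 m
```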


Before we conclude that LiDAR is superior, though, we should note that the competition between cameras and LiDAR wasn't entirely fair. The Cornell researchers used software designed for LiDAR to interpret the data coming in from their cameras. This software translates the data into a three-dimensional "point cloud." Patterns in the point cloud are then matched against known objects so they can be identified and responded to appropriately.
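
The published pipeline is more involved, but the core idea of building a point cloud from camera data can be sketched in a few lines: every pixel with a depth estimate is projected back through a simple camera model into an (x, y, z) coordinate. The function and camera parameters below are invented for illustration, not taken from the Cornell software.

```python
import numpy as np

def depth_map_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in meters) into 3-D points
    using a pinhole camera model. Returns an (N, 3) array of x, y, z."""
    rows, cols = depth_m.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth estimate


# A toy 2x2 depth map and made-up camera intrinsics, just to show the output shape.
toy_depth = np.array([[10.0, 10.5], [12.0, 0.0]])
cloud = depth_map_to_point_cloud(toy_depth, fx=1400, fy=1400, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid points, each with x, y, z
```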


So, the dual-camera setup used a point cloud application originally created for LiDAR. The researchers chose this setup because their goal was merely to show that they could “close the gap” between the performance of LiDAR and cameras.


Why Not Just Use LiDAR?


But why bother closing the gap if LiDAR works so well? For one thing, the sensor systems are expensive. For smaller vehicles like cars, they add around $10,000 to the price tag. For larger vehicles like trucks, the cost is even higher. The sensor systems are also big and heavy, and they draw a lot of energy. The two-camera system, by contrast, is both light and inexpensive.


Until this latest demonstration of cameras' effectiveness, most people in the self-driving vehicle industry believed LiDAR was the only way to meet the necessary performance benchmarks. While lasers still provide the most accuracy for now, the doors have been blown wide open for the improvement of camera-based systems.


These results come as a vindication for Elon Musk, who has bet big on cameras for the self-driving systems in his Tesla vehicles. "Anyone relying on lidar is doomed," he has gone on record saying. Some critics insist Musk only says this because he's painted himself into a corner by promising that full self-driving capability would soon be available in the company's existing cars. In the short term, cameras are really his only option for keeping costs down. But LiDAR, like all emerging technologies, will eventually get cheaper, and as the cost comes down, accuracy may improve as well.


Why Not Both?


LiDAR is more expensive, but it measures distance directly rather than inferring it through layers of processing. That directness matters because every added step in the process introduces more potential for error. Of course, human eyes do a pretty decent job of responding to objects through the windshield without LiDAR, so advances in machine learning could eventually overcome any such challenges.


Still, at this point in the development of self-driving technology, it seems cameras and LiDAR have different, perhaps complementary, strengths and weaknesses. The downsides to LiDAR are cost, weight, and energy draw, but these are largely offset by its direct distance measurements. Knowing that something is in the vehicle's path matters more than knowing exactly what that something is, which is the task cameras may be better equipped to handle.


Meanwhile, cameras are light and less expensive, and they may be better at object recognition. But unfamiliar environments could lead to issues with the parallax measurements, resulting in poor estimates of distance. These tradeoffs have led some engineers to suggest that the best approach may be to combine cameras with LiDAR, so automated vehicles can have the best of both worlds.


The biggest difficulty for engineers to overcome in their quest for safe self-driving vehicles comes from what are called "edge cases." These are rare circumstances the system is unlikely to have encountered before. Machine learning sounds great, but how is a machine supposed to learn about situations it has never experienced? To be fair, humans get tripped up by novel events all the time too, but our minds, for now, are far more flexible than any machine's.


Whether the eventual solution to general driving conditions and edge cases ends up being cameras, LiDAR, or a combination of the two, the ultimate challenge is going to be convincing human drivers that any driverless vehicles they're sharing the roads with are safe, or at least as safe as the ones driven by other humans. It'll be exciting to watch the technology advance, and just as exciting to see how autonomous vehicles perform once it matures.

