
From Cameras to Lidar and on to Camera Arrays for Autonomous Vehicles

Tesla CEO Elon Musk famously said, “Lidar is a fool’s errand,” before predicting that “Anyone relying on lidar is doomed.” His comments came in response to a question about whether Tesla’s vehicles could integrate lidar data into their onboard computer vision systems. The remarks were noteworthy because, at the time, most companies working on autonomous driving were betting on lidar as the most likely tool to deliver the environmental awareness self-driving requires.

The advantages of lidar include more direct measurement of distance, the ability to generate precise and comprehensive 3D point clouds of the scene around a vehicle, and reliable performance in low light. Companies like Cruise, Waymo, and Uber are banking on these strengths to carry them first across the finish line in the race to full autonomy.


But, for Musk and a few others, lidar represents something of a “crutch” that sidesteps some of the main challenges engineers must overcome before truly self-driving vehicles take to the roads. True, lidar can reliably calculate the distance to objects around a vehicle, but there’s more to identifying—and responding to—an object than knowing its shape and where it’s located. How, for instance, can you distinguish between a red light and a green light with a technology relying on time intervals between bouncing light beams?


Add to these limitations the much higher price tag for lidar systems, not to mention a much higher energy draw, and you see why some in the industry still think cameras and other sensors hold the most promise.


As of today, though, the outcome of this debate is far from settled. For one thing, Tesla’s “Full Self-Driving” mode is a misnomer: drivers are still required to stay alert with their hands on the wheel while it is engaged. Waymo has even dropped the term “self-driving” from its marketing materials because it feels Musk has watered down the phrase’s meaning.


To date, no one has achieved full autonomy (Level 5), and, depending on how you measure performance, Tesla vehicles probably aren’t the closest to reaching it. Waymo operates at Level 4—autonomous within a defined operating area—while Tesla is considered to be at Level 3, at best. The catch is that Waymo’s vehicles are confined to much smaller territories, because they rely on pre-generated, high-definition maps.


The main takeaway, however, is that Tesla is hardly dominating its competitors. So the question remains open whether the first fully autonomous vehicles will use cameras or lidar.


The More Cameras the Merrier


While most of the major players in the autonomous vehicle game have been betting on lidar for its distance-measuring capabilities, a study by Cornell researchers in 2019 found that two cameras mounted high behind the vehicle’s windshield could perform almost as well as lidar at detecting how far the vehicle is from an object in its path. With two overlapping fields of view, the system can use an algorithm to calculate parallax, and hence come up with a measure of distance to any object. It’s the same principle we rely on for depth perception with our two eyes.
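The underlying math is classic stereo triangulation: once the two views are aligned, an object’s distance follows from the pixel shift (disparity) between the images, the cameras’ focal length, and the spacing between them. Here is a minimal sketch of that relationship in Python, with placeholder numbers rather than anything from the Cornell setup:

```python
# Stereo depth from disparity: a rough sketch, not the Cornell system's code.
# Assumes two horizontally aligned (rectified) cameras; all numbers are placeholders.

def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Classic stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a point visible in both views.")
    return focal_length_px * baseline_m / disparity_px

# Example: a 1,400-pixel focal length, cameras 0.5 m apart, and a 14-pixel shift
# between the left and right images put the object roughly 50 m ahead.
print(depth_from_disparity(disparity_px=14, focal_length_px=1400, baseline_m=0.5))  # 50.0
```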


Since cameras are so much less costly than lasers, engineers saw these results as a huge opportunity. But if two cameras are better than one, are three cameras—or more—better still? Indeed, Light, a California-based computational imaging company, has found that using multiple cameras can solve several problems beyond establishing the distance to objects.


“We want to be in the next-generation Level 4 and Level 5 platforms that will be tested next year,” said Light’s CEO Dave Grannan. But the company’s main contribution to this field isn’t sensor hardware. Its Clarity perception platform provides the software that fuses data from multiple cameras with overlapping fields of view.


“Our clear advantage is that we provide lidar-like accuracy for the cost of cameras,” Grannan said. “It costs tens of thousands of dollars for lidar, while our cameras and ASIC [application-specific integrated circuit] costs OEMs $250 to $260.”


Grannan suggests Clarity not only matches lidar but in many regards surpasses it: “We can cover a longer range—up to 1000 meters, compared with 200 or so for lidar. The systems we are building can cost a few thousand dollars instead of tens of thousands of dollars. And our systems use less power, a key feature for electric vehicles.”


The Advantages of Light Fields


Lidar uses lasers the way sonar uses sound. The sensors emit a pulse of light and measure the time it takes for that pulse to be reflected back. This provides a direct measure of distance, which is what makes the technology so useful for avoiding obstacles. But each laser pulse travels along a single straight line, so if that path is occluded by, say, a snowflake, the return is corrupted and the resulting point cloud can be badly distorted.
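The arithmetic behind that time-of-flight measurement is simple: the pulse travels out and back at the speed of light, so the range is half the round trip. A minimal illustration, with the pulse time chosen to land near the 200-meter lidar range mentioned earlier:

```python
# Lidar time-of-flight ranging in one line of math; the pulse time is illustrative.
SPEED_OF_LIGHT_M_S = 299_792_458

def range_from_time_of_flight(round_trip_s: float) -> float:
    """The pulse travels out and back, so distance is half the round trip."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2

# A return arriving about 1.33 microseconds after emission corresponds to ~200 m.
print(range_from_time_of_flight(1.334e-6))  # ≈ 200.0
```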


Combining visual data from an array of cameras, on the other hand, allows the system to analyze entire light fields. Viewing an object from multiple angles gives you a better sense of its contours in three dimensions. That same snowflake that tripped up the lidar shows up at a different spot in each camera’s field of view, revealing both its location and its minuscule dimensions. So the falling object that throws off a single laser beam can easily be corrected for as part of a light field. Light fields created from camera arrays can even help the system pick up details in low-light conditions.
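One way to picture that correction is as a consistency vote across the array: a real object 50 meters out produces nearly the same depth estimate from every pair of cameras, while a snowflake hovering in front of one lens corrupts only that pair’s estimate, which can then be discarded. The toy sketch below illustrates the idea; it is our own simplification, not Light’s Clarity algorithm.

```python
import statistics

# Toy outlier rejection across a camera array (our simplification, not Clarity).
# Each value is a depth estimate (meters) for the same scene point from a
# different camera pair; a snowflake has corrupted one pair's estimate.
pair_depths_m = [49.8, 50.1, 0.4, 50.0, 49.9]

median_depth = statistics.median(pair_depths_m)
consistent = [d for d in pair_depths_m if abs(d - median_depth) < 0.2 * median_depth]

fused_depth = sum(consistent) / len(consistent)
print(round(fused_depth, 2))  # 49.95 — the occluded pair is simply voted out
```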


(Another approach uses a single light field camera, which places a microlens array in front of the standard image sensor. The many small lenses serve a purpose similar to the multiple cameras that companies like Light are using.)


The main drawback of light field technology to date has been that the cameras in the array are nearly impossible to hold rigidly in place, so every vibration forces a recalibration. This is an especially big problem for autonomous vehicles: one pothole or rough patch of road and suddenly the sensors’ combined view is blurred and distorted. But there are already viable solutions that work by finding fixed reference points in the overlapping images and recalibrating in software rather than physically readjusting the cameras.
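In practice, that kind of software recalibration usually means re-estimating each camera’s relative pose from features matched across the overlapping views. The sketch below shows one common way to do that with OpenCV; it is a generic illustration under assumed inputs (a shared intrinsic matrix K and synchronized grayscale frames), not a description of any particular vendor’s pipeline.

```python
import cv2
import numpy as np

def reestimate_relative_pose(img_a, img_b, K):
    """Re-derive the rotation and direction of translation between two overlapping
    cameras from matched image features (a generic sketch, not a vendor pipeline)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, desc_a = orb.detectAndCompute(img_a, None)
    kp_b, desc_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)[:500]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # RANSAC discards matches on moving objects, so the pose estimate leans on
    # static structure in the scene (buildings, road markings, signs).
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t  # rotation and unit-scale translation; scale comes from the known rig geometry
```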


Lidar probably isn’t going anywhere soon. (Clarity will be able to integrate data from other types of systems, according to Grannan, including lidar.) Multi-sensor platforms may well be the solution to the challenge of navigating in the whole gamut of conditions. Now engineers can mix and match a number of different sensors, and with any luck they’ll find an optimally effective formula in time to launch the next revolution in transportation.



