As I understand it, lidars don't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, engineering talent, AI training time and dataset breadth, power consumption) that you could redistribute between two systems (vision and lidar), and where one system would contradict the other in dangerous driving conditions, it's smarter to just max out vision and ignore lidar altogether.
Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.
No, it isn't "smarter." Camera-only driving is the product of a stubborn dogmatic boss who can't admit a fundamental error. "Just make it work" is a terrible approach to engineering.
Nothing works perfectly in all conditions and scenarios. Sensor fusion is the most logical approach now and for the foreseeable future.
Computer vision does not work exactly like human vision; closely equating the two has tended to work out poorly in extreme circumstances.
High performance fully automated driving that relies solely on vision is a losing bet.
Why does that strategy absolutely require the lidar to be absent from the car? When was less technology the solution to a software problem?
Limited resources? Billions per year are being thrown at the base technology. There is enough capital deployed to exhaust every path ten times over.
The Swiss cheese model would like to disagree.
Sensor ambiguity sounds like the perfect time to fail safely and slow to a halt unless the human takes over.
Do cameras work well in those conditions? Nope. Also, cameras don't work well with certain angles of glare, so as a consumer I'd rather have something over-engineered for my safety to cover all edge cases...
Evidence clearly shows otherwise.
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catch problems and errors from any of them, and end up with the most accurate ‘view’ of the world - even better than a normal human would.
It’s what Waymo is doing, and unsurprisingly they also have the best self-driving right now.
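The "most accurate view" claim above can be sketched with a toy inverse-variance fusion of independent readings. The sensor names and noise numbers here are hypothetical, illustrative only, and not any real stack:

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of independent estimates.

    measurements: list of (value, variance) pairs, one per sensor.
    Returns (fused_value, fused_variance). The fused variance is never
    larger than the best single sensor's, which is the whole point of
    carrying redundant sensor types.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total

# Hypothetical readings of the same obstacle distance (metres):
# camera depth is noisy, lidar is precise in clear air, radar in between.
camera = (41.0, 4.0)
lidar = (40.2, 0.04)
radar = (40.5, 0.5)

dist, var = fuse([camera, lidar, radar])
```

Note the fused estimate sits closest to the most trusted sensor, and its variance is lower than any single sensor's alone; degrade one sensor (raise its variance) and the fusion automatically leans on the others.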
This is silly. Cameras are cheap. Have both. Sensors that sense differently in different conditions are not an exotic new problem. The Kalman filter has existed for about a billion years, and machine-learning filters do an even better job.
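For concreteness, a minimal one-dimensional Kalman measurement update shows how readings with different noise levels get blended; the variances below are made-up toy constants, not anything from a real vehicle:

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.

    x: current state estimate, p: its variance,
    z: new measurement, r: measurement noise variance.
    The gain k weights the measurement by how much it is trusted
    relative to the current estimate.
    """
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Start from a vague prior, then fold in a noisy camera reading
# followed by a precise lidar reading (hypothetical variances).
x, p = 0.0, 1000.0
x, p = kalman_update(x, p, z=41.0, r=4.0)   # camera: trusted a little
x, p = kalman_update(x, p, z=40.2, r=0.04)  # lidar: trusted a lot
```

A sensor that degrades in rain just reports a larger `r` for that frame, and the filter smoothly shifts weight to whatever still works, which is exactly why "one sensor is worse sometimes" is an argument for fusion, not against it.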
> lidars don't work well in rain/snow/fog.
Neither do cameras, or eyeballs.