Hacker News

nlitened today at 12:26 PM (11 replies)

As I understand it, lidar doesn't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, engineering talent, AI training time and dataset breadth, power consumption) to split between two systems (vision and lidar), and one system would contradict the other in exactly those dangerous driving conditions, it's smarter to just max out vision and skip lidar altogether.


Replies

RobotToaster today at 12:34 PM

> lidars don't work well in rain/snow/fog.

Neither do cameras, or eyeballs.

zozbot234 today at 12:37 PM

Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.

Zigurd today at 2:38 PM

No, it isn't "smarter." Camera-only driving is the product of a stubborn dogmatic boss who can't admit a fundamental error. "Just make it work" is a terrible approach to engineering.

brk today at 2:33 PM

Nothing works perfectly in all conditions and scenarios. Sensor fusion is the most logical approach now and for the foreseeable future.

Computer vision does not work exactly like human vision, and closely equating the two has tended to work out poorly in extreme circumstances.

High performance fully automated driving that relies solely on vision is a losing bet.

philistine today at 3:27 PM

Why does that strategy absolutely require the lidar to be absent from the car? When was less technology the solution to a software problem?

zemvpferreira today at 12:30 PM

Limited resources? Billions per year are being thrown at the base technology. We have the capital deployed to exhaust every path ten times over.

heisenbit today at 12:33 PM

The Swiss cheese model would like to disagree.
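For readers unfamiliar with the reference: in the Swiss cheese model, an accident requires the holes in every independent layer of defence to line up, so failure probabilities multiply. A minimal sketch, with invented failure rates purely for illustration:

```python
# Swiss cheese model: with independent defensive layers, an accident
# needs every layer to fail at once, so failure probabilities multiply.
# The per-layer failure rates below are invented for illustration.

def combined_failure(layer_failure_probs):
    prob = 1.0
    for p in layer_failure_probs:
        prob *= p
    return prob

camera_only = combined_failure([0.01])    # a single perception layer
fused = combined_failure([0.01, 0.05])    # add an independent lidar layer

# Even a mediocre second layer (5% miss rate) cuts the combined
# miss rate twentyfold relative to the single layer alone.
```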

Yossarrian22 today at 2:12 PM

Sensor ambiguity sounds like the perfect time to fail safely and slow to a halt unless the human takes over.

theappsecguy today at 2:25 PM

Do cameras work well in those conditions? Nope. Cameras also don't work well with certain angles of glare, so as a consumer I'd rather have something over-engineered for my safety to cover all edge cases...

lazide today at 2:35 PM

Evidence clearly shows otherwise.

Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.

That way you pick up things that are missed by one or more sensor types, catch problems and errors from any of them, and end up with the most accurate 'view' of the world, even better than a normal human's.
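The "more sensors means a more accurate view" claim can be sketched numerically: given independent noisy estimates of the same quantity, inverse-variance weighting produces a fused estimate whose variance is lower than any single sensor's. The sensor names and noise figures below are made up for illustration:

```python
# Fuse independent noisy estimates of one quantity (e.g. distance to an
# obstacle, in metres) by inverse-variance weighting. The fused variance
# is always below the best individual sensor's variance.

def fuse(estimates):
    """estimates: list of (value, variance) pairs from independent sensors."""
    total_weight = sum(1.0 / var for _, var in estimates)
    value = sum(val / var for val, var in estimates) / total_weight
    return value, 1.0 / total_weight

# Hypothetical readings: camera is noisy, lidar is precise, radar in between.
readings = [(42.8, 4.0),    # camera: 42.8 m, variance 4.0
            (41.9, 0.25),   # lidar:  41.9 m, variance 0.25
            (42.3, 1.0)]    # radar:  42.3 m, variance 1.0

value, variance = fuse(readings)
# variance comes out below the lidar-only 0.25, and value lands near
# the lidar reading while still being nudged by the other sensors.
```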

It's what Waymo is doing, and they also, unsurprisingly, have the best self-driving right now.

idiotsecant today at 12:41 PM

This is silly. Cameras are cheap. Have both. Sensors that behave differently in different conditions are not an exotic new problem. The Kalman filter has existed for about a billion years, and machine learning filters do an even better job.
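For the unfamiliar, the Kalman filter mentioned above handles exactly this situation: each measurement is weighted by how much you trust it, and the estimate's uncertainty shrinks with every update regardless of which sensor it came from. A minimal 1D sketch, with illustrative (not real) sensor noise figures:

```python
# Minimal 1D Kalman filter tracking a static distance, updated with
# measurements from sensors of different quality. The measurement
# variances are illustrative, not real camera/lidar specs.

class Kalman1D:
    def __init__(self, x0, p0):
        self.x = x0   # state estimate (distance, metres)
        self.p = p0   # variance of the estimate

    def update(self, z, r):
        """Incorporate measurement z with measurement variance r."""
        k = self.p / (self.p + r)     # Kalman gain: trust vs. the sensor
        self.x += k * (z - self.x)    # pull estimate toward the measurement
        self.p *= (1.0 - k)           # uncertainty always decreases
        return self.x

kf = Kalman1D(x0=40.0, p0=100.0)  # vague prior
kf.update(43.0, r=4.0)            # noisy "camera" measurement
kf.update(42.0, r=0.25)           # precise "lidar" measurement
# kf.p has shrunk below either sensor's own variance; a sensor the
# filter distrusts (large r) simply moves the estimate less.
```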
