Hacker News

Animats (yesterday at 11:50 PM)

This is encouraging. The title is a bit much. "Potential points of attack for understanding what deep learning is really doing" would be more accurate but less attention-grabbing.

It might lead to understanding how to measure when a deep learning system is making stuff up or hallucinating. That would have a huge payoff. Until we get that, deep learning systems are limited to tasks where the consequences of outputting bullshit are low.


Replies

hodgehog11 (today at 12:03 AM)

> measure when a deep learning system is making stuff up or hallucinating

That's a great problem to solve! (Maybe I'm biased, since this is my primary research direction.) One popular approach is OOD detection, but that always seemed ill-posed to me. My colleagues and I have been approaching this from a more fundamental direction using measures of model misspecification, but this is admittedly niche because it is very computationally expensive. It could still be a while before a breakthrough comes from any direction.
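For readers unfamiliar with OOD (out-of-distribution) detection: a common textbook baseline scores inputs by the classifier's maximum softmax probability, flagging low-confidence inputs as possibly out-of-distribution. This is a minimal illustrative sketch of that baseline only, not the misspecification-based approach the commenter describes; the logits and threshold here are made-up example values.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: low values suggest the input
    # may lie outside the training distribution.
    return softmax(logits).max(axis=-1)

# A sharply peaked logit vector (confident) vs. a nearly flat one (uncertain).
confident = np.array([8.0, 0.5, 0.2])
uncertain = np.array([1.1, 1.0, 0.9])

print(msp_score(confident))  # close to 1.0
print(msp_score(uncertain))  # close to 1/3
```

The known weakness, and one reason the approach can feel ill-posed, is that networks often assign high softmax confidence to inputs far from anything they were trained on, so a low score is suggestive rather than a guarantee.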
