Hacker News

bArray · today at 1:34 PM

> A Bayesian decision-theoretic agent needs explicit utility functions, cost models, prior distributions, and a formal description of the action space. Every assumption must be stated. Every trade-off must be quantified. This is intellectually honest and practically gruelling. Getting the utility function wrong doesn’t just give you a bad answer; it gives you a confidently optimal answer to the wrong question.

I was talking somebody through Bayesian updates the other day. The problem is that if you mess up any part of it, in any way, the result can be complete garbage. Meanwhile, if you throw a neural network at the problem, it often handles noise much better.
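A minimal sketch of what I mean (my own toy example, not anything from the article): a discrete Bayesian update over two coin hypotheses. It works fine with sensible inputs, but a single bad input, like a dogmatic zero prior, silently poisons every update after it.

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses from a prior dict and per-hypothesis likelihoods."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two hypotheses about a coin: fair, or biased with P(heads) = 0.9.
prior = {"fair": 0.5, "biased": 0.5}
post = bayes_update(prior, {"fair": 0.5, "biased": 0.9})  # we observed heads
print(post)  # probability mass shifts toward "biased"

# The fragility: give a hypothesis prior probability 0 and no amount of
# evidence can ever revive it -- the posterior stays 0 forever.
post2 = {"fair": 1.0, "biased": 0.0}
for _ in range(10):  # ten heads in a row, strong evidence for "biased"
    post2 = bayes_update(post2, {"fair": 0.5, "biased": 0.9})
print(post2["biased"])  # still 0.0
```

That zero-prior trap is exactly the "mess up any part of it" failure mode: the machinery keeps producing confident, normalized posteriors with no indication that an input was wrong.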

> Deep learning’s convenience advantage is the same phenomenon at larger scale. Why specify a prior when you can train on a million examples? Why model uncertainty when you can just make the network bigger? The answers to these questions are good answers, but they require you to care about things the market doesn’t always reward.

The answer seems simple to me - sometimes getting an answer is not enough, and you need to understand how the answer was reached. In the age of hallucinations, one can appreciate approaches where hallucinations are impossible.