Hacker News

behnamoh · today at 1:04 AM · 2 replies

I'm tired of these posts; LLMs are good for happy-path demos, that's it. And even then, their success rate depends on the prompter already knowing the answer!

Literally every out-of-distribution project in which I used LLMs led to catastrophic failure. The models can't "see" stuff outside their training data.


Replies

semiquaver · today at 1:14 AM

I legitimately can’t tell if you’re being serious. It almost reads like a parody of LLM detractors who will never admit to their usefulness. If you’re serious, why say so on this post in particular, which includes hard evidence that you’re wrong?

JProthero · today at 1:43 AM

Do you understand the purported result, and the verification? I don't, but I'm confident that Andrew Strominger wouldn't have agreed to put his name on this if he didn't think it was correct and interesting.

The human authors have positions at the Institute for Advanced Study (Einstein's old institution), Vanderbilt, Harvard (Strominger) and Cambridge in the UK.

If, like me, you have to gauge this by the reputation of the experts involved, that seems like a good list.