I'm tired of these posts; LLMs are good for happy-path demos, that's it. And even then, their success rate depends on the prompter already knowing the answer!
Literally every out-of-distribution project in which I used LLMs led to catastrophic failure. The models can't "see" stuff outside their training data.