Hacker News

donperignon · last Sunday at 7:42 AM · 5 replies

LLMs are basically glorified slot machines. Some people try very hard to come up with techniques or theories about when the slot machine is hot, but it's only an illusion. Let me tell you: it's random and arbitrary; maybe today is your lucky day, maybe not. Same with AI. Learning the "skill" is as difficult as learning how to Google or how to check Stack Overflow, which is to say trivial. All the rest is luck and how many coins you have in your pocket.


Replies

mikeshi42 · last Sunday at 11:36 AM

There's plenty of evidence that good prompts (prompt engineering, tuning) can result in better outputs.

Improving LLM output through better inputs is neither an illusion nor as easy as learning how to Google (entire companies are being built around improving LLM outputs and measuring that improvement).

gloomyday · last Sunday at 10:40 AM

This is not a good analogy. The parameters of slot machines can't be changed to make the casino lose money. And just because something is random doesn't mean it is useless: if you get 7 good outputs out of 10 from an LLM, you can still use it to your benefit. The frequency of good outputs and how much babysitting it requires determine whether it is worth using. Humans make mistakes too, although way less often.

simonw · last Sunday at 12:18 PM

Learning how to Google is not trivial.

jstummbillig · last Sunday at 8:07 AM

We know what random looks like: a coin toss, the roll of a die. Token generation is neither.
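A minimal sketch of the point above, with entirely made-up numbers: an LLM samples the next token from a softmax over model scores, so the distribution is heavily skewed and controllable via temperature, unlike the uniform chance of a coin or die.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [4.0, 2.0, 0.5, 0.1]
probs = softmax(logits)

# Unlike a fair die, the mass concentrates on the top-scoring token
# (here ~84%); lowering the temperature skews it further toward
# deterministic, raising it flattens it toward uniform.
token = random.choices(range(len(logits)), weights=probs)[0]
```

Whether that counts as "random" in the slot-machine sense is the crux: the draw is stochastic, but the distribution it draws from is anything but arbitrary.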
