Hacker News

teekert · today at 10:21 AM · 5 replies

Idk I try talk like cavemen to claude. Claude seems answer less good. We have more misunderstandings. Feel like sometimes need more words in total to explain previous instructions. Also less context is more damage if typo. Who agrees? Could be just feeling I have. I often add fluff. Feels like better result from LLM. Me think LLM also get less thinking and less info from own previous replies if talk like caveman.


Replies

WarmWash · today at 3:50 PM

In the regular people forums (twitter, reddit), you see endless complaints about LLMs being stupid and useless.

But you also catch a glimpse of how the author of the complaint communicates in general...

"im trying to get the ai to help with the work i am doing to give me good advice for a nice path to heloing out and anytim i askin it for help with doing this it's total trash i dunt kno what to do anymore with this dum ai is so stupid"

altmanaltman · today at 4:38 PM

Why say more word when less word do. Save time. Sea world.

jaccola · today at 11:07 AM

Yes, because in most contexts where it has seen "caveman" talk, the conversations haven't been about rigorously explained maths/science/computing/etc., so it is less likely to predict that output.
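This point — that the register of the context conditions what a language model predicts next — can be illustrated with a toy bigram model over two tiny hypothetical mini-corpora (the corpus, tokens, and function names below are all invented for illustration, not anything Claude actually does):

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus: informal "caveman" snippets and formal
# snippets, mixed together the way registers mix in training data.
corpus = [
    "me want code now", "me want food now", "me no like bug",
    "we therefore derive the invariant", "we therefore prove the theorem",
]

# Count bigram transitions: bigrams[a][b] = times token b followed token a.
bigrams = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for a, b in zip(tokens, tokens[1:]):
        bigrams[a][b] += 1

def next_token_dist(token):
    """Relative frequency of each continuation after `token` in the corpus."""
    counts = bigrams[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# After the informal token "me", only informal continuations have any
# probability mass; after the formal "therefore", only formal ones.
print(next_token_dist("me"))
print(next_token_dist("therefore"))
```

A real LLM conditions on far richer context than a bigram, but the mechanism is the same in spirit: a prompt written in a register that co-occurred with casual chat pulls the predicted continuation toward casual chat, while a carefully-worded prompt pulls it toward the kind of text that accompanied careful wording.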

cyanydeez · today at 11:02 AM

Fluff adds probable likeness. Probable likeness brings in more stuff. More stuff can be good. More stuff can poison.