Idk, I've tried talking like a caveman to Claude, and the answers seem worse. We have more misunderstandings, and I sometimes end up needing more words in total to re-explain previous instructions. Less context also means a typo does more damage. Who agrees? Could just be a feeling I have. I often add fluff and it seems to get better results from the LLM. My guess is the LLM also does less thinking and gets less info from its own previous replies if I talk like a caveman.
Why say more word when less word do. Save time. Sea world.
Yes, because in most of the contexts where it has seen "caveman" talk, the conversations haven't been about rigorously explained maths/science/computing/etc., so it is less likely to predict that kind of output.
Fluff adds probable likeness. Probable likeness brings in more stuff. More stuff can be good. More stuff can poison.
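A toy sketch of the point above. This is purely illustrative (a bigram counter, nowhere near how Claude actually works; the two mini-corpora are made up): the same prompt word pulls different continuations depending on which register the surrounding text came from.

```python
from collections import Counter, defaultdict

# Two tiny made-up "registers" of training text.
formal = "we can verify the proof because the lemma holds and the proof is rigorous".split()
caveman = "me want food because me hungry and me tired".split()

def bigram_counts(tokens):
    # Count which word follows which: counts[a][b] = times b followed a.
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

formal_model = bigram_counts(formal)
caveman_model = bigram_counts(caveman)

# Same context word, different most-likely continuation per register.
print(formal_model["because"].most_common(1))   # [('the', 1)]
print(caveman_model["because"].most_common(1))  # [('me', 1)]
```

Scale that intuition up and you get the parent's point: caveman-register context steers the model toward caveman-register continuations, which mostly weren't rigorous technical text.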
In the regular people forums (twitter, reddit), you see endless complaints about LLMs being stupid and useless.
But you also catch a glimpse of how the author of the complaint communicates in general...
"im trying to get the ai to help with the work i am doing to give me good advice for a nice path to heloing out and anytim i askin it for help with doing this it's total trash i dunt kno what to do anymore with this dum ai is so stupid"