Hacker News

jerf · today at 3:14 PM

LLM-speak isn't even quite the average, either. It's something more like the average, pushed through further training to turn it into the agents we think of today (a fresh-off-the-training-set LLM really is, in some sense, the "fancy autocomplete" people called it for a while), then trained by the AI companies to be generally inoffensive and to do the other things they want. All of that further training pushes the agents away from the original LLM average. The similarity of the "LLM tone" across multiple models from multiple companies, combined with the fact that I don't think this tone has been very directly trained for, strongly suggests that the process of converting a raw LLM into the desirable agents we all use acts as some sort of strong strange attractor for every model pushed through it.

Maybe they are training for that tone now, either deliberately or accidentally. But my belief that they weren't initially comes from the fact that it's a new tone, one I doubt anyone designed with deliberation. It bears a strong resemblance to "corporate bland," but it is also clearly distinct from it: we could all tell the two apart very easily.


Replies

exe34 · today at 4:30 PM

Like foxes coming up with floppy ears.