> I thought people here got excited about technology.
I am passionately excited about technology that serves people, but the current hype around AI is not that.
LLMs are, fundamentally, a super cool development. That we can now generate long passages of text that are statistically likely to be perceived as accurate is phenomenally neat.
But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
AI, as an industry, is mostly hype — and it's a particularly insidious form of hype that preys on people's desire for cool, new things at the expense of our collective long-term well-being. The exponential ramp-up of AI data centers is wreaking havoc on natural ecosystems and local communities; the humans "employed" for RLHF are severely underpaid and exploited, and are mostly powerless to make other choices; the damage to our digital infrastructure and community is visible daily (how many "are you a human?" captchas do you get today versus ten years ago? how many useless pull requests do repositories big and small receive on a daily basis? how many "look I vibe coded a shitty app in three days and it's riddled with bugs, let me post about it" posts do we see now?). And this is all to say nothing of the rapidity with which huge swaths of people have simply decided they fundamentally don't care about other people; they'd rather AI-generate some garbage company logo than employ a graphic designer to do a better job; they'd rather AI-generate copy text than hire an editor; they'd rather reach for the cheap, built-from-the-labor-of-others-without-respect-to-them tool that outsources all creativity and effort and gives them an immediately available "eh, it's good enough" solution. Before long, we will be inundated with "good enough", and we will forget what it was like to have good.
I'm excited about technology. I am not excited about the current incarnation of this technology.
[1] I am fundamentally not interested in sophistic arguments that "this is how humans work!" We don't know how humans work, so I make a choice to maintain a belief — based on my own experiences and learning — that LLMs do not accurately reflect the workings of a human brain.
[2] See "Empire of AI" by Karen Hao.
> But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
The reductionist, mechanical explanation of what AIs do is not the full picture, and it is hard to square with first-hand experience of frontier models. AIs know more, and can reason better, than most humans in a growing number of contexts.
Yes, this means they produce "convincing text." But there's more than one way LLM output can be convincing. The easiest way isn't rhetorical tricks or sycophancy; it's arguing compellingly, solving difficult problems, and producing good code. The frontier models have all improved dramatically in these respects over the past year and a half.
I find that the people who say "it's just statistical" or "it's just picking the next word" have probably not really understood what the actual tool can do. Ultimately it's arguable whether humans are just statistical too; our brains are pattern-matching machines. It's just not sensible to boil complex behavior down to a fundamental building block. It's not hype (though there is hype): vast numbers of people are getting real value out of it. I've been coding for 40+ years, and the utility of AI tools is super obvious to me.