
groby_b · 01/15/2026 · 3 replies

LLMs have been bad at creating abstraction boundaries since inception, and people have been calling it out for just as long. (Heck, even I have a Twitter post somewhere >12 months old calling that out, and I'm not exactly a leading light of the effort.)

It is in no way size-related. The technology cannot create new concepts/abstractions, and so fails at abstraction. Reliably.


Replies

TeMPOraL · 01/15/2026

> The technology cannot create new concepts/abstractions, and so fails at abstraction. Reliably.

That statement is way too strong, as it implies either that humans cannot create new concepts/abstractions, or that magic exists.

w0m · 01/15/2026

I believe his argument is that now that you've defined the limitation, it's a ceiling that will likely be cracked in the relatively near future.

131hn · 01/16/2026

There’s only one way to implement a mission, an algorithm, a task. But there’s an infinity of paths: inconsistent, fuzzy, and always subjective ways to live. That’s our lives, and that’s the code LLMs are trained on. I do not think it will ever change much, and I hope it won’t.