Hacker News

smj-edison · today at 5:42 PM

One thing I think is missing from these discussions, though, is that each level of abstraction needs to be introspectable. LLMs get compared to compilers a lot, so I'd like to ask: what is the equivalent of dumping the tokens, AST, SSA, IR, optimization passes, and assembly?

That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
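To make "introspectable" concrete with the compiler side of the analogy: here is a minimal sketch of what those dumps look like in CPython, using its standard `tokenize`, `ast`, and `dis` modules. (This is an illustration of the compiler layers being asked about, not a claim that LLMs expose anything comparable.)

```python
import ast
import dis
import io
import tokenize

src = "x = 1 + 2\n"

# Layer 1: the token stream the parser consumes.
tokens = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO(src).readline)
    if tok.string.strip()
]

# Layer 2: the abstract syntax tree.
dump = ast.dump(ast.parse(src))

# Layer 3: the compiled bytecode -- CPython's "assembly". Note that
# CPython's constant folding typically collapses 1 + 2 here, so even
# an optimization pass leaves a visible trace in the dump.
instrs = [i.opname for i in dis.get_instructions(compile(src, "<demo>", "exec"))]

print(tokens)   # e.g. ['x', '=', '1', '+', '2']
print(dump)     # contains a BinOp node for the addition
print(instrs)   # includes STORE_NAME for the assignment
```

Every layer here can be dumped, diffed, and debugged independently, which is exactly the property in question.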


Replies

fn-mote · today at 5:51 PM

“Needs to be” is a strong claim. The skill of debugging complex problems by stepping through disassembly to find a compiler bug is very specialized; few people can do it. Most applications don’t need that “introspection”. They need “encapsulation”: faith that the lower layers work well 99.9+% of the time, and knowing who to call when they fail.

I’m not saying generative AI meets this standard, but it’s different from what you’re saying.
