Hacker News

antonvs • today at 1:21 AM

The claim seems extremely unlikely to me. LLM comprehension is very sophisticated by any metric; the idea that something as trivial as concatenative syntactic structure would make a significant difference is implausible.

LLMs handle deeply nested syntax just fine - parentheses and indentation are not the hard part. Linearization is not a meaningful advantage.

In fact, it’s much more likely to be a disadvantage, much as it is for humans. Stack effects are implicit, so correct composition requires global reasoning. A single missing dup breaks everything downstream. LLMs, and humans, are much more effective when constraints are named and localized, not implicit and global.
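To make the "implicit and global" point concrete, here is a minimal sketch (not from the linked benchmark; all names are hypothetical) of a stack machine where a word's inputs and outputs are never declared, only implied, so an omitted dup surfaces as a failure somewhere downstream rather than at the site of the mistake:

```python
# Tiny stack-machine interpreter: words consume/produce values on an
# implicit data stack, with no declared stack effects to check against.

def run(program, stack=None):
    """Execute a list of words against a data stack and return the stack."""
    stack = [] if stack is None else list(stack)
    for word in program:
        if word == "dup":
            stack.append(stack[-1])
        elif word == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif word == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:  # anything else is a literal that pushes itself
            stack.append(word)
    return stack

square = ["dup", "mul"]     # intended effect ( n -- n*n ), but nothing enforces it
print(run([3] + square))    # [9]

# Omit the dup and the error appears downstream, at whichever later word
# happens to underflow the stack -- not at the definition that is wrong.
try:
    run([3, "mul"])
except IndexError:
    print("underflow, far from the missing dup")
```

In a language with named, local parameters the same mistake would be a visible arity error at the call site; here the only symptom is a global property of the stack.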


Replies

rescrv • today at 1:37 AM

I’m not claiming Forth should be used as-is. I’ve open-sourced the benchmark so others can reproduce the result I share in the post: https://github.com/rescrv/stack-bench