Hacker News

Recursive Language Models

145 points by schmuhblaster · last Saturday at 11:29 AM · 23 comments

Comments

Legend2440 · last Saturday at 5:50 PM

Isn't this just subagents? You call another LLM to go read a file and extract some piece of information or whatever, so that you don't clutter up the main context with the whole file.

Neat idea, but not a new idea.
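A minimal sketch of the subagent pattern this comment describes, assuming a generic `call_llm(prompt) -> str` helper rather than any specific vendor API:

```python
# Sketch only: `call_llm` stands in for whatever LLM client you use.

def read_file_via_subagent(call_llm, path: str, question: str) -> str:
    """Spawn a throwaway LLM call that sees the whole file;
    only its short answer flows back to the main context."""
    with open(path, encoding="utf-8") as f:
        contents = f.read()
    subagent_prompt = (
        f"Answer the question using only the file below.\n"
        f"Question: {question}\n\n--- FILE ---\n{contents}"
    )
    return call_llm(subagent_prompt)

def main_agent_step(call_llm, path: str) -> str:
    # The main conversation never contains the raw file, just the extracted fact.
    fact = read_file_via_subagent(call_llm, path, "What does this module export?")
    return call_llm(f"Given that {fact!r}, decide the next step.")
```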

bob1029 · last Saturday at 5:12 PM

> The key insight is that long prompts should not be fed into the neural network (e.g., Transformer) directly but should instead be treated as part of the environment that the LLM can symbolically interact with.

How is this fundamentally different from RAG? Looking at Figure 4, it seems like the key innovation here is that the LLM is responsible for implementing the retrieval mechanism as opposed to a human doing it.
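A hedged sketch of that contrast, again assuming a generic `call_llm` helper and a toy regex probe; this is not the paper's actual interface:

```python
import re

def classic_rag(call_llm, retriever, long_prompt: str, query: str) -> str:
    # Classic RAG: a fixed, human-designed retriever decides which chunks
    # the model gets to see.
    chunks = retriever(long_prompt, query)
    return call_llm(f"Context:\n{chunks}\n\nQuestion: {query}")

def llm_driven_retrieval(call_llm, long_prompt: str, query: str) -> str:
    # RLM-style: the long prompt stays outside the context window, and the
    # model itself decides how to probe it (here, by emitting a regex).
    pattern = call_llm(f"Reply with one regex that locates text relevant to: {query}")
    hits = "\n".join(m.group(0) for m in re.finditer(pattern, long_prompt))[:2000]
    return call_llm(f"Relevant excerpts:\n{hits}\n\nQuestion: {query}")
```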

zed31726 · last Saturday at 4:51 PM

T̶u̶r̶t̶l̶e̶s̶ LLMs all the way down

mccoyb · last Saturday at 5:13 PM

My wishlist for 2026: Anthropic / OpenAI expose “how compaction is executed” to plugin authors for their CLI tools.

This technique should be something you could swap in for whatever Claude Code bakes in, but I don't think the right hooks or functionality are exposed.
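A purely hypothetical sketch of the kind of hook being wished for here; no such `on_compact` API exists in Claude Code today, and every name below is invented for illustration:

```python
from typing import Callable, List

Message = dict  # {"role": ..., "content": ...}

def default_compactor(history: List[Message],
                      summarize: Callable[[str], str]) -> List[Message]:
    # Baseline behavior: summarize everything but the last few turns.
    head, tail = history[:-6], history[-6:]
    summary = summarize("\n".join(m["content"] for m in head))
    return [{"role": "system", "content": f"Summary of earlier turns: {summary}"}] + tail

def register_compactor(cli, compactor):
    # Imagined registration point: a plugin author could swap in a
    # recursive-LM style compactor that keeps the full history in the
    # environment instead of summarizing it away.
    cli.hooks["on_compact"] = compactor
```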

cubefox · last Saturday at 5:14 PM

Seems similar to this paper: https://arxiv.org/abs/2510.14826