Hacker News

Aperocky · today at 5:06 AM

This is correct, but it misses an important dimension.

You can inject a philosophy into the agent and ensure that it sticks to it. With sufficient drilling, the LLM will begrudgingly implement it. The most important principle is SIMPLE > COMPLEX at every level, and you have to monitor adherence continuously, either manually or agentically.
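In Claude Code, for instance, one place to drill such a philosophy in is the project's CLAUDE.md memory file, which the agent reads at the start of each session. A minimal sketch (the exact rules and wording here are illustrative, not prescriptive):

```markdown
# Project principles

## SIMPLE > COMPLEX (applies at every level)
- Prefer the smallest change that solves the problem; no speculative abstractions.
- Do not add a new dependency, layer, or config option without asking first.
- If a function exceeds ~40 lines or three levels of nesting, stop and simplify.
- Before finishing any task, re-read your diff and remove anything not strictly needed.
```

This only sets the default behavior; per the point above, you still have to monitor the output (by review, or with a second agent pass) to keep the principle from eroding as the context fills up.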

Alternatively, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix. This is the default path, and the path that far too many have taken.


Replies

prmph · today at 8:09 AM

> Alternatively, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix.

And this is (probably) what is happening to the Claude Code product itself. The harness has regressed and is increasingly unstable. I get lots of weird glitches:

- When I scroll back in the conversation, I keep seeing the same sections repeated, so I'm not actually able to see the earlier parts of the conversation.

- The whole CLI UI glitches out such that you can't even make sense of what you're seeing. This is usually fixed by resizing the terminal window.

- The previous edit in the conversation history gets lost when I hit Escape to provide direction.

- The CLI sometimes consumes huge amounts of memory (more than 10 GB per window, multiplied by the number of windows I'm working in).

- Etc.