Hacker News

jgilias · yesterday at 10:19 PM

What you’re doing is the so-called “slot machine AI” approach, where you put some tokens in, pray, and hope to get what you want out. It doesn’t work that way (not well, at least).

The LLM under the hood is essentially a very fancy autocomplete. This always needs to be kept in mind when working with these tools. So you have to focus a lot on what the source text is that’s going to be used to produce the completion. The better the source text, the better the completion. In other words, you need to make sure you progressively fill the context window with stuff that matters for the task that you’re doing.

In particular: first explore the problem space with the tool (iterate), then use the exploration results to plan what needs doing (iterate), and only when the plan looks good and makes sense do you ask it to actually implement.
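As a rough illustration of the three phases (the prompts, file names, and feature here are entirely hypothetical — the point is the shape of the conversation, not the wording):

```
Phase 1 – explore:   "Read src/auth/ and summarize how session handling
                      works. Don't change anything yet."
Phase 2 – plan:      "Based on that, draft a step-by-step plan for adding
                      token refresh. List the files you'd touch and the risks."
Phase 3 – implement: "The plan looks good. Implement step 1 only, then
                      stop so I can review."
```

Each phase fills the context window with material the next phase builds on, which is the whole point: by the time you ask for implementation, the completion is conditioned on a vetted plan rather than a bare request.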

Claude’s built-in planning mode kind of does this, but in my opinion it sucks. It doesn’t make iterating on the exploration or the plan easy or natural. So I suggest setting up some custom prompts (skills) for this, with instructions that fit your particular domain/use case, and using those in normal mode.
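A sketch of what such a custom prompt might look like — assuming Claude Code’s skills layout, where a skill lives at `.claude/skills/<name>/SKILL.md` with YAML frontmatter; the skill name and instructions below are just an illustration, not a recommended canonical setup:

```
---
name: explore-then-plan
description: Explore the relevant code and produce a reviewable plan before implementing anything.
---

When asked to work on a feature or bug:

1. Explore: read the relevant files and summarize what you found.
   Do not modify anything yet.
2. Plan: propose a numbered plan listing the files you'd touch and
   the risks you see. Wait for my feedback and revise the plan until
   I explicitly approve it.
3. Implement: only after approval, implement the plan one step at a
   time, stopping after each step for review.
```

You’d adapt the instructions per project — the value is that the iterate-on-exploration and iterate-on-plan steps are spelled out, which the built-in planning mode doesn’t make natural.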


Replies

amelius · yesterday at 10:23 PM

With this kind of workflow, you run out of tokens quickly, in my experience.
