Hacker News

turnsout · yesterday at 9:41 PM

It seems intuitive that a naive self-generated Skill would be low-value, since the model already knows whatever it's telling itself.

However, I've found them to be useful for capturing instructions on how to use other tools (e.g. hints on how to use command-line tools or APIs). I treat them like mini CLAUDE.mds that are specific only to certain workflows.
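As a sketch, such a workflow-specific Skill is just a small SKILL.md with a name and description in the frontmatter, followed by the tool hints. The CLI name and flags below are invented for illustration, not something from the comment:

```markdown
---
name: deploy-cli-hints
description: Hints for using our internal `depctl` deploy tool correctly
---

# Using depctl

- Always run `depctl validate` before `depctl push`; push fails silently
  on bad manifests otherwise.
- The `--env` flag is required; there is no default environment.
- Logs live under `logs/depctl/`, not stdout. Check there on failure
  instead of re-running the command.
```

The body is ordinary markdown instructions, so the same reflect-and-update loop described below just means editing this file.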

When Claude isn't able to use a Skill well, I ask it to reflect on why, and update the Skill to clarify, adding or removing detail as necessary.

With these Skills in place, the agent can do things it would otherwise really struggle with, burning a lot of tokens failing to use the tools, looking up documentation, and so on.


Replies

evmaki · yesterday at 9:54 PM

> I ask it to reflect on why, and update the Skill to clarify, adding or removing detail as necessary.

We are probably undervaluing the human part of the feedback loop in this discussion. Claude is able to solve the problem given appropriate human feedback, and many then jump to the conclusion that, well, if Claude is capable of doing it under some circumstances, we just need to figure out how to remove the human part so that Claude can eventually figure it out by itself.

Humans are still serving a very crucial role in disambiguation, and in centering the most salient information. We do this based on our situational context, which comes from hands-on knowledge of the problem space. I'm hesitant to assume that because Claude CAN bootstrap skills (which is damn impressive!), it would somehow eventually do so entirely on its own, devoid of any situational context beyond a natural language spec.

YZF · yesterday at 10:09 PM

A pattern I use a lot: after working with the LLM on a problem (directing it, providing additional context and information), I ask it to summarize what it learned into a skill. A later session with a similar theme can then start from that knowledge.
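Concretely, the end-of-session prompt might look something like this. The wording is illustrative, not a fixed incantation:

```
We just spent this session figuring out how to use the migration tooling.
Summarize what you learned into a Skill: the commands that worked, the
mistakes you made and how to avoid them, and any context a fresh session
would need. Keep it short enough to load at the start of a similar task.
```

The point is that the skill captures hard-won session context, not general knowledge the model already has.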