This is unsurprising and irrelevant.
When you create a skill for a particular model, you don't typically ask the model to create the skill based solely on its own latent knowledge. Otherwise, you'd expect the effect to be similar to telling the model 'make a plan before acting, make no mistakes'.
But that's what the paper's authors did!
When they say 'self-generated', they mean the model was given no tool access at all, not even web search.
It would be much more interesting if they had tested skills that were created in one of these ways:
A) The model interviews a human and then creates the skill, or
B) The model executes one or more deep research tasks to gather information (roughly sketched after this list), or
C) Some combo of the above.
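A minimal sketch of what (B) might look like, with `llm()` and `web_search()` as hypothetical stand-ins for whatever model API and research tooling you actually use:

```python
# Sketch of option (B): ground the skill in gathered material rather than
# the model's latent knowledge alone. `llm` and `web_search` are placeholders,
# not any particular vendor's API.

def llm(prompt: str) -> str:
    """Placeholder for a call to whatever model you're targeting."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Placeholder for a search / deep-research tool returning text snippets."""
    raise NotImplementedError

def latent_only_skill(task: str) -> str:
    # What the paper effectively tested: no tools, no external grounding.
    return llm(f"Write a SKILL.md that helps you perform this task well:\n{task}")

def research_grounded_skill(task: str, queries: list[str]) -> str:
    # Gather material first, then distill it into the skill.
    notes: list[str] = []
    for q in queries:
        notes.extend(web_search(q))
    corpus = "\n\n".join(notes)
    return llm(
        "Using ONLY the research notes below, write a SKILL.md with concrete, "
        "task-specific guidance (pitfalls, tool invocations, checklists):\n\n"
        f"Task: {task}\n\nNotes:\n{corpus}"
    )
```

The plumbing doesn't matter; the point is that the second version has something to say beyond 'make a plan before acting'.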
I had to scroll too far to find this take. 100%.
This is like saying the CLAUDE.md or AGENTS.md is irrelevant because the LLM generated it.
> Otherwise, you'd expect the effect to be similar to telling the model 'make a plan before acting, make no mistakes'.
Have there not been previous iterations of these tools where such techniques were actually effective?
> This is unsurprising and irrelevant. When you create a skill for a particular model, you don't typically ask the model to create the skill based solely on its own latent knowledge.
This!
The only surprising part about the paper is that somebody wrote a paper on skills without a good understanding of the topic.