Not so obvious, because the model still needs to look up the required doc. Unfortunately the article glosses over this detail a bit. The model needs to decide when to use a skill, but doesn't it also need to decide when to look up documentation instead of relying on pretraining data?
Removing the skill does remove a level of indirection.
It's the difference between "choose whether or not to use a skill that would THEN attempt to find what you need in the docs" and "here's a list of everything in the docs that you might need."
I believe the skills would contain the documentation. It would have been nice for them to give more information on the granularity of the skills they created though.
This seems like the crux of the problem and I suspect this is more of a problem with popular frameworks like Next.js, rather than a general skills issue.
A human isn’t going to read the docs when implementing something they think they already know. Why would an agent?
We go to the docs when we’re stuck or something doesn’t work how we expect, but rarely before.
The question is, should LLMs be trained to behave differently? Maybe?
One way I work around this is instructing agents to always provide external references (e.g. links to documentation) when crafting implementation plans. That sort of forces the agent to go find supporting docs.
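For context, something like the sketch below is what I mean: a citation rule prepended to the planning prompt. The wording, constant, and function names are just illustrative, not any particular framework's API.

```python
# Hypothetical sketch of a "cite your sources" planning rule.
# Names and prompt wording are illustrative only.

PLANNING_RULE = (
    "When writing an implementation plan, every step that relies on a "
    "library or framework API must cite an external reference (a link to "
    "the official documentation) for that API. If you cannot find a "
    "reference, say so explicitly instead of relying on memory."
)

def build_planning_prompt(task: str) -> str:
    """Prepend the citation rule to a planning request."""
    return (
        f"{PLANNING_RULE}\n\n"
        f"Task: {task}\n\n"
        "Produce a step-by-step implementation plan."
    )

if __name__ == "__main__":
    print(build_planning_prompt(
        "Add incremental static regeneration to the product pages"
    ))
```

The same rule works fine pasted into an agent's standing instructions file; the point is just that asking for links at plan time pushes the lookup to happen before any code gets written.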