There is almost no point in telling an agent to build a skill without augmenting its knowledge of the thing it's writing about; otherwise you're just piping output back into input without adding any new information to the system. If you instead have the agent do a round of research online and distil that down to the information models tend to get wrong, that is newer than their training data, or that aligns better with your desired workflow than what they generate out of the box, you'll end up with a far more useful skill. I use a skill that activates when creating a skill to guide this approach: https://github.com/sammcj/agentic-coding/blob/main/Skills/sk...
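As a sketch of what that distilled research might look like once captured, here's a hypothetical skill file. The library name, frontmatter fields, and every bullet are illustrative assumptions, not the linked repo's actual format:

```markdown
---
name: using-hypothetical-lib
description: Use when writing code against hypothetical-lib v3 (APIs changed since v2)
---

# hypothetical-lib v3 notes

Findings from the v3 docs and changelog that models trained on v2 get wrong:

- `connect()` now returns a context manager; the old `open()`/`close()` pair was removed in v3.0.
- Prefer `client.query(sql, params=...)` over string interpolation (our team's lint rule, not the library default).
- The default timeout dropped from 60s to 10s; set it explicitly for long-running jobs.
```

The point is that each line records something the model would not reliably produce on its own: a post-training-cutoff API change or a local workflow convention.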
I find it useful to have it automatically update skills after trying them out in the wild, so it can improve them with real feedback. It seems to work well, though I haven't done any rigorous evaluation of it.
Absolutely. They gave the agents no autonomy to research and no additional data: no documentation, no web search, no reference materials.
What's the point of building skills like this?