Agree. I'd add that an aha moment with skills is that AI agents are pretty good at writing skills. Say you've developed an involved prompt that explains how to hit an API (possibly with the complexity of reading credentials from an env var or config file) or how to run a tool locally to get some output you want the agent to analyze (for example, downloading two versions of a Python package and diffing them to analyze changes). The agent reading that prompt is usually going to reach for local tools to do it (curl, shell + stdout, git, whatever) every single time. Every time you execute that prompt, a lot of thinking is spent on deciding to run those commands, and you are burning tokens (and time!). As an eng you know this is a relatively consistent and deterministic process for fetching the data, and if you were consuming it yourself, you'd write a script to automate it.
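To make the package-diff example concrete, the script the agent ends up writing can be as small as this (a rough sketch; the pip flags and plain diff -ru are just one reasonable way to do it, and the script name is made up):

```python
#!/usr/bin/env python3
# diff_package.py (hypothetical name): fetch two versions of a PyPI package
# and print a unified diff of their sources for the agent to analyze.
import subprocess, sys, tarfile, tempfile
from pathlib import Path

def fetch_sdist(pkg: str, version: str, dest: Path) -> Path:
    # Download only the source distribution: no wheels, no dependencies.
    subprocess.run(
        ["pip", "download", f"{pkg}=={version}", "--no-deps",
         "--no-binary", ":all:", "-d", str(dest)],
        check=True, capture_output=True,
    )
    sdist = next(dest.glob("*.tar.gz"))
    with tarfile.open(sdist) as tar:
        tar.extractall(dest)
    return next(p for p in dest.iterdir() if p.is_dir())

def main(pkg: str, old: str, new: str) -> None:
    with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
        old_dir = fetch_sdist(pkg, old, Path(a))
        new_dir = fetch_sdist(pkg, new, Path(b))
        # diff exits 1 when files differ, so don't check the return code.
        result = subprocess.run(
            ["diff", "-ru", str(old_dir), str(new_dir)],
            capture_output=True, text=True,
        )
        print(result.stdout)

if __name__ == "__main__":
    main(*sys.argv[1:4])  # e.g. python diff_package.py requests 2.31.0 2.32.0
```

The point isn't this particular script; it's that the agent can write it once and reuse it instead of re-deriving the same curl/pip/diff incantations on every run.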
So you read about skills (prompt + scripts) as a way to make this more repeatable and cut down the time spent thinking. At that point there are two paths you can go down -- write the skill and prompt yourself for the agent to execute -- or, better, just tell the agent to write the skill and prompt, then lightly edit it and commit it.
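If I have the format right (hedging on the exact frontmatter fields), the skill the agent hands you is just a folder containing the script above next to a SKILL.md along these lines -- the name and wording here are illustrative:

```markdown
---
name: package-diff
description: Download two versions of a PyPI package and diff their sources for analysis
---

To compare two releases of a package, run:

    python diff_package.py <package> <old-version> <new-version>

Read the printed diff and summarize the changes; do not re-derive the
pip/diff commands by hand, the script already handles fetching.
```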
This may seem obvious to some, but I've seen engineers create skills from scratch because their mental model is that skills are something people must build for the agent. IMO skills are just you bridging a productivity gap the agent can't close by itself (for now): instructing it to write tools that automate its own day-to-day tedium.
Completely agree with both points. Skills replacing one-off microservices and agents writing their own skills feel like two sides of the same coin to me.

I'm a solo developer building a markdown-first slide editing app. The core format is just Markdown with --- slide separators, but it has custom HTML comment directives for layouts (<!-- layout: title -->, <!-- layout: split -->, etc.) and content-type detection for tables, code blocks, and Mermaid diagrams. It's a small DSL, but enough that an LLM without context will generate slides that don't render optimally. Right now my app is designed for copy-paste from external LLMs, which means users have to manually include the format spec in their prompts every time.

But your comment about agents writing skills made me realize the better path: I could just ask Claude Code to read my parser and layout components, then generate a Slide_Syntax_Guide skill for me. The agent already understands the codebase; it can write the definitive spec better than I could document it manually.
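For anyone wondering what the skill would have to capture, a deck in that format looks roughly like this (slide content invented for illustration; the layout directives are the ones named above):

```markdown
<!-- layout: title -->
# Q3 Roadmap
Subtitle and speaker lines are plain Markdown.

---

<!-- layout: split -->
## Adoption so far

| Quarter | Active users |
|---------|--------------|
| Q1      | 1,200        |
| Q2      | 3,400        |
```

It's exactly the kind of spec the agent can extract from the parser and layout components and turn into a skill file.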
The example Datasette plugin authoring skill I used in my article was entirely written by Claude Opus 4.5 - I uploaded a zip file with the Datasette repo in it (after it failed to clone that itself for some weird environment reason) and had it use its skill-writing skill to create the rest: https://claude.ai/share/0a9b369b-f868-4065-91d1-fd646c5db3f4