How likely are we to look back on Agent/MCP/Skills as some early Netscape peculiarity? I would dive into adoption if I didn't think some new thing would beat the paradigm in a fortnight.
Is a skill essentially a reusable prompt that is inserted at the start of any query? The marketing of Agents/MCP/skills/etc is very confusing to me.
The agentic development scene has slowly turned into a full-blown JavaScript circus—bright lights, loud chatter, and endless acts that all look suspiciously familiar. We keep wrapping the same old problems in shiny new packages, parading them around as if they’re groundbreaking innovations. How long before the crowd grows tired of yet another round of “RFC” performances?
Why does this need to be a standard in the first place? This isn't DDR5 lol, it's literally just politely asking the model to remember some short descriptions and read a corresponding file when it thinks appropriate. I feel like these abstractions are supposed to make Claude sound more sophisticated, because WOW, now we can give the guy new skills! But really they're just obfuscating the "data as code" aspect of LLMs, which is their true power (and vulnerability, ofc).
I like how Anthropic has positioned itself as the true AI research company, "donating" standards like this.
Skills are just md files, but it's still good to see them "donate" it.
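For context, a skill per the published spec is a folder containing a SKILL.md whose frontmatter carries a name and a description; the agent keeps only the description in context and reads the body once it decides the skill applies. A minimal sketch (the skill name and steps here are made up for illustration):

---
name: changelog-writer
description: Use when asked to draft a CHANGELOG entry from recent commits.
---
<step-by-step instructions the model loads only after matching the description>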
Their goal seems simple: focus on coding and keep improving it. They've found a great niche there, and hopefully a revenue-generating business.
OpenAI, on the other hand, doesn't give me the same vibes; they don't seem very focused. They're playing catch-up with both Google's models and Anthropic.
I feel inspired and would like to donate my standard for Agent Personas to the community. A persona can be defined by a markdown file with the following frontmatter:
---
persona: hacker
description: logical, talks about computers a lot, enjoys coffee, somewhat snarky and arrogant
---
<more details here>

It was just a few months ago that the MCP spec added a concept called "prompts", which are really similar to skills.
And of course Claude Code has custom slash commands which are also very similar.
Getting a lot of whiplash from all these specifications that are hastily put together and then quickly forgotten.
They published a specification; that doesn't yet make it a standard.
All the talk about "open" standards from AI companies feels like VC-backed public LLM experiments. Even if these standards fade, they help researchers create and validate new tools. I see this especially with local models. The rise of CLI-based LLM coding tools lets me use models like GPT OSS 20B to build apps locally and offline.
Skills are a pretty awkward abstraction. They emerged to patch a real problem: generic models require tuning via context, which quickly leads to bloated context files and context dilution (i.e., more hallucinations).
But skills don't really solve that problem, and turning the workaround into a standard feels strange. Standardizing a patch isn't something I'd expect from Anthropic; it's unclear what their endgame is here.
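To be fair, the mechanic skills lean on, often called progressive disclosure, does address the bloat directly: only the one-line descriptions sit in context up front, and a skill's full body is read on demand. A rough sketch of the idea (the layout and function names are hypothetical, not Anthropic's implementation):

```python
# Sketch of "progressive disclosure": keep only short skill
# descriptions in context; load a skill's full body on demand.
# Skill names and bodies here are invented for illustration.

SKILLS = {
    "pdf-forms": {
        "description": "Fill out PDF forms field by field.",
        "body": "Step 1: extract the form fields...\nStep 2: fill each field...",
    },
    "changelog": {
        "description": "Draft CHANGELOG entries from git history.",
        "body": "Step 1: run git log since the last tag...",
    },
}

def context_preamble() -> str:
    """What the model sees up front: names plus one-line descriptions only."""
    return "\n".join(
        f"- {name}: {meta['description']}" for name, meta in SKILLS.items()
    )

def load_skill(name: str) -> str:
    """What the model reads only after deciding a skill is relevant."""
    return SKILLS[name]["body"]

print(context_preamble())
print(load_skill("changelog"))
```

The dilution complaint still stands for the descriptions themselves, which all compete for the model's attention, but the bodies stay out of context until needed.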
I'd love to see way more interest, rigor, tooling, etc. in the industry around Skills; I really think they have solved the biggest problems that killed Expert Systems back in the day. I'd love to see the same enthusiasm for these as for MCP; I think in the long term they will be much more important than MCPs (though still complementary).
I wish agent skills were something other than a system prompt or a series of step-by-step instructions. It feels like Anthropic had an opportunity here to do something truly groundbreaking but ended up with prompt engineering.
I'm curious about the `license` field in the specification: https://agentskills.io/specification.
Could one make a copyleft type license such that the generated code must be licensed free and open and under the same license? How enforceable are licenses on these skills anyway, if one can take in the whole skill with an agent and generate a legally distinct but functionally close variant?
Interesting move. One thing I’m curious about is how opinionated the standard is supposed to be. In practice, agent “skills” tend to blur the line between capabilities, tools, and workflows, especially once statefulness and retries enter the picture. Is the goal here mostly interoperability between agent frameworks, or do you see this evolving into something closer to a contract for agent behavior over time? I can imagine standardization helping a lot, but only if it stays flexible enough to avoid freezing today’s agent design assumptions.
If anyone wants to use Skills in Gemini CLI or any other llm tool - check out something I have created, open-skills https://github.com/BandarLabs/open-skills
It runs any code execution your Skill requires inside an Apple container.
It also proves the point that Skills are basically repackaged MCPs (if you look into my code).
I have been switching between OpenCode and Claude - one thing I like about OpenCode is the ability to define custom agents. These can be ones tailored to specific workflows like PR reviews or writing change logs. I haven't yet attempted the equivalent of this with skills in Claude.
These two solutions look, feel, and smell like the same thing. Are they the same thing?
Any OpenCode users out there have any hot or nuanced takes?
They really do love standards
I developed a tool for Roo Code (and have since moved over to Antigravity with no problem) that basically gives Playwright the ability to develop and test user scripts in an automated fashion.
It is functionally a skill. I suppose once Antigravity supports skills, I will make it one officially.
Finally I can share this beauty with a wider world:
https://github.com/alganet/skills/blob/main/skills/left-padd...
Love seeing this become an open standard. We just shipped the first universal skill installer built on it:
npx ai-agent-skills install frontend-design
20 of the most starred Claude skills ever, now open across Claude Code, Cursor, Amp, VS Code: anywhere that supports the spec. Would love some feedback on it.
github.com/skillcreatorai/Ai-Agent-Skills
Is it possible to provide a llm a skill through the mcp resource feature?
My company has a plugin marketplace in a git repo where we host our shared skills. It would be nice if we could plug that into the web interface.
Still can’t symlink skills from Claude Code to Codex tho :/
Our lab has been experimenting with “meta skills”: skills that let an agent turn a completed workflow into a new skill it can use later.
Paper & applications published here: https://earthpilot.ai/metaskills/
Argh word creep.
It has been published as an open specification.
Whether it is a standard isn't for them to declare.
Is Codex working well with Python notebooks?
Tired of having to learn the Next New Thing (tm) that'll be replaced in a month.
“Agent skills” seems more like a pattern than something that needs a standard. It’s like announcing you’ve created a standard for “singletons” or “monads”
This is the right direction but this implementation is playdough and this needs to be a stone mansion. I’m working on a non-LLM AI model that will blow this out of the water.
What is the difference between 3rd party skills and connectors? How do you access/install 3rd party skills in claude code?
Claude's skills thing just leveled up from personal toy to full enterprise push - org admins shoving Notion/Figma/Atlassian workflows straight into the model? That's basically turning Claude into your company's AI front door. The open standard bit is smart though, means every partner skill keeps funneling tokens back their way. But good luck when every PM wants their custom agent snowflake and your infra bill triples overnight.
Might add this to the next https://hackernewsai.com/ newsletter.
There's a pattern I keep seeing: LLMs used to replace things we already know how to do deterministically. Parsing a known HTML structure, transforming a table, running a financial simulation. It works, but it's like using a helicopter to cross the street: expensive, slow, and not guaranteed to land exactly where you intended.
The real opportunity with Agent Skills isn't just packaging prompts. It's providing a mechanism that enables a clean split: LLM as the control plane (planning, choosing tools, handling ambiguous steps) and code or sub-agents as the data/execution plane (fetching, parsing, transforming, simulating, or executing NL steps in a separate context).
This requires well-defined input/output contracts and a composition model. I opened a discussion on whether Agent Skills should support this kind of composability:
https://github.com/agentskills/agentskills/issues/11
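The split described above can be sketched in a few lines: the control plane (here a trivial stand-in for a model call) only picks which handler runs, while the handlers themselves are plain deterministic code. All names below are made up for illustration, not part of the Agent Skills spec:

```python
# Sketch: LLM as control plane, deterministic code as execution plane.
# `choose_tool` is a stand-in for a model call; in a real system an
# LLM would pick from the registered tools (and handle ambiguous steps).

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a deterministic handler in the execution plane."""
    def wrap(fn: Callable[[str], str]):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("parse_table")
def parse_table(payload: str) -> str:
    # Deterministic transform: no model involved, no helicopter.
    rows = [line.split(",") for line in payload.splitlines()]
    return f"{len(rows)} rows parsed"

@tool("simulate")
def simulate(payload: str) -> str:
    return f"simulated scenario: {payload}"

def choose_tool(task: str) -> str:
    """Control plane: decides *which* tool runs, never *how* it runs."""
    return "parse_table" if "csv" in task else "simulate"

def run(task: str, payload: str) -> str:
    return TOOLS[choose_tool(task)](payload)

print(run("clean this csv", "a,b\nc,d"))
```

The input/output contract lives at the `run` boundary: the model's output is just a tool name, and everything past that point is testable without a model in the loop.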