Hacker News

michaeldwan | yesterday at 8:58 PM | 3 replies

I credit containerization, k8s, and terraform for preventing vendor lock-in. Compute like EC2 or GCE is effectively interoperable. Ditto for managed services for k8s or Postgres. The new products Anthropic is shipping are more like Lambda: vendor kool-aid that lots of people will buy into.

What grinds my gears is how Anthropic is actively avoiding standards, like being the only harness that doesn't read AGENTS.md. I work on AI infra and use different models all the time. Opus is really good, but the competition is very close. There's just enough friction to testing them out, though, and that's the point.


Replies

danudey | today at 1:03 AM

> The new products Anthropic is shipping are more like Lambda: vendor kool-aid that lots of people will buy into.

Counterpoint: there are probably tons of people out there who were hacking together lousy versions of these same tools to somehow spin up Claude to generate the release notes for their PRs or analyze their GitHub Issues every week. This is a smarter, faster, easier, and likely far more secure way of implementing the same thing, which makes things much better for the people already doing them.

In the meantime, it wouldn't be surprising if other AI companies started doing similar things. I could see Cursor, for example, adding a similar hosted "Do GitHub Things" option for enterprises, and if they do, that means more variety and less lock-in (assuming the competitors have similar features).

From my perspective it's no different than writing a Claude skill, which is something it seems like everyone is doing these days; it's just that in this case the 'skill' is hosted somewhere else, on (likely) more reliable architecture and at cheaper scale.

JohnMakin | yesterday at 9:27 PM

I think there is lock-in, despite those things. For containerization, you're still often beholden to the particular runtime the provider prefers, and whatever weird quirks exist there; migrating can hold some surprises. With k8s you'll usually go managed, and while the managed offerings provide the same core functionality, AKS != EKS != GKE at all, at least in terms of managing them and how they plug into everything else. In Terraform, migrating from the AWS provider to the GCP provider will hold a lot of surprises for what looks like it should be the exact same thing.

My point was, I don't think it mattered much, and it feels like an apt comparison: cloud offerings are mostly the exact same things, at least at their core, but the ecosystem around them is the moat, along with how expensive it is to migrate off of them. I would not be surprised at all if frontier AI model providers go much the same way. I'm pretty much there already with how much I prefer the Claude Code CLI, even if half the time I'm using it as a harness for OpenAI calls.

fragmede | yesterday at 10:01 PM

There's a tiny amount of friction. Enough that I'll be honest and say that I spend the majority of my time with one vendor's system, but compared to the friction of moving from one cloud to another, e.g. AWS to GCP, the friction between opening Claude Code vs Codex is basically zero. Have an active subscription and have CLAUDE.md say "read AGENTS.md".
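For what it's worth, the workaround described above amounts to a one-line pointer file. A minimal sketch of what that CLAUDE.md could contain, assuming AGENTS.md sits in the repo root (the exact wording is illustrative, not a documented Anthropic convention):

```
# CLAUDE.md
Read AGENTS.md in this directory and follow its instructions.
```

This keeps the actual project instructions in a single vendor-neutral AGENTS.md, with the vendor-specific file reduced to a redirect.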

Claude Code routines sound useful, but at the same time, under the AI-codepocalypse, my guess is it would take an afternoon to have Codex reimplement them using some existing freemium SaaS cron platform, assuming I didn't want to roll my own (because of the maintenance overhead vs paying someone else to deal with that).
