Hacker News

csoups14 · yesterday at 10:14 PM

I'd imagine it's the simplest answer: they're flying by the seat of their pants, there are 1,000 things happening every day that demand attention, and there's not enough of it to go around. They toss their LLM at it, give it a cursory glance, and ship it. A quick glance at the Claude Code source code bears the results of this process out. The fundamental question is: if their model is so powerful, why do they keep fucking up such simple things? We're led to believe this is a serious company with a model so powerful they can't release it to the general public.


Replies

stefan_ · yesterday at 10:42 PM

Hermes is one of those OpenClaw clones, so this was almost certainly intentional, not a model hallucinating something.

I think the problem is clear. Anthropic saw their usage go up much more than their capacity could handle. There are a few tried and true solutions to this, like "increase the price" or "restrict signups so you can guarantee service to what you have already sold".

Then there is the "large-scale fraud" option, where you materially change and degrade the service you have already sold. Just because you have obfuscated and misled in how you describe the product you are selling doesn't mean you get to capture the cash flow of one-year subscriptions and then not honor that contract for the full duration.

jiggawatts · yesterday at 10:22 PM

I doubt an AI would be stupid enough to write code like that without being explicitly prompted to. It's so... specific.

Code that specific would get caught by even the most cursory code review.

Even if I were just "scanning my eyeballs over the code" without properly reading it, this would jump out as very odd and make me pause.