
lenerdenator · yesterday at 10:50 PM

I can tell you why it won't go that way:

Customers.

When you sell them a technological solution to their problem, they expect it to work. When it doesn't, someone needs to be responsible for it.

Now, maybe I'm wrong, but I don't see any of the current AI leaders saying, "Yeah, you're right, this solution didn't meet your customer's needs, and we'll eat the resulting costs." They didn't get to be "thought leaders" in the current iteration of Silicon Valley by taking responsibility for things that broke.

So that means you will need to take responsibility for it, and how can you make that work as a business model? Well, you pay someone - a human - who knows what they're looking at to review at least some of the code that the AI generates.

Will some of that be AI-aided? Of course. Can you make a lot of the guesswork go away by saying "use commonly-accepted design patterns" in your CLAUDE.md? Sure. But you'll still need someone to enforce it and take responsibility at the end of the day if it screws up.
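To make that concrete: the guardrail I mean is a standing-instructions file like CLAUDE.md, which Claude Code reads for project-level rules. A minimal sketch, with made-up rules purely for illustration:

    # Project conventions for the AI assistant
    - Use commonly-accepted design patterns; prefer the ones already in this codebase.
    - Generated changes must pass the existing test suite before commit.
    - Flag anything touching auth, billing, or data migrations for human review.

Rules like these narrow the guesswork, but they don't enforce themselves.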


Replies

asdff · yesterday at 11:03 PM

You are thinking in terms of the next few years, not the next few centuries. Plenty of software sold today fails to meet expectations, and no one eats the costs.