Hacker News

zmmmmm · yesterday at 10:01 PM · 28 replies

I think AI rescue consulting is going to become a significant mode of high-value consulting, similar to the specialists who come in to deal with a security breach or do data recovery.

Purely AI-written systems will scale to a level of complexity that no human can understand. The defect close rate will taper off while the token burn per defect climbs, until AI changes cause, on average, more defects than they close and the whole system becomes unstable. Cleaning up such a mess will become a specialized kind of process: clean-room the system and rebuild it fresh (probably still with AI) after distilling out the core design principles needed to avoid catastrophic breakdown.
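The dynamic described above can be sketched as a toy simulation (all numbers invented, purely illustrative): each AI fix closes one defect but introduces new ones at a rate that creeps up with system complexity, so the defect count shrinks at first and then grows without bound.

```python
# Toy model (hypothetical numbers, purely illustrative): defect count over
# successive AI fix attempts. Each fix closes one defect but introduces
# `intro_rate` new ones, and intro_rate grows as complexity accumulates.

def simulate(rounds, intro_rate_growth=0.02, start_defects=100):
    defects = start_defects
    intro_rate = 0.5  # new defects per fix; starts below the breakeven of 1.0
    history = []
    for _ in range(rounds):
        defects = max(0.0, defects - 1) + intro_rate  # one closed, some opened
        intro_rate += intro_rate_growth               # complexity creep
        history.append(defects)
    return history

h = simulate(60)
# While intro_rate < 1 the system converges; once each fix creates more
# than one new defect, the defect count turns around and climbs.
```

The interesting feature is the breakeven point, not the exact numbers: below one new defect per fix the process is self-correcting, above it the codebase diverges no matter how many tokens are burned.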

Somewhere in the future, the new software engineering will be primarily about principles for avoiding this in the first place, but it will take us 20 years to learn them, just as the original software engineering took far longer than expected to reach a stable set of design principles (and people still argue about them!).


Replies

leoc · yesterday at 10:09 PM

> Purely AI written systems will scale to a point of complexity that no human can ever understand and the defect close rate will taper down and the token burn per defect rate scale up and eventually AI changes will cause on average more defects than they close and the whole system will be unstable.

Wow, it’s true, AI really is set to match human performance on large, complex software systems! ;)

andsoitis · today at 2:15 AM

> Purely AI written systems will scale to a point of complexity that no human can ever understand

But won’t those more complex systems presumably solve more complex problems than the systems that humans could build? Or within a comparable time?

I think it is reasonably safe to assume at this point in the game that these AI systems are increasingly able to reason rigorously about novel problems presented to them, of ever increasing complexity and sophistication.

ramoz · yesterday at 10:15 PM

A non-technical friend of mine just won some hospital contracts after vibecoding an inventory management solution for them with Claude. They gave him access to IT department servers, and he called me, extremely lost about how to deploy (he can't connect Claude to them) and frustrated because the app has some sort of interesting data/state issues.

abhiyerra · yesterday at 10:48 PM

Heh. Got a customer recently around this. The entire infrastructure and CI/CD were vibecoded. They had half-implemented Kubernetes in GitHub Actions workflows that were several thousand lines long and impossible to understand.

I think the problem will get worse. I dislike the marketing around AI, but I do think it is a useful tool that helps those with experience move faster. If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.

blipvert · yesterday at 10:10 PM

Reminds me of the quote in the original Westworld movie:

“These are highly complicated pieces of equipment… almost as complicated as living organisms.

In some cases, they’ve been designed by other computers.

We don’t know exactly how they work.”

Now how did that work out? ;-)

fooker · yesterday at 10:31 PM

This might not pan out to be the glorious victory of human craft as you’re imagining it to be.

Here’s a slightly different future - these AI rescue consultants are bots too, just trained for this purpose.

Plausible?

I have already seen claude 4.7 handle pretty complex refactors without issues. Scale and correctness aren't even 1% of the issue they were last year. You just have to get the high-level design right, or explicitly ask it to critique your design before building it.

m463 · today at 12:18 AM

> Purely AI written systems will scale to a point of complexity that no human can ever understand

I think it will be needless verbose complexity.

I kind of imagine someone having an unlimited budget of free Amazon stuff shipped to their house.

In theory, they are living a prosperous life of plenty.

In reality, they will be drowning in something that isn't prosperity.

gerdesj · yesterday at 11:17 PM

"Purely AI written systems will scale to a point of complexity"

You have not seen the spreadsheets that accounts run the firm on.

Bloody kids!

badtuple · today at 12:34 AM

I've already done a handful of these gigs for early vibecoded products that had collapsed in on themselves. The scope of work was to stabilize the product and only make existing features work.

The issues have all been structural, not local. It's easier to treat it like a rewrite using the original as a super detailed product spec. Working on the existing codebase works, but you have to aggressively modularize everything anyway to untangle it rather than attack it from the top down.

All of these projects have gone well, but I haven't run into a case where a feature they thought was implemented isn't possible. That will happen eventually.

It's honestly good, quick work as a contractor. But I do hope they invest in building expertise from that point rather than treating it like a stable base to continue vibecoding on.

taurath · today at 1:44 AM

We already know them, but everyone is busy throwing them in the trash. It's all gas and no brakes or handling right now.

hughw · yesterday at 11:15 PM

But it's so easy now to redo it all ground up, and if models improve, do it better next time.

I exaggerate only a little.

Aperocky · yesterday at 11:39 PM

> reach a stable set of design principles

Are you sure about this? Yes, there is a stable set, but those principles get used in all the wrong places, particularly where they don't belong, because juniors, and now AIs, can recite them and want to apply them everywhere. And that's before asking whether the stable set itself is correct, which is dubious at this point.

spamizbad · today at 12:00 AM

What you're describing really isn't a new problem for organizations. Historically it's been a team of humans not using AI that gets over its skis, and other, more capable humans (also not using AI) have to bail them out.

jimbokun · yesterday at 11:47 PM

Those design principles that will take us 20 years to learn are just today's principles for writing good, maintainable, debuggable, understandable code. It will just take 20 years to figure out that they still apply when AI writes the code, too.

orev · yesterday at 10:12 PM

As the models keep improving, wouldn’t you be able to task a newer AI to “clean up this mess”?

onlyrealcuzzo · today at 12:22 AM

> Purely AI written systems will scale to a point of complexity that no human can ever understand

In their current form, that's unlikely for a product that actually needs to work.

Nothing built with current LLMs is getting that complex and still working.

alhazrod · yesterday at 10:43 PM

The complexity you would come in to rescue people from: would it come from the AI, or from the style of programming you let the AI use? You have very different problems with a functional style than with an object-oriented one. It is up to the programmer to realize they want a functional style and request that from the AI as much as possible. Even an AI cannot imagine every state transition, unless it is so smart that it should be the one telling you what to do.
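One way to see the stylistic difference (a hypothetical inventory example, not from the thread): the same state transition written with mutable object state, versus as a pure function that returns a new state and rejects invalid transitions outright.

```python
# Illustrative sketch (hypothetical example): mutable OO state vs. a
# functional style where every transition is an explicit, checked function.

from dataclasses import dataclass, replace

class InventoryOO:
    """Mutable style: state changes are scattered and unchecked."""
    def __init__(self, stock: int):
        self.stock = stock
    def ship(self, n: int) -> None:
        self.stock -= n  # silently allows negative stock

@dataclass(frozen=True)
class Inventory:
    """Functional style: immutable state, transitions return new values."""
    stock: int

def ship(inv: Inventory, n: int) -> Inventory:
    if n > inv.stock:
        raise ValueError("cannot ship more than is in stock")
    return replace(inv, stock=inv.stock - n)  # old state stays untouched
```

In the functional version every state transition is funneled through one checked function, so the full set of reachable states is easier to audit, which is exactly the property that matters when neither the programmer nor the AI can hold every transition in their head.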

jatora · yesterday at 11:57 PM

Interesting perspective. It is fundamentally in conflict with the data, the science, and 20+ years of trends in AI coding systems, to the point of dogmatism. But interesting from a sociological point of view.

whimsicalism · yesterday at 11:34 PM

I'm sure AI capabilities will plateau any moment now..

jiggawatts · yesterday at 10:43 PM

> I think AI rescue consulting is going to be come a significant mode of high value consulting

I thought the same when I saw development outsourced to Indians who struggled to write a for loop.

I was wrong.

It turns out that customers will keep doubling down on mistakes until they’re out of funds, and then they’ll hire the cheapest consultants they can find to fix the mess with whatever spare change they can find under the couch cushions.

Source: being called in with a one week time budget to fix a mess built up over years and millions of dollars.

m101 · yesterday at 11:58 PM

Is this true because the companies training AI haven't been training it for both performance and brevity (or some similar metric)? If this becomes a much more serious issue, surely they would adjust the training process.

dboreham · today at 1:42 AM

My current business plan!

altairprime · yesterday at 10:28 PM

Financial auditing with pre-AI technical chops will be uniquely niche-valuable, too :)

hgs6 · yesterday at 11:40 PM

Have you watched Jurassic Park? That story is not about Dinos.

luxuryballs · today at 12:46 AM

This is def true, but I also wonder if AI models' context sizes and capabilities will scale to keep up and eventually be able to untangle the mess.

uuyy · yesterday at 10:04 PM

AI janitors

jcgrillo · yesterday at 10:19 PM

> Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first...

This is nowhere near as complicated as making distributed systems reliable. It's really quite simple: read a fucking book.

Well, actually read a lot of books. And write a lot of software. And read a lot of software. And do your goddamn job, engineer. Be honest about what you know, what you know you don't know, and what you urgently need to find out next.

There is no magic. Hard work is hard. If you don't like it get the fuck out of this profession and find a different one to ruin.

We all need to get a hell of a lot more hostile and unwelcoming towards these lazy assholes.
