Hacker News

ethbr1 yesterday at 1:19 PM

I'll take a crack.

> what do LLMs disrupt? If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud.

"Cost of developing code" is a trivial and incomplete answer.

Coding LLMs disrupt (or will, in the immediate future)

(1) time to develop code (with cost as a second order effect)

(2) expertise to develop code

None of the analogs you provided are a correct match for these.

A closer match would be Excel.

It improved the speed and lowered the expertise required to do what people had previously been doing.

And most importantly, as a consequence of the latter especially, more types of people could leverage computing to do more of their work, faster.

The risk to B2B SaaS isn't that a neophyte business analyst is going to recreate your app overnight...

... the risk is that 500+ neophyte business analysts each have a chance of replacing your SaaS app, every day, every year.

Because they only really need to get lucky once, and then the organization shifts support to in-house LLM-augmented development.

The only reason most non-technology businesses didn't do in-house custom development thus far was that the ROI on employing a software development team didn't make sense for them. Suddenly that's no longer a blocker.

To the point about cloud, what did it disrupt?

(1) time to deploy code (with cost as a second order effect)

(2) expertise to deploy code

B2B SaaS should be scared, unless they're continuously developing useful features, have a deep moat, and are operating at volumes that allow them to be priced competitively.

Coding agents and custom in-house development are absolutely going to kill the 'X-for-Y' simple SaaS clone business model (anything easily cloneable).


Replies

agentultra yesterday at 2:54 PM

This seems to assume that these non-technical people have the expertise to evaluate LLM/agent generated solutions.

The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do so with agents without a human in the loop.

All those non-technical users have to do is approve that app, manage to deploy and run it themselves somehow, and then wait for the security breach that loses them their jobs.

sifar yesterday at 7:42 PM

>> 2) expertise to develop code

This is wrong. Paradoxically, you need expertise to develop code with an LLM.
