Hacker News

Towaway69 | last Wednesday at 5:55 PM | 16 replies

What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.

Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.

Sure, it will - perhaps - be possible to swap out the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem - something OPEC has done so successfully for the oil market.

Another issue: once the codebase is agentic and the price of developers falls far enough that it becomes significantly cheaper to hire humans again, will those humans be able to understand the agentic codebase? Is this a one-way transition?

I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.


Replies

_the_inflator | last Wednesday at 8:03 PM

I have similar concerns.

We will miss SaaS dearly. I think history is repeating itself, just as with DVDs and streaming - we simply bought the same movie twice.

AI feels more and more the same. Half a year ago, Claude Opus was Anthropic's most expensive model - boy, using Claude Opus 4.6 in the 500k version is like paying a dollar per minute now. My once-decent budgets get hit not after weeks but after days (!) now.

And I am not even using agents or subagents, which would only multiply the costs - for what?

So what we arrive at, more and more, is the same as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.

Proof: you cannot compensate with prompt engineering anymore. A month ago you could fix any model discrepancies by being more clever and elaborate with your prompts, etc.

Not anymore. There is a hidden factor now that accounts for exactly that. It seems that the reliance on skills and different tiers is simply moving us away from prompt engineering, which is treated more and more as jailbreaking rather than guidance.

Prompt engineering has lately become so mundane that I wonder what vendors were really doing when analyzing the usage data. It seems vendors tied certain inquiries to certain outcomes, modeled by multi-step prompting that was reduced internally to certain trigger sentences - creating the illusion of having prompted your result when in fact you hadn't.

All you did was ask for the same result thousands of users had asked for before, and the LLM took a statistical approach to delivering it.

mojosam | yesterday at 12:47 PM

> Once the codebase has become fully agentic, i.e., only agents fundamentally understand it

What exactly do we mean by this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time). I think this means one of two things:

* The code and architecture being produced by agents take approaches that are abnormally complex or inscrutable to human reviewers. Is that what folks working with cutting-edge agents are seeing? In which case, such code obviously isn’t being reviewed; it can’t be.

* The code and architecture being produced by agents can still be understood by human reviewers, but it isn’t actually being reviewed by anyone — since reviewing pull requests isn’t always fun or easy, and injecting in-depth human review slows everything down a lot — and so no one understands how the code works. (I keep thinking about the AI maximalist who recently said he woke up to 75 pull requests from his agent, like that was a good thing.)

And maybe it’s a combination of the two: agent-generated pull requests are incrementally harder to grok, which makes reviewing more painful and take longer, which means more of them go without in-depth reviews.

But if your claim is true, the bottom line is that it means no one is fully reviewing code produced by agents.

eaglelamp | last Wednesday at 7:42 PM

No one ever asks how much it costs Facebook or Uber to serve requests, because it is irrelevant: they set prices to maximize their profit, like any good monopolist. Similarly, the future cartel of big providers will charge their captive users whatever they can get away with, not the cost of inference.

The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.

SaucyWrong | last Wednesday at 6:46 PM

This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies.

Sometimes the argument lands, very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."

And that remark, for me, is unfortunately a discussion-ender. I just haven't ever had a productive conversation with somebody about this after they make these remarks. Somebody saying these things has already placed their bets and is about to throw the dice.

mdavid626 | yesterday at 7:39 AM

There is no such thing as an agentic codebase. If humans don’t understand it, nothing really does. Agents give zero fucks about anything. If they burn a hundred or a million tokens to add a feature, they don’t care. It’s the developer’s responsibility to keep it under control.

shmobot | yesterday at 10:27 AM

Lately I also wonder about geopolitical lock-in and the balkanization of the internet. The US won't have this problem, I guess. But with all that's happening in the world right now and the current trends, the rest of us need to think hard about which AI company we trust with our data, or trust to still have access to once we're on the other side of the wall.

gengstrand | yesterday at 6:00 PM

If only the AI understands your code, then vendor lock-in and exposure to price hikes will be the least of your problems. I don't think you will be able to add Claude as the Dev-On-Call in your PagerDuty schedule. If you are in an industry that requires due diligence and you get sued for bugs that cause material damage and human suffering, then I don't think the "blame it on Claude" defense is going to land well in court. I cover these topics in a blog post I wrote recently: https://www.exploravention.com/blogs/soft_arch_agentic_ai/

sanderjd | yesterday at 2:36 PM

I'm beginning to develop the opinion that the next step in this process will (or at least should) be local and/or self-hosted inference.

The latest Qwen models are already very useful, and the smaller ones can be run locally on my laptop. These are obviously not as good as the latest frontier models, and that's extremely noticeable for the development workflow, but maybe in a year or two they will be competitive with the proprietary models we have today, which are incredibly capable. I also expect compute for inference to continue getting cheaper.

The current lock-in for me is the UX of Claude Code / Codex CLI, but this is a very small moat that will definitely be commoditized soon.

hyttioaoa | yesterday at 10:19 AM

I've been thinking this for a while now as well. If they keep subsidizing for long enough, there might be a large gap: humans who changed jobs or never got into the field in the first place. Then the only way out is to keep buying those tokens.

dahart | yesterday at 3:02 PM

What do you mean about vendor lock-in? I haven’t yet seen any meaningful barriers to switching between different companies’ coding agents. Are you talking about AI market lock-in and not vendor-specific lock-in?

> these loss making AI companies will eventually need to recoup

This is true, and while AI spend continues to rise, I’m starting to think that once the dust settles, the true costs emerge, and stable profits are achieved, it may be expensive enough to be a limiting force.

shmel | yesterday at 12:38 PM

I think it will be more similar to the cloud. I remember people predicting that once you moved to the cloud, you'd realize how expensive it actually was, but that the cost of migrating back would be high. And while, yes, the cloud is expensive, most people realized that it is kind of worth it.

emporas | last Wednesday at 8:17 PM

Code is so low-entropy that smaller, more economical models will be up to the task, just as the gigantic models from big providers are today.

No worries there: the huge improvements we see today from GPT and Claude are, at their heart, just reinforcement learning (CoT - chain of thought - and thinking tokens are just one example of many). RL is the cheapest kind of training one can perform, as far as I understand. Please correct me if that's not the case.

In the economy, the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open-source invisible hand makes everything completely free.

fantasizr | last Wednesday at 6:23 PM

This is a good point. Some of the AI companies are trying to hook CS students so they'll only know "dev" as a function of their products. The first one's free, as they say (the drug dealers, that is).

vovavili | yesterday at 12:57 PM

The oil market doesn't have an equivalent of open-source LLMs, self-hosting, or cloud providers.

pj_mukh | yesterday at 10:33 AM

"Just like oil prices and the global economy, fundamentally everything is getting better." (implied /s)

I remember having to pay a pretty penny for a 3-minute conversation with my dad working halfway across the world. Now I can video call my nephew for 45 minutes without blinking an eye. What happened?

Why will Intelligence be like Oil and not Broadband?

Aurornis | last Wednesday at 7:22 PM

> the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.

I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.

Every genre-defining startup seems to go through this same cycle where the naysayers tell us that it's all going to collapse once the investment money runs out. This was definitely true for technologies without use cases (remember the blockchain-all-the-things era?) but it is not true for businesses that have actual users.

Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector.

Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company and almost nobody noticed because the naysayers moved on to something else.

AI is different because the hardware is always getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful and it would get cheaper with each passing year as hardware improved.

> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices

Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground, and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that were as expensive as cars a few years ago are now depreciated and available for cheap because the new technology is vastly faster. The next generation will be faster still.

If you're mentally comparing this to things like oil, you're not on the right track.
