Hacker News

artur44 | last Wednesday at 6:52 PM | 3 replies

A lot of the debate here swings between extremes. Claims like “AI writes most of the code now” are obviously exaggerated, especially coming from a nontechnical author, but acting like any use of AI is a red flag is just as unrealistic. Early-stage teams do lean on LLMs for scaffolding, tests, and boilerplate, but the hard engineering work is still human. Is there a bubble? Sure, valuations look frothy. But as in the dotcom era, a correction doesn’t invalidate the underlying shift; it just clears out the noise. The hype is inflated; the technology is real.


Replies

artur44 | last Thursday at 8:07 PM

I think some wires got crossed. My point wasn’t that LLMs can’t produce useful infra or complex code; clearly they can, as many examples here show. It’s that neither extreme narrative (“AI writes everything now” vs. “you can’t trust it for anything serious”) reflects how teams actually work. LLMs are great accelerators for boilerplate, declarative configs, and repetitive logic, but they don’t replace engineering judgement; they shift where that judgement is applied. That’s why I see AI as real, transformative tech inside an overhyped investment cycle, not as magic that removes humans from the loop.

Daishiman | last Thursday at 12:38 AM

> Early-stage teams do lean on LLMs for scaffolding, tests, and boilerplate, but the hard engineering work is still human.

I no longer believe this. A friend of mine just did a stint at a startup doing fairly sophisticated finance-related coding, and LLMs allowed them to bootstrap a lot of new code, get it up and running on scalable infra with Terraform, onboard new clients extremely quickly, and write docs for them based on specs and plans elaborated by the LLMs.

This last week I extended my company's development tooling by adding a new service in a k8s cluster, with a bunch of extra services, shared variables and ConfigMaps, and new Helm charts that did exactly what I needed after asking nicely a couple of times. I have zero knowledge of k8s, Helm, or ConfigMaps.
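
For anyone who hasn't touched this stack: the wiring described above mostly boils down to a ConfigMap of shared variables that the new service's Deployment pulls in via envFrom. A rough sketch follows, using the official Python kubernetes client instead of the Helm templates actually involved; every name, namespace, and image here is invented for illustration.

    # Hypothetical sketch: a ConfigMap of shared variables consumed by a
    # Deployment via envFrom. Names/namespace/image are made up; assumes a
    # reachable cluster and an existing "dev-tooling" namespace.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # Shared variables live in a ConfigMap...
    core.create_namespaced_config_map(
        namespace="dev-tooling",
        body=client.V1ConfigMap(
            metadata=client.V1ObjectMeta(name="shared-env"),
            data={"FEATURE_FLAGS": "on", "UPSTREAM_URL": "http://api.internal"},
        ),
    )

    # ...and the new service's Deployment pulls them all in with envFrom.
    apps.create_namespaced_deployment(
        namespace="dev-tooling",
        body=client.V1Deployment(
            metadata=client.V1ObjectMeta(name="new-service"),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels={"app": "new-service"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "new-service"}),
                    spec=client.V1PodSpec(
                        containers=[
                            client.V1Container(
                                name="new-service",
                                image="registry.example.com/new-service:latest",
                                env_from=[
                                    client.V1EnvFromSource(
                                        config_map_ref=client.V1ConfigMapEnvSource(
                                            name="shared-env"
                                        )
                                    )
                                ],
                            )
                        ]
                    ),
                ),
            ),
        ),
    )

A Helm chart for the same setup is essentially templated YAML producing these same kinds of objects, which is exactly the sort of repetitive, declarative output LLMs handle well.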

jillesvangurp | last Thursday at 7:52 AM

The thing to remember about the dotcom era was that while there were a lot of bad companies at the time with a lot of clueless investors behind them, quite a few companies made it through the implosion of that bubble and then prospered. Amazon, Google, eBay, etc. are still around.

More importantly, the web is now dominant for enterprise SaaS applications, which is a category of software that did not really exist before the web. And the web post–dot-com bubble spawned a lot of unicorns.

In short, there was an investment bubble. But the core tech was fine.

AI feels like one of those things where the tech is similarly transformational (even more so, actually). It’s another investment bubble predicated on the price of GPUs, which is mostly making Nvidia very rich right now.

Right now the model makers are getting most of the funding and then funneling non-trivial amounts to Nvidia (and their competitors). But actually the value creation is in applications using the models these companies create. And the innovation for that isn’t coming from the likes of Anthropic, OpenAI, Mistral, X.ai, etc. They are providing core technology, but they seem to be struggling to do productive things in terms of UX and use cases. Most of the interesting things in this space are coming from smaller companies figuring out how to use the models these companies produce. Models and GPUs are infrastructure, not end-user products.

And with the rise of open-source models, open algorithms, and exponentially dropping inference costs, the core infrastructure technology is not as much of a moat as it may seem to investors. OpenAI might be well funded, but their main UI (ChatGPT) is surprisingly limited and riddled with bugs. That doesn’t look like the polished work of a company that knows what they are doing. It’s all a bit hesitant and copycat. It’s never going to be a magic solution to everyone’s problems.

From where I’m sitting, there is clear untapped value for AI in the enterprise space, and it’s going to take more than a half-assed chat UI to unlock it. It’s actually going to be a lot of work to build all of that. Coding tools are, so far, the most promising application of reasoning models, and it’s easy to see how that could be useful in the context of ERP/manufacturing, CRM, traditional office applications, and the financial world.

Those each represent verticals with many established players trying to figure out how to use all this new stuff — and loads more startups eager to displace them. That’s where the money is going to be post-bubble. We’ve seen nothing yet. Just like after the dot-com bubble burst, all the money is going to be in new applications on top of the new infrastructure. It’s untapped revenue. And it’s not going to be about buying GPUs or offering benchmark-beating models. That’s where all the money is going currently. That’s why it is a bubble.