Hacker News

AI's Dial-Up Era

469 points | by nowflux | 11/03/2025 | 433 comments

Comments

saltysalt | 11/03/2025

Not sure the dial-up analogy fits, instead I tend to think we are in the mainframe period of AI, with large centralised computing models that are so big and expensive to host, only a few corporations can afford to do so. We rent a computing timeshare from them (tokens = punch cards).

I look forward to the "personal computing" period, with small models distributed everywhere...

indigodaddy | 11/03/2025

Funny how this guy thinks he knows exactly what's up with AI, and how "others" are "partly right and wrong." Takes a bit of hubris to be so confident. I certainly don't have the hubris to think I know exactly how it's all going to go down.

geon | 11/04/2025

The LLM architectures we have now have reached their full potential already, so going further would require something completely different. It isn’t a matter of refining the existing tech, whereas the internet of 1997 is virtually technologically identical to what we have today. The real change has been sociological, not technological.

To make a car analogy: the current LLMs are not the early cars, but the most refined horse-drawn carriages. No matter how much money is poured into them, you won’t find the future there.

kaoD | 11/03/2025

> If you told someone in 1995 that within 25 years [...] most people would find that hard to believe.

That's not how I remember it (but I was just a kid so I might be misremembering?)

As I remember it (and from what I gather from media of the era), the late 80s/early 90s were hyper-optimistic about tech. So much so that I distinctly remember a (German?) TV show from when I was a kid where they had what amounts to modern smartphones, and we all assumed that was right around the corner. If anything, it took too damn long.

Were adults outside my household not as optimistic about tech progress?

runarberg | 11/03/2025

The vast majority of the dot-com comparisons that I personally see are economic, not technological. People (or at least the ones I see) are claiming that the bubble mechanics of e.g. circular trading and over-investment are similar to the dot-com bubble, not that AI technology is somehow similar to the internet (it obviously isn’t). And to that extent we are in the year 1999, not 1995.

Of the two sides of the debate this article claims exist, I believe only one of them is real (the one hyping up the technology). While there are people like me who are pessimistic about the technology, we are not in any position of power, and our opinion on the matter is basically side noise. A much more common view (among people with any say in the future of this technology) is the belief that this technology is not yet at a point which warrants all this investment. There were people who said that about the internet in 1999, and they were proven 100% correct in the months that followed.

ecommerceguy | 11/04/2025

I'm getting AI fatigue. It's OK for rewriting quick emails that I'm having brain farts on, but at anything deep it just sucks. I certainly can't see paying for it.

bena | 11/03/2025

“But the fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”

Because some notable people dismissed things that wound up having a profound effect on the world, it does not mean that everything dismissed will have a profound effect.

We could just as easily be "peak Laserdisc" as "dial-up internet".

mjr00 | 11/03/2025

While I mostly agree with the article's premise (that AI will cause more software development to happen, not less) I disagree with two parts:

1. the opening premise comparing AI to dial-up internet; basically everyone knew the internet would be revolutionary long before 1995. Being able to talk to people halfway across the world on a BBS? Sending a message to your family on the other side of the country and them receiving it instantly? Yeah, it was pretty obvious this was transformative. The Krugman quote is an extreme, notable outlier, and it gets thrown out around literally every new technology, from blockchain to VR headsets to 3DTVs, so just like, don't use it please.

2. the closing thesis of

> Consider the restaurant owner from earlier who uses AI to create custom inventory software that is useful only for them. They won’t call themselves a software engineer.

The idea that restaurant owners will be writing inventory software might make sense if the only challenge of creating custom inventory software, or any custom software, was writing the code... but it isn't. Software projects don't fail because people didn't write enough code.

gizajob | 11/04/2025

Great analysis, but one thing overlooked is that current-gen advanced AI could in five or ten years (or less) be run from a smartphone or desktop, which could negate all the capex from the hyperscalers and also Nvidia, which presents a massive target for competitors right now. The selfsame AI revolution being created right now could take itself down if AI tooling becomes widespread.

mvdtnz | 11/04/2025

It's more like the Segway era when people with huge stakes in Segway tried to convince the world we were about to rebuild entire cities around the new model.

sailfast | 11/03/2025

I recall the unit economics making sense for all these other industries and bubbles (short of maybe tulips, which you could plant…). Sure, there were over-valuation bubbles because of speculative demand, but right now the assumption seems to be “first to AGI wins”, and that… may not happen.

The key variable for me in this house of cards is how long folks will wait before they need to see their money again, and whether these companies will go in the right direction long enough, given these valuations, to get to AGI. Not guaranteed, and in the meantime society will need to play ball (also not a guarantee).

hi_hi | 11/04/2025

The article seems well researched, has some good data, and is generally interesting. It's completely irrelevant to the reality of the situation we are currently in with LLMs.

It's falling into the trap of assuming we're going to get to the science fiction abilities of AI with the current software architectures, and within a few years, as long as enough money is thrown at the problem.

All I can say for certain is that all the previous financial instruments that have been jumped on to drive economic growth have eventually crashed. The dot com bubble, credit instruments leading to the global financial crisis, the crypto boom, the current housing markets.

The current investments around AI that we're all agog at are just another large-scale instrument for wealth generation. It's not about the technology. Just like VR and BioTech weren't about the technology.

That isn't to say the technology outcomes aren't useful and amazing; they are just independent of the money. Yes, there are trillions (a number so large I can't quite comprehend it, to be honest) being funneled into AI. No, that doesn't mean we will get incomprehensible advancements out the other end.

AGI isn't happening this round, folks. Can hallucinations even be solved this round? Trillions of dollars to stop computers lying to us. Most people where I work don't even realise hallucinations are a thing. How about a trillion dollars so Karen or John stop dismissing different viewpoints because a chatbot says something contradictory, and actually listen? Now that would be worth a trillion dollars.

Imagine a world where people could listen to others outside of their bubble. Instead they're being given tools that reinforce the bubble.

felixfurtak | 11/04/2025

People keep comparing the AI boom to the Dotcom bubble. They’re wrong. Others point to the Railway Mania of the 1840s — closer, but still not quite right.

The real parallel is Canal Mania — Britain’s late-18th-century frenzy to dig waterways everywhere. Investors thought canals were the future of transport. They were, but only briefly.

Today’s AI runs on GPUs — chips built for rendering video games, not thinking machines. Adapting them for AI is about as sensible as adapting a boat to travel across land. Sure, it moves — but not quickly, not cheaply, and certainly not far.

It works for now, but the economics are brutal. Each new model devours exponentially more power, silicon, and capital. It just doesn't scale.
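
A back-of-envelope sketch of that compounding, in Python (the base cost and the 10x growth factor are invented for illustration, not real figures):

    # Hypothetical: each model generation costs ~10x its predecessor.
    base_cost_usd = 5e6    # assumed cost of a "gen 1" training run
    growth_factor = 10     # assumed cost multiplier per generation

    total = 0
    for gen in range(1, 6):
        cost = base_cost_usd * growth_factor ** (gen - 1)
        total += cost
        print(f"gen {gen}: ${cost:,.0f} (cumulative: ${total:,.0f})")
    # Under these assumptions, gen 5 alone is ~$50B -- about 90%
    # of everything spent across all five generations combined.

If the exponent holds, each generation's bill dwarfs the sum of all the previous ones, which is exactly the "it just doesn't scale" problem.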

The real revolution will come with new hardware built for the job (hardware that hasn't been invented yet), thousands of times faster and more efficient. When that happens, today’s GPU farms will look like quaint relics of an awkward, transitional age: grand, expensive, and obsolete almost overnight.

bitwize | 11/03/2025

Recently, in my city, the garbage trucks started to come equipped with a device I call "The Claw" (think Toy Story). The truck drives to your curb where your bin is waiting, and then The Claw extends, grasps the bin, lifts it into the air and empties the contents into the truck before setting it down again.

The Claw allows a garbage truck to be crewed by one man where it would have needed two or three before, and to collect garbage much faster than when the bins were emptied by hand. We don't know what the economics of such automation of (physical) garbage collection portend in the long term, but what we do know is that sanitation workers are being put out of work. "Just upskill," you might say, but until Claw-equipped trucks started appearing on the streets there was no need to upskill, and now that they're here the displaced sanitation workers may be in jeopardy of being unable to afford to feed their families, let alone find and train in some new marketable skill.

So no, we're in The Claw era of AI, when business finds a new way to funge labor with capital, devaluing certain kinds of labor to zero with no way out for those who traded in such labor. The long-term implications of this development are unclear, but the short-term ones are: more money for the owner class, and some people are out on their ass without a safety net, because this is Goddamn America and we don't brook that sort of commie nonsense here.

RyanOD | 11/04/2025

Every few years I find myself thinking, "Wow...the latest tech is amazing! We were in the stone ages just a few years ago."

I don't expect that to cease in my lifetime.

byronic | 11/04/2025

How much does the correction here hew to making an AI model just look like standardized API calls with predictable responses? If you took away all the costs (data centers, water consumption, money, etc.) I still wouldn't use an LLM as a first choice, because it's wrong often enough to make it useless -- I have to verify everything it says, which is how I would have approached the task in the first place. To put that in manufacturing terms: "I have to QA everything off the line _without exception_, and I get frequent material waste."

If you make the context small enough, we're back at /api/create /api/read /api/update /api/delete; or, if you're old-school, a basic function
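
For contrast, a minimal sketch in Python of the deterministic CRUD surface being described, where every response is predictable and nothing needs per-call QA (an illustrative in-memory version, not any particular framework):

    # Deterministic CRUD: same input, same output, every time --
    # the "predictable responses" contrasted with an LLM above.
    from itertools import count

    _store: dict[int, dict] = {}
    _ids = count(1)

    def create(record: dict) -> int:
        rid = next(_ids)
        _store[rid] = record
        return rid

    def read(rid: int) -> dict:
        return _store[rid]  # KeyError if missing: a predictable failure

    def update(rid: int, record: dict) -> None:
        _store[rid] = record

    def delete(rid: int) -> None:
        del _store[rid]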

zkmon | 11/04/2025

The only problem is, the similarity with the dot-com era might only go so far. For example, the dot-com bubble itself may not have had a close parallel in its own past. This is because the overall world context is different, and the interaction of social, political and economic forces is different.

So, when people say something about the future, they are looking into the past to draw projections or similar trends, but they may be missing the change in the full context. Demand and automation might be too few factors to understand the implications. What about the political, social and economic landscape? The systems are not so insulated that they can be studied using just a few factors.

slackr | 11/03/2025

There’s a big difference between the fibre infrastructure left by the dotcom crash, and the GPUs that AI firms will leave behind.

delegate | 11/04/2025

In the dial-up era, the industry was young, there were no established players, it was all a big green field.

The situation is far from similar now. Now there's an app for everything and you must use all of them to function, which is both great and horrible.

From my experience, the current generation of AI is unreliable and so cannot be trusted. It makes non-obvious mistakes and often sends you off on tangents, which consumes energy and leads to confusion.

It's an opinion I've built up over time from using AI extensively. I would have expected my opinion to improve after 3 years, but it hasn't.

arcticbull | 11/03/2025

People tend to equate this to the railroad boom when saying that infrastructure spending will yield durable returns into the future no matter what.

When the railroad bubble popped we had railroads. Metal and sticks, and probably more importantly, rights-of-way.

If this is a bubble, and it pops, basically all the money will have been spent on Nvidia GPUs that depreciate to 0 over 4 years. All this GPU spending will need to be done again, every 4 years.
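
A quick worked comparison in Python (all numbers invented for illustration): the same nominal outlay recurs every refresh cycle for GPUs, while a right-of-way is bought once.

    # Recurring vs. one-time capex over a 20-year horizon.
    # All figures are assumptions for the sketch, not real data.
    gpu_capex = 100e9        # assumed spend per GPU refresh cycle
    gpu_lifetime_years = 4   # depreciated to zero, per the comment
    rail_capex = 100e9       # assumed one-time right-of-way cost
    horizon_years = 20

    cycles = horizon_years // gpu_lifetime_years
    print(f"GPU spend over {horizon_years}y:  ${gpu_capex * cycles / 1e9:,.0f}B")
    print(f"Rail spend over {horizon_years}y: ${rail_capex / 1e9:,.0f}B")
    # 5 refresh cycles: $500B for GPUs vs $100B once for rail.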

Hopefully we at least get some nuclear power plants out of this.

lilerjee | 11/04/2025

What are the disadvantages of AI?

The author didn't mention them.

AI companies took vast amounts of data from the Internet for free and without permission, sacrificing the interests of website owners.

It's not sustainable.

It's impossible for AI to go far.

wazoox | 11/04/2025

There are some gross approximations in the comparison. The oversized fibre-optic networks laid in the late 90s were used for years and may in part still be in use today; today's servers and GPUs will be obsolete in 3 to 5 years, and not worth their weight in scrap metal in 10.

The part about Jevons' paradox is interesting though.

innagadadavida | 11/04/2025

One thing the textiles-vs-cars analysis misses is the complexity of the supply chain and the raw materials/components that need to be procured to make the end product. Steel and textiles have simple supply chains, and they went through a boom/bust cycle as demand plateaued. Cars, on the other hand, will not go through the same pattern - there are too many logistical things that need to line up, and the trust factor in each of those steps, as well as in the end product, is quite high.

Software is similar to cars - the individual components that need to be properly procured and put together are complex, and trust will be important. Will you trust that you, as a restaurant owner, vibe-coded your payment stack properly, or will you just drop in the 3 lines to integrate with Stripe? I think most non-tech business owners will do the latter.

0xbadcafebee | 11/04/2025

It's clear that AI is useful. It's not yet clear how useful. Hype has always obscured real value, and nobody knows the real value until the hype cycle completes.

What is clear is that we have strapped a rocket to our asses, fueled with cash and speculation. The rocket is going so fast we don't know where we're going to land, or whether we'll land softly or in a very large crater. The past few decades have examples of craters. Where there are potential profits, there are people who don't mind crashing the economy to get them.

I don't understand why we're allowing this rocket in the first place. Why do we need to be moving this quickly and dangerously? Why do we need to spend trillions of dollars overnight? Why do we need to invest half the fucking stock market in this brand-new technology as fast as we can? Why can't we develop it in a way that isn't insanely fast and dangerous? Or are we incapable of decisions not based on greed and FOMO?

_ink_ | 11/04/2025

> The other claims that AI will create more jobs than it destroys.

Maybe it's my bubble, but so far I haven't heard anyone say that. What kind of jobs would those be, given that both forms of work, physical and knowledge, will be automatable sooner or later?

hufdr | 11/04/2025

What makes this analogy great is that nobody in the dial-up days could imagine Google or YouTube. We’re in the same place now: nobody knows who becomes “the Google of AI,” and that uncertainty usually means a new platform is being born.

idiotsecant | 11/04/2025

I would go so far as to say we are still in the computing dial-up era. We're at the tail end, maybe - we don't write machine code any longer, mostly, and we've abstracted up a few levels, but we're still writing code. Eventually computing is something that will be everywhere, like air, and natural language interfaces will be nearly exclusively how people interact with computing machines. I don't think the idea of 'writing software' is something that will stick around; I think we're in a very weird and very brief little epoch where that is a thing.

BoredPositron | 11/04/2025

It took a long, long time to go from the walking bike to the one we know now. It's not going to be different for AI. Transformers will only get us so far, and for the rest we need another tock. AGI is not going to happen with this generation of hardware. We are hitting spatial scaling limits in video and image generation, and we are hitting limits with LLMs.

yapyap | 11/03/2025

Big bias shining through in comparing AI to the internet.

Because we all know how essential the internet is nowadays.

wosined | 11/04/2025

The thing is that the average person now thinks AI is revolutionary. Thus, if you form the analogy correctly, then it tells us that the average person is wrong and that AI is NOT revolutionary. (I'm not arguing either case.)

Arn_Thor | 11/04/2025

There is one key way in which I believe the current AI bubble differs from the TMT bubble. As the author points out, much of the TMT bubble money was spent building infrastructure that benefited us many decades later.

But in the case of AI, that argument is much harder to make. The cost of compute hardware is astronomic relative to the pace of improvements. In other words, a million dollars of compute today will be technically obsolete (or surpassed on a performance/watt basis) much faster than the fiber optic cables laid by Global Crossing.

And the AI data centers specialized for Nvidia hardware today may not necessarily work with the Nvidia (or other) hardware five years from now—at least not without major, costly retrofits.

Arguably, any long-term power generation capacity put down for data centers of today would benefit data centers of tomorrow, but I'm not sure much of that investment is really being made. There's talk of this and that project, but my hunch and impression is that much of it will end up being small-scale local power generation from gas turbines and the like, which is harmful for the local environment and would be quickly dismantled if the data center builders or operators hit the skids. In other words, if the bubble bursts, I can't imagine who would be first in line to buy a half-built AI data center.

This leads me to believe this bubble has generated much less useful value to benefit us in future than the TMT bubble. The inference capacity we build today is too expensive and ages too fast. So the fall will be that much more painful for the hyperscalers.

23434dsf | 11/04/2025

HN is struggling to understand

dude250711 | 11/03/2025

> Consider the restaurant owner from earlier who uses AI to create custom inventory software that is useful only for them.

That is the real dial-up thinking.

Couldn't AI like be their custom inventory software?

Codex and Claude Code should not even exist.

righthand | 11/03/2025

More like AI’s Diaper-Up Era, aka AI’s Analogy Era to Mask Its Shortcomings

polynomial | 11/04/2025

Great comment threads here, but the OP article leans too much on AI-generated text that is heavy on empty synthetic rhetoric and generative AI clichés.

topranks | 11/04/2025

Dial-up suggests he knows that many orders of magnitude of performance increase will happen, like with internet connectivity.

I’m not sure that’s a certainty.

dg0 | 11/03/2025

Nice article, but somewhat overstates how bad 1995 was meant to be.

A single image generally took nothing like a minute. Most people had moved to 28.8K modems that would deliver an acceptable large image in 10-20 seconds. Mind you, the full-screen resolution was typically 800x600 and color was an 8-bit palette… so much less data to move.
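
The arithmetic checks out; a quick sketch in Python, assuming a typical mid-90s web JPEG of around 50 KB:

    # Back-of-envelope check of the 10-20 second claim.
    modem_bps = 28_800       # 28.8K modem, bits per second
    efficiency = 0.85        # assumed effective throughput after
                             # TCP/PPP overhead
    image_bytes = 50 * 1024  # assumed ~50 KB JPEG

    seconds = image_bytes * 8 / (modem_bps * efficiency)
    print(f"~{seconds:.0f} s per image")  # ~17 s, within the 10-20 s range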

Moreover, thanks to “progressive jpeg”, you got to see the full picture in blocky form within a second or two.

And of course, with pages less busy and tracking cookies still a thing of the future, you could get enough of a news site up to start reading in less time than it can take today.

One final irk: it’s a little overdone to claim that “For the first time in history, you can exchange letters with someone across the world in seconds”. Telex had been around for decades, and faxes, taking 10-20 seconds per page, were already commonplace.

teiferer | 11/04/2025

> Regardless of which specific companies survive, this infrastructure being built now will create the foundation for our AI future - from inference capacity to the power generation needed to support it.

Does that comparison with the fiber infra from the dotcom era really hold up? Even when those companies went broke, the fiber was still perfectly fine a decade later. In contrast, all those datacenters will be useless when the technology has advanced by just a few years.

Nobody is going to be interested in those machines 10 years from now, whether the bubble bursts or not. Data centers are like fresh produce. They are only good for a short period of time and useless soon after. They are being constantly replaced.

skywhopper | 11/04/2025

Really tired of seeing the story about how, “sure Worldcom et al went bankrupt but their investments in fiber optics gave us the physical infrastructure of the Internet today.”

I mean, sort of, but the fiber optics in the ground have been upgraded to several orders of magnitude beyond their original capacity by replacing the transceivers on either end. And the fiber itself has lasted and will continue to last for decades.

Neither of those properties is true of the current datacenter/GPU boom. The datacenter buildings may last a few decades, but the computers and GPUs inside will not, and their value cannot be easily amplified the way the fiber in the ground was.

hnburnsy | 11/04/2025

So weird, I asked AI (Grok) just yesterday how far along we are towards post-scarcity and it replied...

>We’re in the 1950s equivalent of the internet boom — dial-up modems exist, but YouTube doesn’t.

bigwheels | 11/03/2025

> Benchmark today’s AI boom using five gauges:

> 1. Economic strain (investment as a share of GDP)

> 2. Industry strain (capex to revenue ratios)

> 3. Revenue growth trajectories (doubling time)

> 4. Valuation heat (price-to-earnings multiples)

> 5. Funding quality (the resilience of capital sources)

> His analysis shows that AI remains in a demand-led boom rather than a bubble, but if two of the five gauges head into red, we will be in bubble territory.

This seems like a more quantitative approach than most of "the sky is falling", "bubble time!", "circular money!" etc analyses commonly found on HN and in the news. Are there other worthwhile macro-economic indicators to look at?

It's fascinating how challenging it is to meaningfully compare recent events to prior economic cycles such as the Y2K-era tech bubble. It seems like it should be easy, but AFAICT it barely even rhymes.
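
A toy sketch of how that five-gauge check might look in Python (thresholds and readings are invented placeholders, not the author's actual data):

    # Toy version of the five-gauge bubble check quoted above.
    gauges = {
        "economic strain":  {"reading": 1.2, "red_above": 2.0},  # % of GDP
        "industry strain":  {"reading": 0.8, "red_above": 1.5},  # capex/revenue
        "revenue doubling": {"reading": 1.5, "red_above": 3.0},  # years to double
        "valuation heat":   {"reading": 30,  "red_above": 60},   # P/E multiple
        "funding quality":  {"reading": 0.3, "red_above": 0.5},  # fragile share
    }

    reds = [name for name, g in gauges.items() if g["reading"] > g["red_above"]]
    print("red gauges:", reds or "none")
    print("bubble territory" if len(reds) >= 2 else "demand-led boom")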

simultsop | 11/04/2025

> MIT Professor, 1993 quote

words to live by...

jdkee | 11/04/2025

Reads like it was written by ChatGPT.

nickphx | 11/04/2025

Dial-up was actually useful though.

blazespin | 11/04/2025

Kimi just proposed linear attention. I mean, one breakthrough, and blammo, the whole story changes.

weare138 | 11/04/2025

The last AI bubble was AI's dial-up era because it was the dial-up era:

https://en.wikipedia.org/wiki/AI_winter

https://www.youtube.com/watch?v=sV7C6Ezl35A

gnarlouse | 11/04/2025

I feel like this article is too cute. The internet, and the state of the art of computing in general, has been driven by one thing and one thing alone: Moore’s Law. In that very real sense, the semiconductor industry, and perhaps even just TSMC, is responsible for the rise of the internet and its success.

We’re at the end of Moore’s Law, it’s pretty reasonable to assume. 3nm M5 chips mean there are, what, a few hundred silicon atoms per transistor? We’re an order of magnitude away from 0.2 nm, which is the diameter of a single silicon atom.
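
The back-of-envelope supports the order-of-magnitude point, with the caveat that "3nm" is a marketing node name rather than a measured feature size:

    # Atoms across a literal 3 nm span (sketch only; physical
    # pitches at the "3nm" node are considerably larger).
    si_atom_diameter_nm = 0.22  # ~2x the silicon covalent radius
    feature_nm = 3.0

    print(f"~{feature_nm / si_atom_diameter_nm:.0f} atoms across")
    # ~14 atoms: roughly one order of magnitude above the
    # single-atom limit, as the comment says.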

My point is, 30 years have passed since dial-up. That’s a lot of time to have exponentially increasing returns.

There’s a lot of implicit assumption that “it’s just possible” to have a Moore’s Law for the very concept of intelligence. I think that’s kinda silly.

hansmayer | 11/04/2025

More like Bullshit Era
