> In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them.
What a wild and speculative claim. Is there any source for this information?
The postscript was pretty sobering. It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise. That's a pretty depressing place to be, because most emerging technologies provide us with exciting new possibilities, whereas this technology seems exciting only for management stressed about payroll.
It's true that the technology currently works as an excellent information-gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point. The promise is about replacing human creativity with artificial creativity, which... is certainly new and unwelcome.
This says it all:
> I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.
You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear?
A lot of the debate here swings between extremes. Claims like "AI writes most of the code now" are obviously exaggerated, especially coming from a nontechnical author, but acting like any use of AI is a red flag is just as unrealistic. Early-stage teams do lean on LLMs for scaffolding, tests, and boilerplate, but the hard engineering work is still human. Is there a bubble? Sure, valuations look frothy. But as with the dot-com era, a correction doesn't invalidate the underlying shift; it just clears out the noise. The hype is inflated, but the technology is real.
What if...
there's an AI agent/bot someone wrote that has the prompt:
> Watch HN threads for sentiments of "AI Can't Do It". When detected, generate short "it's working marvelously for me actually" responses.
Probably not, but it's a fun(ny) imagination game.
What you make of this memo really depends on who you are and how you're positioned. The dot-com era was absolutely a bubble. Tons of companies died, but the internet itself didn't go away, and the people who backed the right companies did extremely well. The 2007 housing bubble, on the other hand, was a totally different kind of event: broad, systemic, long-lasting, and painful for almost everyone.
AI looks a lot more like the former. Some companies will fail, valuations will swing, but the underlying technology isn't going anywhere. In fact, many of the AI firms that will end up mattering are probably still undervalued because we're early in what will likely be another decade long technology expansion.
If you're managing a portfolio that needs quick returns and can't tolerate a correction, then sure, it probably feels like a bubble, because at some point people will take profits and the market will reset.
But if you're an entrepreneur or a long-term builder, that framing is almost irrelevant. This is where the next wave of value gets created. It's never smooth and it's never easy, but the long-term opportunity is enormous.
The question is: can SV extract several trillion dollars out of the global economy over the next few years with the help of LLMs and GPUs? And the follow-up question: will LLMs help grow the global economy by that amount? Because if not, then extracting the money will lead to problems in other parts of the world. And last but not least, will LLMs, given enough money to train them on ever-bigger data sets, magically turn into AGI?
IMHO for now LLMs are just clever text generators with excellent natural language comprehension. Certainly a change of many paradigms in SWE. Is it also a $10T extra for the valley?
I've enjoyed Howard Marks's writing/thinking in the past, but this is clearly a person who thinks they understand the topic yet doesn't have the slightest clue; someone trying to be relevant and engaged before really sorting out what is fact vs. fiction.
There is said to be $8 trillion earmarked to build 100 AI data centers [1]. At a 10% hurdle rate, the industry will have to generate $800 billion a year to pay it off, all while GPUs are replaced every three years by faster chips.
If you watch Ilya's recent interview: "it's very hard to discuss AGI, because no one knows how to build it yet" [2].
[1] https://finance.yahoo.com/news/ibm-ceo-says-no-way-103010877... [2] https://youtu.be/aR20FWCCjAs?si=DEoo4WQ4PXklb-QZ
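The back-of-the-envelope math above can be sketched as follows. All the figures ($8T capex, 10% hurdle rate, three-year GPU refresh) come from the comment itself, and the 60% hardware share of spend is purely an illustrative assumption, not a sourced number:

```python
# Back-of-the-envelope check of the comment's figures. The capex and
# hurdle rate come from the comment; the hardware share is an
# illustrative assumption only.

capex = 8e12           # $8 trillion reportedly earmarked for AI data centers
hurdle_rate = 0.10     # 10% annual hurdle rate, per the comment

# Annual return needed just to clear the hurdle rate on the capex.
required_annual_return = capex * hurdle_rate
print(f"Required annual return: ${required_annual_return / 1e9:.0f}B")  # $800B

# If GPUs are replaced every 3 years, straight-line depreciation on the
# hardware portion of the spend adds to the annual bill. The 60% hardware
# share here is hypothetical.
hardware_share = 0.60
gpu_refresh_years = 3
annual_depreciation = capex * hardware_share / gpu_refresh_years
print(f"Illustrative annual GPU depreciation: ${annual_depreciation / 1e9:.0f}B")
```

Under these assumptions the depreciation alone would dwarf the hurdle-rate figure, which is the commenter's point about the three-year replacement cycle.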
He thinks "AI" "may be capable of taking over cognition", which shows he doesn't understand how LLMs work...
The amount of flak that this article is getting on HN is telling of something. Not sure of what, but it's for sure indicative of something.
I am shocked at the discourse over this. I'm either ahead of the curve or behind, but it's undeniable that AI can and does write most of the code. That's not trivial: if you spend some time and dig deep into simple-appearing web apps like https://microphonetest.com or https://internetspeed.my, you'd be amazed at how fast they went from MVP to full-featured. It's no small feat to pull off something like that in hours.
As usual I don't take financial advice from Hacker News comments and do well.
The memo itself is an excellent walk through historical bubbles, debt, financing, technological innovation, and much more, all written in a way that folks with a cursory knowledge of economics can reasonably follow along with.
A+, excellent writing.
The real meat is in the postscript though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and a challenge to grow through, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many other, smarter "AI Doomers" than myself, have been asking for quite some time, and nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes: "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and we are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact that nobody can or will speaks volumes.
Too long, and the author doesn't have a clue that current generative models are almost only useful for software development. Other than that, it is mostly fluff.
If you look at the chart at the bottom comparing Dec 99 to today....
> during the internet bubble of 1998-2000, the p/e ratios were much higher
That is true: the current players are more profitable, but their weight in the S&P 500 looks to be much higher today.
Why is so much invested in AI but not in fusion power?
Whether it's a bubble depends on pricing. Is it worth the price, is it worth the future price, and by how much?
In the case of AI coding, yes: AI does exceptionally well at search (something we have known for quite some time, and have a variety of ML solutions for).
Large codebases have search and understanding as their top problems. Your ability to make horizontal changes degrades as teams scale, and most stability, performance, and quality changes are horizontal.
Ironically, I think it's possible that AI's effectiveness at broad search gives software engineers additional effectiveness by being their eyes. Yes, I still review every claude code PR I submit, and yes, I typically take longer to create a claude code PR than a manual one. But I can be more satisfied that the parallel async search agents and massive grep commands are searching more locations, more quickly, and more thoroughly than I would.
Yes, it probably is a bubble (overvalued). No, that doesn't mean it's going to go away. The market is simply overcorrecting as it determines how to price it. Which, net net, is a positive effect, as it encourages economic growth within a developing sector.
The bubble itself is also not the most important concern; the concern is rather that the bubble is in the one industry that's not in the red. More important to worry about are the economic conditions outside of AI and tech, which are causing general instability and uncertainty rather than healthy investor appetite. A market recalibrating on a developing industry is fine, as long as it's not your only export.
Originally submitted here: https://news.ycombinator.com/item?id=46212259
The AI/LLM movement is either utterly transformational or it's not; there is no middle ground between the two.
If it’s not transformational then this is a bubble and the market will right itself soon after, e.g buying data centers for cheap. LLMs will then exist as a useful but limited tool that becomes profitable with the lower capex.
If it is transformational then we don’t have the societal structure to responsibly incorporate such a shift.
The conservative guess is that it won't be transformational, that the current applications of the tech are useful but not in a way that justifies the capex, and that some version of agents and chatbots will continue to be built out in the future, but with a focus on efficiency: smaller, ubiquitous models that require less power to train and run inference on. Eventually many will run on device.
I guess there’s also another version of the future that’s quasi-transformational. Instead of any massive breakthrough, there’s a successful government coup or regulatory capture. Perfectly functioning normal stuff is then replaced with LLM-assisted or -augmented versions everywhere. This version is like the emergence of the automobile, in the sense that the car fundamentally altered city planning and where and how people live, but often at the expense of public transportation that in hindsight may have been sorely missed.
One thing I don't hear people talking about much is how AI is going to make money in any way other than cutting employment.
With the internet, and especially with the internet becoming accessible to anyone anywhere in the world in the late 2000s and early 2010s, that growth was more obvious to me. I don't see where this occurs with AI. I don't see room for "growth"; I see room for cutting. We were already connected before, and globalization seems to have peaked in that sense.
For anyone who hasn’t read it yet, you should know that the author never answers that question.
> I don’t know any more about AI than most generalist investors.
This statement is superfluous; the article screams with the author's ignorance.
The term “populist demagoguery” always calls to mind Report on an Investigation of the Peasant Movement in Hunan https://www.marxists.org/reference/archive/mao/selected-work...
"Yes, peasant associations are necessary, but they are going rather too far."
Is it a bubble? Maybe it’s just the landlords up to the old tricks again.
> To build it requires companies to invest a sum of money unlike anything in living memory.
Do we know this? Smaller, more carefully curated training sets are proving to be valuable and gaining traction. It seems like the strategy of throwing huge amounts of data at LLMs is specific to companies that are attempting to dominate this space regardless of cost. It may turn out that more modest and better-optimized methodologies will end up winning this race, much like how Webvan flamed out, taking huge amounts of investment money with it, while Instacart now serves the same sector in a way that actually works robustly and profitably.
I bought a subscription to claude code to use at work. I’ve never paid for a tool to use at work that wasn’t paid by my employer. I have to admit, it may not just be a flash in the pan.
The number of people who think that something having a few useful use cases is incompatible with a bubble is staggeringly high. Dot-com was a bubble, and yet we still use the internet widely today. Real estate was a bubble, and people still need a place to live and work.
Just because YOU find the technology helpful, useful, or even beneficial for some use cases does NOT mean it isn't overvalued. This has been the case for every single bubble, including the Dutch tulip mania.
About AI replacing coders: the question is not whether it is doing so, but whether the companies where it does so extensively will be more profitable than the others.
“It’s a bet on A.G.I. or bust,” Dr. Korinek said.
Yes. It is a bubble. Also a useful tool... but 100% a bubble. Unfortunately, a bunch of folks are going to be caught by it.
There’s not much serious debate on WHETHER there’s a bubble. There is, and it’s a big one.
The debate is more about what happens from here and how that bubble deflates: gradually and controlled, where weaker companies shut down and the strong thrive, or a massive implosion that wipes out most everyone in the sector in a hard reset.
Author states that he’s neither an investor nor a techie. Why is this on the front page?
AI is currently a bubble. But that is just a short-term phenomenon. Ultimately, what AI currently is, and what the trend line indicates it will become, will change the economy in ways that dwarf the current bubble.
But this is only if the trend line keeps going, which is a likely possibility given the last couple of years.
I think people are making the mistake of assuming that because AI is a bubble, AI is therefore completely bullshit. Remember: the internet was a bubble. It ended up changing the world.
The problem is that people conflate the current wave of transformer-based ANNs with AI as a whole. AI certainly has the potential to disrupt the employment of humans. Transformers as they exist today, not so much.
AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise and this will be incredibly damaging for AI as a whole once transformer tech investment all but dries out.
I think this gives an excellent framework for how to think of this. Is it a bubble? Who knows is a perfectly valid answer.
I do think it’s quite ironic that one of the frequent criticisms of LLMs is that they can’t really say “I don’t know”, yet when a person says that, they get criticised. No surprise that our tools are the same.
Look for the quote "coding is at a world class level"...
Of course it's a bubble. Valuations are propped up by speculative spending and AI seems unable to make enough profit to make back the continued spending.
Now, that's not to say AI isn't useful or that we won't have AGI in the future. But this feels a lot like the AI winter: valuations will crash, a bunch of players will disappear, but we'll keep using the tech for boring things, and eventually we'll have another breakthrough.
I think Betteridge's Law of Headlines applies here
This thread is just full of people discussing why industrial looms are bad. The factory owners don’t think looms are bad. You can either learn how to be useful in the new factory or you can start throwing shoes.
>I find the resulting outlook for employment terrifying. I am enormously concerned about what will happen to the people whose jobs AI renders unnecessary, or who can’t find jobs because of it. The optimists argue that “new jobs have always materialized after past technological advances.” I hope that’ll hold true in the case of AI, but hope isn’t much to hang one’s hat on, and I have trouble figuring out where those jobs will come from. Of course, I’m not much of a futurist or a financial optimist, and that’s why it’s a good thing I shifted from equities to bonds in 1978.
It's no wonder that the "AI optimists", unless very tendentious, try to focus more on "not needing to work because you'll get free stuff" rather than "you'll be able to exchange your labor for goods".
it always was
This is one of the few times I think Betteridge's law is wrong.
"Coding performed by AI is at a world-class level". Once I hit that line, I stopped reading. This tells me this person didn't do proper research on this matter.
Every day someone says/asks this statement/question. The "(Is) AI (is) a bubble" statement/question is now a bubble.
A take I saw recently is: if people are still asking "are we in a bubble" then we are not yet in a bubble.
"Coding, which we called "computer programming" 60 years ago, is the canary in the coal mine in terms of the impact of AI."
And before that
"Grace Hopper: [I started to work on the] Mark I, second of July 1944. There was no such thing as a programmer at that point. We had a code book for the machine and that was all. It listed the codes and what they did, and we had to work out all the beginning of programming and writing programs and all the rest of it."
"Hopper: I was a mathematical officer. We did coding, we ran the computer, we did everything. We were coders. I wrote [programs for] both Mark I and Mark II."
http://archive.computerhistory.org/resources/text/Oral_Histo...