Hacker News

Horses: AI progress is steady. Human equivalence is sudden

208 points by pbui · today at 12:26 AM · 126 comments

Comments

twodave · today at 3:30 AM

Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. Besides, more compute and bigger context windows aren't the right kind of progress anyway. LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two:

  1. Even if LLMs made everyone 10x as productive, most companies would still have more work to do than resources to assign to it. The only reason to reduce headcount is to remove people who already weren't providing much value.
  
  2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
show 3 replies
ible · today at 1:29 AM

People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.

The question is how individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift with it.

If the benefits of AI accrue to, or are captured by, a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.

show 5 replies
richardles · today at 2:18 AM

I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never-tired mentor available. It gives them more confidence in their first drafted code changes / design docs. But I don't think the horse analogy works.

It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.

What LLMs are killing is:

- noisy Slacks full of junior folks' questions. Those are now your Gemini / ChatGPT sessions.

- tedious implementation sessions.

The vast majority of the work is still human led from what I can tell.

show 2 replies
namesbc · today at 2:10 AM

Software engineers used to know that measuring lines of code written was a poor metric for productivity...

https://www.folklore.org/Negative_2000_Lines_Of_Code.html

show 2 replies
billisonline · today at 2:07 AM

An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.

I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.

show 1 reply
jsheard · today at 1:43 AM

Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?

show 2 replies
mark242 · today at 3:41 AM

Someone who makes horseshoes can then learn how to make carburetors, because the demand is 10x.

https://en.wikipedia.org/wiki/Jevons_paradox

s17n · today at 1:26 AM

This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.

show 3 replies
personjerry · today at 1:25 AM

I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.

And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?

show 1 reply
barbazoo · today at 1:25 AM

Engine efficiency, chess rating, AI capex. One of these is not like the others. Is there steady progress in AI? To me it feels like little progress followed by the occasional breakthrough, but I might be totally off here.

show 4 replies
burroisolator · today at 1:31 AM

"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

And not very long after, 93 per cent of those horses had disappeared.

I very much hope we'll get the two decades that horses did."

I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something we can recover from even over the longer term.

show 3 replies
1970-01-01 · today at 1:46 AM

How about we stop trying on analogies like clothing and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.

show 1 reply
ternus · today at 2:55 AM

Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.

COAGULOPATH · today at 2:44 AM

> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?

The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.

show 2 replies
jameslk · today at 1:46 AM

> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.

> Then in December, Claude finally got good enough to answer some of those questions for us.

> … Six months later, 80% of the questions I'd been being asked had disappeared.

Interesting implications for how to train juniors in a remote company, or in general:

> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.

https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...

sothatsit · today at 1:39 AM

This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:

1. The release of Claude Code in February

2. The release of Opus 4.5 two weeks ago

In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.

Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.

show 1 reply
websiteapi · today at 1:44 AM

funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.

plenty of charts you can look at - net productivity by virtually any metric vs real adjusted income. the example I like is kiosks and self checkout. who has encountered one at a place where it is cheaper than its main rival, with the savings directly attributed (by the company or otherwise) to lower prices? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.

even with year-2020 tech you could automate most work that needs to be done, if our industry would stop endlessly disrupting itself and show a little bit of discipline.

so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.

show 10 replies
pbw · today at 2:13 AM

This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".

kgk9000 · today at 2:32 AM

I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.

show 1 reply
cuttothechase · today at 2:31 AM

>>This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.

Glad I noticed that footnote.

Article reeks of false equivalences and incorrect transitive dependencies.

byronic · today at 2:17 AM

my favorite part was where the graphs are all unrelated to each other

tomxor · today at 3:04 AM

Terrible comparison.

Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with Chess, a clearly defined problem in finite space with a binary, measurable outcome. Funny how Chess AI replacing humans in general was never considered as a serious possibility by most.

Now LLMs, what is their purpose? What is the purpose of a human?

I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).

They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.

leowoo91 · today at 2:16 AM

We still have chess grandmasters, if you have noticed.

show 1 reply
kazinator · today at 1:52 AM

Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
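The slow-build-then-sudden-flip shape this comment describes is just the logistic curve. A minimal sketch (the function and sample points are my own illustration, not from the comment):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function: creeps along near 0, then flips to near 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Sweeping x steadily from -10 to 10, almost all of the output change
# happens in a narrow band around x = 0: steady input, sudden flip.
curve = [(x, round(sigmoid(x), 4)) for x in range(-10, 11, 2)]
```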

pansa2 · today at 3:01 AM

> 90% of the horses in the US disappeared

Where did they go?

show 1 reply
john-radio · today at 2:19 AM

I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.

(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)
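For what it's worth, the cost-per-word napkin math this comment wishes were spelled out is easy to sketch. Every figure below is a hypothetical placeholder (the hourly rate, words-per-hour, and token pricing are my own round numbers, not taken from the post), chosen only to show the shape of the comparison:

```python
# Hypothetical inputs, not from the article:
HUMAN_RATE_PER_HOUR = 150    # assumed engineer cost, $/hour
HUMAN_WORDS_PER_HOUR = 500   # assumed written output, words/hour
LLM_PRICE_PER_MTOK = 15      # assumed API price, $/million output tokens
WORDS_PER_MTOK = 750_000     # rough rule of thumb: ~0.75 words per token

human_cost_per_word = HUMAN_RATE_PER_HOUR / HUMAN_WORDS_PER_HOUR  # ≈ $0.30
llm_cost_per_word = LLM_PRICE_PER_MTOK / WORDS_PER_MTOK           # ≈ $0.00002
ratio = human_cost_per_word / llm_cost_per_word                   # ≈ 15,000x
```

With these made-up inputs the gap is a few orders of magnitude, which is the only point such a graph can really make; the exact multiplier moves around freely with the assumptions.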

glitchc · today at 2:16 AM

Conclusion: Soylent..?

show 1 reply
wrs · today at 1:42 AM

Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?

WhyOhWhyQ · today at 1:47 AM

Humans design the world to our benefit, horses do not.

show 1 reply
AstroBen · today at 1:48 AM

Cool, now let's make a big list of technologies that didn't take off like they were expected to.

johnsmith1840 · today at 2:08 AM

I mean, it's hard to argue against the idea that if we invented a human in a box (AGI), human work would be irrelevant. But I don't know how anyone could watch current AI and say we have that.

The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a super lucky experience.

Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years, which really just means we don't know.

narrator · today at 1:38 AM

Wait till the robots arrive. What will surprise people the most is that they will know how to do a vast range of human skills, some that people train their whole lives for. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially on niche, difficult-to-research topics like the alternative deep learning model designs applicable to a modeling task, is a thing of wonder. Imagine a master marble carver shows up at an exhibition, and some sci-fi author has just had robots carve a perfect, beautiful rendering of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.

blondie9x · today at 2:26 AM

This post is kind of sad. It feels like he's advocating for human depopulation since the trajectory aligns with horse populations declining by 93% also.

show 1 reply
conartist6 · today at 2:02 AM

I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed

fizlebit · today at 1:55 AM

yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)

echelon · today at 1:51 AM

> And not very long after, 93 per cent of those horses had disappeared.

> I very much hope we'll get the two decades that horses did.

> But looking at how fast Claude is automating my job, I think we're getting a lot less.

This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.

Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.

I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.

Meanwhile Google, apart from perhaps Kilpatrick, is just silent.

show 2 replies
adventured · today at 1:42 AM

It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.

I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.

show 6 replies
kangs · today at 1:32 AM

hello faster horses