Hacker News

The Abstraction Rises

72 points by birdculture, last Sunday at 8:15 AM · 32 comments

Comments

conartist6 · last Sunday at 12:09 PM

It's funny, but I think the accidental complexity is through the roof. It's skyrocketing.

Nothing about cajoling a model into writing what you want is essential complexity in software development.

In addition, when you do a lot of building with no theory, you tend to make lots and lots of new non-essential complexity.

Devtools are no exception. There was already lots of non-essential complexity in them, and in the model era is that gone? No, don't worry, it's all still there. We built all the shiny new layers right on top of all the old decaying layers, like putting lipstick on a pig.

lsy · today at 4:27 PM

LLM coding isn't a new level of abstraction. Abstractions are (semi-)reliable ways to manage complexity by creating building blocks that represent complex behavior and are useful for reasoning about outcomes.

Because model output can vary widely from invocation to invocation, let alone model to model, prompts aren't reliable abstractions. You can't send someone all of the prompts for a vibecoded program and know they will get a binary with generally the same behavior. An effective programmer in the LLM age won't be saving mental energy by reasoning about the prompts; they will be fiddling with the prompts, crossing their fingers that they produce workable code, then going back to reasoning about the code to ensure it meets their specification.

What I think the discipline is going to find after the dust settles is that traditional computer code is the "easiest" way to reason about computer behavior. It requires some learning curve, yes, but it remains the highest level of real "abstraction", with LLMs being more of a slot machine for saving the typing of some boilerplate.
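The per-invocation variance the comment describes can be sketched with a toy sampler. Everything here (the three-word vocabulary, the logits standing in for a fixed "prompt") is invented purely for illustration: at temperature > 0, repeated calls on identical input pick different tokens, while near-zero temperature collapses to a deterministic argmax.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

vocab = ["print", "return", "raise"]
logits = [2.0, 1.5, 0.5]  # the same fixed "prompt" on every call
rng = random.Random(0)

# Temperature 1.0: identical input, yet repeated invocations can
# yield different tokens, so the prompt does not pin down the output.
varied = {vocab[sample_token(logits, 1.0, rng)] for _ in range(200)}

# Temperature near 0: the argmax dominates and output is effectively
# deterministic, like a conventional compiler's behavior.
greedy = {vocab[sample_token(logits, 0.01, rng)] for _ in range(200)}

print(varied, greedy)
```

Real inference adds further variance on top of this (model updates, nondeterministic kernels), but temperature sampling alone is enough to break the "same input, same output" property that abstractions rely on.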

louiereederson · today at 11:04 AM

To the point on Jevons Paradox, the number of people/developers joining GitHub had been accelerating as of the last Octoverse report. Related: "In 2023, GitHub crossed 100 million developers after nearly three years of growth from 50 million to 100 million. But the past year alone has rewritten that curve with our fastest absolute growth yet. Today, more than 180 million developers build on GitHub."

https://github.blog/news-insights/octoverse/octoverse-a-new-...

chrisjj · last Sunday at 10:18 AM

> LLMs ... completing tasks at the scale of full engineering teams.

Ah, a work of fiction.

thesz · today at 8:26 AM

Fortran is all about symbolic programming. There is no probability in the internal workings of a Fortran compiler. Almost anyone can learn the rules and count on them.

LLMs are all about probabilistic programming. While they are harnessed by a lot of symbolic processing (tokens, as a simple example), the core is probabilistic. No hard rules can be learned.

And, for what it's worth, "Real Programmers Don't Use Pascal" [1] was not written about assembler programmers; it was written about Fortran programmers, a new Priesthood.

[1] https://web.archive.org/web/20120206010243/http://www.ee.rye...

Thus, what I expect is for a new Priesthood to emerge: prompt-writing specialists. And this is what we actually see.

Havoc · today at 12:39 PM

Not a fan of looking to history for cases that look like they could be a step change, a new paradigm. For that it seems safer to extrapolate from recent experience. Normally that's a bad idea, but if you're in uncharted territory it's the only reference point.

slopusila · last Sunday at 10:05 AM

> My concerns about obsolescence have shifted toward curiosity about what remains to be built. The accidental complexity of coding is plummeting, but the essential complexity remains. The abstraction is rising again, to tame problems we haven't yet named.

what if AI is better at tackling essential complexity too?

jmclnx · last Sunday at 4:13 PM

This is well worth a read!

rvz · last Sunday at 11:55 AM

> With the price of computation so high, that inefficiency was like lighting money on fire. The small group of contributors capable of producing efficient and correct code considered themselves exceedingly clever, and scoffed at the idea that they could be replaced.

There will always be someone ready to drag the price of computation low enough that it is democratized for all. Some may disagree, but that will eventually be local inference, as computer hardware gets better alongside clever software algorithms.

In this AI story, you can take a guess at who "The Priesthood" of the 2020s are.

> You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.

One can say the number of AI agents will explode and surpass humans on the internet in the next few years, and that reading code generated by an AI and understanding what it does will be even more important than writing it.

Otherwise you get horrific issues like this [0]: the comments in the code are now consumed by the LLM, and due to their inherently probabilistic and unpredictable nature, different LLMs produce different code and cannot guarantee that it is correct; only a team of expert humans can.

We'll see if you're ready to read (and fix) an abundance of AI slop and messy architectures built by vibe-coders as maintenance costs and security risks skyrocket.

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...