Hacker News

bgirard today at 6:25 AM

This to me sounds a lot like the SpaceX conversation:

- Ohh look it can [write a small function / do a small rocket hop] but it can't [write a compiler / get to orbit]!

- Ohh look it can [write a toy compiler / get to orbit] but it can't [compile Linux / be reusable]!

- Ohh look it can [compile Linux / get a reusable orbital rocket] but it can't [build a compiler that rivals GCC / turn the rockets around fast enough]!

- <Denial despite the insane rate of progress>

There's no reason to keep building this compiler just to prove this point. But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.

We're going to see a lot of disruption come from AI assisted development.


Replies

jeffreygoesto today at 7:23 AM

All the people who built GCC and evolved the language did not have the end result in their training set. They invented it. They extrapolated from earlier experience and knowledge; LLMs only ever accidentally stumble "between unknown manifolds" when the temperature is high enough, and they interpolate with noise (in so many senses). The people building GCC together did not only solve a technical problem. They solved a social one: agreeing on what they wanted to build, for what purpose, and why. LLMs are merely copying those decisions.

itsyonas today at 7:48 AM

All right, but perhaps they should also list the grand promises they made and failed to deliver on. They said they would have fully self-driving cars by 2016. They said they would land on Mars in 2018, yet almost a decade has passed since then. They said they would have Tesla's fully self-driving robo-taxis by 2020 and human-to-human telepathy via Neuralink brain implants by 2025–2027.

> - <Denial despite the insane rate of progress>

Sure, but it falls short of what was actually promised. There may also be fundamental limitations to what the current architecture of LLMs can achieve. The vast majority of LLMs are still based on Transformers, which were introduced almost a decade ago. If you look at the history of AI, it wouldn't be the first time that a roadblock stalled progress for decades.

> But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.

Okay, so at that point, we would have proved that AI can replicate an existing software project using hundreds of thousands of dollars of computing power and probably millions of dollars in human labour costs from highly skilled domain experts.

raincole today at 6:43 AM

> the insane rate of progress

Yeah, but the speed of progress can never catch up with the speed of a moving goalpost!

forty today at 6:59 AM

There are two questions that can be asked of both. The first is "can these technologies achieve their goals?", which is what you seem to be debating. The other is "is a successful outcome of these technologies desirable at all?". One is making us pollute space faster than ever, as if we hadn't fucked up the rest enough. The other will make a few very rich people even richer, and probably everyone else poorer.

Interesting that people call this "progress" :)

benreesman today at 7:42 AM

AI assist in software engineering is unambiguously demonstrated to some degree at this point: the "no LLM output in my project" stance is cope.

But "reliable, durable, scalable outcomes in adversarial real-world scenarios" is not convincingly demonstrated in public, the asterisks are load bearing as GPT 5.2 Pro would say.

That game is still on, and AI assist beyond FIM is still premature for safety critical or generally outcome critical applications: i.e. you can do it if it doesn't have to work.

I've got a horse in this race, which is formal methods as the methodology and AI assist as the thing that makes it economically viable. My stuff is north of demonstrated in the small and south of proven in the large; it's still a bet.

But I like the stock. The no free lunch thing here is that AI can turn specifications into code if the specification is already so precise that it is code.

The irreducible heavy lift is that someone has to prompt it, and if the input is vibes, the output will be vibes. If the input is rigorous... you've just moved the cost around.

The modern software industry is an expensive exercise in "how do we capture all the value and redirect it from expert computer scientists to some arbitrary financier".

You can't. Not at less than the cost of the experts if the outcomes are non-negotiable.

delaminator today at 10:32 AM

In 1908 the Model T could do 45mph.

In 1935 the Auburn 851 S/C Speedster hit 100mph.

In 1955 the Mercedes-Benz 300 SL Gullwing did 161mph.

In 2025 the Yangwang U9 Xtreme hit 308mph.

Progress is a decaying exponential: Tsiolkovsky's tyranny.
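
A quick back-of-the-envelope check of that claim (a sketch, using only the speeds listed above; the variable names are mine) shows the implied annual growth rate shrinking with each interval:

    # (year, top speed in mph), taken from the list above
    records = [(1908, 45), (1935, 100), (1955, 161), (2025, 308)]

    # Annualized growth rate implied by each consecutive pair of records
    for (y0, v0), (y1, v1) in zip(records, records[1:]):
        rate = (v1 / v0) ** (1 / (y1 - y0)) - 1
        print(f"{y0}-{y1}: {rate:.1%} per year")

    # Roughly 3.0%, 2.4%, and 0.9% per year: the growth rate itself keeps shrinking.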

Ygg2 today at 6:44 AM

You can be wrong on every step of your approximation and still be right in the aggregate, e.g. an order-of-magnitude estimate where every step is wrong but the mistakes cancel out.
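
A toy illustration of that cancelling effect (the numbers are entirely made up, just to show the mechanism):

    # Fermi-style estimate: each factor is individually wrong, some by over 3x,
    # but the errors point in different directions and largely cancel in the product.
    true_factors  = [3, 7, 40]    # true product  = 840
    rough_factors = [10, 2, 50]   # rough product = 1000

    true_product = rough_product = 1
    for t, r in zip(true_factors, rough_factors):
        true_product *= t
        rough_product *= r

    print(true_product, rough_product)   # 840 vs 1000: same order of magnitude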

Human crews on Mars are just as far-fetched as they ever were. Maybe even further off, thanks to Starlink trying to achieve Kessler syndrome by 2050.

littlestymaar today at 7:03 AM

> This to me sounds a lot like the SpaceX conversation

The problem is that it is absolutely indistinguishable from the Theranos conversation as well…

If Anthropic stopped lying about the current capabilities of their models (like "it compiles the Linux kernel" here, and it's far from the first time they've done that), maybe neutral people would give them the benefit of the doubt.

For every grifter who happens to succeed at delivering on his grandiose promises (Elon), how many grifters will fail?
