
marcus_holmes today at 5:49 AM

I think this is a great example of both points of view in the ongoing debate.

Pro-LLM coding agents: look! a working compiler built in a few hours by an agent! this is amazing!

Anti-LLM coding agents: it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.

Pro: Sure, but we can get the agent to fix that.

Anti: Can you, though? We've seen that the more complex the code base, the worse the agents do. Fixing complex issues in a compiler seems like something the agents will struggle with. Also, if they could fix it, why haven't they?

Pro: Sure, maybe now, but the next generation will fix it.

Anti: Maybe. While the last few generations have been getting better and better, we're still not seeing them handle this kind of complexity any better.

Pro: Yeah, but look at it! This is amazing! A whole compiler in just a few hours! How many millions of hours were spent getting GCC to this state? It's not fair to compare them like this!

Anti: Anthropic said they made a working compiler that could compile the Linux kernel. GCC is what we normally compile the Linux kernel with. The comparison was invited. It turned out (for whatever reason) that CCC failed to compile the Linux kernel when GCC could. Once again, the hype of AI doesn't match the reality.

Pro: but it's only been a few years since we started using LLMs, and a year or so since agents. This is only the beginning!

Anti: this is all true, and yes, this is interesting. But there are so many other questions around this tech. Let's not rush into it and mess everything up.


Replies

Alupis today at 6:35 AM

I'm reminded, once again, of the recent "vibe coded" OCaml fiasco[1].

The PR author had zero understanding of why their entirely LLM-generated contribution was viewed so suspiciously.

The article validates a significant point: it is one thing to have passing tests and produce output that resembles correctness; it is something entirely different for that output to be good and maintainable.

[1] https://github.com/ocaml/ocaml/pull/14369

bgirard today at 6:25 AM

This to me sounds a lot like the SpaceX conversation:

- Ohh look it can [write a small function / do a small rocket hop] but it can't [write a compiler / get to orbit]!

- Ohh look it can [write a toy compiler / get to orbit] but it can't [compile Linux / be reusable]!

- Ohh look it can [compile Linux / get a reusable orbital rocket] but it can't [build a compiler that rivals GCC / turn the rockets around fast enough]!

- <Denial despite the insane rate of progress>

There's no reason to keep building this compiler just to prove this point. But I bet it would catch up to GCC real fast, with a fraction of the resources, if it were guided by a few compiler engineers in the loop.

We're going to see a lot of disruption come from AI-assisted development.

gignico today at 6:45 AM

Exactly. This flawed argument that everything will be fixed by future models drives me crazy every time.

dataflow today at 6:29 AM

> Pro: Sure, maybe now, but the next generation will fix it.

Do we need a c2 wiki page for "sufficiently smart LLM" like we do for https://wiki.c2.com/?SufficientlySmartCompiler ?

zozbot234 today at 10:51 AM

You didn't even mention that this vibe-coded toy compiler cost $20k in token spend. That's an insane amount of money for what this is.

weli today at 9:27 AM

> this is all true, and yes, this is interesting. But there are so many other questions around this tech. Let's not rush into it and mess everything up.

That's a really nice fictitious conversation, but in my experience "anti-AI" people would be prone to say "This is stupid; LLMs will never be able to write complex code, and attempting to do so is futile". If your mind is open to exploring how LLMs will actually write complex software, then by definition you are not "anti".

Rapzid today at 7:21 AM

Two completely valid perspectives.

Unless you need a correctly compiled Linux kernel. In that case, one of them gets exhausting real quick.

frizlab today at 7:36 AM

I think you also forgot: Anti: But the whole thing can only have been generated because GCC and other compilers already exist (and, depending on how strong the anti-feeling is, have been stolen…)!

nikitau today at 7:27 AM

Not to mention that a C compiler is something we have literally 50 years' worth of code for. I still seriously doubt the ability of LLMs to tackle truly new problems.

rk06 today at 7:10 AM

As an Anti, my argument is "if AI will be good in the future, then come back in the future".

nvrmnd today at 6:11 AM

> It's not fair to compare them like this!

As someone who leans pro in this debate, I don't think I would make that statement. I would say the results are exactly as we expect.

Also, a highly verifiable task like this is well suited to LLMs, and I expect within the next ~2 years AI tools will produce a better compiler than gcc.

abrbhat today at 7:17 AM

It seems that the cause of the difference in opinion is that the anti camp is looking at the current state, while the pro camp is looking at the slope and projecting it into the future.

aurareturn today at 7:21 AM

I don't think this is how the pro and anti conversation goes.

I think the pro would tell you that if GCC developers could leverage Opus 4.6, they'd be more productive.

The anti would tell you that it doesn't help with productivity; it makes us less versed in the code base.

I think the CCC project was just a demonstration of what Opus can now do autonomously. 99.9% of software projects out there aren't building something as complex as a compiler for the Linux kernel.

kaycey2022 today at 10:19 AM

Maybe Anthropic can sponsor a research team to polish this using just an agent. A lot of things can be learned from that exercise.

agumonkey today at 9:33 AM

This is a pattern I see a lot, in programming language communities too, where it's a source of joy and dreams first, and facts later.

viraptor today at 9:31 AM

That's such a strawman conversation. Starting from:

> it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.

It works. It's not perfect, but Anthropic claims to have successfully compiled and booted 3 different configurations with it. The blog post failed to reproduce one specific version on one specific architecture. I wish Anthropic gave us more information about which kernel commits succeeded, but still. Compare this to the years it took for clang to compile the kernel; people were not calling that compiler useless.

If anyone thinks other compilers "just work", I invite them to start fixing packages that fail to build in NixOS after every major compiler change, to get a dose of real-world experience.

red75prime today at 6:54 AM

> Pro-LLM coding agents: look! a working compiler built in a few hours by an agent! this is amazing!

> Anti-LLM coding agents: it's not a working compiler, though. And it doesn't matter how few hours it took, because it doesn't work. It's useless.

Pro-LLM: Read the freaking article, it's not that long. The compiler made a mistake in an area where only two compilers are up to the task: the Linux kernel.

ares623 today at 6:00 AM

Pretty much. It's missing a tiny detail, though. One side is demanding we keep giving them hundreds of billions while at the same time promising the other side's unemployment.

quaintdev today at 7:32 AM

I recently read a YouTube comment on a pro-AI video; it was:

"The source code of gcc is available online"

NitpickLawyer today at 6:18 AM

Two thoughts here:

First, remember when we had LLMs run optimisation passes last year? AlphaEvolve doing square packing and optimising ML kernels? The "anti" crowd was like "well, of course it can automatically optimise some code, that's easy", and things like "wake me up when it does hard tasks". Now, suddenly, when they do hard tasks, we're back at "haha, but it's unoptimised and slow, laaame".

Second, if you could take 100 juniors, 100 mid-level devs, and 100 senior devs and lock them in a room for 2 weeks, how many working solutions that could boot Linux on 2 different arches, and almost boot on the third, would you get? And could you have the same devs then do it in Zig?

The thing that keeps coming up is that the "anti" crowd is fighting their own demons, and has kinda lost the plot along the way. Every "debate" is about promises, CEOs, billions, and so on. Meanwhile, at every step of the way these things become better and better. And incredibly useful in the right hands. I find it's best to just ignore the identity folks and keep on being amazed at the progress. The haters will just find the next goalpost and the next fight with invisible entities. To paraphrase: those who can, do; those who can't, find things to nitpick.

anonnon today at 8:40 AM

The "Anti" stance is only tenable now if you believe LLMs are going to hit a major roadblock in the next few months around which Big AI won't be able to navigate. Something akin to the various "ghosts in the machine" that started bedeviling EEs after 2000 when transistors got sufficiently small, including gate leakage and sub-threshold current, such that Dennard Scaling came to an abrupt end and clock speeds stalled.

I personally hope that that happens, but I doubt it will. Note also that processors still continued to improve even without Dennard Scaling due to denser, better optimized onboard caches, better branch prediction, and more parallelism (including at the instruction level), and the broader trend towards SoCs and away from PCB-based systems, among other things. So at least by analogy, it's not impossible that even with that conjectured roadblock, Big AI could still find room for improvement, just at a much slower rate.

But current LLMs are thoroughly compelling, and even just continued incremental improvements will prove massively disruptive to society.

resfirestar today at 8:07 AM

What does this imagined conversation have to do with the linked article? The "pro" and "anti" characters both sound like the kind of insufferable idiots I'd expect to encounter on social media. The OP is a very nice blog post about performance testing and finding out what compilers do; it doesn't attempt any unwarranted speculation about what agents "struggle with" or will do "next generation". How is it an example of that sort of shitposting?

yoyohello13 today at 7:19 AM

I think LLMs as a technology are very cool, and I'm frankly amazed at what they can do. What I'm 'anti' about is pushing the entire economy all-in on LLM tech. The accelerationist take of 'just keep going as fast as possible and it will work out, trust me bro' is the most unhinged, dangerous shit I've ever heard, and unfortunately it seems to be the default worldview of those in charge of the money. I'm not sure where all the AI tools will end up, but I am willing to bet big that the average person is not going to be better off 10 years from now. The direction the world is going scares the shit out of me, and the use of AI by bad actors is not helping assuage that fear.

Honestly? I think if we as a society could trust our leaders (government and industry) not to be total dirtbags, the resistance to AI would be much lower.

Like imagine if the message was "hey, this will lead to unemployment, but we are going to make sure people can still feed their families during the transition, and maybe look into ways to subsidize retraining programs for people whose jobs have been impacted." Seems like a much more palatable narrative than "fuck you, pleb! go retrain as a plumber or die in a ditch. I'll be on my private island counting the money I made from destroying your livelihood."

nineteen999 today at 9:54 AM

I mean, who would honestly expect an LLM to be able to compete with a compiler with 40 years of development behind it? Even more if you count the collective man-years expended in that time. The Claude agents took two weeks to produce a substandard compiler, under the fairly tight direction of a human who understood the problem space.

At the same time, you could direct Claude to review the register-spilling code and the linker code of both LLVM and GCC for potential improvements to CCC, and you will see improvements. You can ask it not to copy GPL code verbatim but to paraphrase, and tell it that it can rip code from LLVM as long as the licenses are preserved. It will do it.

You might only see marginal improvements without spending another $100K on API calls. This is about one of the hardest projects you could ask it to bite off and chew on. And would you trust the compiler output yet over GCC or LLVM?

Of course not.

But I wager that if you _started_ with the LLVM/GCC codebases and asked it to look for improvements, it might be surprising to see what it finds.

Both sides have good arguments. But this could be a totally different ball game in 2, 5, and 10 years. I do feel like those who are most terrified by it are those whose identity is very much tied to being a programmer and who see the potential for their role to be replaced, and I can understand that.

Me personally - I'm relieved I finally have someone else to blame and shout at rather than myself for the bugs in the software I produce. I'm relieved that I can focus now on the more creative direction and design of my personal projects (and even some work projects on the non-critical paths) and not get bogged down in my own perfectionism with respect to every little component until reaching exhaustion and giving up.

And I'm fascinated by the creativity of some of the projects I see that are taking the same mindset and approach.

I was depressed by it at first. But as I've experimented more and more, I've come to enjoy seeing things come to fruition that I couldn't ever have achieved even with 100 man-years of my own.

abbyprog today at 8:12 AM

I'm firmly in the anti/unimpressed camp so far, but of course open to seeing where it goes.

I mean, this compiler is the equivalent of handing someone a calculator when it was first invented and seeing that it took 2 hours to multiply two numbers together. I would go "cool that you have a machine that can do math, but I can multiply faster by hand, so it's a useless device to me".

soulofmischief today at 6:03 AM

In my experience, it is often the other way around. Enthusiasts are tasked with trying to open minds that seem very closed on the subject. Most serious users of these tools recognize the shortcomings and can also make well-educated guesses about the short-term future. It's the anti crowd who get hell-bent on this ridiculously unfounded "robots are just parrots and can't ever replace real programmers" shtick.

raincole today at 6:14 AM

Are you trying to demonstrate a textbook example of a straw-man argument?
