Hacker News

lateforwork today at 6:47 PM · 6 replies

Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the AI-generated code [1]. And this is why humans will still be needed to advance the state of the art.

AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.

You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.

As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.

[1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...


Replies

bluGill today at 7:58 PM

A week ago there was an article about Donald Knuth asking an AI to prove something then-unproved, and it found the proof. I suppose it is possible that the great Knuth didn't know how to find this existing truth, but there is a reason we all doubted it (including me, when I mentioned it there).

I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least), it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers, it would more likely be a case of my doing something stupid that someone who knows how to write a compiler wouldn't.

show 3 replies
bigstrat2003 today at 8:10 PM

> Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the AI-generated code [1].

Well, of course. Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on.

show 3 replies
thesz today at 7:26 PM

> ...generate answers near the center of existing thought.

This is addressed right in Wikipedia's article on the universal approximation theorem [1].

[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...

"In the field of machine learning, the universal approximation theorems (UATs) state that neural networks with a certain structure can, in principle, approximate any continuous function to any desired degree of accuracy. These theorems provide a mathematical justification for using neural networks, assuring researchers that a sufficiently large or deep network can model the complex, non-linear relationships often found in real-world data."

And then: "Notice also that the neural network is only required to approximate within a compact set K. The proof does not describe how the function would be extrapolated outside of the region."

NNs, LLMs included, are interpolators, not extrapolators.
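A toy sketch of that point (my own illustration, not from the linked article): fit a flexible approximator, here a degree-9 polynomial via numpy standing in for a neural net, to sin(x) sampled only on a compact set, then evaluate it outside that set.

```python
import numpy as np

# Training data: y = sin(x), sampled only on the compact set [-pi, pi]
rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, 200)
y_train = np.sin(x_train)

# Degree-9 polynomial least-squares fit: a stand-in for any
# flexible function approximator trained on this region
coeffs = np.polyfit(x_train, y_train, deg=9)

# Inside the training region, the approximation is excellent...
x_in = np.linspace(-np.pi, np.pi, 100)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))

# ...but just outside it, the error blows up
x_out = np.linspace(2 * np.pi, 3 * np.pi, 100)
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))

print(f"max error inside [-pi, pi]: {err_in:.5f}")
print(f"max error on [2*pi, 3*pi]:  {err_out:.1f}")
```

The fit is near-perfect on the compact set it was trained on and wildly wrong outside it, which is exactly what the UAT does and does not promise.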

And the region a NN approximates within can be quite complex and not easily defined as "X:R^N drawn from N(c,s)^N", as SolidGoldMagikarp [2] clearly shows.

[2] https://github.com/NiluK/SolidGoldMagikarp

show 1 reply
peehole today at 7:57 PM

LLMs still do forEach; it’s like wearing Tommy Hilfiger

show 1 reply
slopinthebag today at 7:35 PM

Yeah, I think he had a pretty sane take in that article:

> CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.

And also

> The most effective engineers will not compete with AI at producing code, but will learn to collaborate with it, by using AI to explore ideas faster, iterate more broadly, and focus human effort on direction and design. Lower barriers to implementation do not reduce the importance of engineers; instead, they elevate the importance of vision, judgment, and taste. When creation becomes easier, deciding what is worth creating becomes the harder problem. AI accelerates execution, but meaning, direction, and responsibility remain fundamentally human.

Animat today at 7:39 PM

I think this article was on HN a few days ago.