Hacker News

nirui · today at 8:39 AM · 4 replies

I feel like people are using AI in the wrong way.

Current LLMs are best used to generate a string of text that's most statistically likely to form a coherent sentence. So from the user's perspective, they're most useful as an alternative to a manual search engine, letting the user find quick answers to simple questions, such as "how much baking soda is needed for X units of Y bread" or "how to print 'Hello World' 10 times in a loop in X programming language". Beyond that use case, the results can be unreliable, and that's to be expected.
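For illustration, the answer to that second kind of query is about as trivial as it gets (Python is just an assumed example language here):

    # Print "Hello World" 10 times in a loop
    for _ in range(10):
        print("Hello World")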

Sure, they can also generate long stretches of code, even an entire fine-looking project, but they do it by following a statistical template. That's it.

That's why "the easy part" is easy because the easy problem you try to solve is likely already been solved by someone else on GitHub, so the template is already there. But the hard, domain-specific problem, is less likely to have a publicly-available solution.


Replies

Kon5ole · today at 9:41 AM

>I feel like people are using AI in the wrong way.

I think people struggle to comprehend the mechanisms that let them talk to computers as if they were human. So far in computing, we have always been able to trace the red string back to the origin, deterministically.

LLMs break that, and we, programmers especially, have a hard time with it. We want to say "it's just statistics", but there is no intuitive way to jump from "it's statistics" to what we are doing with LLMs in coding now.

>That's why "the easy part" is easy because the easy problem you try to solve is likely already been solved by someone else on GitHub, so the template is already there.

I think the idea that LLMs "just copy" is a misunderstanding. The training data is atomized, and the combination of the atoms can be as unique from an LLM as from a human.

In 2026 there is no doubt LLMs can generate new, unique code by any definition that matters. Saying LLMs "just copy" is as true as saying any human writer just copies words already written by others. Strictly speaking true, but also irrelevant.

lijok · today at 9:08 AM

I think you severely overestimate your understanding of how these systems work. We’ve been beating the dead horse of “next character approximation” for the last 5 years in these comments. Global maxima would have been reached long ago if that’s all there was to it.

Play around with some frontier models, you’ll be pleasantly surprised.

aembleton · today at 9:32 AM

Which is great because then I can use my domain expertise to add value, rather than writing REST boilerplate code.

josefrichter · today at 9:32 AM

Come on, this shows a fundamental lack of understanding and experience on your side.