Hacker News

vntok, yesterday at 11:18 AM

The parent is specifically talking about producing boilerplate code, a domain in which LLMs excel, and not having had any success at it. It's therefore not a leap of logic to assume they haven't put (enough) effort into getting better at prompting first, which is perfectly fine per se but points to a skill issue rather than an immutable property of gen AI.

The uncomfortable fact remains that one cannot really expect to get much better results from an LLM without putting in some work oneself. They aren't magical oracles.


Replies

tovej, yesterday at 4:47 PM

That is not at all what I said; please read my post more carefully before speculating.

I am talking about using LLMs in general, not for boilerplate specifically.

My point about boilerplate is that I already have tools that solve this for me, and they do it in a more predictable way.