Hacker News

jstummbillig · last Saturday at 11:41 PM · 3 replies

That depends on what you mean by "came along". If you mean "once everyone came around to the idea that LLMs were going to be good at this thing" then sure, but not long ago the majority of people around here were very skeptical that LLMs would ever be any good at coding.


Replies

make3 · last Saturday at 11:50 PM

What you're describing is the field completely changing over three years; that's no time at all for everyone to change their minds.

LLMs were not productized in a meaningful way before ChatGPT in 2022 (companies had sufficiently strong LLMs, but RLHF hadn't yet been applied to make them "PR-safe"). Then we basically just had to wait for LLM companies to copy Perplexity and bolt search engines onto everything (RAG already existed, but I guess it wasn't realistic to RAG the whole internet), and they became useful enough to replace Stack Overflow.
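For concreteness, the RAG pattern mentioned above boils down to: retrieve relevant documents first, then prepend them to the model's prompt. The sketch below is a deliberately simplified, hypothetical illustration (keyword-overlap retrieval, made-up function names); production systems use embedding search over a web-scale index before the actual LLM call.

```python
# Hypothetical minimal sketch of retrieval-augmented generation (RAG):
# rank documents against the query, then build an augmented prompt.
# Real systems use vector embeddings, not keyword overlap.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive keyword overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the question before the LLM call."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Python lists support append, pop, and slicing.",
    "Rust enforces memory safety via ownership and borrowing.",
]
print(build_prompt("How do Python lists work?", docs))
```

The augmented prompt is what lets the model answer from fresh or niche material it was never trained on, which is why adding search made these products useful as a Stack Overflow replacement.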

rustystump · last Saturday at 11:44 PM

I don't think this is true. People were skeptical of AGI / better-than-human coding, which still isn't the case. As a matter of fact, I think searching docs was one of the first major uses of LLMs, before code.

nutjob2 · last Saturday at 11:48 PM

That's because LLMs have improved rapidly.

Their tendency to bullshit is still an issue, but if one maintains a healthy skepticism and applies a bit of logic, it can be managed. The problematic cases are where they're used without any real supervision.

Enabling human learning is a natural strength of LLMs and works fine, since learning tends to be multifaceted and the information received tends to be put to the test as part of the process.