Hacker News

wizzwizz4 · yesterday at 12:49 PM

From the article:

> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.


Replies

blackqueeriroh · yesterday at 10:37 PM

> Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.

I’ll take that bet. How much money would you like to put on this? We’ll have a neutral third party pick both the untrained child and the LLM.

Let me know.

jnovek · yesterday at 2:06 PM

The rate of hallucination has gone down drastically since 2023. As LLM coding tools continue to pare that rate down, eventually we’ll hit a point where it is comparable to the rate at which we human programmers naturally introduce bugs.
