
imiric last Wednesday at 7:38 AM

It's interesting to see this article in juxtaposition to the one shared recently[1], where AI skeptics were labeled as "nuts", and hallucinations were "(more or less) a solved problem".

These seem to be exactly the kind of results we would expect from a system that hallucinates, has no semantic understanding of the content, and is little more than a probabilistic text generator. This doesn't mean it can't be useful in the right hands, but it's also unsurprising that non-expert humans would use it to cut corners in search of money, power, and glory, or, worse, to actively delude, scam, and harm others. Considering that the latter group is much larger, it's concerning how little thought and how few resources go into implementing _actual_ safety measures, rather than ones that merely look good in PR statements.

[1]: https://news.ycombinator.com/item?id=44163063


Replies

JackC last Wednesday at 1:10 PM

The difference in fields is key here: AI models are going to have a very different impact in fields where ground truth is available instantly (does the generated code produce the expected output?) than in fields where verification takes years of manual work.

(Not a binary -- ground truth is available enough for AI to be useful to lots of programmers.)

BlueTemplar last Wednesday at 8:40 AM

Heh, reminds me of cryptocurrencies...

Or even of the Internet in general.

I guess it's a common pitfall with information or communication technologies?

(Heck, or with technologies in general, but non-information or communication ones rarely scale as explosively...)
