
jmyeet · today at 4:41 AM · 3 replies

The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie.

I wonder if these are the same people who 3-4 years ago were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion dollar business.

Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.


Replies

protocolture · today at 6:24 AM

>I wonder if these are the same people who 3-4 years ago were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion dollar business.

The NFT protocol doesn't really care what the payload is. NFT purveyors likewise didn't care what their payload was, as long as they could use the term "NFT".

NFTs are great for certain use cases (Crypto Kitties is still around, I believe), but there was never a single moment I considered that owning a weird ape jpeg, even if it was somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".

That said, future blockchain applications will probably still rely on NFTs in some fashion. Just not the protocol-as-product weirdness we got for a few years there.

weird-eye-issue · today at 5:52 AM

I've never seen anyone here claim that AI never hallucinates or can't provide incorrect information.

gertop · today at 6:08 AM

I've not heard many people claim that LLMs don't hallucinate; however, I have seen people (whom I previously believed to be smart):

1. Believe LLMs outright even knowing they are frequently wrong

2. Claim that LLMs making shit up is caused by the user not prompting them correctly. I suppose in the same way that C is memory safe and only bad programmers make it not so.