Hacker News

geon · yesterday at 12:23 AM · 2 replies

Having seen LLMs so many times produce incoherent, nonsensical and invalid chains of reasoning...

LLMs are little more than RNGs. They are the tea leaves and you read whatever you want into them.


Replies

rcxdude · yesterday at 1:10 PM

They are clearly getting to useful and meaningful results at a rate significantly better than chance (for example, the fact that ChatGPT can play chess well, even though it sometimes tries to make illegal moves, shows there is a lot more happening than just picking moves uniformly at random). Demanding perfection here seems odd given that humans can also make bizarre errors in reasoning (though generally at a lower rate, and in a distribution of error types we are more used to dealing with).

bongodongobob · yesterday at 4:55 AM

Ridiculous. I use it daily and get meaningful, quality results. Learn to use the tools.
