Hacker News

bilekas · today at 10:48 AM

> To me they seem to be pretty damn smart

That's the sorcery mentioned in the GP. The issue arises when people believe it to be smart when in reality it is just next-word prediction. It gives the impression that it's actually thinking, and that's by design. Personally I think it's dangerous in the sense that it gives users a false sense of confidence in the LLM, so a lot of people will blindly trust it. That isn't a good thing.
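(For the sake of illustration, "next word prediction" in the mechanical sense means an autoregressive loop: given the tokens so far, pick a likely next token, append it, and repeat. Here's a minimal toy sketch using a made-up bigram lookup table in place of a trained model; real LLMs replace the table with a learned neural network over billions of parameters, but the generation loop has the same shape.)

```python
# Toy "next word predictor": greedy autoregressive generation.
# The bigram probabilities below are invented for this example.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(start, max_tokens=5):
    """Each step predicts only one token from the current context."""
    tokens = [start]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break  # no known continuation; stop generating
        # Greedy decoding: take the highest-probability next token.
        tokens.append(max(candidates, key=candidates.get))
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Whether stacking this loop on top of a sufficiently powerful predictor amounts to "thinking" is exactly the disagreement in the replies below.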


Replies

jeremyjh · today at 11:42 AM

I'm curious how you think "word predictor" meaningfully describes an instruct model that has developed novel mathematical proofs which eluded mathematicians for decades.

edit:

You cannot predict all the actions or words of someone smarter than you. If I could always predict Magnus Carlsen's next chess move, I'd be at least as good at chess as Magnus, and that would have to involve a deep understanding of chess, even if I couldn't articulate that understanding.

I can't predict the next token in a novel mathematical proof unless I've already understood the solution.

handoflixue · today at 11:27 AM

What's the difference between "smart" and "next word prediction" at this point? Back when these models first came out, sure, the distinction was obvious, but now they can write code and create art.

What would it take for you to concede a future model was smart?
