Hacker News

101008 last Tuesday at 10:14 PM

It's an example that shows that if these models aren't trained on a specific problem, they may have a hard time solving it for you.


Replies

altruios last Tuesday at 11:15 PM

An analogy: asking someone who is colorblind how many colors are on a sheet of paper. What you're probing isn't reasoning, it's perception. If you can't see the input, you can't reason about the input.

Uehreka last Tuesday at 10:32 PM

No, it's an example that shows that LLMs still use a tokenizer. The tokenizer isn't an impediment for almost any task (even many where you'd expect it to be, like searching a codebase for variants of a variable name in different cases).
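
To make the tokenizer point concrete, here's a rough sketch using OpenAI's tiktoken library (assumptions: tiktoken is installed, the cl100k_base encoding applies, and "strawberry" stands in for whatever word the question used; the exact split varies by model):

    import tiktoken  # assumed available: pip install tiktoken

    # cl100k_base is one common encoding; which encoding a given model uses is an assumption.
    enc = tiktoken.get_encoding("cl100k_base")

    word = "strawberry"
    token_ids = enc.encode(word)

    # The model receives a short list of integer IDs, not ten separate letters.
    print(token_ids)
    print([enc.decode([t]) for t in token_ids])
    # Likely output: a few chunks such as ['str', 'aw', 'berry'], so a question
    # like "how many r's?" has to be answered without the letters ever being
    # presented to the model individually.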

victorbjorklund last Tuesday at 10:55 PM

No, the issue is with the tokenizer.