
theptip | last Wednesday at 11:43 PM

Math seems to me like the hardest thing for LLMs to do. It requires going deep with high-IQ symbol manipulation. The current case for LLMs is making new discoveries by interpolation, or perhaps extrapolation, between existing data points in a corpus too broad for humans to absorb.


Replies

staunton | last Thursday at 12:49 AM

Alternatively, human brains are just terrible at "high IQ symbol manipulation" and that's a much easier cognitive task to automate than, say, "surviving as a stray cat".

Der_Einzige | last Thursday at 7:22 AM

If they solve tokenization, you'll be SHOCKED at how much it was holding back model capabilities. There's tons of work at NeurIPS on various tokenizer hacks or alternatives to BPE that massively improve the kinds of math models are bad at (e.g. arithmetic performance).
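
As a rough illustration of the kind of issue being described (a toy sketch, not any real tokenizer's implementation): BPE-style vocabularies often merge digits into multi-character chunks greedily from the left, so the same trailing digits get split differently depending on the number's length, whereas grouping from the right keeps place values aligned, which is one of the simple changes reported to help arithmetic.

    # Toy digit-chunking sketch; chunk size 3 is an arbitrary choice for illustration.
    def chunk_left_to_right(number: str, size: int = 3) -> list[str]:
        """Greedy left-to-right grouping, similar in spirit to how many BPE
        vocabularies end up splitting digit runs."""
        return [number[i:i + size] for i in range(0, len(number), size)]

    def chunk_right_to_left(number: str, size: int = 3) -> list[str]:
        """Right-to-left grouping aligns chunks with place value
        (ones, thousands, millions)."""
        rev = number[::-1]
        return [rev[i:i + size][::-1] for i in range(0, len(rev), size)][::-1]

    for n in ["1234", "91234", "991234"]:
        print(n, chunk_left_to_right(n), chunk_right_to_left(n))

    # Left-to-right chunking splits the shared trailing digits "234" three
    # different ways ('123','4' / '912','34' / '991','234'); right-to-left
    # chunking keeps them as a single aligned chunk in every case.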

semi-extrinsic | last Thursday at 6:17 AM

This line of reasoning implies "the stochastic parrot people are right, there is no intelligence in AI", which is the opposite of what AI thought leaders are saying.
