> And then you have problems like "5142352 * 51234" which is trivial problems for any basic calculator, but very hard for a human or a LLM.
I think LLMs are getting better (well, better trained) at dealing with basic math questions, but you still need to help them. For example, if you just ask them to calculate the value, none of them gets it right.
http://beta.gitsense.com/?chat=876f4ee5-b37b-4c40-8038-de38b...
However, if you ask them to break the multiplication down into smaller steps, three of them got it right.
http://beta.gitsense.com/?chat=ef1951dc-95c0-408a-aac8-f1db9...
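For reference, the "break it down" trick amounts to splitting one factor by place value so every step is a small multiplication plus an addition. A minimal sketch of that decomposition (the function name is mine, not from the linked chats):

```python
def partial_products(a: int, b: int) -> list[int]:
    """Break a * b into a sum of a * (digit * 10^k) terms."""
    terms = []
    for k, digit in enumerate(reversed(str(b))):
        if digit != "0":
            # Each term is a single-digit multiplication shifted by place value.
            terms.append(a * int(digit) * 10**k)
    return terms

terms = partial_products(5142352, 51234)
print(terms)       # one easy term per nonzero digit of 51234
print(sum(terms))  # equals 5142352 * 51234
```

Each term is the kind of small, mechanical step an LLM is much more likely to get right than the full multiplication in one shot.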
> I think LLMs are getting better (well better trained) on dealing with basic math questions but you still need to help them
I feel like that's a fool's errand. Even back in the GPT-3 days you could get the LLM to return JSON and call your own calculator; that's a far more efficient way of dealing with it than trying to make a language model also be a "basic calculator" model.
Luckily, tool usage is easier than ever, and adding a `calc()` function ends up being a really simple and precise way of letting the model focus on text + general tool usage instead of combining many different domains.
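A minimal sketch of what such a `calc()` tool could look like, assuming the model emits an arithmetic expression as a string (the tool shape and names here are illustrative, not any particular vendor's API). Using `ast` instead of `eval` keeps it restricted to arithmetic:

```python
import ast
import operator

# Map AST operator nodes to the arithmetic we allow.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calc(expression: str):
    """Safely evaluate +, -, *, /, ** over numeric literals."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

# The model would emit something like {"tool": "calc", "expression": "5142352 * 51234"};
# the application runs calc() and feeds the exact result back into the context.
print(calc("5142352 * 51234"))
```

The model only needs to learn "when in doubt, call `calc`", and the arithmetic itself is always exact.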
Add a tool for executing Python code, and suddenly it gains much broader capabilities, without having to retrain or refine the model itself.
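The code-execution tool can be sketched just as simply. This runs model-supplied code in a subprocess with a timeout; note that a real deployment would need actual sandboxing (a container, jail, or restricted interpreter), which this toy version deliberately omits:

```python
import subprocess
import sys

def run_python(code: str, timeout: float = 5.0) -> str:
    """Execute a Python snippet in a fresh interpreter and return its output.

    WARNING: no sandboxing here -- isolate this properly in production.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    # Return stdout on success, stderr so the model can see its own errors.
    return result.stdout if result.returncode == 0 else result.stderr

print(run_python("print(5142352 * 51234)"))
```

Feeding errors back to the model is a deliberate choice: it lets the model retry with corrected code rather than silently failing.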