Do we know for a fact that LLMs aren't now configured to route simple arithmetic like this to a calculator, to add the illusion of actual insight?
You can train an LLM on just multiplication and test it on problems it has never seen before; it's nothing particularly magical.
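The held-out evaluation described above can be sketched without any model at all: enumerate multiplication problems, then split them so the test set contains exact problems the model would never have seen during training. This is an illustrative setup only (the split ratio and digit range are arbitrary choices, not from any specific experiment):

```python
import random

# Every 2-digit-by-2-digit multiplication problem as a (prompt, answer) pair.
problems = [(f"{a}*{b}=", str(a * b)) for a in range(10, 100) for b in range(10, 100)]

random.seed(0)
random.shuffle(problems)

# Hold out 10% of the pairs; a model trained only on `train` has
# literally never seen the exact problems in `test`.
split = int(0.9 * len(problems))
train, test = problems[:split], problems[split:]

# Sanity check: no test prompt appears anywhere in the training set,
# so correct test answers can't come from memorization.
train_prompts = {p for p, _ in train}
assert all(p not in train_prompts for p, _ in test)
```

If a model trained this way answers the held-out problems correctly, it has generalized the algorithm rather than memorized a lookup table, which is the point of the comment.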