Hacker News

GaggiX today at 9:40 AM

You can train an LLM on just multiplication and test it on problems it has never seen before; it's nothing particularly magical.
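The "never seen before" part comes down to how the data is split: hold out a set of (a, b) pairs so their products appear only in evaluation, never in training. A minimal sketch of that split (the function name and ranges are illustrative, not from any particular paper):

```python
import random

def make_split(max_n=99, test_frac=0.1, seed=0):
    """Build train/test multiplication problems over disjoint (a, b) pairs,
    so the test set contains only examples the model never saw in training."""
    rng = random.Random(seed)
    pairs = [(a, b) for a in range(2, max_n + 1) for b in range(2, max_n + 1)]
    rng.shuffle(pairs)
    cut = int(len(pairs) * test_frac)
    test_pairs, train_pairs = pairs[:cut], pairs[cut:]
    fmt = lambda a, b: f"{a}*{b}={a * b}"   # one training string per problem
    return ([fmt(a, b) for a, b in train_pairs],
            [fmt(a, b) for a, b in test_pairs])

train, test = make_split()
assert not set(train) & set(test)  # held-out problems are genuinely unseen
```

Any generalization the model shows on the test strings then has to come from learning the operation, not from memorizing specific products.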


Replies

veltas today at 9:49 AM

It's not 'magic', agreed, but LLMs have previously performed very badly on longer multiplication. 'Insight' is the wrong word; what I'm saying is that maybe they're not wildly better at calculation in general, maybe they've just been optimised against these well-known jagged edges.