Hacker News

Lerc · today at 3:45 AM

>what exactly is this specific challenge of adding numbers with a transformer model demonstrating/advancing?

Well for starters, it puts the lie to the argument that a transformer can only output examples it has seen before. Performing the calculation on examples that haven't been seen demonstrates generalisation of the principles and not regurgitation.

While this misconception persists among many people, counterexamples serve a useful purpose.
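The experiment being described boils down to a held-out split: enumerate addition problems, reserve a disjoint test set, and check that the model answers pairs it provably never saw in training. A minimal sketch of that split (the function name here is illustrative, not from any particular paper):

```python
import random

def make_addition_splits(max_n=99, test_frac=0.1, seed=0):
    """Enumerate all (a, b) addition pairs up to max_n and hold out a
    disjoint test set, so a correct answer on the test set cannot be
    regurgitation of a memorised training example."""
    pairs = [(a, b) for a in range(max_n + 1) for b in range(max_n + 1)]
    rng = random.Random(seed)
    rng.shuffle(pairs)
    cut = int(len(pairs) * test_frac)
    test, train = pairs[:cut], pairs[cut:]
    return train, test

train, test = make_addition_splits()
# "Unseen" means unseen: the two sets share no pairs.
assert not set(train) & set(test)
```

Accuracy on `test` above chance then measures generalisation of the addition rule rather than lookup, which is the distinction the comment is drawing.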


Replies

mcdeltat · today at 6:00 AM

Are people usually claiming that it strictly cannot produce any output it hasn't seen before? I wouldn't agree, I mean clearly they are generating some form of new content. My argument would be that while they can learn to some extent, the power of their generalisation is still tragically weak, particularly in some domains.

qsera · today at 4:42 AM

>it puts the lie to the argument

But it does not, right? You can either show it something, or modify the parameters in a way that resembles the result of showing it something.

You can claim that the model didn't see the thing, but that would mean nothing, because you are achieving the same effect indirectly through parameter tweaks.
