Hacker News

magneticnorth · today at 6:10 AM

I think that was Tao's point, that the new proof was not just read out of the training set.


Replies

cma · today at 2:11 PM

I don't think it's dispositive; it just shows the model likely didn't copy the one proof we know was in the training set.

A) It is still possible that a proof by someone else using a similar method was in the training set.

B) Something similar to Erdős's proof, but for a different problem, was in the training set, and ChatGPT adapted it into a similar alternate solution. That would be more impressive than A).

rzmmm · today at 7:30 AM

The model has multiple layers of mechanisms to prevent carbon-copy reproduction of the training data.
