Hacker News

darkoob12 · last Wednesday at 7:55 AM

Reproducibility was never a serious issue in the AI research community. I think one of the main reasons for the explosive progress in AI was the open community: people could easily reproduce other people's research. If you look at top-tier conferences, you see that they share everything: paper, LaTeX source, code, data, lecture videos, etc.

After ChatGPT, big corporations stopped sharing their main research, but it still happens in academia.


Replies

mnky9800n · last Wednesday at 8:16 AM

I think what I would rather like to see is the reproduction of results from experiments that the AI didn't see but that are well known, not the reproduction of AI papers. For example, assuming a human can build it, would an AI, not knowing anything except what was known at the time, be able to design the Millikan oil-drop experiment? Or would it be able to design a Taylor–Couette setup for exploring turbulence? Would it be able to design a linear particle accelerator or a triaxial compression experiment? I think an interesting line of reasoning would be to restrict the training data to what was known before a seminal paper was produced. Take Lorenz's atmospheric circulation paper: train an AI only on data that predates that paper's publication. Does the AI produce the same equations and the same description of chaos that Lorenz arrived at?
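For reference, the equations Lorenz arrived at in that 1963 paper are the well-known three-variable convection system, and the "description of chaos" is sensitive dependence on initial conditions. A minimal sketch of both, using the classic parameters σ = 10, ρ = 28, β = 8/3 from the paper (the integrator, step size, and initial conditions here are illustrative choices, not from the comment):

```python
# Lorenz's 1963 convection model: three coupled ODEs whose
# trajectories exhibit sensitive dependence on initial conditions.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classic parameter values

def lorenz(state):
    """Right-hand side of the Lorenz system: (dx/dt, dy/dt, dz/dt)."""
    x, y, z = state
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = lorenz(state)
    k2 = lorenz(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = lorenz(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

# Two nearly identical starting points diverge: the hallmark of chaos.
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)
for _ in range(5000):  # integrate to t = 50 with dt = 0.01
    a, b = rk4_step(a, 0.01), rk4_step(b, 0.01)
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # a 1e-9 perturbation has grown to attractor scale
```

The test of the proposed benchmark would be whether a time-restricted model writes down the three equations above and notices the divergence, rather than recalling them from later literature.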