Hacker News

chaos_emergent last Sunday at 8:56 PM

> This is likely because LLMs are typically trained for only a single epoch over massive datasets, which is in contrast to the multi-hundred-epoch training regimes for which dropout was first introduced.

Wait, is this true? It seems like a wild, relatively unsubstantiated claim to make.


Replies

typon last Sunday at 8:57 PM

No, this is well known. See Table 2.2 in the GPT-3 paper.
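
As a rough illustration of what the single-epoch regime looks like in practice (a minimal PyTorch-style sketch, not code from the GPT-3 paper; the model, data, and hyperparameters here are made up):

    import torch
    import torch.nn as nn

    # Tiny stand-in "LLM": embedding -> one transformer layer -> vocab head.
    vocab_size, d_model = 1000, 64
    model = nn.Sequential(
        nn.Embedding(vocab_size, d_model),
        # dropout=0.0: each token is seen only once, so dropout is often left off
        nn.TransformerEncoderLayer(d_model, nhead=4, dropout=0.0, batch_first=True),
        nn.Linear(d_model, vocab_size),
    )
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loss_fn = nn.CrossEntropyLoss()

    def token_stream(num_batches, batch_size=8, seq_len=32):
        """Stand-in for a token stream so large it is never revisited."""
        for _ in range(num_batches):
            yield torch.randint(0, vocab_size, (batch_size, seq_len))

    # Single-epoch regime: one pass over the stream, no data is repeated.
    for tokens in token_stream(num_batches=100):
        logits = model(tokens[:, :-1])
        loss = loss_fn(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Classic small-dataset regime, by contrast, revisits the same examples
    # for hundreds of epochs -- the setting dropout was introduced for:
    # for epoch in range(300):
    #     for tokens in fixed_small_dataset:
    #         ...

The intuition is that when each example is seen at most once, overfitting to individual samples is much less of a concern, and that is the problem dropout was originally designed to address.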