Hacker News

Scaffolding to Superhuman: How Curriculum Learning Solved 2048 and Tetris

82 points by a1k0n | today at 3:52 PM | 15 comments

Comments

omneity | today at 4:18 PM

Related: I hear about curriculum learning for LLMs quite often, but I couldn’t find a library to order training data by an arbitrary measure like difficulty, so I made one[0].

What you get is an iterator over the dataset that samples based on how far along you are in training.

0: https://github.com/omarkamali/curriculus
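
A rough sketch of the idea in generic Python (not necessarily curriculus's actual API): sort examples by a difficulty score, then widen the sampling window as training progresses.

    import random

    def curriculum_iterator(dataset, difficulty, total_steps, warmup_frac=0.5, seed=0):
        # Hypothetical sketch: yield one example per training step, sampling
        # only from the easiest slice of the (sorted) data early on and
        # widening to the full dataset as training progresses.
        rng = random.Random(seed)
        ordered = sorted(dataset, key=difficulty)  # easiest first
        for step in range(total_steps):
            progress = min(1.0, step / max(1, int(warmup_frac * total_steps)))
            cutoff = max(1, int(len(ordered) * (0.1 + 0.9 * progress)))
            yield ordered[rng.randrange(cutoff)]

    # toy usage: treat length as difficulty, so short examples come first
    corpus = ["a", "a b", "a b c d", "a b c d e f g h"]
    for example in curriculum_iterator(corpus, difficulty=len, total_steps=6):
        print(example)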

someoneontenet | today at 5:33 PM

Curriculum learning helped me out a lot in this project too: https://www.robw.fyi/2025/12/28/solve-hi-q-with-alphazero-an...

bob1029 | today at 5:00 PM

> To learn, agents must experience high-value states, which are hard (or impossible) for untrained agents to reach. The endgame-only envs were the final piece to crack 65k. The endgame requires tens of thousands of correct moves where a single mistake ends the game, but to practice, agents must first get there.

This seems really similar to the motivations around masked language modeling. By providing increasingly-masked targets over time, a smooth difficulty curve can be established. Randomly masking X% of the tokens/bytes is trivial to implement. MLM can take a small corpus and turn it into an astronomically large one.
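
A toy sketch of that masking step (assuming whitespace tokenization and a literal "[MASK]" string):

    import random

    def mask_tokens(text, mask_rate=0.15, mask_token="[MASK]", seed=None):
        # Toy sketch: whitespace "tokenization" and a literal mask string.
        # Returns the corrupted input plus the original tokens at masked
        # positions, which become the reconstruction targets.
        rng = random.Random(seed)
        tokens = text.split()
        corrupted, targets = [], {}
        for i, tok in enumerate(tokens):
            if rng.random() < mask_rate:
                corrupted.append(mask_token)
                targets[i] = tok
            else:
                corrupted.append(tok)
        return " ".join(corrupted), targets

    # every pass over the same sentence yields a new training example, and
    # raising mask_rate over time gives a rough difficulty curve
    print(mask_tokens("the quick brown fox jumps over the lazy dog", mask_rate=0.3, seed=1))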

pedrozieg | today at 5:13 PM

What I like about this writeup is that it quietly demolishes the idea that you need DeepMind-scale resources to get “superhuman” RL. The headline result is less about 2048 and Tetris and more about treating the data pipeline as the main product: careful observation design, reward shaping, and then a curriculum that drops the agent straight into high-value endgame states so it actually sees them in the first place. Once your env runs at millions of steps per second on a single 4090, the bottleneck is human iteration on those choices, not FLOPs.

The happy Tetris bug is also a neat example of how “bad” inputs can act like curriculum or data augmentation. Corrupted observations forced the policy to be robust to chaos early, which then paid off when the game actually got hard. That feels very similar to tricks in other domains where we deliberately randomize or mask parts of the input. It makes me wonder how many surprisingly strong RL systems in the wild are really powered by accidental curricula that nobody has fully noticed or formalized yet.
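
For concreteness, that "drop the agent into endgame states" trick can be sketched as a generic environment wrapper (hypothetical interface, not the article's code): with some probability, reset into a saved late-game snapshot instead of the normal starting position.

    import random

    class EndgameResetWrapper:
        # Hypothetical wrapper, not the article's code. Assumes the wrapped
        # env exposes reset() -> obs, step(action), and set_state(snapshot)
        # -> obs for restoring saved board states.
        def __init__(self, env, endgame_prob=0.5, buffer_size=1000, seed=0):
            self.env = env
            self.endgame_prob = endgame_prob
            self.buffer = []  # snapshots of hard-to-reach, high-value states
            self.buffer_size = buffer_size
            self.rng = random.Random(seed)

        def record_endgame(self, snapshot):
            # call when a strong trajectory reaches a valuable late-game state
            self.buffer.append(snapshot)
            if len(self.buffer) > self.buffer_size:
                self.buffer.pop(0)

        def reset(self):
            if self.buffer and self.rng.random() < self.endgame_prob:
                # drop the agent straight into a stored late-game situation
                return self.env.set_state(self.rng.choice(self.buffer))
            return self.env.reset()

        def step(self, action):
            return self.env.step(action)

The endgame_prob knob then controls how much of training is spent in those otherwise unreachable states.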

jsuarez5341 | today at 5:36 PM

[dead]

hiddencost | today at 4:29 PM

Those are not hard tasks ...

kgwxd | today at 6:14 PM

Great, add "curriculum" to the list of words that will spark my interest in human learning, only for it to be about garbage AI. I want HN with a hard rule against AI posts.
