WOPR used reinforcement learning and could learn from its simulated mistakes. LLMs can't do that without some sort of RL harness. :)
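
For anyone wondering what "RL harness" means concretely, here's a minimal sketch of the loop, assuming a toy softmax policy over four canned moves and a made-up reward function standing in for the simulator. A real LLM setup (RLHF with PPO or similar) replaces the logit vector with the model's weights and the reward with a learned or programmatic scorer, but the shape is the same: act, get scored, nudge the policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy": softmax over four possible moves (stand-ins for model outputs).
# In a real harness the policy is the LLM itself and updates go through its
# weights; here it's just a logit vector so the whole loop fits on a page.
MOVES = ["launch", "stand_down", "negotiate", "play_chess"]
logits = np.zeros(len(MOVES))

def reward(move: str) -> float:
    # Hypothetical simulator: the only winning move is not to play.
    return 1.0 if move == "stand_down" else -1.0

LEARNING_RATE = 0.5

for step in range(200):
    # Sample an action from the current policy.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(len(MOVES), p=probs)

    r = reward(MOVES[action])

    # REINFORCE update: grad of log pi(action) w.r.t. the logits is
    # onehot(action) - probs; scale by reward to push probability toward
    # rewarded moves and away from punished ones.
    grad = -probs
    grad[action] += 1.0
    logits += LEARNING_RATE * r * grad

probs = np.exp(logits - logits.max())
probs /= probs.sum()
print({m: round(p, 3) for m, p in zip(MOVES, probs)})
# The policy ends up concentrated on "stand_down" -- it learned
# from its simulated mistakes, which is exactly what the harness buys you.
```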