This is a good way to benchmark models. We [the SWE-bench team] took the meta version of this and implemented it as a new benchmark called CodeClash.
We have agents implement agents that play games against each other: Claude isn't playing against GPT directly; rather, an agent written by Claude plays poker against an agent written by GPT, and this really tough task leads to very interesting findings on AI for coding.
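To make the setup concrete, here's a minimal sketch of the agent-vs-agent idea; the bot names, strategies, and the toy "high card" game are stand-ins for illustration, not the actual CodeClash harness or its games:

```python
# Rough, hypothetical sketch: each "bot" is a program produced by a coding
# agent, and the harness just pits the two programs against each other and
# keeps score over many rounds.
import random
from typing import Callable

# Stand-in game: "high card draw" instead of poker, purely for illustration.
# A bot is any callable that takes its hand (list of card ranks) and returns
# the index of the card it wants to play.

def claude_written_bot(hand: list[int]) -> int:
    # Hypothetical strategy a Claude-written agent might produce: play the highest card.
    return max(range(len(hand)), key=lambda i: hand[i])

def gpt_written_bot(hand: list[int]) -> int:
    # Hypothetical strategy a GPT-written agent might produce: play a random card.
    return random.randrange(len(hand))

def play_match(bot_a: Callable, bot_b: Callable, rounds: int = 1000) -> tuple[int, int]:
    """Run many rounds and count wins for each bot."""
    wins_a = wins_b = 0
    deck = list(range(2, 15)) * 4  # card ranks 2..14, four suits
    for _ in range(rounds):
        random.shuffle(deck)
        hand_a, hand_b = deck[:5], deck[5:10]
        card_a = hand_a[bot_a(hand_a)]
        card_b = hand_b[bot_b(hand_b)]
        if card_a > card_b:
            wins_a += 1
        elif card_b > card_a:
            wins_b += 1
    return wins_a, wins_b

if __name__ == "__main__":
    a, b = play_match(claude_written_bot, gpt_written_bot)
    print(f"Claude-written bot: {a} wins, GPT-written bot: {b} wins")
```

The point isn't the toy game itself: the hard part is that the coding agents have to produce working, competitive programs, and the match results give a signal about how good that code actually is.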
Cool to see Core War! I feel it's mostly forgotten by now. My dad still plays it to this day, though, and even attends tournaments.
The leaderboard looks very outdated.
>this really tough task leads to very interesting findings on AI for coding
Are you going to share those with the class, or?