> This is why the video of Claude solving level 1 at the top was actually (dramatic musical cue) staged, and only possible via a move-for-move tutorial that Claude nicely rationalized post hoc.
One of the things this arc of history has taught me is that post-hoc rationalization is depressingly easy. Especially if it doesn't have to make sense, though even producing rationalizations that pass basic logical checks isn't too difficult. Ripping a rationalization apart often requires identifying novel, non-obvious logical checks.
I thought I had learned that time and time again from human politics, but AI somehow made it even clearer than I thought possible. Perhaps simply because of knowing that a machine is doing it.
Edit: after watching the video more carefully:
> "This forms WALL IS WIN horizontally. But I need "FLAG IS WIN" instead. Let me check if walls now have the WIN property. If they do, I just need to touch a wall to win. Let me try moving to a wall:
There's something extremely uncanny-valley about this. A human player absolutely would accidentally win like this, and have similar reasoning (not expressed so formally) about how the win was achieved after the fact. (Winning depends on the walls having WIN and also not having STOP; many players get stuck on later levels, even after having supposedly learned the lesson of this one, by trying to make something WIN and walk onto it while it is still STOP.)
But the WIN block was not originally in line with the WALL IS text, so a human player would never accidentally form the rule, but would only do it with the expectation of being able to win that way. Especially since there was already an obvious, clear path to FLAG — a level like this has no Sokoban puzzle element to it; it's purely about learning that the walls only block the player because they are STOP.
Nor would (from my experience watching streamers at least) a human spontaneously notice that the rule "WALL IS WIN" had been formed and treat that as a cue to reconsider the entire strategy. The natural human response to unintentionally forming a useful rule is to keep pushing in the same direction.
On the other hand, an actually dedicated AI system (in the way that AlphaGo was dedicated to Go) could, I'm sure, figure out a game like Baba Is You pretty easily. It would lack the human instinct to treat the walls as if they were implicitly always STOP; so it would never struggle with overriding it.
This is interesting. If you approach this game as individual moves, the search tree is really deep. However, most levels can be expressed as a few intermediate goals.
In some ways, this reminds me of the history of AI Go (board game). But the resolution there was MCTS, which wasn't at all what we wanted (insofar as MCTS is not generalizable to most things).
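To make the "deep move tree vs. a few subgoals" point concrete, here's a toy back-of-the-envelope in Python (all numbers are made up for illustration):

    moves_per_step = 5        # assumed: up/down/left/right/wait
    solution_length = 40      # assumed length of a full move-level solution
    flat_search = moves_per_step ** solution_length       # ~9e27 candidate move sequences

    # If the same level decomposes into 4 intermediate goals of ~10 moves each,
    # and each can be searched more or less independently, the work collapses:
    hierarchical = 4 * moves_per_step ** 10                # ~4e7
    print(f"{flat_search:.2e} vs {hierarchical:.2e}")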
Do you think the performance can be improved if the representation of the level is different?
I've seen AI struggle with ASCII, but when the level is presented as other data structures, it performs better.
edit:
e.g. JSON with structured coordinates, graph-based JSON, or a semantic representation with coordinates
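For example, a minimal sketch of turning an ASCII grid into structured coordinates that could be serialized to JSON (the symbols and schema here are made up, not anything the article actually uses):

    import json

    ascii_level = [
        "B..F",   # B = Baba, F = Flag (hypothetical legend)
        "....",
    ]
    legend = {"B": "baba", "F": "flag"}
    objects = [
        {"type": legend[ch], "x": x, "y": y}
        for y, row in enumerate(ascii_level)
        for x, ch in enumerate(row)
        if ch in legend
    ]
    print(json.dumps({"width": 4, "height": 2, "objects": objects}, indent=2))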
“Reasoning models like o3 might be better equipped to come up with a plan, so a natural step would be to try switching to those, away from Claude Desktop…”
But…Claude Desktop does have a reasoning mode for both Sonnet and Opus.
I think it’s a great idea for a benchmark.
One key difference from ARC in its current iteration is that there is a defined and learnable game physics.
ARC requires generalization from a few examples for problems that are not well defined per se.
Hence ARC currently requires the models that work on it to possess biases that are comparable to the ones that humans possess.
I have noticed a trend of the word "Desiderata" appearing in a lot more writing. Is this an LLM word or is it just in fashion? Most people would use "Desires" or "Goals," so I assume this might be the new "delve."
I once made an "RC plays Baba Is You" setup that controlled the game through a single shared browser, which streamed video out and sent controls back to the game. Was quite fun!
But I am fairly sure all of the Baba Is You solutions are present in the training data for modern LLMs, so it won't make for a good eval.
Baba Is You is a great game, part of a broader family of 2D grid puzzle games.
(Shameless plug: I am one of the developers of Thinky.gg (https://thinky.gg), a thinky puzzle game site with a 'shortest path'-style game [Pathology] and a Sokoban variant [Sokoath].)
These games are typically NP-hard, so the techniques solvers have employed for Sokoban (or Pathology) have been brute-force search with various heuristics (like BFS, deadlock detection, and Zobrist hashing). However, once levels get beyond a certain size with enough movable blocks, you end up exhausting memory pretty quickly.
These types of games are still "AI Proof" so far, in that LLMs are absolutely awful at solving them while humans are very good (so it seems reasonable to consider them for ARC-AGI benchmarks). Whenever a new reasoning model gets released I typically try it on some basic Pathology levels (like 'One at a Time' https://pathology.thinky.gg/level/ybbun/one-at-a-time) and they fail miserably.
Simple level code for the above level (1 is a wall, 2 is a movable block, 4 is the starting block, 3 is the exit):
000
020
023
041
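For a sense of what the brute-force approach looks like, here is a minimal BFS sketch over this digit encoding. It's a toy, not the real Pathology engine: I'm assuming the player moves orthogonally and pushes a movable block one cell, and that a block can't be pushed into a wall, another block, or off the grid; real solvers add the deadlock detection and Zobrist hashing mentioned above.

    from collections import deque

    WALL, BLOCK, EXIT, START = "1", "2", "3", "4"

    def solve(level):
        grid = [list(row) for row in level.split()]
        h, w = len(grid), len(grid[0])
        cells = [(r, c) for r in range(h) for c in range(w)]
        player = next(p for p in cells if grid[p[0]][p[1]] == START)
        exit_ = next(p for p in cells if grid[p[0]][p[1]] == EXIT)
        walls = {p for p in cells if grid[p[0]][p[1]] == WALL}
        blocks = frozenset(p for p in cells if grid[p[0]][p[1]] == BLOCK)

        def open_cell(p, blocks):
            r, c = p
            return 0 <= r < h and 0 <= c < w and p not in walls and p not in blocks

        seen = {(player, blocks)}
        queue = deque([(player, blocks, "")])
        while queue:
            pos, blocks, path = queue.popleft()
            if pos == exit_:
                return path                       # shortest move sequence found
            for dr, dc, m in ((-1, 0, "U"), (1, 0, "D"), (0, -1, "L"), (0, 1, "R")):
                nxt = (pos[0] + dr, pos[1] + dc)
                nb = blocks
                if nxt in blocks:                 # stepping into a block: try to push it
                    beyond = (nxt[0] + dr, nxt[1] + dc)
                    if not open_cell(beyond, blocks):
                        continue
                    nb = (blocks - {nxt}) | {beyond}
                elif not open_cell(nxt, blocks):
                    continue
                state = (nxt, nb)
                if state not in seen:             # real solvers hash states (e.g. Zobrist) here
                    seen.add(state)
                    queue.append((nxt, nb, path + m))
        return None

    print(solve("000\n020\n023\n041"))            # e.g. LUUURRDD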
Similar to the OP, I've found Claude can't manage rule dynamics, blocked paths, or game objectives well, and it spits out random results.
It reminds me of https://en.m.wikipedia.org/wiki/The_Ricks_Must_Be_Crazy. Hope we are not ourselves in some sort of simulation ;)
There are numerous guides for all levels of Baba Is You available. I think it's likely that any modern LLM has them as part of its training dataset. That severely degrades this as a test for complex solution capabilities.
Still, it's interesting to see the challenges with dynamic rules (like "Key is Stop") that change where you are able to move, etc.
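As a toy illustration of what those dynamic rules mean mechanically, here is a sketch that scans a word grid for horizontal and vertical "X IS Y" statements (a made-up representation, not the game's actual data model):

    def active_rules(words):
        # words is a 2D grid of text-block labels; "" means no text block in that cell
        h, w = len(words), len(words[0])
        rules = set()
        for r in range(h):
            for c in range(w):
                if c + 2 < w and words[r][c] and words[r][c + 1] == "IS" and words[r][c + 2]:
                    rules.add((words[r][c], words[r][c + 2]))   # horizontal rule
                if r + 2 < h and words[r][c] and words[r + 1][c] == "IS" and words[r + 2][c]:
                    rules.add((words[r][c], words[r + 2][c]))   # vertical rule
        return rules

    level_text = [
        ["WALL", "IS", "STOP"],
        ["",     "",   ""    ],
        ["KEY",  "IS", "STOP"],
    ]
    print(active_rules(level_text))   # {('WALL', 'STOP'), ('KEY', 'STOP')} (order may vary)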
This is definitely a case for fine-tuning an LLM on this game's data. There is currently no LLM out there that is able to play many different kinds of games well.
I would be way more interested in it playing niche community levels, because I suspect a huge reason it's able to solve these levels is that it was trained on a million Baba Is You walkthroughs. Same with people using Pokemon as a way to test LLMs; it really just depends on how well the model already knows the game.
I suspect real AGI evals aren't going to be "IQ test"-like which is how I'd categorize these benchmarks.
LLMs will probably continue to scale on such benchmarks, as they have been, without needing real ingenuity or intelligence.
Obviously I don't know the answer but I think it's the same root problem as why neural networks will never lead to intelligence. We're building and testing idiot savants.
In my experience LLMs have a hard time working with text grids like this. They seem to find columns harder to “detect” than rows, probably because the input presents the grid as one giant row, if that makes sense.
It has the same problem with playing chess. But I’m not sure if there is a data type it could work with for this kind of game. Currently it seems more like LLMs can’t really work on spatial problems. But this should actually be something that can be fixed (pretty sure I saw an article about it on HN recently)
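A quick illustration of why columns are less "local" than rows once a grid is flattened into text (toy numbers, assuming newline-separated rows):

    rows, cols = 10, 10

    def flat_index(r, c):
        # position of cell (r, c) in the newline-joined string
        return r * (cols + 1) + c   # +1 accounts for the newline after each row

    # Horizontally adjacent cells are next to each other in the character stream...
    print(flat_index(3, 4) - flat_index(3, 3))   # 1
    # ...but vertically adjacent cells are a whole row apart:
    print(flat_index(4, 3) - flat_index(3, 3))   # 11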