Here's a high level overview for a programmer audience (I'm listed as an author but my contributions were fairly minor):
[See specifics of the pipeline in Table 3 of the linked paper]
* There are roughly 181 million essentially different Turing machines with 5 states; these were first enumerated exhaustively
* Then, each machine was run for about 100 million steps. Of the 181 million, about 25% of them halt within this many steps, including the Champion, which ran for 47,176,870 steps before halting. (A minimal sketch of this brute-force stage follows this list.)
* This leaves 140 million machines which run for a long time.
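To make that first stage concrete, here is a minimal Python sketch of a 2-symbol Turing machine simulator with a step cap. The encoding (a dict from (state, symbol) to (write, move, next state), with a missing entry meaning "halt") and the function name `run` are my own illustrative choices, not the project's actual code; the example machine is the classic 2-state busy beaver champion, which halts after 6 steps.

```python
# Toy 2-symbol Turing machine simulator (illustration only, not the project's code).
# A machine is a dict: (state, symbol) -> (symbol_to_write, 'L' or 'R', next_state);
# a missing entry means the machine halts there.

def run(tm, max_steps):
    """Run `tm` from a blank tape; return ('halted', step) or ('unknown', max_steps)."""
    tape, head, state = {}, 0, 'A'      # sparse tape, blank cells read as 0
    for step in range(1, max_steps + 1):
        key = (state, tape.get(head, 0))
        if key not in tm:               # undefined transition: the machine halts
            return ('halted', step)
        write, move, state = tm[key]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ('unknown', max_steps)       # out of budget; tells us nothing about halting

# Example: the 2-state busy beaver champion, which halts after 6 steps.
bb2 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'),            # ('B', 1) is undefined, i.e. halt
}
print(run(bb2, 100))                    # ('halted', 6)
```

Note the asymmetry: this loop can report "halted" or "out of budget", but it can never report "runs forever", no matter how large the cap is.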
So the question is: do those TMs run FOREVER, or have we just not run them long enough?
The goal of the BB challenge project was to answer this question. There is no universal algorithm that works on all TMs, so instead a series of (semi-)deciders were built. Each decider takes a TM, and (based on some proven heuristic) classifies it as either "definitely runs forever" or "maybe halts".
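In code terms, the overall architecture is just a chain of such functions. This is a hypothetical sketch (the names and verdict strings are mine, not the project's); I've given each decider three possible verdicts rather than two, since in practice the same passes also notice machines that simply halt. The key property is that a decider is only trusted when it claims a definite answer.

```python
# Hypothetical sketch of the decider pipeline; names and verdicts are mine.
# Each decider maps a machine to 'runs forever', 'halts', or 'unknown',
# and only the first two verdicts are ever trusted (and backed by proof).

def classify(tm, deciders):
    for decider in deciders:
        verdict = decider(tm)
        if verdict in ('runs forever', 'halts'):
            return verdict              # settled by this decider
    return 'holdout'                    # nothing worked; needs a stronger decider or a hand proof
```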
The following deciders ended up being used:
* Loops: run the TM for a while, and if it re-enters a previously seen configuration, it definitely has to loop forever. Around 90% of machines either do this or halt, so this covers most of them. (A minimal sketch of the idea follows this list.)
6.01 million TMs remain.
* NGram CPS: abstractly simulates each TM, tracking a set of binary "n-grams" that are allowed to appear on each side of the head. This computes an over-approximation of the reachable configurations; if none of those abstract states can ever take the halting transition, then the original machine cannot halt either. (A much-simplified illustration of the over-approximation idea also follows this list.)
Covers 6.005 million TMs. Around 7000 TMs remain.
* RepWL: Attempts to derive counting rules that describe TM configurations. The NGram model can't "count", so this catches many machines whose patterns depend on parity. Covers 6557 TMs.
There are only a few hundred TMs left.
* FAR: attempts to describe the set of tape configurations a TM can reach as a regular language (a regex/FSM); if no halting configuration fits that description, the machine cannot halt.
* WFAR: like FAR but adds weighted edges, which allows some non-regular languages (like matching parentheses) to be described.
* Sporadics: around 13 machines had complicated behavior that none of the previous deciders could handle, so individual hand-written proofs (later translated into Rocq) were produced for them.
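To make the Loops idea concrete, here is a minimal sketch in the same toy encoding as the simulator above. It only catches machines that revisit an exact configuration (state, head position, tape contents); my understanding is that the real Loops decider is more sophisticated than this, so treat it purely as an illustration of the semi-decider shape.

```python
# Sketch of a Loops-style semi-decider (illustration only): a machine that
# revisits an exact configuration can never halt, because it will just cycle.

def loops_decider(tm, max_steps=1000):
    tape, head, state = {}, 0, 'A'
    seen = set()
    for _ in range(max_steps):
        # A configuration is the complete machine state: control state,
        # head position, and the non-blank tape cells (frozen for hashing).
        config = (state, head, frozenset(tape.items()))
        if config in seen:
            return 'runs forever'       # exact repeat: provably loops forever
        seen.add(config)
        key = (state, tape.get(head, 0))
        if key not in tm:
            return 'halts'              # undefined transition: it halted
        write, move, state = tm[key]
        if write:
            tape[head] = write
        else:
            tape.pop(head, None)        # store only non-blank cells
        head += 1 if move == 'R' else -1
    return 'unknown'                    # inconclusive; hand the machine to the next decider

# A (made-up) machine that just bounces between two cells forever:
bouncer = {
    ('A', 0): (0, 'R', 'B'), ('A', 1): (0, 'R', 'B'),
    ('B', 0): (0, 'L', 'A'), ('B', 1): (0, 'L', 'A'),
}
print(loops_decider(bouncer))           # 'runs forever'
```

This has exactly the shape the `classify` sketch above expects, and it is only ever believed when it answers "runs forever" or "halts".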
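NGram CPS itself is too involved to sketch faithfully here, but the shape of its argument can be shown with a deliberately much coarser over-approximation (my own simplification, not the real decider): throw away the tape entirely and only track which control states are reachable, pessimistically assuming any symbol might be under the head. If even that generous abstraction can never execute a halting transition, the concrete machine certainly can't either. NGram CPS applies the same one-sided logic with a much tighter abstraction (the allowed n-grams around the head), which is why it proves non-halting for millions of machines rather than a handful.

```python
# Deliberately coarse illustration of the over-approximation principle behind
# NGram CPS (NOT the real algorithm): ignore the tape, assume any symbol can be
# read in any reachable state, and check whether a halting transition is even
# reachable under that assumption.

def cannot_halt_coarse(tm, symbols=(0, 1)):
    """True only if the machine provably never halts under the tape-free abstraction."""
    reachable, frontier = {'A'}, ['A']
    while frontier:
        state = frontier.pop()
        for sym in symbols:
            key = (state, sym)
            if key not in tm:           # a halting transition might be reachable,
                return False            # so we cannot conclude anything
            _, _, nxt = tm[key]
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return True                         # no reachable state can ever halt

# A (made-up) machine with no reachable halting transition: it just runs right forever.
runner = {('A', 0): (1, 'R', 'A'), ('A', 1): (1, 'R', 'A')}
print(cannot_halt_coarse(runner))       # True
```

Note that an exact-repeat loop check would never catch `runner`, because its configurations keep growing as it writes 1s to the right; gaps like that are exactly what the abstraction-based deciders are for.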
All of the deciders were eventually written in Rocq, which means they are coupled with a formally verified proof that they actually work as intended ("if this function says a TM runs forever, then the corresponding mathematical TM really does run forever").
Hence, all 5-state TMs have been formally classified as halting or non-halting. The longest-running halter is therefore the champion: it was already suspected to be the champion, but this proves that no 5-state TM halts after running even longer.
Thank you for the write-up, it is very interesting!
Is there some reason why Rocq is the proof assistant of choice and not one of the others?
This is a fantastic summary that's very easy to understand. Thank you very much for writing it!
So how many sporadics would be left if we ran the same pipeline on 6-state TMs?
Thanks for writing that up! Was there anything to be learned from those 13 sporadics or are they simply truly sporadic?
How much computing time does it take to run a machine for 100 million steps on a modern EPYC system?
> There is no universal algorithm that works on all TMs
Is this suspected or proven? I would love to read the proof if it exists.
Thanks. Amazing work.
In Coq-BB5, machines are not run for 100M steps but are thrown directly into the pipeline of deciders. Most halting machines are detected by Loops using low parameters (max 4,100 steps), and only 183 machines are simulated for up to 47M steps to deduce halting.
In the legacy bbchallenge seed database, machines were run for exactly 47,176,870 steps, and we were left with about 80M non-halting candidates. The discrepancy comes from Coq-BB5 not excluding 0RB machines, among other minor factors.
Also, "There are 181 million ish essentially different Turing Machines with 5 states", it is important to note that we end up with this number only after the proof is completed, knowing this number is as hard as solving BB(5).