The title is obviously the wrong way around: exchanges turn distributed logs into order books. The distributed part is a resilience decision rather than essential to the design; technically, writing to a local disk would give persistence too, just with less ability to recover, or with potential gaps after a failure (remember there is also a sequence published on the other end, the market data feed). As noted in the article, the sequencer is a single-threaded, non-parallelisable process, and distribution is just one configuration of that single-threaded path. Parallelisation is feasible to some extent by sharding across the order books themselves, though dependencies between books may complicate this.
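To make the "single-threaded path" concrete, here is a rough Go sketch (types and names are mine, not the article's, and the real thing is obviously far more involved): a single loop stamps every inbound order with a monotonically increasing sequence number and fans the identical sequenced stream out to the log/replicas and to the market data feed.

```go
package main

import "fmt"

// Order is a hypothetical inbound message; a real gateway would carry far
// more fields (side, price, quantity, client IDs, ...).
type Order struct {
	Book string // which order book the order targets
	Raw  string // opaque payload for the sketch
}

// SeqOrder is an order after the sequencer has stamped it.
type SeqOrder struct {
	Seq   uint64
	Order Order
}

// runSequencer is the single-threaded path: one loop drains the inbound
// channel, assigns a strictly increasing sequence number, and fans the same
// sequenced stream out to the log (disk or replicas) and to the market data
// feed, so both sides observe the identical ordering.
func runSequencer(in <-chan Order, log, feed chan<- SeqOrder) {
	var seq uint64
	for o := range in {
		seq++ // the only place ordering is decided
		s := SeqOrder{Seq: seq, Order: o}
		log <- s  // persistence / replication
		feed <- s // public market data carries the same sequence
	}
	close(log)
	close(feed)
}

func main() {
	in, log, feed := make(chan Order, 16), make(chan SeqOrder, 16), make(chan SeqOrder, 16)
	go runSequencer(in, log, feed)

	in <- Order{Book: "AAPL", Raw: "buy 100 @ 190.00"}
	in <- Order{Book: "MSFT", Raw: "sell 50 @ 410.00"}
	close(in)

	for s := range log {
		fmt.Printf("log  seq=%d %s %s\n", s.Seq, s.Order.Book, s.Order.Raw)
	}
	for s := range feed {
		fmt.Printf("feed seq=%d %s %s\n", s.Seq, s.Order.Book, s.Order.Raw)
	}
}
```

Sharding would just mean running one of these loops per order book (or per group of books), each producing its own sequence; cross-book dependencies are exactly what breaks that neat picture.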
It would not surprise me at all if the sequencing step were done on an FPGA, processing many network inputs at line rate and ordering them against a shared monotonic clock. That would give it some amount of parallelism.
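Purely to illustrate the idea (this is a software stand-in and entirely my guess at the mechanism, not FPGA code): capture can happen in parallel per port, with a shared timestamp deciding the single total order that comes out the back.

```go
package main

import "fmt"

// Stamped is a message tagged at ingress with a shared monotonic clock.
// On an FPGA the stamp would come from a hardware timestamping unit;
// here it is just a uint64 for illustration.
type Stamped struct {
	TS   uint64
	Port int
	Msg  string
}

// mergeByClock folds two per-port streams (each already in arrival order)
// into one deterministic global sequence by comparing timestamps: parallel
// capture at the edges, a single total order out the back.
func mergeByClock(a, b []Stamped) []Stamped {
	out := make([]Stamped, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i].TS <= b[j].TS {
			out = append(out, a[i])
			i++
		} else {
			out = append(out, b[j])
			j++
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}

func main() {
	portA := []Stamped{{TS: 1, Port: 0, Msg: "order-1"}, {TS: 4, Port: 0, Msg: "order-3"}}
	portB := []Stamped{{TS: 2, Port: 1, Msg: "order-2"}, {TS: 7, Port: 1, Msg: "order-4"}}
	for seq, s := range mergeByClock(portA, portB) {
		fmt.Printf("seq=%d ts=%d port=%d %s\n", seq+1, s.TS, s.Port, s.Msg)
	}
}
```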