A write-ahead log isn't a performance tool for batching changes; it's a tool for making random writes durable. You write your intended changes to the log, fsync it (which costs you at least one 4 KiB block write), then make the actual changes on disk just as if you didn't have a WAL.
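A minimal sketch of that write-then-apply cycle in Python (the file names, record format, and the recovery/replay side are all made up for illustration):

```python
import os

def wal_write(log_path, data_path, offset, payload):
    # 1. Record the intended change in the log and fsync it.
    #    The fsync forces at least one 4 KiB block to disk,
    #    even if the record itself is only a few bytes.
    with open(log_path, "ab") as log:
        record = (offset.to_bytes(8, "little")
                  + len(payload).to_bytes(4, "little")
                  + payload)
        log.write(record)
        log.flush()
        os.fsync(log.fileno())

    # 2. Only now perform the random write against the database
    #    file itself (assumed to already exist), exactly as if
    #    there were no WAL. If we crash before this completes,
    #    recovery replays the log record.
    with open(data_path, "r+b") as db:
        db.seek(offset)
        db.write(payload)
```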
If you want some sort of sub-block batching, you need a structure that isn't random in the first place, for instance an LSM tree (where you write all of your changes sequentially to a log and then do compaction later), and then solve durability in some other way.
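A toy sketch of that shape (thresholds and formats are arbitrary; real LSMs compact incrementally per level and handle durability of the log separately):

```python
class TinyLSM:
    def __init__(self, log_path):
        self.log = open(log_path, "ab")
        self.memtable = {}   # absorbs random writes in memory
        self.runs = []       # older sorted runs, newest first

    def put(self, key, value):
        # Sequential append: many small writes land in the same
        # log blocks, which is where the sub-block batching comes
        # from (one fsync can cover a whole batch of changes).
        self.log.write(f"{key}={value}\n".encode())
        self.memtable[key] = value
        if len(self.memtable) >= 4:  # tiny flush threshold for the demo
            self.runs.insert(0, sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:        # newest run wins
            for k, v in run:
                if k == key:
                    return v
        return None

    def compact(self):
        # Merge all runs into one, keeping only the newest value
        # per key; this is the deferred cleanup mentioned above.
        merged = {}
        for run in reversed(self.runs):  # apply oldest first
            merged.update(dict(run))
        self.runs = [sorted(merged.items())]
```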
You can unify the database with the write-ahead log using a persistent data structure. It also gives you cheap or free snapshots/checkpoints.
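A toy path-copying map shows the idea: each insert allocates new nodes along one root-to-leaf path and shares everything else, so every old root is a free, immutable snapshot. If the nodes live in the same append-only file as the log, the database and its WAL are literally the same structure (this in-memory version is just a sketch of that):

```python
class Node:
    __slots__ = ("key", "value", "left", "right")
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def insert(root, key, value):
    # Returns a NEW root; the old root (a snapshot) is untouched.
    if root is None:
        return Node(key, value)
    if key < root.key:
        return Node(root.key, root.value, insert(root.left, key, value), root.right)
    if key > root.key:
        return Node(root.key, root.value, root.left, insert(root.right, key, value))
    return Node(key, value, root.left, root.right)

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None

# Each version is a checkpoint for free:
v1 = insert(None, "a", 1)
v2 = insert(v1, "b", 2)   # v1 still answers queries as of version 1
assert lookup(v1, "b") is None and lookup(v2, "b") == 2
```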
> A write-ahead log isn't a performance tool for batching changes; it's a tool for making random writes durable.
Why not both?