Though at the limit, it would have to be at least O(sqrt(n)) thanks to the Bekenstein bound [0]. And of course, as mentioned in TFA, you can always do better if you can get away with local random access in parallel, rather than global random access.
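Roughly where the sqrt comes from (a back-of-the-envelope sketch of the usual argument, constants and rigor omitted): the Bekenstein bound caps the entropy of a region by its radius times its energy, and the energy you can pack into that region is itself capped by the Schwarzschild limit before it collapses into a black hole, so the maximum bit count scales with area rather than volume:

```latex
% Bekenstein bound:  S <= 2*pi*k*R*E / (hbar*c)
% Schwarzschild cap: E is at most of order R*c^4 / G
\[
n_{\max} \propto R \, E \propto R^{2}
\quad\Longrightarrow\quad
R \propto \sqrt{n}
\quad\Longrightarrow\quad
t_{\text{access}} \gtrsim \frac{R}{c} = \Omega\!\left(\sqrt{n}\right)
\]
```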
Once you're talking about computers that verge on being black holes, I think you're making too many assumptions about how everything works to give it a speed rating.
Ultimately I think we will go to full NUMA like Sun and others tried. Instead of having L4 and then L5 caches, each core simply has 4GB of local working memory, and you use programming languages that are ready for this. Erlang would be easy, and I think Rust has the semantics to make it work, but it might take years of compiler advancement to make it efficient.
All shared state is communicated through a shared memory pool that is either accessed directly through segregated address ranges or via DMA.
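For what it's worth, here's a minimal sketch of that programming model in today's Rust, with ordinary threads and channels standing in for cores and the interconnect: each worker owns its local working set outright, the only shared state is one explicitly shared read-only pool, and all cross-core communication is an explicit message. The `SharedPool` type and the partitioning scheme are made up for illustration, not anyone's actual design.

```rust
use std::sync::{mpsc, Arc};
use std::thread;

// Hypothetical shared pool: a read-only region every "core" can see,
// standing in for the segregated address range / DMA window.
struct SharedPool {
    data: Vec<u64>,
}

fn main() {
    let pool = Arc::new(SharedPool {
        data: (0..1024).collect(),
    });

    let (result_tx, result_rx) = mpsc::channel();

    for core_id in 0..4 {
        let pool = Arc::clone(&pool);
        let result_tx = result_tx.clone();
        thread::spawn(move || {
            // "Local working memory": owned by this worker, never shared.
            let local: Vec<u64> = pool
                .data
                .iter()
                .skip(core_id)
                .step_by(4)
                .copied()
                .collect();

            // Cross-core communication is an explicit message, Erlang-style,
            // rather than implicit cache-coherent sharing.
            let partial_sum: u64 = local.iter().sum();
            result_tx.send((core_id, partial_sum)).unwrap();
        });
    }
    drop(result_tx);

    // Collect one message per worker; the receiver closes once all senders drop.
    let total: u64 = result_rx.iter().map(|(_, s)| s).sum();
    println!("total = {total}");
}
```

The point of the shape, not the arithmetic: nothing in the worker closure touches another worker's memory, so nothing relies on a coherent global cache hierarchy to be correct.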
If one wants to start talking cosmology, it's unlikely to be the case that arbitrarily long-lived computers are possible; I don't think any of the theories in [0] are conducive to either an infinite-time or an infinite-memory computer, so the strict mathematical definition of Big-O doesn't hold up. IMO it's better to use Big-O as an effective theory for predicting runtime on human-scale computers than to take the mathematical formalism too literally.
[0] https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe?...