I get that this is an interesting theoretical proof. I'm more interested in the inverse: trading memory for time, to make things faster even if they take more space. It seems to me the halting problem is almost a proof that this inverse is impossible in general.
Memoization is likely the best you can do.
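A minimal sketch of that memory-for-time trade: caching naive Fibonacci turns an exponential-time recursion into a linear-time one at the cost of O(n) extra space. (Fibonacci is just an illustrative example here, not anything from the paper.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result: spend memory to avoid recomputation
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Without the cache this recursion makes ~2^90 calls; with it, ~90.
print(fib(90))  # → 2880067194370816120
```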
Dynamic programming, plus divide-and-conquer across multiple cores on a diagonalized problem space of independent subproblems (exploiting the CPU and memory hierarchy), with (hopefully) a final reduce step at the end. Real-world problems are seldom so neat, hence the investment in InfiniBand gear.
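The neat case described above can be sketched like this: split the input into independent chunks, fan them out to workers, then combine the partial results in one reduce step. The work function (sum of squares) is purely illustrative; for genuinely CPU-bound work you would reach for `ProcessPoolExecutor` rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def sum_of_squares(chunk):
    # The independent per-chunk subproblem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Partition the problem space into independent chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Fan out: each chunk is solved with no communication between workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum_of_squares, chunks)
    # Final reduce step combining the partial results.
    return reduce(lambda a, b: a + b, partials, 0)

print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```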
The author of the paper also notes how the result gives an upper bound for how well we can 'trade space for time' in the worst case:
> In this paper, we make another step towards separating P from PSPACE, by showing that there are problems solvable in O(n) space that cannot be solved in n^2/log^c n time for some constant c > 0. [0]
Of course, this is all specifically in the context of multitape Turing machines, which have very different complexity characteristics from the RAM machines we're more used to.
[0] https://arxiv.org/abs/2502.17779