Really? But the whole idea behind the space/time trade-off is that in a bunch of cases, if you want to compute something in less memory, you need to solve it in more steps; or if you want to solve something in fewer steps, you need to use more memory.
This seems to wildly contradict the idea that the amount of memory needed is roughly proportional to the number of steps.
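For example, the kind of thing I have in mind (a toy Python sketch, names are just illustrative):

    # Toy example of the space/time trade-off: pay with steps or pay with memory.
    from functools import lru_cache

    def fib_small_memory(n):
        # Barely any memory beyond the recursion stack, but exponentially many steps.
        return n if n < 2 else fib_small_memory(n - 1) + fib_small_memory(n - 2)

    @lru_cache(maxsize=None)
    def fib_more_memory(n):
        # Spends memory on a table of earlier results to cut the step count to ~n.
        return n if n < 2 else fib_more_memory(n - 1) + fib_more_memory(n - 2)

    assert fib_small_memory(25) == fib_more_memory(25) == 75025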
The number of steps is an (over)estimate of the amount of memory needed: a program running for t steps can touch at most t memory cells.
You can write some trivial programs that need this much memory. But most of them can be optimised to use less space (and still compute the same answer).
The big question was: can you always optimise the space usage down from that ridiculous maximum, and if so, to what extent?
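A toy sketch of both points (Python, names purely illustrative): the first version uses extra memory proportional to its step count, the second computes the same answer in O(1) extra space.

    # A program running for ~t steps can touch at most ~t memory cells,
    # so "memory needed <= steps taken" is a trivial upper bound -- but it's
    # usually very loose, as the optimised version shows.

    def total_wasteful(xs):
        # Keeps every intermediate sum around: extra memory grows with the step count.
        partials = []
        running = 0
        for x in xs:                  # ~len(xs) steps
            running += x
            partials.append(running)  # ~len(xs) extra cells
        return partials[-1]

    def total_optimised(xs):
        # Same answer, roughly the same number of steps, but O(1) extra memory.
        running = 0
        for x in xs:
            running += x
        return running

    data = list(range(1, 1001))
    assert total_wasteful(data) == total_optimised(data) == 500500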
> in a bunch of cases
That's precisely the point: of course there are obvious ways to trade off space and time for many specific problems, and the smart people who have spent their lives thinking about this are well aware of it. :-) The question is whether it can always be done: for an arbitrary computation that takes t time, can it always be simulated in much less space (using more time, of course), no matter what the computation is? Lots of people have tried, and the best known bound until now was space t/log t, from the 1970s.
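To put the bounds side by side (my paraphrase, not from the thread; LaTeX just for the notation, and the last line is the new result as I understand it):

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Trivial bound: a machine that runs for t steps touches at most t cells.
    \[ \mathrm{TIME}(t) \subseteq \mathrm{SPACE}(t) \]
    % Hopcroft--Paul--Valiant (1975): the best known simulation for ~50 years.
    \[ \mathrm{TIME}(t) \subseteq \mathrm{SPACE}(t/\log t) \]
    % The new result (Ryan Williams' simulation, if I'm recalling the bound right).
    \[ \mathrm{TIME}(t) \subseteq \mathrm{SPACE}\bigl(\sqrt{t \log t}\bigr) \]
    \end{document}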
Edit: A comment on a previous discussion of this result, by the author of the Quanta article, substantiates this and links to researchers' opinions that it's the universality (it works for every problem) that's the surprising part: https://news.ycombinator.com/item?id=44075191