One of my favourite science-communication YouTubers explained this in great detail: https://youtu.be/8JuWdXrCmWg
Highly recommend
Related:
For algorithms, a little memory outweighs a lot of time. 343 points, 139 comments, 39 days ago. https://news.ycombinator.com/item?id=44055347
You need much less memory than time. 126 points, 11 comments, 22 days ago. https://news.ycombinator.com/item?id=44212855
I get that this is an interesting theoretical proof, but I'm more interested in the inverse: trading memory for time to make things faster, even if they take more space. It seems to me the halting problem is almost a proof that this inverse is impossible.
Memoization is likely the best you can do.
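Roughly what that trade looks like in practice (a minimal sketch, using a naive Fibonacci as the pure function to cache, purely for illustration):

    from functools import lru_cache

    # Naive recursion: exponential time, but only O(n) stack space.
    def fib_slow(n: int) -> int:
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    # Memoized: linear time, at the cost of O(n) extra memory for the cache.
    @lru_cache(maxsize=None)
    def fib_fast(n: int) -> int:
        return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

    print(fib_fast(200))  # returns instantly; fib_slow(200) would take ages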
[ made this a top-level comment as I don't see it mentioned in other comments: ]
The result doesn't actually apply to _all_ algorithms, only oblivious ones. Oblivious algorithms are ones whose sequence of operations and memory accesses is fixed in advance by the input size, i.e. step i does not depend on decisions made in steps j < i. Almost all useful algorithms are non-oblivious, with some notable exceptions [1]. This work is a step towards proving the result for the general case, and has little practical use (which is perfectly okay).
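For intuition, here is my own toy contrast (not from the paper): an oblivious routine touches memory in a pattern fixed by the input size, while a non-oblivious one steers its accesses with data-dependent decisions.

    # Oblivious: the indices visited are 0, 1, ..., n-1 regardless of the values.
    def sum_all(xs):
        total = 0
        for i in range(len(xs)):
            total += xs[i]
        return total

    # Non-oblivious: which index is inspected next depends on comparisons,
    # i.e. on decisions made in earlier steps.
    def binary_search(xs, target):
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo  # index of the first element >= target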
Link to paper: https://arxiv.org/abs/2502.17779
One thing that's surprised me throughout my career is just how inefficient most of the code that I've inherited is. I suspect we could do a lot more with the hardware we have if we simply became better at programming.
(Perhaps more AI coaching could help?)
I don't understand what they are saying at all. If I have a computation that requires going through a loop n times, why should the memory requirements increase with n at all?
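(To make my confusion concrete, here is a toy example of my own: time grows with n, but the memory is a fixed couple of variables.)

    # n loop iterations, O(1) memory: just a counter and an accumulator.
    def sum_to(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    # The theorem goes the other way around: ANY computation taking t steps,
    # even one that scribbles all over its memory, can be re-simulated using
    # only on the order of sqrt(t) bits of workspace.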
This seems very theoretical: just an upper bound on the space needed to simulate any time-t computation, without saying anything about what is being computed. Does it have any bearing on real algorithms?
Without paywall: https://www.removepaywall.com/search?url=https://www.scienti...
>(Technically, that equation was t/log(t), but for the numbers involved log(t) is typically negligibly small.)
My dude, that is NOT how rational numbers work.
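(Concrete numbers, my own: at t = 1,000,000, log2(t) is about 20, so t/log2(t) is about 50,000, a factor of 20 below t. Presumably the article means the log factor is negligible on the scales complexity theorists care about, not that log(t) itself is close to zero, which would make t/log(t) enormous rather than close to t.)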
Here's the gist:
For nearly 50 years the state of the art was that a problem solvable in t steps was only known to be solvable with roughly t bits of memory (to be exact, about t/log(t)), and it was widely believed this couldn't be improved much: 100 steps, roughly 100 bits.
Ryan Williams showed that any problem solvable in time t can be solved with only about sqrt(t) bits of memory (more precisely, on the order of sqrt(t log t)): a 100-step computation could, in principle, be squeezed into memory on the order of 10 bits.
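Back-of-the-envelope comparison of the two bounds (my own numbers, base-2 logs, ignoring the constants hidden in the actual theorems):

    import math

    # Old simulation bound from the 1970s (Hopcroft-Paul-Valiant): space ~ t / log t.
    # New bound (Williams, 2025): space ~ sqrt(t * log t).
    for t in (10**4, 10**8, 10**12):
        old = t / math.log2(t)
        new = math.sqrt(t * math.log2(t))
        print(f"t = {t:,}   t/log t ~ {old:,.0f}   sqrt(t log t) ~ {new:,.0f}")

At 10^12 steps the old bound is still in the tens of billions of bits, while the new one is in the single-digit millions.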