Hacker News

travisgriggs today at 4:52 AM

I had my formative years in programming when memory usage was something you still worried about as a programmer. And then memory expanded so much that all kinds of “optimal” patterns for programming just become nearly irrelevant. Will we start to actually consider this in software solutions again as a result?


Replies

fulafel today at 5:51 AM

You're right in terms of fitting your program to memory, so that it can run in the first place.

But in performance work, the speed of RAM relative to computation has dropped so much that it's common wisdom to treat today's cache as the RAM of old (and today's RAM as the disk of old, etc).

In software performance work it's been all about hitting the cache for a long time. LLMs aren't too amenable to caching though.
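A toy sketch of the "hit the cache" point: traversing the same 2D data in row-major order walks memory sequentially, while column-major order jumps between rows on every access. (In CPython the effect is muted by pointer indirection, so this is illustrative; in C the gap between the two traversals is dramatic.)

```python
# Hypothetical demo: same data, cache-friendly vs cache-hostile traversal order.
N = 500
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    total = 0
    for row in m:            # sequential: each inner list is walked in order
        for x in row:
            total += x
    return total

def sum_cols(m):
    total = 0
    for j in range(len(m[0])):  # strided: hops to a different row every access
        for i in range(len(m)):
            total += m[i][j]
    return total

# Both traversals compute the same sum; only the memory access pattern differs.
assert sum_rows(matrix) == sum_cols(matrix) == N * N
```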

dahcryn today at 11:08 AM

I've actively started to use Outlook and Teams through Chrome to free up some of my RAM; it easily saves 3-4 GB. It's gotten ridiculous how much RAM basic tools are using, leaving nothing for doing actual work.

zarzavat today at 11:13 AM

RAM didn't get more expensive to produce. It just got more desirable. The prices will come down again when supply responds. It may take some time, but it will happen eventually.

jacquesm today at 5:46 AM

> And then memory expanded so much that all kinds of “optimal” patterns for programming just become nearly irrelevant.

I don't think that ever happened. Using a relatively sparse amount of memory translates into better cache utilization, which in turn usually improves performance drastically.

And in embedded stuff being good with memory management can make the difference between 'works' and 'fail'.
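The frugal-memory point can be made concrete with a small (hypothetical) sketch: packing the same integers into a flat `array.array` instead of a list of boxed Python objects shrinks the footprint several-fold, and that denser layout is exactly what helps the cache.

```python
import array
import sys

n = 100_000
as_list = list(range(n))                # each element is a full Python int object
as_array = array.array('q', range(n))   # packed 8-byte machine ints, one flat buffer

# list size = the pointer array plus every boxed int it references
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
# array size already includes its packed data buffer
array_bytes = sys.getsizeof(as_array)

# The packed layout is far smaller, hence far more cache-friendly.
assert array_bytes < list_bytes
```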

cyberrock today at 8:28 AM

It's not like most developers are wasting memory for fun by using Electron etc. It's just the simplest way to deploy applications that require frequent multiplatform changes. Until you get Apple to approve native app changes faster and Linux users to agree on a framework, app distribution, etc., it's the most practical way to ship a product and not just a program.

rTX5CMRXIfFG today at 5:49 AM

I never really bought into the anti-Leetcode crowd's sentiment that it's irrelevant. It has always mattered as a competitive edge: against other job candidates if you're an employee, or against the competition if you're a company. It only looked irrelevant because opportunities were everywhere during ZIRP, but good times never last.

yxhuvud today at 9:01 AM

We would have, if expensive memory were a long-term trend. It is not: eventually supply will expand to match demand. There is no fundamental shortage of raw materials underlying the issue; it's just a demand shock.

jooz today at 9:03 AM

When I practice leetcode problems, I remember the best solution was often the one that optimised CPU (time) instead of memory: adding a data index in memory instead of iterating over the main data structure. I thought: ok, that's fine, it's normal; you can (could) always buy more RAM, but you can't buy more time.

But well, I think there is no single right answer; there will always be a trade-off, case by case, depending on the context.
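The trade-off above is the classic two-sum pattern: spending O(n) extra memory on an index buys O(n) time instead of O(n²). A minimal sketch (hypothetical names, not from any comment):

```python
def two_sum_scan(nums, target):
    # Memory-light: O(1) extra space, O(n^2) time - re-scan for every element.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_indexed(nums, target):
    # Memory-heavy: O(n) extra space for the index, O(n) time.
    seen = {}  # value -> index: the "data index in memory"
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None
```

Both return the same answer; which one is "best" depends on whether the RAM or the CPU is the scarce resource, which is exactly the context-dependent trade-off.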

lmcd today at 8:53 AM

I've recently started a side project for the N64, and this is very relatable! Working within such tight constraints is most of the fun.

ReedorReed today at 6:53 AM

I just heard a podcast where they talked about how powerful our devices are today, yet they don't feel faster than they did 15 years ago, and that it's because of what you write here.

NooneAtAll3 today at 6:11 AM

most likely this bubble will pop in a couple of years, just like 8 and 16 years ago

it's just a cartel cycle: take the profits now, then kill off all investment in competitors once a flood of cheap RAM "suddenly" appears
