
mattmanser · yesterday at 7:57 PM · 1 reply

That's not actually true.

When you move to the enterprise layer, you suddenly get the opposite problem: a small number of "users", but a load of CPU-intensive or DB-intensive processing that needs to happen quickly.

One company I worked for had their system built by, ummmm, not the greatest engineers, and they were literally running out of hours in the day to run their program.

Every client was scheduled within a 24-hour window, and the run had crept up to 22 hours per day; they were desperately trying to fix it before they ran out of "time". They couldn't run it in parallel because part of the program's selling point was that it amalgamated data from all the clients.


Replies

bearjaws · yesterday at 11:20 PM

Without seeing more, this seems like it could be solved by not recomputing the entire history just to append new data. It depends on what kind of math you're doing, though.

Some sort of checkpoint system could likely save significant IO.
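A minimal sketch of what that checkpointing might look like, assuming the daily job computes an aggregate that can be updated incrementally (a hypothetical running sum; the file name and record shape are made up for illustration). Instead of replaying all historical records each day, the job folds only the new day's data into a saved aggregate:

```python
import json
import os

CHECKPOINT = "daily_totals.json"  # hypothetical checkpoint file

def load_checkpoint():
    # Resume from the last saved aggregate instead of replaying history
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"total": 0.0, "count": 0}

def process_day(new_records, state):
    # Fold only today's records into the running aggregate
    for value in new_records:
        state["total"] += value
        state["count"] += 1
    return state

def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

# Day 1: process the initial batch and persist the aggregate
state = load_checkpoint()
state = process_day([10.0, 20.0, 30.0], state)
save_checkpoint(state)

# Day 2: only the new data is touched; history is never re-read
state = load_checkpoint()
state = process_day([5.0], state)
save_checkpoint(state)
print(state["total"])  # 65.0
```

This only works when the computation is incrementally updatable (sums, counts, means, and similar associative aggregates); as the comment notes, it depends on the math — something like a global re-ranking over all clients would still force a full recompute.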

What am I missing that requires you to recompute all data every day?