> Imagine buying hardware that will be obsolete in 2 years
Unless the PC you buy costs more than $4,800 (24 months × $200/month), it is still a good deal. For reference, a MacBook Pro with an M4 Max and 128GB of unified RAM is $4,699. You need a computer for development anyway, so the extra you pay for inference is more like $2-3K.
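A rough break-even sketch of that arithmetic; the $200/month spend and 24-month horizon come from the parent comment, while the ~$2K baseline dev machine is my own assumption:

```python
# Rough break-even sketch: local hardware vs. ~$200/month of cloud inference.
# The monthly spend and horizon are taken from the comment above; the
# baseline dev-machine cost is an assumption, not a quoted figure.

monthly_cloud_spend = 200       # assumed monthly API/subscription spend ($)
horizon_months = 24             # the "obsolete in 2 years" horizon
macbook_price = 4_699           # MacBook Pro, M4 Max, 128GB unified RAM
baseline_dev_machine = 2_000    # what you'd spend on a dev machine anyway (assumption)

cloud_cost = monthly_cloud_spend * horizon_months           # $4,800 over 2 years
extra_for_inference = macbook_price - baseline_dev_machine  # ~$2,700 marginal cost

print(f"Cloud over {horizon_months} months: ${cloud_cost:,}")
print(f"Marginal hardware cost:          ${extra_for_inference:,}")
print(f"Break-even after ~{extra_for_inference / monthly_cloud_spend:.1f} months")
```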
Besides, it will still run the same model(s) at the same speed after that period, or maybe even faster with future inference optimisations.
The depreciation on the hardware alone is going to be significant, probably enough to pay for three ~$20/month subscriptions to OpenAI, Anthropic and Gemini.
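To put a rough number on that; the 50% resale value after two years is purely my assumption, not a quoted figure:

```python
# Back-of-the-envelope: monthly depreciation vs. three ~$20 subscriptions.
# The resale fraction is an assumption for illustration only.

purchase_price = 4_699          # MacBook Pro, M4 Max, 128GB, from the comment above
assumed_resale_fraction = 0.5   # assume it resells for ~half its price after 2 years
months = 24

depreciation_per_month = purchase_price * (1 - assumed_resale_fraction) / months
subscriptions_per_month = 3 * 20   # OpenAI + Anthropic + Gemini at ~$20 each

print(f"Depreciation:  ~${depreciation_per_month:.0f}/month")   # ~$98/month
print(f"Subscriptions: ~${subscriptions_per_month}/month")      # $60/month
```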
Also, if you use the same Mac for work, you can't dedicate all 128GB to LLMs.
Not to mention a Mac will never run SOTA models like Opus 4.5 or Gemini 3.0, which the subscriptions do give you.
So unless you're ready to sacrifice quality and speed for privacy, it looks like a suboptimal arrangement to me.