"maintain a home server" in this case roughly means "park a headless Mac mini (or laptop or RPi) on your desk"
And you can use a local LLM if you want to eliminate the cloud dependency.
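For a concrete sense of what that looks like, here's a minimal sketch that queries a model served locally by Ollama on the mini (assuming `ollama serve` is running and a model has already been pulled; the model name and prompt are just examples):

```python
# Minimal sketch: query a local LLM over Ollama's HTTP API on localhost.
# Assumes `ollama serve` is running and a model (e.g. llama3.2) is pulled.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    # With stream=False, Ollama returns one JSON object with the full reply.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Summarize today's server log in one sentence."))
```

Nothing ever leaves the box, so the cloud dependency disappears entirely.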
> And you can use a local LLM
That ship sailed a long time ago. It's possible, of course, if you're willing to invest a few thousand dollars extra in a graphics card rig and pay for the power.
You'd have to spend tens of thousands of dollars on hardware to approach the reasoning and tool-calling level of SOTA models... so casually suggesting "just use a local LLM" describes something out of reach for the common man.