Does one really need to _buy_ completely new desktop hardware (i.e., a Mac mini) to _run_ a simple request/response program?
Setting aside the fact that you can run LLMs via ollama or similar directly on the device, though I'd guess the token/s speed won't be very good...
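For what it's worth, going the local route really is just an HTTP call against ollama's local API. A rough sketch, assuming ollama is already serving on its default port and that the model name and prompt are just placeholders:

```python
import requests

# Minimal sketch: ask a locally served model for a completion via ollama's
# REST endpoint (default http://localhost:11434). "llama3" is a placeholder;
# use whatever model you've actually pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Summarize today's reminders.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Whether the tokens/s is bearable depends entirely on the model size and the hardware, which is the whole question here.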
I'm pretty sure people are using them for local inference. Token rates can be acceptable if you max out the specs. If it were just the harness, they'd use a $20 Raspberry Pi instead.
You don't, but for those who would like the agent to interact with Apple-provided services like Reminders and iMessage, it works well for that.
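Those integrations don't need anything exotic, either. A rough sketch of how an agent could drop an item into Reminders by shelling out to macOS's built-in osascript (the reminder text is just an example):

```python
import subprocess

# Rough sketch: create a reminder via AppleScript through osascript, which is
# one way a local agent can poke Apple apps like Reminders. The reminder name
# is a placeholder.
script = 'tell application "Reminders" to make new reminder with properties {name:"Follow up on the build"}'
subprocess.run(["osascript", "-e", script], check=True)
```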
You don't; that's just the most visible way to do it. Any other computer capable of running not-Claude Code in a shell with a browser will do, but all the cool kids are buying Macs. Don't you wanna be one of them?
What other device would you suggest as a home server that a non-technical person can set up themselves and that has enough power to run several Chrome tabs? Access to iMessage is a plus. Small Beelink-style Windows mini PCs could also work, but they run Windows 11, which is slow as molasses.