An M4 Mac mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005 that's also running 20 other services in Docker. I think the main appeal was iMessage access, which needs a Mac. People also dream of using the Mac to run the LLM itself, but the 16 GB models don't have enough RAM for that.
People are running OpenClaw on microcontrollers.
When they say "due to OpenClaw," they mean running the AI models that OpenClaw calls, not OpenClaw itself.