> What's to be gained... by offloading inference to someone else?
Access to models that local hardware can't run. The kind of model an iPhone struggles to run is blown out of the water by even most low-end hosted models. It's the same reason most devs opt for claude code, cursor, copilot, etc. instead of local models for coding assistance.