It’s tempting to be flippant about macOS/Windows, but in all seriousness, the resources required for an LLM to do the job of a typical lightweight app are a serious consideration. No amount of bloat matches what an LLM needs.
> No amount of bloat matches what an LLM needs.
I don't think that's necessarily true. For instance, LinkedIn uses more memory than Gemma E2B inference does.
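For scale, a rough back-of-envelope sketch of the weight memory for a ~2B-parameter model (the "E2B" class mentioned above) at a few common precisions; the parameter count and precisions are illustrative assumptions, not measured figures for any specific model:

```python
def weight_memory_gib(num_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GiB.

    Ignores KV cache, activations, and runtime overhead, which add more
    on top of this during actual inference.
    """
    return num_params * bytes_per_param / 2**30

# Assumed ~2B-parameter model at typical inference precisions.
for label, bytes_pp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: {weight_memory_gib(2e9, bytes_pp):.1f} GiB")
# fp16: 3.7 GiB, int8: 1.9 GiB, int4: 0.9 GiB
```

Even heavily quantized, that's a gigabyte-scale floor before any KV cache or activations, which is the kind of budget the comparison with a bloated web app is weighing against.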