Hacker News

kator · today at 11:19 AM · 3 replies

Some users are moving to local models, I think, because they want to avoid the agent's cost, or because they think it'll be more secure (not). The Mac mini has unified memory and can dynamically allocate memory to the GPU by drawing from the general RAM pool, so you can run large local LLMs without buying a massive (and expensive) discrete GPU.
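
For what it's worth, this is roughly what that looks like in practice with llama.cpp's Python bindings. A minimal sketch, assuming a Metal-enabled build of llama-cpp-python; the model path and sizes are placeholders:

    from llama_cpp import Llama

    # On Apple Silicon, a Metal-enabled build keeps the weights in unified
    # memory, so "GPU offload" just means the GPU addresses the same RAM pool.
    llm = Llama(
        model_path="models/some-model-q4.gguf",  # placeholder path
        n_gpu_layers=-1,   # offload every layer to the Metal backend
        n_ctx=8192,
    )

    out = llm("Summarize unified memory in one sentence:", max_tokens=64)
    print(out["choices"][0]["text"])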


Replies

ErneX · today at 1:38 PM

I think any of the decent open models that would be useful for this claw frenzy require way more RAM than any Mac Mini you can possibly configure.
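
Rough back-of-envelope (the quantization and overhead numbers here are assumptions, not measurements): at 4-bit quantization the weights alone are roughly params × 0.5 bytes, before KV cache and runtime overhead.

    # Very rough sizing sketch; bits-per-weight and overhead are assumptions.
    def est_gb(params_billion, bits_per_weight=4, overhead_gb=10):
        weights_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
        return weights_gb + overhead_gb  # KV cache grows further with context

    print(est_gb(70))    # ~45 GB: tight even on a maxed-out Mini once the OS takes its share
    print(est_gb(400))   # ~210 GB: far beyond any Mini configuration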

The whole point of the Mini is that the agent can interact with all your Apple services like Reminders, iMessage, and iCloud. If you don't need any of that, just use whatever you already have, or get a cheap VPS, for example.
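
To make that concrete, here's the kind of thing an agent running on the Mac itself can do that one on a VPS can't. A hypothetical sketch; the reminder text is just an example:

    import subprocess

    # Hypothetical example of local Apple-services access: create a reminder
    # through AppleScript. Only works on the Mac itself, not on a remote VPS.
    script = ('tell application "Reminders" to make new reminder '
              'with properties {name:"Reply to kator"}')
    subprocess.run(["osascript", "-e", script], check=True)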

trcf23 · today at 2:13 PM

If the idea is to have a few claw instances running non-stop and scraping every bit of the web, emails, etc., it would probably cost quite a lot of money.

But it still feels safer to not have OpenAI accessing all my emails directly, no?

duskdozer · today at 1:39 PM

>they think it'll be more secure (not)

for these types of tasks or LLMs in general?