
varenclast · Saturday at 3:14 AM

> You already need very high end hardware to run useful local LLMs

A basic MacBook can run gpt-oss-20b, and it's quite useful for many tasks. And it's fast. Of course, Macs have a huge advantage for local LLM inference thanks to their unified memory architecture.
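For example, one common way to try this on a Mac is via Ollama (a sketch, assuming Ollama is installed and the `gpt-oss:20b` tag from its model library):

```shell
# Download the open-weight 20B model and start an interactive session.
# The quantized weights are roughly a dozen GB, so the first run takes a while.
ollama run gpt-oss:20b
```

Because Apple Silicon shares one memory pool between CPU and GPU, a model of this size fits on machines with 24 GB or more of RAM without a discrete GPU.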