
EagnaIonat · 08/09/2025 · 0 replies · view on HN

> I feel like most of the people touting the Mac’s ability to run LLMs are either impressed that they run at all, are doing fairly simple tasks, or just have a toy model they like to mess around with and it doesn’t matter if it messes up.

I feel like you haven't actually used it. Your comment may have been true 5 years ago.

> If you want an assistant you can talk to that will give you advice or help you with arbitrary tasks for work, that’s not something that’s on the menu.

You can use a RAG approach (e.g. with Milvus as the vector store) and LoRA adapters to dramatically improve the accuracy of the answers if needed.
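The retrieval half of that RAG setup can be sketched in a few lines. This is a toy stand-in, not Milvus itself: the "embeddings" are bag-of-words counts, where a real setup would use a sentence-embedding model and pymilvus for storage and ANN search. The document strings and the `retrieve` helper are illustrative, not from any library.

```python
# Minimal RAG retrieval sketch. A toy stand-in for a vector store like
# Milvus: embeddings here are bag-of-words counts; in practice you would
# use a sentence-embedding model and pymilvus for storage and search.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts, with periods stripped.
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Milvus is an open-source vector database.",
    "LoRA adapters fine-tune large models cheaply.",
    "The M1 MacBook Pro has unified memory.",
]
context = retrieve("what is a vector database", docs, k=1)
# The retrieved context is then prepended to the prompt for the local LLM.
print(context[0])
```

Swapping the toy `embed` for a real embedding model and the sorted list for a Milvus collection is the only structural change needed for the real thing.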

Locally, you can run multiple models as many times as you like without having to worry about costs.
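Querying a locally running model is a single HTTP call. A minimal sketch, assuming an Ollama-style server at its default address `http://localhost:11434` (the `/api/generate` endpoint is Ollama's); the model name `llama3` is just an example, substitute whatever you have pulled:

```python
# Sketch of querying a local model server (Ollama-style API assumed).
import json
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False returns one complete JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("llama3", "Summarise this meeting transcript: ...")
# Runs entirely offline; call it in a loop across models with no API bill.
```

Because there is no per-token cost, retrying the same prompt against several models and picking the best answer is a workflow that only makes sense locally.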

You also have the likes of Open WebUI, which layers numerous features onto a chat interface if you don't want to write code.

I have a very old M1 MBP with 32GB and numerous applications built to do custom work. It does the job fine, and speed is not an issue. It's not powerful enough for a LoRA fine-tune, but I have a more recent laptop for that.

I doubt I am the only one.