Hacker News

cyanydeez · yesterday at 10:35 PM

I'm curious: I've never touched cloud models beyond a few seconds. I run an AMD 395+ with the new Qwen coder. Is there any intelligence difference, or is it just speed and context? At 128 GB, it takes quite a while before hitting the context wall.


Replies

__mharrison__ · today at 5:29 AM

There's a difference in intelligence. However, for 90% of what I'm doing, I don't really need it. The online models are just faster.

I just did a one-hour vibe session today, ripping out a library dependency, replacing it with another, and pushing the library to PyPI. I should take my task list, let the local model replicate the work, and see how it turns out.