Curious — can anyone with a 128GB RAM Mac share their experience? Is it usable for coding with a model running locally? How does latency compare to, say, Copilot?
A rambly "thinking" model like this is way too slow for coding assistance imo, although maybe it could take on larger assignments than you could get out of a plain chat or coding model.