Hacker News

Running Google Gemma 4 Locally with LM Studio's New Headless CLI and Claude Code

43 points by vbtechguy today at 5:13 PM | 13 comments

Comments

Someone1234 today at 7:25 PM

Claude Code seems to be a popular frontend for this right now. I wonder how long until Anthropic releases an update that makes it somewhat, or even much, less turn-key? They've been very clear that they aren't exactly champions of this stuff being used outside of very specific ways.

martinald today at 7:44 PM

Just FYI: MoE doesn't really save (V)RAM. You still need all the weights loaded in memory; it just means fewer of them are consulted per forward pass. So it improves tok/s, but not VRAM usage.

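The point about MoE can be made concrete with back-of-the-envelope numbers. A sketch, where the parameter counts are hypothetical illustrations (not Gemma's actual architecture): resident memory scales with the *total* parameter count, while bytes read per token scale with the *active* parameter count, which is where the tok/s improvement comes from.

```shell
# Back-of-the-envelope MoE memory math (hypothetical parameter counts).
TOTAL_PARAMS_B=26    # all experts' weights, in billions -- must all sit in (V)RAM
ACTIVE_PARAMS_B=8    # weights consulted per forward pass (hypothetical)
BYTES_PER_PARAM=2    # fp16/bf16

# Memory to hold the model: scales with TOTAL parameters.
echo "Resident weights: $((TOTAL_PARAMS_B * BYTES_PER_PARAM)) GB"    # 52 GB
# Bytes streamed per token: scales with ACTIVE parameters.
echo "Read per forward pass: $((ACTIVE_PARAMS_B * BYTES_PER_PARAM)) GB"    # 16 GB
```

Under these toy numbers, a dense model of the same size would both hold and read 52 GB per token; the MoE variant still holds 52 GB but only streams 16 GB per forward pass.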
vbtechguy today at 5:13 PM

Here is how I set up Gemma 4 26B for local inference on macOS so it can be used with Claude Code.

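The linked write-up isn't reproduced in the thread, but the LM Studio headless flow it refers to generally looks like the commands below. This is a sketch, assuming LM Studio's `lms` CLI is installed; the model identifier is a placeholder for whatever tag LM Studio uses for the model locally. Note that LM Studio exposes an OpenAI-compatible API, so wiring it to Claude Code would additionally require some translation layer for Anthropic-style requests, which is not shown here.

```shell
# Sketch of LM Studio's headless CLI flow (model identifier is a placeholder).
lms server start                # start the local server headlessly (default port 1234)
lms load <model-identifier>     # load the downloaded model into memory
lms ps                          # confirm which models are currently loaded

# Verify the OpenAI-compatible endpoint is serving:
curl http://localhost:1234/v1/models
```

`lms server stop` and `lms unload` tear the setup down again.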
jonplackett today at 7:17 PM

So wait, what is the interaction between Gemma and Claude?

trvz today at 7:11 PM

  ollama launch claude --model gemma4:26b