Hacker News

these · yesterday at 4:49 PM

Has anyone managed to get this to work in LM Studio? They've got an option in the UI, but it never seems to let me enable it.


Replies

dvt · yesterday at 4:59 PM

It's not implemented in mlx[1] yet (or llama.cpp[2]), so it may take a while.

[1] https://github.com/ml-explore/mlx-lm/pull/990

[2] https://github.com/ggml-org/llama.cpp/pull/22673

AlphaSite · yesterday at 5:22 PM

Yes. Make sure you're not using the Gemma sparse models, since they don't have a small draft model to pair with. Also, I removed all the image models from the workspace.

Havoc · yesterday at 5:19 PM

Normally, when LM Studio doesn't like it, it's because of the presence of mmproj files in the folder. Sometimes removing them helps it show up.

They're somehow connected to vision and block speculative decode... don't ask me how/why, though.

For Gemma specifically, I had more luck with speculative decoding via the llama-server route than LM Studio.
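If you want to try the llama-server route, a minimal invocation looks something like this; the model paths are placeholders, and the draft-model flags assume a reasonably recent llama.cpp build with speculative decoding support:

```shell
# Serve a large target model with a smaller draft model for speculative decoding.
# Paths below are placeholders -- point them at your own GGUF files.
llama-server \
  -m  models/target-27b-q4_k_m.gguf \
  -md models/draft-1b-q4_k_m.gguf \
  --draft-max 16 \
  --port 8080
```

`-md`/`--model-draft` selects the small draft model; the target and draft need compatible tokenizers/vocabularies or the server will refuse to run them together.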

svachalek · yesterday at 5:08 PM

I've gotten it to work with other models. The target and draft usually have to be perfectly aligned in terms of provider, quantization, etc. It might be a bit before you can get a matched set.