Hacker News

irusensei · today at 8:12 AM

>Oh does llama.cpp use MLX or whatever?

No. It runs on macOS but uses Metal instead of MLX.
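For context, llama.cpp's GPU path on Apple hardware goes through ggml's Metal backend rather than MLX. A minimal sketch of building and running with Metal enabled, assuming a recent llama.cpp checkout (the `GGML_METAL` CMake option and the `-ngl` flag exist upstream, but defaults and binary names can vary by version; the model path below is a placeholder):

```shell
# Build llama.cpp with the Metal backend (enabled by default on Apple
# silicon; the explicit flag is shown for clarity).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_METAL=ON
cmake --build build --config Release

# Run inference, offloading all layers to the GPU via Metal.
# Use any local GGUF model in place of the placeholder path.
./build/bin/llama-cli -m ./models/model.gguf -ngl 99 -p "Hello"
```

MLX, by contrast, is a separate Apple array framework with its own model runtimes; llama.cpp does not depend on it.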


Replies

zozbot234 · today at 8:47 AM

ANE-powered inference (at least for prefill, which is a key bottleneck on pre-M5 platforms) is also in the works, per https://github.com/ggml-org/llama.cpp/issues/10453#issuecomm...

OkGoDoIt · today at 8:58 AM

Is that better or worse?
