Hacker News · a_e_k · 05/15/2025
If you're happy running local models, the web interface built into llama.cpp's server can do this.
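For context, a minimal sketch of how that server is started: llama.cpp ships a `llama-server` binary that loads a local GGUF model and serves a chat web UI over HTTP. The model filename below is a placeholder, not from the original comment.

```shell
# Launch llama.cpp's built-in server; it serves a chat web UI
# (by default at http://localhost:8080) alongside an HTTP API.
# Replace the model path with whatever GGUF file you have locally.
llama-server -m ./models/your-model.gguf --port 8080
```

Once it's running, opening http://localhost:8080 in a browser gives the interface the comment refers to.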