Hacker News • a_e_k • yesterday at 7:20 AM
If you're happy running local models, the web interface built into llama.cpp's server can do this.
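For context, a typical invocation might look like the following sketch; the model path is a placeholder, and the port is an assumption (8080 is the default in recent builds):

```shell
# Launch llama.cpp's bundled HTTP server, which serves the web UI.
# model.gguf is a placeholder path to a local GGUF model file.
llama-server -m model.gguf --port 8080
# Then open http://localhost:8080 in a browser for the chat interface.
```

In older builds the binary was named `server` rather than `llama-server`.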