Hacker News

iamnotagenius · 01/21/2025

No, but I use Llama 3.2 1B and Qwen2.5 1.5B as bash one-liner generators, always running in a console.
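A minimal sketch of how querying a local model for a bash one-liner might work, assuming a llama.cpp server exposing its OpenAI-compatible chat endpoint on localhost:8080. The URL, prompt wording, and parameters are illustrative assumptions, not the commenter's actual setup.

```python
import json
from urllib import request

def build_payload(task: str) -> dict:
    """Build a chat-completion request asking for a bash one-liner."""
    return {
        "messages": [
            {"role": "system",
             "content": "Reply with a single bash one-liner, no explanation."},
            {"role": "user", "content": task},
        ],
        "temperature": 0.2,  # low temperature for more deterministic commands
        "max_tokens": 128,
    }

def ask(task: str,
        url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """Send the request to a running llama.cpp server (assumed endpoint)."""
    req = request.Request(
        url,
        data=json.dumps(build_payload(task)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example payload (no server needed to inspect it):
payload = build_payload("find all files larger than 100MB under /var")
print(payload["messages"][1]["content"])
```

Calling `ask(...)` would return the generated one-liner once a server is actually listening on that port.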


Replies

andai · 01/21/2025

Could you elaborate?

XMasterrrr · 01/21/2025

What's your workflow like? I use AI Chat: I load Qwen2.5-1.5B-Instruct with the llama.cpp server, running fully on the CPU, and then configure AI Chat to connect to the llama.cpp endpoint.
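A sketch of what launching that server might look like. This is a config fragment, not the commenter's exact invocation: the model filename, quantization, port, and context size are illustrative assumptions.

```shell
# Hypothetical llama.cpp server launch; -ngl 0 keeps all layers
# on the CPU (no GPU offload), matching a CPU-only setup.
llama-server \
  -m models/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf \
  --port 8080 \
  -ngl 0 \
  -c 4096
```

Any OpenAI-compatible client (such as AI Chat) can then be pointed at `http://localhost:8080/v1` as its API base.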