
kristopolous today at 1:06 AM

I use local models + openrouter free ones.

My monthly spend on AI models is < $1.

I'm not cheap, just ahead of the curve. With the collapse in inference costs, everything will end up this way eventually.

I'll basically do

    $ man tool | <how do I do this with the tool>
or even

    $ cat source | <find the flags and give me some documentation on how to use this>
Things I used to do intensively I now do lazily.
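
A minimal sketch of such a wrapper, assuming a local OpenAI-compatible chat endpoint (llama.cpp's server and Ollama both expose one); the script name, endpoint URL, and model name are placeholders:

    #!/usr/bin/env python3
    # ask.py -- pipe a document in, pass the question as the argument:
    #   man rsync | ask.py "how do I do a dry run that shows deletions?"
    # Sketch only: assumes a local OpenAI-compatible chat endpoint
    # (llama.cpp server, Ollama, etc.); URL and model name are placeholders.
    import json
    import sys
    import urllib.request

    ENDPOINT = "http://localhost:11434/v1/chat/completions"
    MODEL = "qwen2.5"

    question = " ".join(sys.argv[1:])
    context = sys.stdin.read()  # the piped-in man page, source file, etc.

    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": context + "\n\n" + question}],
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
The whole point is that stdin carries the document and the argument carries the question, so it composes with any pipe.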

I've even made an IEITYuan/Yuan-embedding-2.0-en database of my manpages with chroma. Then I can just ask my local documentation how to do something conceptually: get the relevant man pages, inject them into a local qwen context window using my mansnip llm preprocessor, forward the prompt, and get usable, real results.
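
A rough sketch of the index-and-retrieve half of that pipeline, assuming chromadb with a sentence-transformers embedding function; the collection name, paths, and one-document-per-page chunking are placeholders, and the mansnip preprocessing step isn't shown:

    # Index man pages into chroma, then retrieve the pages relevant to a question.
    # Sketch only: collection name, paths, and whole-page "chunking" are
    # placeholders; the mansnip preprocessing step is left out.
    import glob
    import subprocess

    import chromadb
    from chromadb.utils import embedding_functions

    embed = embedding_functions.SentenceTransformerEmbeddingFunction(
        model_name="IEITYuan/Yuan-embedding-2.0-en"
    )
    client = chromadb.PersistentClient(path="./manpage-db")
    pages = client.get_or_create_collection("manpages", embedding_function=embed)

    # Index: render each man page to plain text and store it as one document.
    for path in glob.glob("/usr/share/man/man1/*"):
        text = subprocess.run(["man", path], capture_output=True, text=True).stdout
        if text:
            pages.add(ids=[path], documents=[text])

    # Retrieve: the top hits are what get injected into the local model's context.
    hits = pages.query(query_texts=["some obscure question about nfs"], n_results=3)
    for doc_id, doc in zip(hits["ids"][0], hits["documents"][0]):
        print(doc_id, len(doc), "chars")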

In practice it's this:

    $ what-man "some obscure question about nfs" 
    ...chug chug chug (about 5 seconds)...

    <answer with citations back to the doc pages>
Essentially I'm not asking the models to think, just to do NLP and process text. They can do that really reliably.

It helps combat a frequent tendency for documentation authors to bury the most common and useful flags deep in the documentation and lead with those that were most challenging or interesting to program instead.

I understand the inclination; it's just not all that helpful for me.


Replies

aquafox today at 6:15 AM

> I'll basically do

    $ man tool | <how do I do this with the tool>

> or even

    $ cat source | <find the flags and give me some documentation on how to use this>

Could you please elaborate on this? Am I understanding correctly that you can set up your command line so that you can pipe something to a command that sends it, together with a question, to an LLM? Or did you just mean that metaphorically? Sorry if this is a stupid question.

m4ck_ today at 2:11 AM

Is your RAG manpages thing on GitHub somewhere? I was thinking about doing something like that (it's high on my to-do list, but I haven't actually done anything with LLMs yet).

nl today at 3:54 AM

This is a completely different thing from AI coding models.

If you aren't using coding models, you aren't ahead of the curve.

There are free coding models. I use them heavily. They're OK, but only partial substitutes for frontier models.
