Hacker News

embedding-shape · today at 11:23 AM · 4 replies

I've been playing around with the same thing, but using local models, since my Obsidian vault obviously contains a bunch of private things I'm not willing to share with for-profit companies. So far, though, I haven't found any small model that comes close to working as well as just codex or cc, even with 96GB of VRAM to play around with.

I've started to think a fine-tuned model might be needed, specifically for "journal data retrieval" or something like that. Is anyone aware of existing models for this? I'd do it myself, but since I'm unwilling to send larger parts of my data to 3rd parties, I'm struggling to collect data I could actually use for fine-tuning, ending up in a bit of a catch-22.

For some client projects I've experimented with the same idea, with fewer restrictions, and one valuable lesson is that letting LLMs write docs and add them to a "knowledge repository" tends to end up a mess. The best success we've had is limiting the LLMs' job to organizing and moving things around, never adding their own written text. Output quality seems to slowly degrade as their context fills up with their own text, compared to when they rely only on human-written notes.


Replies

gchamonlive · today at 4:07 PM

Models are lossy, so fine-tuning can only take you so far with small models. What we need is reasonably capable local models with a huge context window, plus a method to use tokens efficiently and cram as much info as possible into the context before output quality degrades.
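One way to make "use tokens efficiently" concrete is a greedy context packer: rank candidate notes by relevance, then add them to the prompt until a token budget runs out. A minimal sketch; the 4-chars-per-token estimate and the budget are rough assumptions, not a real tokenizer:

```python
# Greedy context packing: take (relevance, text) pairs from a retriever
# and fill a fixed token budget with the highest-scoring notes first.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def pack_context(notes: list[tuple[float, str]], budget: int = 8192) -> str:
    """notes: (relevance_score, text) pairs; returns the packed prompt."""
    packed, used = [], 0
    for _, text in sorted(notes, key=lambda n: n[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue  # doesn't fit; a smaller, lower-ranked note still might
        packed.append(text)
        used += cost
    return "\n\n---\n\n".join(packed)
```

A real setup would use the model's own tokenizer for counting, but the budget-then-rank shape stays the same.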

terminalkeys · today at 2:42 PM

You can fine-tune local models using your own data. Unsloth has a guide at https://unsloth.ai/docs/get-started/fine-tuning-llms-guide.

I'm currently experimenting with Tobi's QMD (https://github.com/tobi/qmd) to see how it performs with local models only on my Obsidian vault.
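For the fine-tuning route, most of the work is getting a vault into a dataset format a toolkit can ingest. A hedged sketch that dumps markdown notes into JSONL in a generic chat format (the question template and field names are my own assumptions, not Unsloth's required schema; check the guide above for the exact format your toolkit expects):

```python
# Turn a folder of markdown notes into a JSONL chat dataset: one record
# per non-empty note, pairing an invented question with the note's text.
import json
from pathlib import Path

def notes_to_jsonl(vault_dir: str, out_path: str) -> int:
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for md in Path(vault_dir).rglob("*.md"):
            text = md.read_text(encoding="utf-8").strip()
            if not text:
                continue  # skip empty notes
            record = {"messages": [
                {"role": "user",
                 "content": f"What do my notes in '{md.stem}' say?"},
                {"role": "assistant", "content": text},
            ]}
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
            count += 1
    return count
```

Everything stays on-disk and local, which sidesteps the privacy concern upthread.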

SeanLang · today at 2:37 PM

Couldn't you create synthetic data based on your entries using local models? Or would that defeat the purpose of fine-tuning it?
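The idea can be sketched as a small loop: a local model invents a question for each entry, but the fine-tune target stays the original human-written text, so nothing leaves the machine and (per the comment above about LLM-written text degrading quality) no model-written prose enters the dataset. `ask_local` here is a stand-in for whatever local inference you run (llama.cpp, Ollama, etc.), and the prompt wording is an assumption:

```python
# Bootstrap synthetic (question, entry) pairs from journal entries using
# a caller-supplied local model; only the questions are model-generated.

def make_synthetic_pairs(entries, ask_local):
    """entries: raw journal texts; ask_local: fn(prompt) -> str."""
    pairs = []
    for entry in entries:
        question = ask_local(
            "Write one question this journal entry answers:\n\n" + entry
        )
        # Pair the generated question with the *original* human text, so
        # the fine-tune target stays grounded in what was actually written.
        pairs.append({"prompt": question.strip(), "completion": entry})
    return pairs
```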

weitendorf · today at 2:36 PM

This is exactly what we're working on. Is there any application in particular you're most interested in?

> I'm struggling collecting actual data I could use for fine-tuning myself,

Journalling, or writing in general, is by far the best way to do this IMO, though it doesn't take very much audio to do an accurate voice-clone. The hard thing about journalling is that it can actually be really biased away from the actual "distribution" of you: more aspirational, more emotional, or less rigorous/precise with language.

What I'm starting to do is save as many of my prompts as possible, because I realized a lot of my professional writing was there, and it was actually pretty valuable data for finetuning on my own workloads (especially paired with outputs and knowledge of what went well and what didn't). Second is assembling/curating a collection of tools and products that I can drop into each new context with LLMs and also use for finetuning them on my own needs. Unlike "knowledge repositories", these accurately model my actual needs and work, and don't require me to do anything unnatural.

The other thing I'm about to start doing is "natural" in a certain sense but kinda weird: recording myself talking to my computer (verbalizing my thoughts more, so they can be embedded alongside my actions, which may be much sparser from the computer's perspective), plus screen recordings of my session as I work. This is something I've had to look into building more specialized tools for, because it creates too much data to save all of it. But there are small models, transcoding libraries, and pipelines you can use for audio/temporal/visual segmentation and transcription to compress the data back down into tokens and normal-sized images.

This is basically creating a semantic search engine of yourself as you work. Kinda weird, but IMO it's much weirder that your computer can actually talk back and learn about you now. With 96GB you can definitely do it, BTW; I successfully finetuned an audio workload on gemma 4 2b yesterday on a 16GB Mac mini, so with 96GB you could do a lot.
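The "semantic search engine of yourself" shape is simple even in toy form. Here's a bag-of-words cosine-similarity sketch; a real setup would swap `vectorize` for an embedding model, but the index-then-query structure is the same:

```python
# Toy semantic search: vectorize docs as word-count bags, score against a
# query with cosine similarity, return the best-matching doc.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> str:
    qv = vectorize(query)
    return max(docs, key=lambda d: cosine(qv, vectorize(d)))
```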

> letting LLMs write docs and add them to a "knowledge repository"

I think what you actually want is to send them looking for stuff for you, or actively seeking out "learning" about something for their own role/purposes. That way they can embed the useful information and retrieve it better when they need it, or produce traces grounded in positive signals (e.g. having access to this piece of information or tool, or applying this technique or pattern, measurably improves performance at something in-distribution to whatever you have them working on) that they can use in fine-tuning themselves.
