Hacker News

Word2vec-style vector arithmetic on docs embeddings

70 points | by kaycebasques | last Saturday at 7:14 PM | 13 comments | view on HN

Comments

thornton | last Saturday at 11:54 PM

We’ve done similar work. Use case was identifying pages in an old website that now 404 and where they should be redirected to.

Basically doc2vec and cosine similarity. The matching outputs were totally nonsensical, to the point that matching on title tag vectors or the précis was better, so now I'm curious whether we just did something wrong…
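
(For illustration, a minimal sketch of that kind of doc2vec + cosine-similarity matching with gensim; the URLs and page texts here are made-up placeholders, not the actual site data, and real corpus loading/tokenisation would replace them:)

    # Sketch: match 404ing old pages to redirect targets via doc2vec + cosine similarity
    import numpy as np
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from gensim.utils import simple_preprocess

    # hypothetical old (404ing) pages and candidate redirect targets
    old_pages = {"/old/pricing": "Pricing and plans for teams ..."}
    new_pages = {"/pricing": "Plans and pricing ...", "/docs/api": "API reference ..."}

    # train doc2vec on the new site, one tagged document per URL
    corpus = [TaggedDocument(simple_preprocess(text), [url])
              for url, text in new_pages.items()]
    model = Doc2Vec(corpus, vector_size=100, min_count=1, epochs=40)

    def best_redirect(old_text):
        """Infer a vector for the old page and return the most similar new URL."""
        vec = model.infer_vector(simple_preprocess(old_text))
        urls = list(new_pages)
        mat = np.array([model.dv[u] for u in urls])
        sims = mat @ vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(vec))
        return urls[int(np.argmax(sims))]

    for url, text in old_pages.items():
        print(url, "->", best_redirect(text))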

aDyslecticCrow | last Saturday at 9:12 PM

There have been quite a few writing tools that are effectively just GPT wrappers with pre-defined prompts: "rephrase this more formally". Personally, I find they either modify too much or are difficult to use effectively. Asking for a few different rephrasings and then merging them myself ends up being my workflow.

But ever since learning about word2vec, I've been thinking there must be a better way: "push" a section a bit along the "formal" vector, add a pinch of "brief", dial up the "humour" vector. I think it could make for a very controllable and efficient writing tool.
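
(Sketching what that arithmetic might look like with a plain word2vec model via gensim; the style words and the file name are illustrative, and a real tool would need a sentence or section embedder living in a compatible space:)

    # Toy sketch: nudge a vector along named "style" directions in word2vec space
    import numpy as np
    from gensim.models import KeyedVectors

    # any word2vec-format file would do; "vectors.bin" is a placeholder
    wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

    def nudge(vec, style_word, amount):
        """Push vec a little way along the unit vector for style_word."""
        d = wv[style_word]
        return vec + amount * d / np.linalg.norm(d)

    section = wv["draft"]                    # stand-in for a real section embedding
    section = nudge(section, "formal", 0.3)  # push toward "formal"
    section = nudge(section, "brief", 0.1)   # a pinch of "brief"
    section = nudge(section, "humour", 0.2)  # dial up "humour"

    # nearest neighbours hint at what the nudged vector now "means"
    print(wv.similar_by_vector(section, topn=5))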

nostrebored | last Saturday at 9:47 PM

> How do we actually use this in technical writing workflows or documentation experiences? I’m not sure. I was just curious to learn whether or not it would work.

--

There are a few easy applications.

* When surfacing relevant documents, you can keep a list of the previously visited documents and boost in the "direction" the customer is heading (could be an average of the previous N docs, or weighted toward frequency; a sketch follows this list). But then you're just building a worse recsys for something where latency probably isn't that critical.

* If you know that for every feature you release you need an API doc, an FAQ, and usage samples for the different workflows or verticals you're targeting, you can represent each of these as f(doc) + f(topic) and find the existing doc set. But then, you can have much more deterministic workflows from just applying structure.
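
(A rough sketch of the first idea, assuming the document embeddings are already sitting in a numpy matrix; the decay weighting and boost strength are arbitrary illustrative choices, not a recommendation:)

    # Sketch: boost retrieval toward the "direction" of the last few visited docs
    import numpy as np

    def session_direction(prev_doc_vecs, decay=0.7):
        """Recency-weighted average of previously visited doc embeddings (oldest first)."""
        w = decay ** np.arange(len(prev_doc_vecs))[::-1]  # newest doc gets weight 1.0
        d = (w[:, None] * prev_doc_vecs).sum(axis=0)
        return d / np.linalg.norm(d)

    def boosted_scores(query_vec, doc_matrix, direction, alpha=0.25):
        """Cosine scores against the query, nudged toward where the session is heading."""
        q = query_vec / np.linalg.norm(query_vec)
        docs = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
        return docs @ (q + alpha * direction)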

It's nice to have a super flexible tool in the toolbox, but I think a lot of text-based embedding applications (especially on out-of-domain data like long, unchunked technical docs) are just better off being something else if you have the time.

jdthedisciple | last Saturday at 10:03 PM

Intriguing! This inspired me to run the example "calculation" ("king" - "man" + "woman") against several well-known embedding models and order them by L2 distance between the actual output and the embedding for "queen". Result:

    voyage-3-large:             0.54
    voyage-code-3:              0.62
    qwen3-embedding:4b:         0.71
    embeddinggemma:             0.84
    voyage-3.5-lite:            0.94
    text-embedding-3-small:     0.97
    voyage-3.5:                 1.01
    text-embedding-3-large:     1.13
Shocked by the apparently bad performance of OpenAI's SOTA model. I've also always had a gut feeling that `voyage-3-large` may secretly be the best embedding model out there. Have I been vindicated? Make of it what you will ...

Also `qwen3-embedding:4b` is my current favorite for local RAG for good reason...
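
(For reference, roughly how one of those numbers could be reproduced, using the OpenAI Python SDK as one example; the other models would need their own clients or a local runner, and `text-embedding-3-small` is just picked for illustration:)

    # Sketch: king - man + woman, then L2 distance to "queen"
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def embed(texts, model="text-embedding-3-small"):
        resp = client.embeddings.create(model=model, input=texts)
        return [np.array(d.embedding) for d in resp.data]

    king, man, woman, queen = embed(["king", "man", "woman", "queen"])
    print(np.linalg.norm(king - man + woman - queen))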
