Hacker News

A polynomial autoencoder beats PCA on transformer embeddings

17 points by timvisee, last Tuesday at 11:31 AM | 3 comments

Comments

yobbo, today at 7:19 AM

My understanding, after scanning the code examples, is that the technique expands the dimensionality of each data point by appending the quadratic terms formed from its existing dimensions. I thought it sounded like kernel PCA.
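(A minimal sketch of that comparison, not the author's code: it assumes a scikit-learn workflow and synthetic data, and shows explicit degree-2 feature expansion followed by plain PCA next to kernel PCA with a polynomial kernel, which computes a closely related projection implicitly.)

```python
# Illustrative only: explicit quadratic expansion + PCA vs. polynomial kernel PCA.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))          # stand-in for embedding vectors

# Explicit expansion: append x_i * x_j and x_i^2 terms to each point
X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
Z_explicit = PCA(n_components=8).fit_transform(X_quad)

# Implicit version: kernel PCA with a degree-2 polynomial kernel
Z_kernel = KernelPCA(n_components=8, kernel="poly", degree=2).fit_transform(X)

print(Z_explicit.shape, Z_kernel.shape)  # (1000, 8) (1000, 8)
```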

magicalhippo, today at 6:23 AM

I'm just a casual LLM user, but your description of the anisotropy made me think of recent work on KV cache quantization, such as TurboQuant, where a random rotation is applied to each vector before quantizing. As I understood it, that is done precisely to make the distribution more isotropic.

But for RAG that might be too much work per vector?
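(A rough sketch of the rotate-then-quantize idea, assuming a generic random orthogonal rotation plus per-vector int8 scalar quantization in NumPy; the names and details are illustrative, not TurboQuant's actual implementation. The cost per vector is essentially one matrix multiply.)

```python
# Illustrative only: rotate with a fixed random orthogonal matrix, then quantize.
import numpy as np

rng = np.random.default_rng(0)
d = 64
# Random orthogonal rotation from the QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

def quantize_int8(v: np.ndarray) -> tuple[np.ndarray, float]:
    """Per-vector symmetric int8 quantization; returns codes and scale."""
    scale = np.abs(v).max() / 127.0
    return np.round(v / scale).astype(np.int8), scale

x = rng.normal(size=d)      # stand-in for a key/value vector
x_rot = Q @ x               # one matmul per vector before quantizing
codes, scale = quantize_int8(x_rot)

# Dequantize and undo the rotation to check the reconstruction error
x_hat = Q.T @ (codes.astype(np.float32) * scale)
print(np.abs(x - x_hat).max())
```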

pleshkov, last Tuesday at 11:32 AM

Author here — questions and pushback both welcome.