Hacker News

beautifulfreak · today at 4:11 AM · 3 replies

Language Models are Injective and Hence Invertible https://arxiv.org/abs/2510.15511


Replies

elmomle · today at 5:01 AM

That paper is about retrieving the input (prompt from user) based on the hidden-layer activations of a trained LLM, since their mappings are 1-to-1. I don't think it makes any claims about training data, certainly not about being able to retrieve it losslessly from a model.
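To make the distinction concrete: the inversion the paper describes only needs the prompt-to-activation map to be 1-to-1, so for a toy model you can recover the prompt by searching over inputs until the activations match. A minimal sketch (random weights, tiny vocab, brute-force search — all my assumptions, not the paper's actual algorithm):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM, SEQ_LEN = 8, 16, 3
embed = rng.normal(size=(VOCAB, DIM))        # stand-in "trained" embedding table
mix = rng.normal(size=(SEQ_LEN * DIM, DIM))  # stand-in mixing layer

def hidden(tokens):
    """Map a token sequence to a hidden activation (tanh of a linear mix)."""
    x = np.concatenate([embed[t] for t in tokens])
    return np.tanh(x @ mix)

def invert(h):
    """Brute-force the preimage of a hidden activation over all prompts."""
    for cand in itertools.product(range(VOCAB), repeat=SEQ_LEN):
        if np.allclose(hidden(cand), h):
            return cand
    return None

prompt = (3, 1, 4)
print(invert(hidden(prompt)))  # (3, 1, 4): the prompt is recovered exactly
```

Note this recovers the *input*, and nothing here says anything about recovering training data from the weights.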

pfortuny · today at 10:25 AM

The set of non-invertible inputs is of measure 0 (that is the claim). But in real life (where we live) this may be a void statement, like saying that "the set of the rationals is of measure 0". Right, that is true. It is also useless.
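The tension can be shown numerically. Over the reals, a random linear map almost surely merges no two of finitely many inputs (the "bad" weight sets have measure zero), but computers work on a discrete grid, where collisions reappear as soon as precision drops. A toy sketch with a random map of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Measure-zero claim in action: 10,000 random inputs through a random
# linear map R^4 -> R^64 produce no collisions.
W = rng.normal(size=(4, 64))
xs = rng.normal(size=(10_000, 4))
ys = xs @ W
print(len(np.unique(ys.round(6), axis=0)))  # 10000: all outputs distinct

# But quantize coarsely (one coordinate, integer grid) and the same
# outputs collide en masse -- the discrete world ignores measure zero.
quantized = ys[:, 0].round(0)
print(len(np.unique(quantized)) < 10_000)  # True: many inputs now collide
```

So the theorem is true in exact arithmetic while saying little about what happens after rounding, which is pfortuny's point.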

js8 · today at 9:42 AM

I don't believe they are injective, but if they are, they are not capable of (correct) thought.

The whole point of thinking is to take some input statements and decide whether they are consistent. Or, project them onto a close but consistent set of statements. (Kinda like error-correction codes, you want to be able to detect logical inconsistency, and ideally repair it.)

But that implies the set of consistent statements is a proper subset of all statements, so a map that projects onto it cannot be injective.