
spwa4 | last Sunday at 11:02 AM

There are two things called "neuralese":

1) Internally, in latent space, LLMs use what is effectively a language, but with all the words written on top of each other instead of one after another. If you decode it as letters, it sounds like gibberish, even though it isn't; it's just a much denser language than any human language. This makes the intermediate states unreadable ... and thus "hides the intentions of the LLM", if you want to make it sound dramatic and evil. But yeah, we don't know what the intermediate thoughts of an LLM sound like.

The decoded version is often referred to as "neuralese".
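
A minimal toy sketch of why that is, in Python/numpy. Everything here is invented for illustration (an 8-dimensional space and a mini-vocabulary; real models have thousands of dimensions and ~100k tokens), but it shows how a vector with several word meanings superposed on it decodes as word salad:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran", "far"]
    E = rng.normal(size=(len(vocab), 8))  # toy token embedding matrix

    # A "thought" in latent space: several word meanings added on top
    # of each other rather than written out one after another.
    thought = (E[vocab.index("cat")]
               + E[vocab.index("sat")]
               + E[vocab.index("mat")])

    # Naive decoding: rank every token by cosine similarity to the blend.
    sims = E @ thought / (np.linalg.norm(E, axis=1) * np.linalg.norm(thought))
    for i in np.argsort(-sims):
        print(f"{vocab[i]:>4}  {sims[i]:+.2f}")
    # No single token matches well, and reading the ranked list in order
    # is gibberish -- the information is all there, just superposed.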

2) If two LLMs with sufficiently similar latent spaces (e.g. two instances of the same model) communicate with each other, it has often been observed that they switch to "gibberish", BUT when tested they are clearly still passing meaningful information to one another. One assumes they are using tokens more efficiently to steer the latent-space representation to a specific point, rather than bothering with words. Think of it like this: a thought of an LLM is a point in a high-dimensional space (in reality ~2000 dimensions, but picture 3D). Every token/letter is a vector in that space (meaning you add them), chosen so that words add up to the thought that is their meaning. But when outputting text, why bother with words? You can reach any thought/meaning by combining vectors; just keep picking the token that moves you the most in the right direction. Much faster.
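
A rough sketch of that greedy idea (it's essentially matching pursuit; the sizes and embedding matrix are again invented for illustration, and real sender/receiver models would share learned embeddings, not random ones):

    import numpy as np

    rng = np.random.default_rng(1)
    dim, vocab_size = 16, 50
    E = rng.normal(size=(vocab_size, dim))         # toy token embeddings
    E /= np.linalg.norm(E, axis=1, keepdims=True)  # unit-length rows

    target = rng.normal(size=dim)  # the "thought" to transmit
    residual = target.copy()
    picked = []
    for _ in range(8):
        scores = E @ residual          # projection onto what's left
        best = int(np.argmax(scores))  # token moving most the right way
        picked.append(best)
        residual -= scores[best] * E[best]

    print("tokens chosen:", picked)
    print("fraction of the thought captured:",
          round(1 - np.linalg.norm(residual) / np.linalg.norm(target), 3))
    # The chosen token sequence is gibberish as text, but a receiver with
    # the same embedding matrix can add the vectors back up and recover
    # most of the original thought.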

Btw: some humans do this too. Related toddlers or young children sometimes switch to talking gibberish with each other while clearly still communicating. This is especially often observed in children who learn language together from the start (twins being the classic example). Might be the same thing.

These languages are called "neuralese".

