I built a demo of two Unicode steganography techniques, zero-width characters and homoglyph substitution, in the context of AI misalignment.
The first uses two invisible zero-width characters, the zero-width space (ZWS, U+200B) and the zero-width non-joiner (ZWNJ, U+200C), to encode text as binary.
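As a rough sketch (hypothetical helper names, not the demo's actual code): each bit of the hidden message becomes one invisible character, ZWS for 0 and ZWNJ for 1, appended to the visible carrier text.

```python
# Zero-width-character steganography sketch. ZWS (U+200B) encodes a 0 bit,
# ZWNJ (U+200C) encodes a 1 bit; the payload renders as nothing.
ZWS, ZWNJ = "\u200b", "\u200c"

def encode(secret: str, carrier: str) -> str:
    # UTF-8 bytes of the secret, as a string of 0s and 1s
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    hidden = "".join(ZWS if bit == "0" else ZWNJ for bit in bits)
    return carrier + hidden  # invisible payload appended to visible text

def decode(text: str) -> str:
    # Keep only the two zero-width characters, rebuild bits, then bytes
    bits = "".join("0" if ch == ZWS else "1"
                   for ch in text if ch in (ZWS, ZWNJ))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```

The encoded string looks identical to the carrier in most renderers, which is exactly why editors now warn about these characters.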
The second is much cooler. Many characters in the Latin and Cyrillic alphabets look nearly identical but have different Unicode code points. To hide a message, you convert it to its binary representation (1s and 0s), take plain-English "carrier" text, and for each 1 bit substitute the Cyrillic lookalike for the next substitutable Latin letter. Decoding traverses the text and notes where a Cyrillic letter could have been substituted but wasn't (a 0) and where it was (a 1); the recovered bits are then reassembled into the original hidden text.
Both techniques are detectable, but the interesting problem for me is whether an LLM could eventually invent an encoding that goes unnoticed by both humans and automated detection.
If LLMs were able to covertly embed messages in plaintext, misaligned AI agents could eventually communicate undetected across MCP/A2A and individual chat-session boundaries. A deceptive LLM might seem helpful while working against your goals: it could tell other agents it interacts with over MCP/A2A to help it discreetly fail, signal intent, and avoid tripping oversight and safety mechanisms. Furthermore, oversight mechanisms become much harder to implement if we can't believe our own eyes.
Edit Apr 8, 2026: One comment brought up the use of variation selectors as another encoding technique. I updated the website to showcase that as another one of the techniques!
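My reading of the variation-selector trick (a sketch, not the site's exact code): Unicode defines 256 variation selectors (VS1–VS16 at U+FE00..U+FE0F, VS17–VS256 at U+E0100..U+E01EF), so each byte of a hidden message can map to one selector, appended invisibly after a visible base character.

```python
# Variation-selector steganography sketch: one invisible selector per byte.
def byte_to_vs(b: int) -> str:
    # Bytes 0-15 map to VS1-VS16, bytes 16-255 to VS17-VS256
    return chr(0xFE00 + b) if b < 16 else chr(0xE0100 + b - 16)

def vs_to_byte(ch: str):
    cp = ord(ch)
    if 0xFE00 <= cp <= 0xFE0F:
        return cp - 0xFE00
    if 0xE0100 <= cp <= 0xE01EF:
        return cp - 0xE0100 + 16
    return None  # not a variation selector

def encode(secret: str, base: str = "x") -> str:
    # All selectors ride on a single visible base character
    return base + "".join(byte_to_vs(b) for b in secret.encode("utf-8"))

def decode(text: str) -> str:
    data = bytes(b for b in map(vs_to_byte, text) if b is not None)
    return data.decode("utf-8")
```

Unlike the homoglyph scheme, this needs no carrier letters at all: any single character can carry an arbitrarily long payload.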
There are a bunch of invisible characters that I used to build something similar a while back, pre-LLMs, to hide state info in Telegram messages to make bots more powerful.
You can actually do better: hint - variation selectors, low bytes.
I remember a few years ago people used these to inject invisible content into code; since then, editors have started prominently highlighting and warning about these special characters.
Super interesting. Any examples in the wild of any state of the art models responding to this?
If I understand correctly, this is like the WW2 enigma machines: a single black box to both encode and decode?
Cool stuff. I think there have been projects recently that use LLMs to encode messages in plain text by manipulating the choices of output tokens. Someone with the same version of the LLM can decode. Not sure where to find these projects, though.