Would this enable a model to learn concepts in one language and generate answers about them in another, as long as it learns general translations between the two?
My educated guess:
Not more than any other LLM.
The text-latent encoder and latent-text decoder just find a more efficient representation of the tokens; it's more of a compression than a transformation of words/sentences into abstract concepts.
Residuals of the input language will remain in there.
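To make the compression point concrete, here is a minimal sketch (PyTorch) of the kind of setup I mean: a text-to-latent encoder squeezes a span of token embeddings into a few latent vectors, and a latent-to-text decoder reconstructs the tokens from them. All names, dimensions, and the attention-based pooling here are hypothetical illustration on my part, not the actual model's architecture.

```python
import torch
import torch.nn as nn

class TextLatentAutoencoder(nn.Module):
    """Hypothetical sketch: compress tokens into a few latents, then reconstruct."""
    def __init__(self, vocab_size=32000, d_model=512, n_latents=8, seq_len=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Encoder: a small set of learned latent queries cross-attends to the tokens,
        # i.e. a lossy compression of the token sequence.
        self.latent_queries = nn.Parameter(torch.randn(n_latents, d_model))
        self.enc_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        # Decoder: per-position queries cross-attend back to the latents
        # and predict tokens, reconstructing the surface form.
        self.pos_queries = nn.Parameter(torch.randn(seq_len, d_model))
        self.dec_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.embed(token_ids)                       # (batch, seq_len, d_model)
        q = self.latent_queries.expand(x.size(0), -1, -1)
        latents, _ = self.enc_attn(q, x, x)             # (batch, n_latents, d_model)
        p = self.pos_queries.expand(x.size(0), -1, -1)
        decoded, _ = self.dec_attn(p, latents, latents)  # (batch, seq_len, d_model)
        return self.lm_head(decoded)                     # (batch, seq_len, vocab_size)

model = TextLatentAutoencoder()
logits = model(torch.randint(0, 32000, (2, 64)))
print(logits.shape)  # torch.Size([2, 64, 32000])
```

The training signal in a setup like this is reconstruction of the same tokens, so nothing forces the latents to become language-neutral concepts; they can (and likely will) keep traces of the input language's vocabulary and word order.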