If how we humans reason about things is any clue, language is not the right tool for reasoning.
There is now research on Large Concept Models to tackle this, but I'm not literate enough to understand what that actually means...
Is that just doing the test-time compute (TTC) in latent space, without lossily resolving from embeddings back to English at each step?
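Roughly what I mean, as a toy sketch (`TinyLM` below is a made-up stand-in, not the actual LCM architecture): the first loop collapses the hidden state to a discrete token at every step, the second keeps everything in latent space and only decodes once at the end.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 100, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)    # token id -> hidden vector
        self.step = nn.GRUCell(DIM, DIM)         # one "reasoning" step
        self.to_tokens = nn.Linear(DIM, VOCAB)   # hidden vector -> vocab logits

    def forward(self, h, x):
        return self.step(x, h)

model = TinyLM()

# (a) Token-level chain of thought: every step is forced through a discrete
#     token, so the hidden state gets collapsed to an argmax and re-embedded
#     (the "lossy resolving to English" I'm asking about).
h = torch.zeros(1, DIM)
x = model.embed(torch.tensor([0]))
for _ in range(5):
    h = model(h, x)
    token = model.to_tokens(h).argmax(dim=-1)   # lossy: picks one word
    x = model.embed(token)                      # continue from that word only

# (b) Latent-space reasoning: feed the continuous hidden state straight back
#     in and only decode at the very end, so nothing is thrown away between
#     steps.
h = torch.zeros(1, DIM)
x = model.embed(torch.tensor([0]))
for _ in range(5):
    h = model(h, x)
    x = h                                       # stay in embedding space
final_token = model.to_tokens(h).argmax(dim=-1) # decode only at the end
```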