Do you have any evals on how good LLMs are at generating Glyphlang?
I’m curious whether you optimized for the ability to generate functioning code or just for tokenization compression rate, which LLMs’ tokenizers you targeted, and what your optimization process looked like.