That's an awesome tool! I think textclip.sh solves a different problem though (correct me if I'm wrong - this is the first time I've come across it). Compression at the URL/transport layer helps with sharing prompts, but the token count still hits you once the text is decompressed and fed into the model. The LLM sees the full uncompressed text.
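Rough sketch of what I mean - zlib is just a stand-in here, since I don't know what textclip.sh actually uses under the hood:

```python
import zlib
import tiktoken

# Stand-in for transport compression: shrink for the URL, expand before use.
prompt = "Create a GET endpoint at /users/:id that queries the user and returns it."

enc = tiktoken.get_encoding("cl100k_base")
compressed = zlib.compress(prompt.encode("utf-8"))  # small payload on the wire

# The model never sees the compressed bytes; it sees the decompressed text.
restored = zlib.decompress(compressed).decode("utf-8")
assert restored == prompt

print(len(compressed), "bytes on the wire")
print(len(enc.encode(restored)), "tokens the model actually pays for")
```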
The approach with GlyphLang is to make the source code itself token-efficient. When an LLM reads something like `@ GET /users/:id { $ user = query(...) > user }`, that's what gets tokenized (not a decompressed version). The token savings persist in the context window for the entire session.
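As a rough illustration (the verbose equivalent below is just something I made up for comparison, and exact counts depend on the tokenizer):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

glyph = "@ GET /users/:id { $ user = query(...) > user }"

# A made-up verbose equivalent of the same route, purely for comparison.
verbose = """app.get("/users/:id", async (req, res) => {
    const user = await db.query("SELECT * FROM users WHERE id = $1", [req.params.id]);
    res.json(user);
});"""

print("glyph:  ", len(enc.encode(glyph)), "tokens")
print("verbose:", len(enc.encode(verbose)), "tokens")
```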
That said, I don't think they're mutually exclusive. You could use textclip.sh to share GlyphLang snippets and get both benefits.