Hacker News

Which programming languages are most token-efficient?

94 points by tehnub today at 1:36 AM | 68 comments

Comments

thw_9a83c today at 10:39 AM

There is one class of languages missing from the comparison: code golf languages, e.g. Japt [1], Pyth [2], or Jelly [3].

Update: I noticed that the author mentions that "APL's famous terseness isn't a plus for LLMs." Isn't that just a design limitation of the LLM tokenizers?

[1]: https://github.com/ETHproductions/japt

[2]: https://github.com/isaacg1/pyth

[3]: https://github.com/DennisMitchell/jellylanguage

solomonb today at 4:09 AM

I'm biased by my preferred style of programming languages, but I think that pure, statically typed functional languages are incredibly well suited for LLMs. The purity gives you referential transparency and static-analysis powers that the LLM can leverage to stay correct and on task.

The high level declarative nature and type driven development style of languages like Haskell also make it really easy for an experienced developer to review and validate the output of the LLM.

Early on in the GPT era I had really bad experiences generating Haskell code with LLMs but I think that the combination of improved models, increased context size, and agentic tooling has allowed LLMs to really take advantage of functional programming.

show 2 replies
jwr today at 9:07 AM

I program mostly in Clojure and I expected it to be near the top, as it tends to be very concise and expressive (qualities I really admire). I am getting excellent results from Claude Code (Opus 4.5), and I think this might be one of the reasons. I'm using Claude with a large code base and the token-efficiency of Clojure might help with fitting more into the context window.

bicx today at 2:36 AM

Realistically, it’s also a function of how many iterations it takes for an AI agent to correctly solve a problem in a given language. I’d imagine most AI agents would frequently have to redo J or F# code, as they are fairly uncommon languages with a much smaller training set than JavaScript or Python.

show 1 reply
HtmlProgrammer today at 9:02 AM

> I then told Claude Code to suggest a selection of the most popular programming languages

If you’re going to write an article, at least do the base research yourself, man.

torginus today at 9:44 AM

This confirms my personal experience with switching to Go from C#: despite the framework and language being MUCH simpler, the code usually ends up the same length.

C# often has a 'nice' way and a 'performant' way of doing things (for example, strings are nice, but they allocate and are UTF-16, whereas ReadOnlySpan<byte> is faster for UTF-8 and can reuse buffers). The performant syntax often ends up being very verbose, and the nice syntax is barely shorter than Go's. Go also does the right thing by default: its strings are basically array slices into UTF-8 byte arrays.

show 1 reply
janalsncm today at 4:03 AM

This is mostly just a measurement of how well represented a language is in the tokenizer's training distribution. You could have a single token equal to “public static void main”.

show 4 replies
johnisgood today at 8:46 AM

Concatenative languages like Factor and Forth are very token-efficient in theory, close to optimal for raw lexical density: no parentheses, no commas, no argument delimiters, just whitespace-separated words. But stack shuffling can add overhead for complex data flow, unless you use "locals" in Factor, for example.

C is surprisingly efficient as well. Minimal keywords, terse syntax, single-character operators. Not much boilerplate, and the core logic is dense.

I think the worst languages are Java, C#, and Rust (lifetime annotations, verbose generics).

In my opinion, C or Go for imperative code, Factor / Forth if the model knows them well.

show 2 replies
protocolture today at 4:02 AM

I have always had concerns about physical robots making my work less safe in the real world.

But I had never considered that a programming language might be created that's less human-readable/auditable in order to enable LLMs.

Scares me a bit.

show 1 reply
kozika today at 6:33 AM

Someone has made a programming language called Sui, which is said to be designed for LLMs. However, using index-based variable names in order to "avoid typo bugs" makes it harder to use than general-purpose languages, and it also has poor token efficiency :(

https://github.com/TakatoHonda/sui-lang

aleph_minus_one today at 9:00 AM

Relevant:

On https://danuker.go.ro/programming-languages.html you can find charts of popularity (TIOBE) vs. code density for various programming languages, together with which languages are Pareto-optimal with respect to these two criteria.

112233 today at 6:44 AM

If the language supports comments and the LLM is allowed to write them (or docstrings, or anything of the sort), there go your tokens.

Plus, they will strongly "pull" the context when the LLM parses them back, to the point of overriding your instructions (true story).

verdverm today at 6:14 AM

Token efficiency is only one metric. Simplicity of syntax and semantics is another valuable one.

Re: tokens and session length, there are other ways to manage this than language choice. Summarization is one; something I do is to not put read_file content in the messages, but rather in the system prompt. This means that when the model tries to reread a file after an edit, we don't have two copies of the file in context.

Going to 10M-token sessions, keeping per-turn context under 100k, working on Golang... token efficiency does not seem like a good basis for choosing a language.

gpm today at 3:07 AM

It strikes me that more tokens likely give the LLM more time/space to "think". Also, more redundant tokens, like local type declarations instead of type inference from far away, often reduce the portion of the code that LLMs (and humans) have to read.

So I'm not convinced this is either the right metric, or even if you got the right metric that it's a metric you want to minimize.

show 2 replies
efitz today at 3:32 AM

This is interesting research; thank you for doing it.

I am not sure token efficiency is an interesting problem in the long term, though.

And in the short term I wonder if prompts could be pre-compiled to “compressed tokens”; the idea would be to use a smaller number of tokens to represent a frequently needed concept; kind of like LZ compression. Or maybe token compression becomes a feature of future models optimized for specific tasks.

I was wondering last year if it would be worthwhile to try to create a language that was especially LLM-friendly, e.g. one that embedded more context in the language structure. The idea is to make more of the program, and the thinking behind it, explicit to the LLM, but in a programming-language style to eliminate the ambiguity of natural language (one could just use comments).

Then it occurred to me that with current LLM training methodology there’s a chicken-and-egg problem: it doesn’t start to show rewards until there is a critical mass of good code in the language for LLMs to train on.

btbytes today at 2:13 AM

Not surprisingly, it is J [1], an APL dialect.

[1] https://www.jsoftware.com/

show 1 reply
tzahifadida today at 6:43 AM

An agent can write summaries to Markdown files while processing, then use those to break the problem into several issues and tackle them one by one, even automatically, but more usually interactively. The problem now is the technique, not the LLM. Yes, it costs a lot (lot) more. But it can do it, and people's work costs way more than tokens.

bri-holt today at 2:46 AM

I suspect DB queries will also benefit from token-efficient query languages as RAG queries grow exponentially. I've been working on one such language, which is emitted as a token-efficient IR and compiles to SQL. https://memelang.net/

Surac today at 8:24 AM

Interesting project. Do you mind explaining what brought you to do this research? I'm a little surprised that the simpler languages tend to use more tokens, but after thinking about it I realized that languages with more expressive syntax let you write with fewer "words". But I also think it is a bit like a race between watches: who really wants to know which watch runs faster?

didip today at 3:17 AM

Does it account for runtime bugs that cause prompts to be rerun?

Because that’s what happened in the real world when generating a bunch of untyped Python code.

HarHarVeryFunny today at 2:34 AM

I don't think context size is really the limit for larger codebases - it's more about how you use that context.

Claude Code makes some efforts to reduce context size, but at the end of the day it is loading entire source files into context (then keeping them there until told to remove them, or until context is compacted). One of the major wins is to run subagents for some tasks, which use their own context rather than loading more into CC's own context.

Cursor makes more efficient use of context by building a vector database of code chunks, then only loading matching chunks into context (I believe it does this for Composer/agentic use as well as for tab/autocomplete).

One of the more obvious ways to reduce context use in a larger multi-module codebase would be to take advantage of the split between small module definition (e.g. C++ .h files) and large module implementations (.cpp files). Generally you'd only need to load module interfaces/definitions into context if you are working on code that uses the module, and Cursor's chunked approach can reduce that further.

For a whole-codebase overview, a language server can help locate things, and one could use the AI itself to generate shortish summaries/overviews of the source files and the codebase structure, similar to what a human developer might keep in their head, rather than repeatedly reading entire source files for code that isn't actually being modified.

It seems we're really in the early days of agentic coding tools, and they have a lot of room to get better and more efficient.

show 1 reply
switchbak today at 2:46 AM

I would expect that we’ll end up compressing (or whatever term you would use) this at some point, so many of those syntactic differences will not be as significant.

But I would love for more expressive and compact languages to do better, selfish as I am. I think training-data size is more of a factor, though, and we won’t all be moving to Clojure any time soon.

nineteen999 today at 7:42 AM

I've been doing plenty of Z80/x86_64 assembly with it, as well as a little 6502.

Those are pretty terse.

epolanski today at 2:34 AM

I doubt this is a meaningful metric for anything but code exploration in a larger codebase.

E.g. when it comes to authoring code, C is by far one of the languages that LLMs excel most at.

show 1 reply
nige123 today at 9:31 AM

Raku

andersmurphy today at 7:19 AM

Forth

TZubiri today at 4:17 AM

05a1be

show 1 reply
awesome_dude today at 4:55 AM

I'm finding that I have to share more and more code to ensure that various standards are being kept.

For example, I shared some Model code with Claude and Gemini (both via web interfaces) and they both tried to put Controller code into the Model, despite me telling them multiple times that the code wasn't wanted or needed in there.

I had to (eventually) share the entire project with the models (despite them having been working with the code all along) before they would comply with my request (whilst also congratulating me on my far superior architecture..)

That costs more tokens for each problem than just saying "here, look at this section and work toward this goal".

show 1 reply
yeasku today at 2:47 AM

[dead]

lngnmn2 today at 2:35 AM

[dead]