Hacker News

danpalmer yesterday at 11:27 PM

> The empirical literature shows that models are particularly vulnerable to naming-related errors like choosing misleading names, reusing names incorrectly, and losing track of which name refers to which value.

I think Vera might be missing something here. In my experience, LLMs code better the less of a mental model you need, i.e. the more that is explicit in the text on the page.

Go – very little hidden, everything in text on the page, LLMs are great. Java, similar. But writing Haskell, it's pretty bad, Erlang, not wonderful. You need much more of a mental model for those languages.

For Vera, not having names removes key information that the model would otherwise have, and replaces it with mental modelling of the stack of arguments.
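
To make that concrete, here's a rough sketch in Haskell (not actual Vera syntax, just an analogy): the named version labels every intermediate value on the page, while the nameless version forces the reader, human or model, to keep track of what is flowing through each stage.

    import Data.Char (toLower)
    import Data.List (group, sort)

    -- Named version: every intermediate value is spelled out in the text.
    wordCountsNamed :: String -> [(String, Int)]
    wordCountsNamed text =
      let lowered = map toLower text
          ws      = words lowered
          grouped = group (sort ws)
      in  map (\g -> (head g, length g)) grouped

    -- Nameless version: the same computation, but nothing is labelled, so
    -- you have to simulate the pipeline in your head to know what each
    -- stage is operating on.
    wordCountsNameless :: String -> [(String, Int)]
    wordCountsNameless =
      map ((,) <$> head <*> length) . group . sort . words . map toLower

The point is only that the second style shifts work from the page into the reader's (or model's) working memory, which is exactly where the quoted naming errors live.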


Replies

drob518 today at 3:03 AM

My Spidey sense was tingling when I saw that, too. An additional issue is how humans are supposed to read the code at all so that they can provide help to the LLM if it’s off track. If the code is only usable by models, the models need to be good enough to deal with binary feedback (“Code doesn’t work.”). The human won’t be able to read the code and steer the model. Given the levels of steering required today, that makes me quite nervous.

mkl today at 8:14 AM

> Go – very little hidden, everything in text on the page, LLMs are great. Java, similar. But writing Haskell, it's pretty bad, Erlang, not wonderful. You need much more of a mental model for those languages.

I don't think that follows. It could just be that there is way more Go and Java code to train on than Haskell and Erlang. Haskell's terseness and symbol-named operators probably don't help either.

mannykannot today at 3:05 AM

This will serve as an interesting empirical test, then: will LLMs do better with Vera than with Go or other languages? The testing so far seems inconclusive (https://github.com/aallan/vera-bench), but the authors make this interesting observation:

"No LLM has ever been trained on Vera. There are no Vera examples on GitHub, no Stack Overflow answers, no tutorials — the language was created after these models' training cutoffs. Every token of Vera code in these results was written by a model that learned the language entirely from a single document (SKILL.md [https://veralang.dev/SKILL.md]) provided in the prompt at evaluation time."

If LLMs do much better with Vera (or something like it) than with traditional languages, we may be entering a time when most machine-written code will be difficult for humans to review - but maybe that ship has already sailed.

ecthiender today at 8:02 AM

Hmm, interesting. Are you speaking from experience with Haskell? I've been a Haskell developer since 2017, and have been using LLMs to write code (including Haskell) since 2024. In my experience, LLMs perform much better generating Haskell/Rust code than Python/JavaScript.

robviren today at 12:19 AM

I too have found the models do well with Go. That said, despite the backwards-compatibility guarantee, library API changes, shifts in what counts as "good" patterns, and new language additions do add some friction to the experience. It almost always works, but the code can be a bit inconsistent in how it comes out.

rapind yesterday at 11:47 PM

> But writing Haskell, it's pretty bad,

I’m surprised by this. Most likely significant whitespace is a big part of the problem (LLMs seem horrible at whitespace). Functional programming with types has been a win for me with Gleam.

classified today at 10:20 AM

If it's incomprehensible to humans, it must be perfect for LLMs. Never mind the training.

sornaensis yesterday at 11:52 PM

I'm curious what issues you had with Haskell? I have had the opposite experience and find them dreadful at Java et al.

Surely, denser languages should be better for LLMs?

Animats today at 2:59 AM

The same logic applies to comments. No comments are better than wrong comments.

boxed today at 6:47 AM

I've found Claude Code to be amazing at Elm, so your comment about Haskell seems strange to me.
