Hacker News

danpalmer · today at 4:46 AM · 2 replies

This rings true for me. LLMs in my experience are great at Go, a little less good at Java, and much less good at GCL (internal config language).

This is definitely partly training data, but if you give an LLM a simple language to use on the fly it can usually do ok. I think the real problem is complexity.

Go and Java require very little mental modelling of the problem; everything is written down on the page really quite clearly (more so with Go, but still with Java).

In GCL, however, the semantics are _weird_ and the scoping is unlike most languages, because it's designed for building DSLs. Writing content *in* a DSL requires little thought, but authoring a DSL requires a fair amount of mental modelling of the structure of the data, and that structure is not present on the page. I'd wager that Lisp is similar: more of a mental model is required.
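To make the "structure not present on the page" point concrete, here is a toy sketch in Python (not real GCL; the `resolve` function and field names are invented for illustration) of the late-binding style that template-based config languages use: a field's value depends on the scope it is finally instantiated in, not the scope it was written in, so you have to hold the eventual merge in your head.

```python
# Illustrative only: a toy config system with late-bound fields,
# similar in spirit to template-based config languages where a
# value resolves against the final, merged scope.

def resolve(template, overrides):
    """Merge overrides into a template, then evaluate callable
    fields against the *merged* scope (late binding)."""
    merged = {**template, **overrides}
    return {k: (v(merged) if callable(v) else v)
            for k, v in merged.items()}

# A base template: 'url' is late-bound, so it reads 'host' and
# 'port' from whatever scope it ends up in, not from 'base'.
base = {
    "host": "localhost",
    "port": 8080,
    "url": lambda scope: f"http://{scope['host']}:{scope['port']}",
}

# Overriding 'host' at instantiation time changes what 'url' sees.
prod = resolve(base, {"host": "prod.example.com"})
# prod["url"] -> "http://prod.example.com:8080"
```

Nothing on the `base` definition's "page" tells you what `url` will evaluate to; you have to model every site where the template is used, which is exactly the kind of non-local reasoning the comment is describing.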

The problem, of course, is that LLMs don't have a mental model, or at least what they do have is far from what humans have. This is very apparent with non-trivial code: non-CRUD, non-React, anything that requires thinking hard about problems more than it requires monkeys at typewriters.


Replies

miki123211 · today at 6:49 AM

I bet it would do much better at HCL (or Starlark, maybe even YAML; something it has seen plenty of examples of in the wild).

This is a weird moment in time where proprietary technology can hurt more than it helps, even if it's superior in principle to what's publicly available.

eldenring · today at 4:52 AM

How many docs do you put in the context? We maintain a lot of DSL code internally, and each file has a copy of the spec + guide as a comment at the top. It's about 50 LOC, and the relevant models are great at writing it.
