Hacker News

Kotlin creator's new language: a formal way to talk to LLMs instead of English

223 points by souvlakee today at 2:22 PM | 180 comments

Comments

lifis today at 3:38 PM

As far as I can tell it's not a new language, but rather an alternative workflow for LLM-based development along with a tool that implements it.

The idea, IIUC, seems to be that instead of directly telling an LLM agent how to change the code, you keep markdown "spec" files describing what the code does and then the "codespeak" tool runs a diff on the spec files and tells the agent to make those changes; then you check the code and commit both updated specs and code.

It has the advantage that the prompts are all saved along with the source rather than lost, and in a format that lets you also look at the whole current specification.
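A rough sketch of that spec-diff loop, with `ask_agent` as a made-up stand-in for the actual agent call (the real tool's interface isn't documented here):

```python
import difflib


def spec_diff(old_spec: str, new_spec: str) -> str:
    """Unified diff between the committed and edited versions of a spec."""
    return "".join(difflib.unified_diff(
        old_spec.splitlines(keepends=True),
        new_spec.splitlines(keepends=True),
        fromfile="spec.md (committed)",
        tofile="spec.md (edited)",
    ))


def apply_spec_change(old_spec: str, new_spec: str, ask_agent) -> str:
    """Feed only the spec delta to the agent, not the whole spec."""
    diff = spec_diff(old_spec, new_spec)
    if not diff:
        return "no spec changes"
    return ask_agent("Apply this spec change to the code:\n" + diff)
```

After the agent runs, both the updated spec and the generated code would be committed together, which is what preserves the prompts alongside the source.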

The limitation seems to be that you can't modify the code yourself if you want the spec to keep reflecting it (and you also can't do LLM-driven changes that refer to the actual code). More generally, there's no guarantee that the spec captures everything important about the program, so the code itself still potentially contains "source" information (for example, maybe you want the background of a GUI to be white, and it is white only because the LLM happened to choose that, but it's not written in the spec).

The latter can maybe be mitigated by doing multiple generations and checking them all, but that multiplies LLM and verification costs.

Also it seems that the tool severely limits the configurability of the agentic generation process, although that's just a limitation of the specific tool.

oofbaroomf today at 7:30 PM

Ugh, I just wish there was a deterministic and formal way to tell a computer what I want...

the_duke today at 3:20 PM

This doesn't make too much sense to me.

* This isn't a language, it's some tooling to map specs to code and re-generate

* Models aren't deterministic - every time you would try to re-apply you'd likely get different output (unless you feed the current code into the re-apply and let it just recommend changes)

* Models are evolving rapidly; this month's flavour of Codex/Sonnet/etc. would very likely generate different code from last month's

* Text specifications are always under-specified, lossy and tend to gloss over a huge amount of details that the code has to make concrete - this is fine in a small example, but in a larger code base?

* Every non-trivial codebase would be made up of hundreds of specs that interact and influence each other - very hard (and context-heavy) to read all specs that impact functionality and keep it coherent

I do think there are opportunities in this space, but what I'd like to see is:

* write text specifications

* model transforms text into a *formal* specification

* then the formal spec is translated into code which can be verified against the spec

Steps 2 and 3 could be merged into one if there were practical/popular languages that also support verification, in the vein of Ada/SPARK.

But you can also get there by generating tests from the formal specification that validate the implementation.
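One way to sketch that last step: express the spec's postconditions as predicates and run randomized checks derived from them against any candidate implementation. The `SPEC` shape below is invented purely for illustration:

```python
import random

# A toy "formal spec" for a sort routine: a precondition on inputs,
# and postconditions any conforming implementation must satisfy.
SPEC = {
    "pre": lambda xs: all(isinstance(x, int) for x in xs),
    "post": [
        lambda xs, ys: sorted(xs) == ys,    # correct ordering and multiset
        lambda xs, ys: len(xs) == len(ys),  # length preserved
    ],
}


def check_against_spec(impl, spec, trials=200, seed=0):
    """Run randomized tests derived from the spec against an implementation."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 10))]
        if not spec["pre"](xs):
            continue
        ys = impl(list(xs))
        if not all(post(xs, ys) for post in spec["post"]):
            return False
    return True
```

A real pipeline would generate these predicates from the model-produced formal spec rather than write them by hand, but the validation loop would look the same.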

photios today at 7:42 PM

> codespeak login

Instant tab close!

kleiba today at 3:18 PM

I cannot read light on black. I don't know, maybe it's a condition, or simply just part of getting old. But my eyes physically hurt, and when I look up from reading a light-on-black screen, even when I've looked at it for only a short moment, my eyes need seconds to adjust again.

I know dark mode is really popular with the youngens but I regularly have to reach for reader mode for dark web pages, or else I simply cannot stand reading the contents.

Unfortunately, this site does not have an obvious way of reading it black-on-white, short of looking at the HTML source (CTRL+U), which - in fact - I sometimes do.

niam today at 6:47 PM

The title writer might be doing the project a disservice by using the term "formal" to describe it, given that the project talks a lot about "specs". I mistook it to imply something about formal specification.

My quick understanding is that it isn't really trying to use any formal specification but is instead trying to more clearly map the relationship between, say, an individual human-language requirement you have of your application, and the code which implements that requirement.

le-mark today at 3:37 PM

This concept is assuming a formalized language would make things easier somehow for an LLM. That's making some big assumptions about the neuroanatomy of LLMs. This [1] from the other day suggests surprising things about how LLMs are internally structured; specifically, that encoding and decoding are distinct phases with other stuff in between, suggesting that language isn't that important once the model is trained.

[1] https://news.ycombinator.com/item?id=47322887

temp123789246 today at 6:58 PM

One requirement for a programming language to be “good” is that doing this, with sufficient specificity to get all the behavior you want, will be more verbose than the code itself.

tonipotato today at 3:17 PM

The problem with formal prompting languages is they assume the bottleneck is ambiguity in the prompt. In my experience building agents, the bottleneck is actually the model's context understanding. Same precise prompt, wildly different results depending on what else is in the context window. Formalizing the prompt doesn't help if the model builds the wrong internal representation of your codebase. That said, curious to see where this goes.

seanmcdirmid today at 5:01 PM

I've done something similar for queries. Comments:

* Yes, this is a language; no, it's not a programming language you are used to, but a restricted/embellished natural language that (might) make things easier to express to an LLM, and that provides a framework for humans who want to write specifications to get the AI to write code.

* Models aren't deterministic, but they are persistent (never gonna give up!). If you generate tests from your specification as well as code, you can use differential testing to get some measure (although not a perfect one) of correctness. Never delete previously generated code; if you change the spec, have your model fix the existing code rather than generate new code.

* Specifications can actually be analyzed by models to determine whether they are fully grounded. An ungrounded specification is not going to be a good experience, so ask the model if it thinks your specification is grounded.

* Use something like a build system if you have many specs in your code repository and you need to keep them in sync. Spec changes -> update the tests and code (for example).
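The differential-testing idea above can be sketched very simply: generate two implementations independently from the same spec, run them on random inputs, and treat any disagreement as a likely bug or spec ambiguity. A minimal harness, with the implementations and input generator passed in as plain callables:

```python
import random


def differential_test(impl_a, impl_b, gen_input, trials=100, seed=1):
    """Compare two independently generated implementations on random
    inputs; disagreements flag a likely bug (or an ambiguous spec)."""
    rng = random.Random(seed)
    disagreements = []
    for _ in range(trials):
        x = gen_input(rng)
        a, b = impl_a(x), impl_b(x)
        if a != b:
            disagreements.append((x, a, b))
    return disagreements
```

An empty result doesn't prove correctness (both generations could share a bug), which is the "not a perfect measure" caveat above.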

sornaensis today at 6:19 PM

This seems like a step backwards. Programming Languages for LLMs need a lot of built in guarantees and restrictions. Code should be dense. I don't really know what to make of this project. This looks like it would make everything way worse.

I've had good success getting LLMs to write complicated stuff in haskell, because at the end of the day I am less worried about a few errant LLM lines of code passing both the type checking and the test suite and causing damage.

It is both amazing and I guess also not surprising that most vibe coding is focused on Python and JavaScript, where my experience has been that the models need so much oversight and handholding that it makes them a simple liability.

The ideal programming language is one where a program is nothing but a set of concise, extremely precise, yet composable specifications that the _compiler_ turns into efficient machine code. I don't think English is that programming language.

BrianFHearn today at 6:08 PM

Interesting project, but I think it's solving the wrong bottleneck. The gap between what I want and what the model produces isn't primarily a language problem — it's a knowledge problem. You can write the most precise spec imaginable, but if the model doesn't have domain-specific knowledge about your product's edge cases, undocumented behaviors, or the tribal knowledge your team has accumulated, the output will be confidently wrong regardless of how formally you specified it.

I've been working on this from the other direction — instead of formalizing how you talk to the model, structure the knowledge the model has access to. When you actually measure what proportion of your domain knowledge frontier models can produce on their own (we call this the "esoteric knowledge ratio"), it's often only 40-55% for well-documented open source projects. For proprietary products it's even lower. No amount of spec formalism fixes that gap — you need to get the missing knowledge into context.

pshirshov today at 4:47 PM

From what I was able to understand during the interview there, it's not actually a language, more like an orchestrator + pinning of individual generated chunks.

The demo I've briefly seen was very very far from being impressive.

Got rejected, perhaps for some excessive scepticism/overly sharp questions.

My scepticism remains - so far it looks like an orchestrator to me and does not add enough formalism to actually call it a language.

I think that the idea of a more formal approach to assisted coding is viable (think: you define data structures and interfaces but don't write function bodies; they are generated, pinned, and covered by tests automatically, and LLMs can even write TLA+/formal proofs), but I'm kinda sceptical about this particular thing. I think it can be made viable, but I have a strong feeling that it won't be hard to reproduce - I was able to bake something similar in a day with Claude.

wuweiaxin today at 6:05 PM

The pattern we keep converging on is to treat model calls like a budgeted distributed system, not like a magical API. The expensive failures usually come from retries, fan-out, and verbose context growth rather than from a single bad prompt. Once we started logging token use per task step and putting hard ceilings on planner depth, costs became much more predictable.
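The "hard ceiling" part of that pattern can be as small as a budget object threaded through the agent loop; the class below is an illustrative sketch, not the commenter's actual system:

```python
class TokenBudget:
    """Hard ceiling on tokens spent across retries and fan-out, so a
    runaway agent step fails fast instead of silently burning money."""

    def __init__(self, limit: int):
        self.limit = limit
        self.spent = 0
        self.log = []  # (step, tokens) pairs for per-step accounting

    def charge(self, step: str, tokens: int) -> None:
        if self.spent + tokens > self.limit:
            raise RuntimeError(
                f"budget exceeded at step {step!r}: "
                f"{self.spent + tokens} > {self.limit}")
        self.spent += tokens
        self.log.append((step, tokens))
```

Logging per step (rather than just a running total) is what makes costs attributable to planner depth, retries, or context growth.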

alexc05 today at 3:40 PM

this is really exciting and dovetails really closely with the project I'm working on.

I'm writing a language spec for an LLM runner that has the ability to chain prompts and hooks into workflows.

https://github.com/AlexChesser/ail

I'm writing the tool as proof of the spec. Still very much a pre-alpha phase, but I do have a working POC in that I can specify a series of prompts in my YAML language and execute the chain of commands in a local agent.

One of the "key steps" that I plan on designing is specifically an invocation interceptor. My underlying theory is that we would take whatever random series of prose that our human minds come up with and pass it through a prompt refinement engine:

> Clean up the following prompt in order to convert the user's intent
> into a structured prompt optimized for working with an LLM.
> Be sure to follow appropriate modern standards based on current
> prompt engineering research. For example, limit the use of persona
> assignment in order to reduce hallucinations.
> If the user is asking for multiple actions, break the prompt
> into appropriate steps (etc...)

That interceptor would then forward the well structured intent-parsed prompt to the LLM. I could really see a step where we say "take the crap I just said and turn it into CodeSpeak"
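That two-pass interceptor could be sketched as a wrapper around whatever client the runner uses; `llm_call` here is a made-up stand-in, not part of any real API:

```python
def make_interceptor(llm_call, refinement_instructions: str):
    """Wrap an LLM client so every raw user prompt is first rewritten
    by a refinement pass, then the refined prompt is forwarded.
    `llm_call(prompt) -> str` is a placeholder for the real client."""

    def intercepted(raw_prompt: str) -> str:
        # Pass 1: rewrite the user's prose into a structured prompt.
        refined = llm_call(refinement_instructions + "\n\n" + raw_prompt)
        # Pass 2: forward the refined prompt to do the actual work.
        return llm_call(refined)

    return intercepted
```

The same shape would cover the "turn the crap I just said into CodeSpeak" step: the refinement instructions would just target CodeSpeak output instead of generic prompt cleanup.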

What a fantastic tool. I'll definitely do a deep dive into this.

ucyo today at 6:21 PM

Literally the first example on the main page declared as code.py would result in an indentation error :)

pcblues today at 6:33 PM

A formal way for a senior to tell AI (clueless junior) to do a senior's job? Once again, who checks and fixes the output code?

Of course an expert would throw it out and design/write it properly so they know it works.

sutterd today at 5:13 PM

I am trying a similar spec-driven development idea in a project I am working on. One big difference is that my specifications are not formalized that much. They are in plain language and are read directly by the LLM to convert to code. That seems like the kind of thing the LLM is good at. One other feature of this is that it allows me to nudge the implementation a little with text in the spec outside of the formal requirements. I view it two ways: as spec-to-code, but also as a saved prompt. I haven't spent enough time with it to say how successful it is, yet.

h4ch1 today at 3:07 PM

You can basically condense this entire "language" into a set of markdown rules and use it as a skill in your planning pipeline.

And whatever codespeak offers is like a weird VCS wrapper around this. I can already version and diff my skills and plans properly, and with that my LLM-generated features should be scoped properly and worked on in their own branches. This imo will just give rise to a reason for people to make huge 8k-10k line changes in a commit.

hmokiguess today at 5:52 PM

I'm gonna be honest here, I opened this website excited thinking this was a sort of new paradigm or programming language, and I ended up extremely confused about what this actually is, and I still don't understand.

Is it a code generator tool from specs? Ugh. Why not push for the development of the protocol itself then?

etothet today at 5:03 PM

Under "Prerequisites"[0] I see: "Get an Anthropic API key".

I presume this is temporary since the project is still in alpha, but I'm curious why this requires use of an API at all and what's special about it that it can't leverage injecting the prompt into a Claude Code or other LLM coding tool session.

[0]: https://codespeak.dev/blog/greenfield-project-tutorial-20260...

riantogo today at 6:28 PM

When we understand that AI allows the spec to be in English (or any natural language), we might stop attempting to build "structured english" for spec.

roxolotl today at 2:55 PM

This doesn't seem particularly formal. I still remain unconvinced that reducing formality is really going to be valuable. Code is obviously as formal as it gets, but as you trend away from that you quickly introduce problems that arise from the lack of formality. I could see a world in which we're all just writing tests in the form of something like Gherkin, though.

paxys today at 5:51 PM

I read through the thing and don't quite understand what this adds that the dozens of LLM coding wrappers don't already do.

You write a markdown spec.

The script takes it and feeds it to an LLM API.

The API generates code.

Okay? Where is this "next-generation programming language" they talk about?

b4rtaz__ today at 4:47 PM

A few days ago I released https://github.com/b4rtaz/incrmd , which is similar to Codespeak. The main difference is that the specification is defined at the *project* level. I'm not sure if having the specification at the *file* level is a good choice, because the file structure does not necessarily align with the class structure, etc.

uday_singlr today at 4:51 PM

We tend to obsess over abstractions, frameworks, and standards, which is a good thing. But we already have BDD and TDD, and now, with English as the new high-level programming language, it is easier than ever to build. Focusing on other critical problem spaces like context/memory is more useful at this point. If the whole purpose of this is token compression, I don't see myself using it.

mft_ today at 3:26 PM

Conceptually, this seems a good direction.

The other piece that has always struck me as a huge inefficiency with current usage of LLMs is the hoops they have to jump through to make sense of existing file formats - especially making sense of (or writing) complicated semi-proprietary formats like PDF, DOC(X), PPT(X), etc.

Long-term prediction: for text, we'll move away from these formats and towards alternatives that are designed to be optimal for LLMs to interact with. (This could look like variants of markdown or JSON, but could also be Base64 [0] or something we've not even imagined yet.)

[0] https://dnhkng.github.io/posts/rys/

ppqqrr today at 4:13 PM

i’ve been doing this for a while, you create an extra file for every code file, sketch the code as you currently understand it (mostly function signatures and comments to fill in details), ask the LLM to help identify discrepancies. i call it “overcoding”.

i guess you can build a cli toolchain for it, but as a technique it’s a bit early to crystallize into a product imo, i fully expect overcoding to be a standard technique in a few years, it’s the only way i’ve been able to keep up with AI-coded files longer than 1500 lines
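The "overcoding" technique above is simple enough to sketch: keep a sidecar sketch file next to each code file and build a discrepancy-checking prompt from the pair. The `<name>.sketch.md` naming convention here is invented for illustration:

```python
from pathlib import Path


def overcoding_prompt(code_file: str) -> str:
    """Pair a code file with its sidecar sketch (assumed, for this
    sketch, to live at <name>.sketch.md) and build a prompt asking the
    model to flag discrepancies between the two."""
    path = Path(code_file)
    code = path.read_text()
    sketch = path.with_suffix(".sketch.md").read_text()
    return ("Compare the sketch and the code; list any behavior in the "
            "code not reflected in the sketch, and vice versa.\n\n"
            "SKETCH:\n" + sketch + "\n\nCODE:\n" + code)
```

The sketch file stays small (signatures and comments), which is what keeps the technique workable for files past the point where reading the full code is practical.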

xvedejas today at 3:19 PM

We already have a language for talking to LLMs: Polish

https://www.zmescience.com/science/news-science/polish-effec...

gritzko today at 2:36 PM

So is it basically Markdown? The landing page does not articulate, unfortunately, what the key contribution is.

montjoy today at 4:20 PM

So, instead of making LLMs smarter let’s make everything abstract again? Because everyone wants to learn another tool? Or is this supposed to be something I tell Claude, “Hey make some code to make some code!” I’m struggling to see the benefit of this vs. just telling Claude to save its plan for re-use.

herrington_d today at 4:28 PM

Isn't the case study... too contrived and trivial? The largest code change is 800 lines, so it can readily fit in a model's context.

However, there is no case for more complicated, multi-file changes or architecture stuff.

WillAdams today at 4:14 PM

This raises a question --- how well do LLMs understand Loglan?

https://www.loglan.org/

Or Lojban?

https://mw.lojban.org/

good-idea today at 5:26 PM

"Shrink your codebase 5-10x"

"[1] When computing LOC, we strip blank lines and break long lines into many"

leksak today at 4:28 PM

I think I prefer Tracey https://github.com/bearcove/tracey

giantg2 today at 6:51 PM

This is basically what I talked about maybe a year ago. Glad to see someone is taking it on.

frizlab today at 5:04 PM

The next step will be to formalize all the instructions possible to give to a processor and use that language!

Cpoll today at 3:36 PM

> The spec is the source of truth

This feels wrong, as the spec doesn't consistently generate the same output.

But upon reflection, "source of truth" already refers to knowledge and intent, not machine code.

koolala today at 5:20 PM

Looks like JSON like YAML. It is still English. Was hoping for something like Lojban.

semessier today at 5:32 PM

It's not a new question whether as-is programming languages are optimal for LLMs: a language for LLM use would have to be strongly typed. But that's about it for obvious requirements.

ljlolel today at 2:53 PM

Getting so close to the idea. We will only have Englishscripts and don't need code anymore. No compiling. No vibe coding. No coding. https://jperla.com/blog/claude-electron-not-claudevm

cesarvarela today at 2:55 PM

Instead of using tabs, it would be much better to show the comparison side by side.

Also, the examples feel forced: if you use external libraries, you don't have to write your own "Decode RFC 2047".

amelius today at 3:14 PM

I want to see an LLM combined with correctness preserving transforms.

So for example, if you refactor a program, make the LLM do anything but keep the logic of the program intact.
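A minimal regression harness in that spirit: snapshot the function's outputs on a corpus of inputs before the LLM refactors it, then assert the refactored version reproduces them. This is an illustrative sketch, not a real correctness-preserving transform (it only checks behavior on the recorded inputs):

```python
def snapshot(fn, inputs):
    """Record outputs before letting an LLM refactor `fn`."""
    return [fn(x) for x in inputs]


def refactor_preserved(old_outputs, new_fn, inputs):
    """Check the refactored function reproduces the recorded behavior."""
    return all(new_fn(x) == y for x, y in zip(inputs, old_outputs))
```

A true correctness-preserving guarantee would need verified transforms or proofs, which is exactly the gap the comment is pointing at.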

fallkp today at 3:30 PM

"Coming soon: Turning Code into Specs"

There you have it: Code laundering as a service. I guess we have to avoid Kotlin, too.

CodeCompost today at 4:18 PM

Yes, I'm also one of those LLM skeptics, but actually this looks interesting.

oytis today at 3:30 PM

Then of course we are going to ask LLMs to generate specifications in this new language.
