>The approach was the same as Cloudflare’s vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.
The first question that comes to mind is: who takes care of this now?
You had a dependency on an open-source project. Now your translated copy (fork?) is yours to maintain, 13k lines of Go. How do you make sure it stays updated? Is this maintenance factored in?
I know nothing about JSONata or the problem it solves, but I took a look at the repo and there are 15 PRs and 150 open issues.
The docs indicate there are already 2 other go implementations. Why not just use one of those? https://docs.jsonata.org/overview.html
> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.
Maybe I’m out of touch, but I cannot fathom this level of cost for custom lambda functions operating on JSON objects.
Congrats! This author found a sub-optimal microservice and replaced it with inline code. This is the bread and butter work of good engineering. This is also part of the reason that microservices are dangerous.
The bad engineering part is writing your own replacement for something that already exists. As other commenters here have noted, there were already two separate implementations of JSONata in Go. Why spend $400 to have Claude rewrite something when you can just use an already existing, already supported library?
This isn’t the first time I’ve read a ridiculous story like this on Hacker News. It seems to be a symptom of startups that suddenly get a cash injection with no clue how to properly manage it. I have been slowly scaling a product over the past 12 years, on income alone, so I guess I see things differently, but I could never allow spend on something so trivial to reach even 1% of this level before squashing it.
I'm just kind of confused about what took them so long. It was costing $300K a year, plus causing deployment headaches, etc.
But it's a relatively simple tool from the looks of it. It seems like there are many competitors, some already written in Go.
It's kind of weird that they waited so long to do this. Why even need AI? This looks like the sort of thing you could port by hand in less than a week (possibly even in a day).
For context, JSONata's reference implementation is 5.5k lines of JavaScript.
If they were paying ~$300K/year, why hadn't they paid someone to rewrite it? Surely that would have been cheaper still.
But above everything else, this is a great example of how much JavaScript inefficiency actually costs us, as humanity. How many companies burn money like this?
The most interesting thing about AI rewrite stories isn't the time saved — it's the forcing function. Someone had to articulate what the system actually does clearly enough for an AI to replicate it. That clarity exercise alone often reveals the architectural problems that caused the cost bloat.
The ~$300K/year wasn't from JSONata being expensive. It was from an architecture that serialized, sent over the network, and deserialized for every expression evaluation. An engineer documenting that flow for any reason, AI rewrite or not, would likely have spotted the problem.
> The approach was the same as Cloudflare’s vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.
This makes me wonder, for reimplementation projects like this that aren't lucky enough to have super-extensive test suites, how good are LLMs at taking existing code bases and writing tests for every single piece of logic, every code path? So that you can then do a "cleanish-room" reimplementation in a different language (or even the same language) using those tests?
Obviously the easy part is getting the LLMs to write lots of tests, which is then trivial to iterate on until they all pass on the original code. The hard parts are verifying that the tests cover all possible code paths and edge cases, and reliably triggering certain internal code paths.
Next maybe they will use a binary format instead of JSON.
How many billions of compute are wasted because this industry can't align on some binary format across all languages and APIs, and instead keeps serializing and deserializing things?
So they used an AI trained on the original source code to "rewrite" the original source code.
Everyone is surprised at the $300k/year figure, but that seems on the low end. My previous work place spends tens of millions a year on GPU continuous integration tests.
If you can incorporate Quamina or similar logic in there, you might be able to save even more… worth looking into, at least
A principal engineer spending his weekend vibe coding some slop at a rate of 13k lines of code in 7h to replace a vendor. Is this really the new direction we want to set for our industry? For the first time ever, I have had a CTO vibe coding something to replace my product [1], even though it cost less than a day of his salary. The direction we are heading makes me want to quit; everything points to software now being worthless.
These "solutions" place a lot of faith in a "complete" set of test cases. I'm not saying don't do this, but I'd feel more comfortable doing this plus hand-generating a bunch of property tests. And then generating code until all pass. Even better, maybe Claude can generate some / most of the property tests by reading the standard test suite.
These articles remind me so much of those old internet debates about "teleportation" and consciousness.
Your physical form is destructively read into data, sent via radio signal, and reconstructed on the other end. Is it still you? Did you teleport, or did you die in the fancy paper shredder/fax machine?
If vibe code is never fully reviewed and edited, then is it not really "alive", just effectively zombie code?
Darn, I'd wished they improved one of the existing Go or Rust implementations.
> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC.
> This was costing us ~$300K/year in compute
Wooof. As soon as that kind of spend hit my radar for this sort of service I would have given my most autistic and senior engineer a private office and the sole task of eliminating this from the stack.
At any point did anyone step back and ask if jsonata was the right tool in the first place? I cannot make any judgements here without seeing real world examples of the rules themselves and the ways that they are leveraged. Is this policy language intentionally JSON for portability with other systems, or for editing by end users?
As long as you are using JSON, you will be able to optimize.
Did you know that you can pass numbers up to 2 billion in 4 constant bytes instead of as a string of up to 20 dynamic bytes? Also, fun fact, you can cut your packets roughly in half by not repeating the names of your variables in every packet; you can instead use a positional system where each field's position identifies it.
And you can do all of this with pre AI technology!
Neat trick huh?
The key point for me was not the rewrite in Go or even the use of AI, it was that they started with this architecture:
> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC. That meant that for every event (and expression) we had to serialize, send over the network, evaluate, serialize the result, and finally send it back.
> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.
For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year.
The fact that this only took $400 of Claude tokens to completely rewrite makes it even more baffling. I can make $400 of Claude tokens disappear quickly in a large codebase, so if they rewrote the entire thing for $400, it couldn't have been that big: well within the range of something engineers could have migrated by hand in a reasonable time. Those same engineers will now have to review, understand, and improve all of the AI-generated code, which will take time too.
I don't know what to think. These blog articles are supposed to be a showcase of engineering expertise, but bragging about having AI vibecode a replacement for a critical part of your system that was questionably designed and costing as much as a fully-loaded FTE per year raises a lot of other questions.