Hacker News

simonw · today at 12:30 AM · 10 replies

I think the most interesting thing about this is how it demonstrates that a very particular kind of project is now massively more feasible: library porting projects that can be executed against implementation-independent tests.

The big unlock here is https://github.com/html5lib/html5lib-tests - a collection of 9,000+ HTML5 parser tests in their own implementation-independent file format, e.g. this one: https://github.com/html5lib/html5lib-tests/blob/master/tree-...

The Servo html5ever Rust codebase uses them. Emil's JustHTML Python library used them too. Now my JavaScript version gets to tap into the same collection.

This meant that I could set a coding agent loose to crunch away on porting that Python code to JavaScript and have it keep going until that enormous existing test suite passed.
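For anyone curious what the harness for that loop looks like: the tree-construction tests are plain .dat files with #data / #errors / #document sections, so it can be tiny. Here's a minimal Python sketch (not the actual harness; parse_to_tree is a hypothetical stand-in for the ported parser, and the less common sections are ignored):

    from pathlib import Path

    def load_dat_tests(path):
        """Yield (input_html, expected_tree) pairs from one tree-construction .dat file."""
        tests, section, data, document = [], None, [], []
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            if line.startswith("#"):
                if line == "#data" and data:
                    # a new test begins; flush the previous one
                    tests.append(("\n".join(data), "\n".join(document)))
                    data, document = [], []
                section = line
            elif section == "#data":
                data.append(line)
            elif section == "#document" and line:
                document.append(line)
        if data:
            tests.append(("\n".join(data), "\n".join(document)))
        return tests

    def run_suite(dat_dir, parse_to_tree):
        """Score a port against the suite; the failure count is what the agent drives to zero."""
        passed = failed = 0
        for dat in sorted(Path(dat_dir).glob("*.dat")):
            for html, expected in load_dat_tests(dat):
                # parse_to_tree is assumed to serialise its output in the same "| "-indented format
                if parse_to_tree(html) == expected:
                    passed += 1
                else:
                    failed += 1
        return passed, failed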

Sadly conformance test suites like html5lib-tests aren't that common... but they do exist elsewhere. I think it would be interesting to collect as many of those as possible.


Replies

avsm · today at 11:04 AM

The html5lib conformance tests, when combined with the WHATWG specs, are even more powerful! I managed to build a typed version of this in OCaml in a few hours (https://anil.recoil.org/notes/aoah-2025-15) yesterday, but I also left an agent building a pure OCaml HTML5 _validator_ last night.

This run has (just in the last hour) combined the html5lib expect tests with https://github.com/validator/validator/tree/main/tests (which are a complex mix of Java code and RELAX NG schemas) in order to build a low-dependency pure OCaml HTML5 validator with types and modules.

This feels like formal verification in reverse: we're starting from a scattered set of facts (the expect tests) and iterating towards more structured specifications, using functional languages like OCaml/Haskell as convenient executable pitstops while driving towards proof reconstruction in something like Lean.

Havoc · today at 12:40 PM

Was struggling yesterday with porting something (Python -> Rust). The LLM couldn't figure out what was wrong with the Rust version no matter how I came at it (I even gave it Wireshark traces), and since it was vibecoded I had no idea either. Eventually I copied the Python source into the Rust project and asked it to compare... immediate success.

Turns out they're quite good at that sort of pattern matching across languages. Makes sense from a latent-space perspective, I guess.

gwking · today at 1:10 AM

I’ve idly wondered about this sort of thing quite a bit. The next step would seem to be taking a project’s implementation-dependent tests, converting them to an independent format and verifying them against the original project, then conducting the port.
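A rough sketch of that middle step, assuming you can enumerate the inputs the existing tests exercise (original_parse and the case list are hypothetical placeholders): the original implementation's observed behaviour gets frozen into a language-neutral fixture file that the port is then held to.

    import json

    def export_fixtures(cases, original_parse, out_path="fixtures.json"):
        """Freeze the original implementation's observed behaviour into a neutral fixture file."""
        fixtures = [{"input": case, "expected": original_parse(case)} for case in cases]
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(fixtures, f, indent=2, ensure_ascii=False)
        return fixtures

    def check_port(fixtures, ported_parse):
        """Return the fixtures the ported implementation gets wrong."""
        return [fx for fx in fixtures if ported_parse(fx["input"]) != fx["expected"]]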

pplonski86 · today at 10:58 AM

This is amazing. Porting a library from one language to another is easy for LLMs: they are tireless and know coding syntax very well.

What I like in machine learning benchmarks is that agents develop and test many solutions, and this search process is very human-like. Yesterday I was looking into MLE-Bench for benchmarking coding agents on machine learning tasks from Kaggle: https://github.com/openai/mle-bench There are many projects providing agents whose performance is simply incredible; they can solve several Kaggle competitions in under 24 hours and finish in a medal position. I think this is already above human level.

I was reading the ML-Master article, and they describe AI4AI, where AI is used to create AI systems: https://arxiv.org/abs/2506.16499

exclipy · today at 12:53 PM

Can you port tsc to Go in a few hours?

bzmrgonz · today at 11:15 AM

I see it as a learning or training tool for AI, the same way we use mock exams/tests to verify our skill and knowledge absorption and prepare for the real thing or a career. This could be one of many obstacles in an obstacle course which a coding AI would have to navigate in order to "graduate".

tracnar · today at 7:06 AM

If you're porting a library, you can use the original implementation as an 'oracle' for your tests. Which means you only need a way to write/generate inputs, then verify the output matches the original implementation.

It doesn't work for everything of course, but it's a nice way to get bug-for-bug compatible rewrites.
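A minimal sketch of that oracle setup (original_impl, ported_impl and the crude input generator are hypothetical placeholders; a property-based tool like Hypothesis would generate far better inputs):

    import random
    import string

    def random_html(rng, max_len=200):
        """Crude input generator, just enough to exercise both implementations."""
        alphabet = string.ascii_letters + string.digits + "<>/=\"' "
        return "".join(rng.choice(alphabet) for _ in range(rng.randrange(max_len)))

    def differential_test(original_impl, ported_impl, runs=1000, seed=0):
        """Compare the port against the original on generated inputs; the original is the oracle."""
        rng = random.Random(seed)
        mismatches = []
        for _ in range(runs):
            case = random_html(rng)
            if original_impl(case) != ported_impl(case):
                mismatches.append(case)
        return mismatches

Any mismatch is then either a bug in the port or a bug-for-bug behaviour you decide to replicate.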

aadishv · today at 1:54 AM

I wonder if this makes AI models particularly well-suited to ML tasks, or at least ML implementation tasks, where you are given a target architecture and dataset and have to implement and train the given architecture on the given dataset. There are strong signals to the model, such as loss, which are essentially a slightly less restricted version of "tests".
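As a toy illustration of loss-as-a-test (nothing here is from the thread, just the general shape of the signal): a tiny NumPy linear-regression fit where the only acceptance criterion is that the loss falls below a threshold. Unlike a binary unit test, the agent also sees how far it is from passing.

    import numpy as np

    def train_and_check(threshold=1e-3, steps=2000, lr=0.1, seed=0):
        """Fit a toy linear model; 'passing' just means the loss got below the threshold."""
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(256, 3))
        y = X @ np.array([1.5, -2.0, 0.5])         # synthetic target with known weights
        w = np.zeros(3)
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        loss = float(np.mean((X @ w - y) ** 2))
        return loss, loss < threshold              # graded like a test, but continuous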

heavyset_go · today at 1:31 AM

This is one of the reasons I'm keeping tests to myself for a current project. Usually I release libraries as open source, but I've been rethinking that, as well.

cies · today at 12:44 AM

This is an interesting case. It may be good to feed it to other models and see how they do.

Also: it may be interesting to port it to other languages too and see how they do.

JS and Py are both runtime-typed and very well "spoken" by LLMs. Other languages may require a lot more "work" (data types, etc.) to get the port done.