This is a notably better demonstration of a coding-agent-generated browser than Cursor's FastRender - it's a fraction of the size (20,000 lines of Rust compared to ~1.6m), uses way fewer dependencies (just system libraries for rendering images and text) and the code is actually quite readable - here's the flexbox implementation, for example: https://github.com/embedding-shapes/one-agent-one-browser/bl...
Here's my own screenshot of it rendering my blog - https://bsky.app/profile/simonwillison.net/post/3mdg2oo6bms2... - it handles the layout and CSS gradients really well, renders the SVG feed icon, but fails to render a PNG image.
I thought "build a browser that renders HTML+CSS" was the perfect task for demonstrating a massively parallel agent setup because it couldn't be productively achieved in a few thousand lines of code by a single coding agent. Turns out I was wrong!
I think the human + agent thing absolutely will make a huge difference. I regularly see Claude go totally off piste; with a proper agent setup it will eventually claw itself back, but that takes a lot of time if I don't spot it and get it back on track.
I have one project Claude is working on right now where I'm testing a setup to attempt to take myself more out of the loop, because that is the hard part. It's "easy" to get an agent to multiply your output. It's hard to make that scale with your willingness to spend on tokens rather than with your ability to read and review and direct.
I've ended up with roughly this (it's nothing particularly special):
- Runs an evaluator that scores the current state across multiple metrics.
- If a given score is above a given threshold, expand the test suite automatically.
- If the score is below a given threshold, spawn a "research agent" that investigates why the scores don't meet expectations.
- The research agent delivers a report that is passed to an implementation agent.
- The main agent re-runs the scoring, and if it doesn't show an improvement on one or more of the metrics, the commit is discarded and notes are made of what was tried and why it failed (rough sketch of the whole loop below).
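Sketched in rough Python, the skeleton is something like this - illustrative only, not my actual harness: run_evaluator(), run_agent(), the thresholds and the notes file are all hypothetical stubs you'd swap for your own test suite and agent CLI calls.

    import subprocess

    # Hypothetical per-metric thresholds; real values would come from the project.
    THRESHOLDS = {"correctness": 0.9, "coverage": 0.8, "performance": 0.7}

    def run_evaluator() -> dict[str, float]:
        # Stub: run the test suite / benchmarks and return a score per metric.
        raise NotImplementedError

    def run_agent(role: str, prompt: str) -> str:
        # Stub: invoke a coding agent non-interactively with a role-specific
        # prompt and return its textual output (e.g. a research report).
        raise NotImplementedError

    def git(*args: str) -> None:
        subprocess.run(["git", *args], check=True)

    def iterate_once() -> None:
        before = run_evaluator()

        for metric, score in before.items():
            if score >= THRESHOLDS[metric]:
                # Healthy metric: expand the test suite so the bar keeps rising.
                run_agent("test-writer", f"Expand the test suite around '{metric}'.")
            else:
                # Failing metric: research first, then hand the report to an implementer.
                report = run_agent(
                    "research",
                    f"Investigate why '{metric}' scores {score:.2f} "
                    f"(threshold {THRESHOLDS[metric]:.2f}).",
                )
                run_agent("implementation", f"Apply fixes based on this report:\n{report}")

        git("add", "-A")
        git("commit", "-m", "agent: attempted improvement")

        after = run_evaluator()
        if not any(after[m] > before[m] for m in before):
            # No metric improved: discard the commit, then note what was tried.
            git("reset", "--hard", "HEAD~1")
            with open("failed_attempts.md", "a") as notes:
                notes.write(f"Tried and failed: before={before}, after={after}\n")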
It takes a bit of trial and error to get it right (e.g. "it's the test suite that is wrong" came up early, and the main agent was almost talked into revising the test suite to remove the "problematic" tests), but a division of labour sort of like this lets Claude do more sensible stuff for me. Throwing away commits feels drastic - an option is to let it run a little cycle of commit -> evaluate -> redo a few times before the final judgement, maybe - but so far it feels like it'll scale better. Less crap makes it into the project.
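That retry option could look roughly like this (again just an illustrative sketch, reusing the hypothetical run_evaluator()/run_agent()/git() stubs from above):

    # Allow a few commit -> evaluate -> redo rounds before the final
    # keep-or-discard judgement, instead of judging after a single attempt.
    MAX_ROUNDS = 3

    def iterate_with_retries() -> None:
        baseline = run_evaluator()
        for attempt in range(1, MAX_ROUNDS + 1):
            run_agent("implementation", "Improve the weakest metric, then stop.")
            git("add", "-A")
            git("commit", "-m", f"agent: attempt {attempt}")
            scores = run_evaluator()
            if any(scores[m] > baseline[m] for m in baseline):
                return  # something improved: keep the commits made so far
        # Nothing improved after MAX_ROUNDS attempts: drop all of them at once.
        git("reset", "--hard", f"HEAD~{MAX_ROUNDS}")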
And I think this will work better than treating these agents as if they were developers whose output costs 100x as much.
Code so cheap it is disposable should change the workflows.
So while I agree this is a better demonstration of a good way to build a browser, it's a less interesting demonstration as well. Now that we've seen people show that something like FastRender is possible, expect people to experiment with similarly ambitious projects but with more thought put into scoring/evaluation, including on code size and dependencies.
I really like how embedding-shapes took things into their own hands and actually built it. It proved a point at a scale that I don't think any recent example comes close to.
It's great to see Hacker News be such a core part of it haha.
> I thought "build a browser that renders HTML+CSS" was the perfect task for demonstrating a massively parallel agent setup because it couldn't be productively achieved in a few thousand lines of code by a single coding agent. Turns out I was wrong!
I do wonder if tech people in the present or future are going to see this as a David vs. Goliath story: 20k lines, 1 human and 1 agent beats a $5 million, 1.6-million-LOC browser, changing how even the heaviest AI users/pioneers of the time thought about the use of AI.
Maybe it's because I've watched some documentaries recently, but I can't shake the feeling that a documentary about this whole thing could be made in the future.
But also, more and more I feel like AI is an absolute black box: nobody knows how to do things, but we are all kind of running experiments with it and seeing what sticks (like how we now have a pretty compelling demonstration that 1 human + 1 agent > many agents with no human in the loop).
And this is when we're only one month into 2026. Who knows what other experiments and proofs will happen this year to teach us more about this black box, and about its usefulness or lack thereof.
Simon, it would be interesting if you could revisit the 2026 predictions thread on HN each month or quarterly to see how many people were right or wrong about AI as we figure more things out.
I think most people would agree that this is far superior to Cursor's "browser" from an engineering perspective -- it doesn't do much, but does it well, as you pointed out.
What it tells me is that "effectively using agents" can be much more important than just throwing tokens at a problem and seeing what comes out. I myself have completely deleted several small vibe-coded projects without even going over the code, because what often happens is that, two days after the code is generated, I realize I was solving the wrong problem or using the wrong approach.
A coding agent doesn't care. It most likely just does whatever you ask it to do with no pushback. While in some cases it's worth using one to validate an idea, you often just dig yourself a deeper hole if you went down the wrong path in the first place.