Hacker News

3eb7988a1663 · 08/09/2025

I wonder about the practicalities of improving this. Say you have "acquired" all of the public internet code. Focus on just Python and JavaScript. There are solid linters for both languages: automatically flag any code with a trivial SQL injection and exclude it from a future training set. Does this lead to a marked improvement in code quality? Or is the naive string-concatenation approach so obvious and simple that an LLM would still produce such vulnerabilities even without obvious training material (inferring the pattern from blogs or other languages)?
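As a rough illustration of what that kind of filter could look like, here is a minimal Python sketch using the standard ast module. The heuristic, function names, and SQL-keyword list are all invented for this example; a real filter would lean on an actual security linter (Bandit, Semgrep, etc.), which is far more thorough:

```python
import ast

# Crude heuristic for corpus filtering: flag Python snippets that build SQL
# dynamically (concatenation, %-formatting, or f-strings) and pass the result
# to an execute() call. Not a real linter rule, just a sketch of the idea.

SQL_KEYWORDS = ("select ", "insert ", "update ", "delete ")

def looks_like_sql(node: ast.AST) -> bool:
    """True if the subtree contains a string constant that starts like SQL."""
    for child in ast.walk(node):
        if isinstance(child, ast.Constant) and isinstance(child.value, str):
            if child.value.lower().lstrip().startswith(SQL_KEYWORDS):
                return True
    return False

def has_naive_sql_concat(source: str) -> bool:
    """Flag execute(...) calls whose first argument is built dynamically."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable code is a separate filtering problem
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            arg = node.args[0]
            # BinOp covers "+" concatenation and "%"-formatting;
            # JoinedStr covers f-strings.
            if isinstance(arg, (ast.BinOp, ast.JoinedStr)) and looks_like_sql(arg):
                return True
    return False

print(has_naive_sql_concat(
    "cur.execute(\"SELECT * FROM users WHERE id = \" + user_id)"
))  # True: this snippet would be excluded from the training set
```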

You could even take it a step further. Run a linting check over all of the source: code with a higher-than-X% defect rate gets excluded from training. Raise the minimum floor of code quality by tossing out some of the dross. Which probably also means a hilarious reduction in corpus size.
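A minimal sketch of that corpus-level pass, assuming flake8 is installed and treating "defect rate" as lint findings per line of code (the threshold is arbitrary, purely for illustration):

```python
import subprocess
from pathlib import Path

# Illustrative cutoff: keep files with at most 5 lint findings per 100 lines.
MAX_DEFECT_RATE = 0.05

def defect_rate(path: Path) -> float:
    """Lint findings per line; flake8 prints one stdout line per finding."""
    result = subprocess.run(
        ["flake8", str(path)], capture_output=True, text=True
    )
    findings = len(result.stdout.splitlines())
    lines = max(1, len(path.read_text(errors="ignore").splitlines()))
    return findings / lines

def filter_corpus(root: Path):
    """Yield only the files clean enough to keep in the training set."""
    for path in root.rglob("*.py"):
        if defect_rate(path) <= MAX_DEFECT_RATE:
            yield path

kept = list(filter_corpus(Path("corpus/")))
print(f"kept {len(kept)} files")
```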


Replies

simonw · 08/09/2025

This is happening already. The LLM vendors are all competing on coding ability, and the best tool they have for that is synthetic data: they can train only on code that passes automated tests, and they can (and do) augment their training data with both automatically and manually generated code to fill gaps they have identified in it.
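A toy version of that "train only on code that passes automated tests" filter might look like the following. The sample data and function names are invented for illustration, and a real pipeline would sandbox execution far more carefully than a bare subprocess:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Each sample pairs a candidate solution with its test code (plain asserts).
# The pair runs in a fresh interpreter with a timeout; only passing samples
# survive into the training set.

def passes_tests(solution: str, tests: str, timeout: float = 5.0) -> bool:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution + "\n\n" + tests)
        path = Path(f.name)
    try:
        result = subprocess.run(
            [sys.executable, str(path)],
            capture_output=True, timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False  # hung code is treated as failing
    finally:
        path.unlink(missing_ok=True)

samples = [
    ("def add(a, b):\n    return a + b", "assert add(2, 3) == 5"),
    ("def add(a, b):\n    return a - b", "assert add(2, 3) == 5"),
]
kept = [s for s in samples if passes_tests(*s)]
print(len(kept))  # 1: only the correct implementation survives
```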

As Qwen notes here, they ran 20,000 VMs to power their synthetic "agent" coding environments for reinforcement learning: https://simonwillison.net/2025/Jul/22/qwen3-coder/