Hacker News

Nashooo · last Tuesday at 10:34 PM · 2 replies

One meaning (or used to mean?) is actually checking the LLM's output; the other is just retrying until the output does what you want.


Replies

iwontberude · last Tuesday at 10:40 PM

Given that the models will attempt to check their own work with almost the same verification a human engineer would use, it's hard to say humans aren't implicitly checking too, since both rely on the same shared verification methods (e.g. running the tests, or running the application with specific arguments to see whether the behavior works).
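For illustration, that shared verification loop might look like the following minimal sketch; the `verify` helper and the example commands are hypothetical stand-ins for a real test suite and application run:

```python
import subprocess

def verify(cmd):
    """Run a verification command and report pass/fail by exit code.
    Hypothetical helper; the same check a human or an LLM agent might run."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

# Stand-ins for "run the tests" and "run the app with specific arguments":
checks = [
    ["python", "-c", "assert 1 + 1 == 2"],
    ["python", "-c", "print('app ran with args')"],
]
print(all(verify(c) for c in checks))
```

Whether a human or a model drives the loop, the signal is the same: every check exits cleanly or the change is reworked.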

closewith · last Tuesday at 11:06 PM

That's the original context of the Andrej Karpathy comment, but the term is just synonymous with LLM-assisted coding now.
