Hacker News

emsign · today at 1:22 AM

But it's not correct! Exactly because it can't possibly have enough training data to fill the void of never experiencing the human condition. Text is not enough. The error rate of LLMs is horrendously bad, and errors compound exponentially the more steps follow each other.
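The compounding claim is just arithmetic: if each step succeeds independently with probability p (an illustrative assumption, not something the comment specifies), a chain of n steps succeeds with probability p**n, so even a small per-step error rate erodes long chains quickly. A minimal sketch:

```python
def chain_success(p: float, n: int) -> float:
    """Probability that an n-step chain succeeds, assuming each step
    succeeds independently with probability p (simplifying assumption)."""
    return p ** n

# Even at 99% per-step reliability, a 50-step chain succeeds
# only about 60% of the time.
for n in (1, 10, 50):
    print(f"p=0.99, n={n}: {chain_success(0.99, n):.3f}")
```

The independence assumption is generous to the model; correlated failures or error propagation between steps would make long chains worse still.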

All the great work you see on the internet that AI has supposedly done was only achieved by a human doing lots of trial and error and curating everything the agentic LLM did. And it's all cherry-picked successes.


Replies

handoflixue · today at 10:28 AM

> But it's not correct!

The article explicitly states an 83% success rate. That's apparently good enough for them! Systems don't need to be perfect to be useful.