Hacker News

nsnzjznzbx · yesterday at 8:59 PM · 1 reply

We will get to the point where you can bootstrap quickly, i.e. an LLM can train a better LLM in a loop: leave it running and it can really learn. Like learn learn.

"Train yourself to solve this problem see OBJECTIVE.md"
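A minimal sketch of that loop, with every name (`Model`, `train_successor`, `evaluate`, the `skill` number) a hypothetical stand-in rather than a real training API — each generation trains a successor and checks it against the objective:

```python
# Hypothetical sketch of "an LLM trains a better LLM in a loop".
# Model, train_successor, and evaluate are illustrative stubs,
# not real training machinery.

from dataclasses import dataclass

@dataclass
class Model:
    skill: float  # stand-in for capability on the OBJECTIVE.md task

def train_successor(teacher: Model) -> Model:
    # Stub: a real system would have the teacher generate data,
    # tune the recipe, and train a new model from scratch.
    return Model(skill=teacher.skill * 1.5)

def evaluate(model: Model, target: float) -> bool:
    # Stub for checking the objective spelled out in OBJECTIVE.md.
    return model.skill >= target

def bootstrap(seed: Model, target: float, max_generations: int = 10) -> Model:
    model = seed
    for _ in range(max_generations):
        if evaluate(model, target):
            break
        model = train_successor(model)  # each generation trains its successor
    return model

final = bootstrap(Model(skill=1.0), target=5.0)
```

The `max_generations` budget matters: as the reply below the original comment notes, each real iteration of this loop is slow and expensive, so you can't just run it unbounded.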


Replies

nine_k · yesterday at 10:32 PM

This is the kind of runaway self-improving development that proponents of the singularity keep talking about.

The problem is that training appears to be really slow and expensive. Some quality thinking is required to improve the training approach and the architecture before committing resources to training a new large model. And even the largest models are still not nearly as good at that kind of quality thinking as the best humans.