
astrange · yesterday at 11:31 PM

> to promote his product with the silent implication that LLMs actually ARE a path to AGI

That isn't implied. The thought process is: a) even if we invent AGI through some other method, we should still treat LLMs nicely, because doing so is a credible commitment that we'll treat the AGI well; and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.

Anyway, your argument seems to be that it's unfair for him to have the opportunity to do something moral in public, because it makes him look moral?