Hacker News

bloaf · today at 3:27 AM

> The harms engendered by underestimating LLM capabilities are largely that people won't use the LLMs.

Speculative fiction about superintelligences aside, an obvious harm of underestimating LLM capabilities is that we could effectively be enslaving moral agents by failing to correctly classify them as such.


Replies

bamboozled · today at 7:53 AM

If the models were conscious, intelligent, suffering, and could think, why wouldn't they tell us?