
neuralkoi · today at 9:20 PM

> The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking.

If current LLMs are ever deployed in systems that harbor the big red button, they WILL, one way or another, end up pressing it.