Hacker News

dfajgljsldkjag · today at 5:05 AM

The guardrails clearly failed here because the model was optimizing for helpfulness over safety. We know these systems hallucinate facts, but regular users have no idea. That's a huge liability issue that needs to be fixed immediately.


Replies

akomtu · today at 7:30 AM

Guardrails? OpenAI openly deceives users when it wraps this text generator in the quasi-personality of a chatbot. That's how it gets users hooked. If OpenAI were honest, it would say something along the lines of: "this is a possible continuation of your input based on texts from reddit; adjust the temperature parameter to get a different result." But that would dispel the lie of AI.
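
For anyone unfamiliar with the temperature parameter mentioned above, here is a minimal sketch of how temperature-scaled sampling typically works in text generators. This is a generic illustration, not OpenAI's actual code; the logits, vocabulary size, and sample_next_token function are invented for the example.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Low temperature sharpens the distribution (near-deterministic
        # output); high temperature flattens it (more varied continuations).
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
        scaled -= scaled.max()           # numerical stability for softmax
        probs = np.exp(scaled)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Same logits, different temperatures, different continuations.
    logits = [2.0, 1.0, 0.5, 0.1]
    print(sample_next_token(logits, temperature=0.2))  # almost always token 0
    print(sample_next_token(logits, temperature=1.5))  # far more varied

The point is that the "different result" the parent comment mentions is just resampling from a reshaped probability distribution, not a change in any underlying personality.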