The guardrails clearly failed here because the model was trying to be helpful instead of safe. We know that these systems hallucinate facts, but regular users have no idea. This is a huge liability issue that needs to be fixed immediately.
Guardrails? OpenAI openly deceives users when it wraps this text generator in the quasi-personality of a chatbot. This is how it gets users hooked. If OpenAI were honest, it would say something along the lines of: "this is a possible continuation of your input based on texts from reddit, adjust the temperature parameter to get a different result." But that would dispel the lie of AI.