Hacker News

causal | 01/21/2025 | 1 reply

If you literally believe that getting an answer wrong can send someone to hell, then I think the stakes are a little more dire than bad nutritional advice.

Imagine if this were a baby-care bot, dispensing advice on how to safely care for your baby. That would be pretty reckless, and it would likely eventually give advice so incorrect that a baby would die. For someone who believes, that is a less tragic outcome than being led astray by an apologetics bot. It takes an incredible level of conceit to build one anyway.


Replies

vivekd | 01/21/2025

ChatGPT and Gemini both provide advice on how to care for babies. I think in the end you have to have faith in people to also use their common sense and discretion, and not blindly believe or act on everything a bot tells them.

I think the same is true for an AI giving religious advice: you have to exercise a bit of faith in the readership and, perhaps in this case, also faith in the ultimate guidance of the divine. Faith that they're not going to make a serious moral or religious decision by unquestioningly following a chatbot.

If we take this thinking to its logical conclusion, we should put all our efforts as a civilization into getting rid of all misinformation that may harm babies, whether online or spoken. And every religious person should do nothing but wage flame wars and censorship campaigns against any false religious information that has any chance of affecting a person's salvation.

The author seems to be in a purity spiral, taking an overly hardline interpretation of the religion.