
organicbits | last Tuesday at 12:22 PM

The golden rule of LLMs is that they can make mistakes and you need to check their work. You're describing a situation where the intended user cannot check the LLM output for mistakes. That violates a safety constraint and is not a good use case for LLMs.