Using an LLM doesn't mean it has to make the final decision. You can also use it as a warning system, where its output is strictly advisory.
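Roughly this pattern, as a minimal sketch: a deterministic, auditable rule makes the binding decision, and the LLM can only attach a non-binding flag for a human to review. `query_llm` here is a hypothetical placeholder, not a real API.

```python
def query_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call. Its output is never
    # allowed to change the binding decision below.
    return "OK"

def deterministic_check(reading: float, threshold: float = 100.0) -> bool:
    # The authoritative decision: purely rule-based and auditable.
    return reading <= threshold

def evaluate(reading: float) -> dict:
    decision = deterministic_check(reading)
    advisory = query_llm(f"Sensor reading {reading}. Any cause for concern?")
    return {
        "ok": decision,                        # binding outcome
        "llm_warning": "concern" in advisory,  # non-binding flag for a human
    }

print(evaluate(87.5))
```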
Is there any indication that current warning systems are insufficient in any way that would be improved by LLM involvement?
False positives are a huge issue when designing safety systems: spurious warnings breed alarm fatigue, and operators learn to tune out real alerts. It is not the case that "more warnings = more better".