I'm sure you could get an LLM to create a plausible sounding justification for every decision? It might not be related to the real reason, but coming up with text isn't the hard part there surely
Yes, they will, they'll rationalize whatever. This is most obvious w/ transcript editing where you make the LLM 'say' things it wouldn't say and then ask it why.
It sounds like you're saying we should generate more bullshit to justify bullshit.
> I'm sure you could get an LLM to create a plausible sounding justification for every decision.
That's a great point: funny, sad, and true.
My AI class predated LLMs. The implicit assumption was that the explanation had to be correct and verifiable, which may not be achievable with LLMs.