> The idea that hallucinations are somehow less likely because you're asking meta-questions about LLM output is completely without basis
Not sure who you're replying to here – this is not a claim I made.
That's fair, but I'm not sure why you chose to address the one part of my comment that isn't responsive to your points.