Here's my paraphrase of the best description I've yet seen of "seems reasonable" AI misinformation. I wish I could credit where I first heard it:
AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them discuss a topic you actually have expertise in, and they are frustratingly bad: sometimes outright wrong, but always at least deficient. The polish and confidence inappropriately boost the "correctness signal" for anyone without a depth of knowledge in the subject.
Then you consider that 90% of people have not developed sophisticated knowledge of 90% of topics (myself included), and it begins to feel a bit grim.
In short, they are confident morons.