“Meta spokesperson Tracy Clayton said in a statement to The Verge that ‘no user data was mishandled’ during the incident.”
Wow, no mishandled user data? A striking change of standard operating procedure from Meta here.
Actually, information later in the story directly contradicts that claim, so The Verge probably shouldn’t have quoted this line uncritically when its own reporting is in opposition to it.
Regardless, this is one of the more insidious things about these tools. They often get minor but critical details wrong in the midst of mostly correct information. And people assume these tools can analyze the data presented to them and make logical judgments, but that’s just not the case.
The article points out that “a human could have done the same thing,” but between the overconfident tone of the text these tools generate, and the fact that people weirdly trust LLM output more than they trust other humans (who generally admit, or at least hint, when they aren’t actually experts on a topic), it’s actually far worse when one of these bots gets something wrong.