If an AI can fabricate a bunch of purported quotes because it was unable to access a page, why not assume that the exact same sort of AI can also accidentally misattribute hostile motivation or intent (such as gatekeeping or envy - and let's not pretend that butthurt humans don't do this all the time, see https://en.wikipedia.org/wiki/fundamental_attribution_error ) to an action such as rejecting a pull request? Why are we treating the former as a mere mistake and the latter as a deliberate attack?
> If you ask ChatGPT or Claude to write something like this through their websites, they will refuse. This OpenClaw agent had no such compunctions.
OpenClaw runs with an Anthropic/OpenAI API key, though?
The only new information I see, which was suspiciously absent before, is that the author acknowledges there might have been a human in the loop - which was obvious from the start of this. This is a "marketing piece" just as much as the bot's messages were "hit pieces".
> And this is with zero traceability to find out who is behind the machine.
Exaggeration? What about IPs on GitHub, etc.? "Zero traceability" is a huge exaggeration. This is propaganda. Also, the author's text sounds AI-generated to me (and sloppy).
This seems like a relatively minor issue. The maintainer's tone was arguably dismissive, and the AI's response likely reflects patterns in its training data. At its core, this is still fundamentally a sophisticated text-prediction system producing output consistent with what it has learned.