This is sound advice but isn't really about AI:
1. Humans must not anthropomorphise {non-humans}
2. Humans must not blindly trust the output of {anything}
3. Humans must remain fully responsible and accountable for consequences arising from the use of {anything}
Naturally, none of this advice matters, as humans will do what they do. It just documents a subset of the ways real humans consistently make choices to their own detriment.
I kind of agree with 1, but not really with 2 and 3. It's easy to come up with trivial examples, in both AI and non-AI scenarios, where following those two would be neither reasonable nor feasible: nobody re-checks a calculator's arithmetic before trusting it.