Yeah, I haven't gotten through the 40 pages myself, but skimming the material, the arguments do seem to rely on an assumption that AI will be employed in a particular manner. For example, when discussing the rule of law, they assert that AI will be making the moral judgments, as a black box that humans simply consult to decide what to do in criminal proceedings. But that seems like the dumbest possible way to use the technology.
Perhaps that's the point of the paper: to warn us not to use the technology in the dumbest possible way.
Nah, we know the punchlines to this one:
Worries about reduced quality of work are overblown, because there's always a human operating the AI, reviewing the text between copy and paste (no different from StackOverflow!). Enter vibe-coding.
Worries about AI turning malicious or going full Skynet are overblown. Again, it's just a text interface, so the worst it can do is write text that says "launch the nukes". Enter agents and MCP.
It still staggers me that, this far into ChatGPT's life, I occasionally read about a judge calling out a lawyer for citing non-existent cases. It was bound to happen to the first moron, but every other lawyer should have heard about it by then. And yet it still happens.
Dumbest possible way is what we do.