I dismiss the paper for 3 reasons:
1. It is entirely based on speculation about what will happen in the future.
2. The authors have a clear financial (and status-based) interest in the outcome.
3. I have a negative opinion of lawyers and universities due to personal experience. (This is, of course, the weakest point by far.)
Speculation about future outcomes is not by itself a bad thing, but when that speculation is formatted like a scientific paper describing an experimental result, I immediately feel I am being manipulated by an appeal to authority. And the authors' conflict of interest is about as irrelevant as pointing out that a paper on why oxycodone is not addictive was paid for by Purdue Pharma. Perhaps Jessica's papers on IP are respected because they do not suffer from these obvious flaws? I owe the author no deference for the quality of her previous writing, nor for her status as a professor.
Yeah, I haven't gotten through the 40 pages myself, but skimming the material, the arguments do seem to rely on an assumption that AI will be deployed in a particular manner. For example, when discussing the rule of law, they assert that AI will be making the moral judgments and will be a black box that humans simply consult to decide what to do in criminal proceedings. But that seems like the dumbest possible way to use the technology.
Perhaps that's the point of the paper: to warn us not to use the technology in the dumbest possible way.
3. Same, including a press that is no longer unbiased and serves as propaganda for political opinions.
One might say that deinstitutionalization is actually good for a plurality of opinions (some call it democracy). If AI causes it, I'm fine with that.
What do you mean by "formatted like a scientific paper"?
Law review articles look like this. Scientific journals don't own the concept of an abstract, nor are law review articles pretending to be scientific research.