While good points are made, I worry this gives the wrong impression. The paper doesn't say dev-owned testing is impossible, just hard. I have, very successfully, worked with dev-owned testing.
Why it worked: the team set the timelines for software delivery; the team built their acceptance and integration tests around the inputs and outputs at the edges of their systems; the team owned being on-call; and the team automated as much as possible (no repeatable manual testing aside from sanity checks on first release).
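To make "testing at the edges" concrete, here's a minimal sketch of that style of integration test. Everything in it (`publish_order`, `read_invoices`, the invoice shape) is illustrative, not our actual system; the in-memory channels stand in for whatever real transport the system exposes (queue, HTTP, files). The point is that the test only touches the boundary:

```python
import time
import uuid

# In a real setup these would be the system's actual edges; an in-memory
# pair of lists stands in so this sketch runs on its own.
_input_channel: list[dict] = []
_output_channel: list[dict] = []


def publish_order(order: dict) -> None:
    _input_channel.append(order)
    # Stand-in for the system under test: consume an input, emit an output.
    _output_channel.append({"order_id": order["id"], "line_total": order["qty"] * 499})


def read_invoices() -> list[dict]:
    return list(_output_channel)


def test_order_produces_invoice():
    order_id = str(uuid.uuid4())
    publish_order({"id": order_id, "sku": "WIDGET-1", "qty": 3})

    # Poll with a deadline: real edges are asynchronous, so bounded
    # waiting keeps the test both honest and reliable.
    deadline = time.time() + 30
    while time.time() < deadline:
        matches = [i for i in read_invoices() if i["order_id"] == order_id]
        if matches:
            assert matches[0]["line_total"] > 0
            return
        time.sleep(0.1)
    raise AssertionError(f"no invoice for order {order_id} within 30s")
```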
There was no QA person or team, but there was a quality-focused dev on the team whose role was to ensure others kept the testing bar high. They made sure logs, metrics, and tests met the team's bar. This role rotated.
There was a CI/CD team. They made sure the test system worked, but teams maintained their own CI configuration. We used Buildkite, so each project had its own buildkite.yml.
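(Buildkite's documented convention is a per-repo pipeline file, commonly `.buildkite/pipeline.yml`. A minimal per-project pipeline might look like the following; the labels and `make` targets are illustrative, not what we actually ran:)

```yaml
# Minimal per-project Buildkite pipeline; labels and commands are illustrative.
steps:
  - label: ":hammer: build and test"
    command: make test

  - wait  # block the deploy step until tests pass

  - label: ":rocket: deploy"
    command: make deploy
    branches: main
```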
The team was expected by eng leaders to set up basic testing before development started. In one case, our team had to spend several sprints setting up generators to produce the expected inputs and sinks to capture the output. This was a flagship project and lots of future development was expected. It very much paid off.
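For anyone who hasn't used the generator/sink pattern, a hedged sketch of the idea: a generator fabricates realistic inputs for the system under test, and a sink captures whatever comes out the other side so tests can assert on it. All names here are illustrative:

```python
import random
import uuid
from dataclasses import dataclass, field


def generate_event(rng: random.Random) -> dict:
    """Fabricate one plausible input event."""
    return {
        "id": str(uuid.uuid4()),
        "user": f"user-{rng.randint(1, 100)}",
        "amount_cents": rng.randint(100, 10_000),
    }


@dataclass
class CapturingSink:
    """Stands in for the system's real output channel during tests."""
    received: list = field(default_factory=list)

    def write(self, record: dict) -> None:
        self.received.append(record)


def run_pipeline(events, sink) -> None:
    # The real system under test would sit here; this pass-through is a stub.
    for e in events:
        sink.write({"id": e["id"], "amount_cents": e["amount_cents"]})


def test_amounts_flow_through():
    rng = random.Random(42)  # seeded so failures reproduce exactly
    events = [generate_event(rng) for _ in range(100)]
    sink = CapturingSink()
    run_pipeline(events, sink)
    assert len(sink.received) == len(events)
    assert all(r["amount_cents"] > 0 for r in sink.received)
```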
Our test approach was very much "slow is smooth and smooth is fast." We would deploy multiple times a day. Tests took ten or so minutes and were very comprehensive. If a bug got out, the tests were updated. The tests were very reliable because the team prioritized them. Eventually people stopped even manually verifying their code, because if the tests were green, you _knew_ it worked.
Beyond our team, in the wider system, there was a lightweight acceptance test setup, and the team registered tests there, usually one per feature. This was the most brittle part, because a failed test could be caused by another team or by a system failure. But guess what? That is the same as production, if not noisier. So we had the same level of logging, metrics, and alerts (limited to business hours). Good logs would tell you immediately what was wrong. Automated alerts generally reached the right team, and that team was responsible for a quick response.
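A sketch of the kind of logging that made cross-team failures diagnosable. The field names (`owning_team`, `dependency`, etc.) are assumptions for illustration; the idea is that a red acceptance test emits enough structured context to point at the right owner immediately:

```python
import json
import logging

logger = logging.getLogger("acceptance")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def log_failure(check: str, owning_team: str, dependency: str, detail: str) -> None:
    # One structured line per failure: who gets the alert, and what
    # was actually unhealthy, without anyone grepping through noise.
    logger.error(json.dumps({
        "event": "acceptance_check_failed",
        "check": check,
        "owning_team": owning_team,
        "dependency": dependency,
        "detail": detail,
    }))


log_failure(
    check="orders-to-invoices",
    owning_team="billing",
    dependency="orders-service",
    detail="timed out waiting for invoice after 30s",
)
```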
If a team was dropping the ball on system stability, that reflected badly on the team, and they were expected to prioritize stability. It worked.
Hands down the best dev org I have been part of.
I've worked in a strong dev-owned testing team too. The culture was a sort of positive can-I-catch-you-out competitiveness that can be quite hard to replicate, and there was no concept of any one person taking point on quality.