Yeah, that's fair - manual testing doesn't have to come first in the sequence, but it does have to get done.
I've lost count of the times I've skipped it because the automated tests passed, only to find some dumb but obvious bug I'd missed, instantly exposed the moment I actually exercised the feature myself.
Maybe a bit pedantic, but does manual testing really need to be done, or is the intent here closer to a usability review? I can't think of a time obvious unintended behaviour showed up that wasn't caught by the contract encoded in the tests (there's no reason to write code that doesn't serve a contractual purpose). But finding out, after trying it, that what you've created has awful UX - that I have encountered, and it's much harder to encode in tests[1].
[1] As far as I can tell. If there are good solutions for this too, I'd love to learn.
Could automated tests produce a transcript of what they've done, so that perusing the transcript substitutes for manual testing?
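For what it's worth, here's a minimal sketch of what I have in mind, in Python: the test drives everything through a tiny recorder that logs each step in plain English, and a reviewer skims the resulting transcript afterwards. All the names here (Transcript, the checkout flow, the prices) are made up for illustration, not any real framework's API:

    import datetime

    class Transcript:
        """Collects a human-readable log of everything a test does."""

        def __init__(self, test_name):
            self.test_name = test_name
            self.lines = []

        def step(self, description):
            # Record one action in plain English.
            self.lines.append("  - " + description)

        def save(self, path):
            stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
            with open(path, "w") as f:
                f.write(f"{self.test_name} ({stamp})\n")
                f.write("\n".join(self.lines) + "\n")

    def test_checkout_flow():
        t = Transcript("test_checkout_flow")

        cart = {"widget": 2}  # hypothetical app state
        t.step(f"added to cart: {cart}")

        total_cents = sum(cart.values()) * 999  # stand-in for real pricing logic
        t.step(f"computed total: {total_cents} cents")
        assert total_cents == 1998

        confirmation = "ORDER-123"  # stand-in for a real submit-order call
        assert confirmation.startswith("ORDER-")
        t.step(f"submitted order, got confirmation {confirmation}")

        # The transcript is the artifact a human actually reads.
        t.save("test_checkout_flow.transcript.txt")

    if __name__ == "__main__":
        test_checkout_flow()
        print(open("test_checkout_flow.transcript.txt").read())

One caveat with this design: the transcript only records what the test thought to do, so reading it is closer to reviewing a plan than to actually exercising the feature.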