I had a drawn-out conversation with a friend about erasing NVMe drives in a way that met compliance requirements. The procedure they were given was to install Windows with BitLocker, twice, with no effort to retain the key.
But that doesn't even overwrite all of the visible drive space; you can do a simple PoC to demonstrate that a Windows install won't touch every mapped block. And that still hasn't gotten to the overprovisioned blocks and wear-leveling issues that the article references.
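The PoC amounts to: seed a device with a recognizable marker, do the "wipe", then scan the raw device for survivors. A minimal sketch, with a plain in-memory buffer standing in for the block device and a deliberately incomplete overwrite standing in for the reinstall (on real hardware you would read `/dev/nvme0n1` or `\\.\PhysicalDrive0` instead):

```python
# Sketch: prove a filesystem-level "wipe" leaves recoverable data on the
# raw device. A BytesIO buffer stands in for the drive here.
import io

MARKER = b"WIPE-TEST-0xDEADBEEF"

def seed(dev, size, stride):
    """Write the marker at regular offsets across the device."""
    for off in range(0, size - len(MARKER), stride):
        dev.seek(off)
        dev.write(MARKER)

def naive_wipe(dev, size):
    """Zero only the first half -- stands in for a wipe that misses
    some mapped blocks, as a reinstall does."""
    dev.seek(0)
    dev.write(b"\x00" * (size // 2))

def find_survivors(dev, chunk=1 << 20):
    """Scan the raw device for marker copies that survived the wipe."""
    dev.seek(0)
    hits, tail = 0, b""
    while True:
        block = dev.read(chunk)
        if not block:
            break
        data = tail + block
        hits += data.count(MARKER)
        tail = data[-(len(MARKER) - 1):]  # catch markers spanning chunks
    return hits

if __name__ == "__main__":
    SIZE = 1 << 22  # 4 MiB stand-in "drive"
    dev = io.BytesIO(b"\x00" * SIZE)
    seed(dev, SIZE, stride=4096)
    naive_wipe(dev, SIZE)
    print(f"markers surviving the wipe: {find_survivors(dev)}")
```

On a real drive, any nonzero survivor count after the "erase" procedure is enough to fail it; and even a clean raw scan says nothing about the overprovisioned blocks the host can't address at all.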
You could use the BIOS or whatever CLI tool to tell the drive to chuck its encryption key, but are you sure that tool meets whatever compliance requirements you're beholden to? Are you sure the drive firmware does?
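For reference, on Linux the "chuck the key" operation is usually issued with nvme-cli; a sketch, assuming the drive actually implements crypto erase (destructive, and the device path is just an example):

```shell
# DESTRUCTIVE: cryptographic erase -- asks the drive to discard its
# media encryption key, which should render all NAND contents
# (including overprovisioned blocks) unreadable.
nvme format /dev/nvme0n1 --ses=2   # Secure Erase Setting 2 = crypto erase

# Newer drives may support the Sanitize command instead; action 4 is
# the crypto-erase sanitize, and progress can be read from the sanitize log.
nvme sanitize /dev/nvme0 --sanact=4
nvme sanitize-log /dev/nvme0
```

Which is exactly the point: the command returning success tells you the firmware claims it did this, not that it verifiably did.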
So they went with paying a company to shred the drives. All of them. It's disgustingly wasteful.
If compliance is the goal, just use FIPS-certified self-encrypting drives and trust them to wipe their encryption keys when instructed to do so. At that point, any failure is clearly the vendor's fault, not yours.
I used to do recycling. Before secure erase was widespread, there were cheapish 16 and 32 GB SSDs for embedded devices, and a few of them made it into the thin/zero-client space and some white-labeled low-end PCs. They were actually twice the advertised size: basically two 16s in a single 16 chassis. The two drives were kept roughly in sync; I think it was a failover mechanism to deal with shitty drive quality. If drive A failed, it would just connect to drive B instead, and the user might never know about the failure.

But the second drive would not necessarily be wiped, depending on how you wiped the first one. A few people retrieved data from the second disk under lab conditions after wiping the first, so we had a report come through that we couldn't certify these disks as erased until they demonstrated compliance with secure erase. So we shredded probably a few thousand of them.
I heard of similar issues with early NVMe drives.