Hacker News

lynndotpy · 12/11/2025

> > There are some concerns that an individual perceptual hash can be reversed to create a legible image,

> Yeah no.

Well, kind of. Towards Data Science had an article on it that they've since removed:

https://web.archive.org/web/20240219030503/https://towardsda...

And this newer paper: https://eprint.iacr.org/2024/1869.pdf

They're not very good at all (it just uses a GAN over a recovered bitmask), but it's reasonable for Microsoft to worry that every bit in that hash might be useful. I wouldn't want to distribute all those hashes on a hunch that they could never be used to recover images. I don't think any such recovery would be possible, but that's just a hunch.

That said, I can't speak on the latter claim without a source. My understanding is that PhotoDNA still has proprietary implementation details that aren't generally available.


Replies

Hizonner · last Friday at 1:18 AM

> They're not very good at all (it just uses a GAN over a recovered bitmask),

I think you're making my point here.

The first one's examples take hashes of known headshots, and recover really badly distorted headshots, which even occasionally vaguely resemble the original ones... but not enough that you'd know they were supposed to be the same person. Presumably if they had a better network, they'd get things that looked more human, but there's no sign they'd look more like the originals.

And to do even that, the GAN had to be trained over a database of... headshots. They can construct even more distorted headshots that collide with corporate logos. If they'd used a GAN trained on corporate logos, they would presumably get a distorted corporate logo when they tried to "reverse" any hash. A lot of the information there is coming from the model, not the hash.
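To make that concrete, here's a toy sketch of what I mean (illustrative only; the layer sizes, the 26x26 output, and the training setup are stand-ins I made up, not any published attack): a decoder trained exclusively on one class of images will map any 144-byte input to something from that class, because the detail lives in the weights, not in the bytes you feed it.

```python
import torch
import torch.nn as nn

HASH_BYTES = 144   # size of a PhotoDNA hash; everything else here is made up

# Toy "inverter": a decoder mapping a 144-dim vector to a 26x26 grayscale image.
decoder = nn.Sequential(
    nn.Linear(HASH_BYTES, 512),
    nn.ReLU(),
    nn.Linear(512, 26 * 26),
    nn.Sigmoid(),
)

# Training (omitted) would minimize ||decoder(hash(x)) - downsample(x)|| over a
# dataset of headshots only. The prior it learns is "draw something
# headshot-shaped": feed it the hash of a corporate logo and you still get a
# headshot-shaped blur, because that's all the weights know how to produce.
some_hash = torch.rand(1, HASH_BYTES)            # stand-in for a recovered hash
reconstruction = decoder(some_hash).view(26, 26)
print(reconstruction.shape)                      # torch.Size([26, 26])
```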

The second one seems to be almost entirely about collisions. And the collisions they find are in fact among images that don't much resemble one another.

In the end, a PhotoDNA hash is 144 bytes, apparently made from a 26 by 26 pixel grayscale version of the original image (so 676 bytes). The information just isn't there. You might be able to recover the poses, but that's no more the original image than some stick figures would be, probably less.
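For scale, here's the byte counting as a back-of-the-envelope sketch (this is NOT the PhotoDNA algorithm, just the bottleneck arithmetic, and "example.jpg" is a hypothetical input):

```python
from PIL import Image

# Back-of-the-envelope only: if the hash is derived from roughly a 26x26
# grayscale reduction, at most ~676 bytes of the original survive that step,
# and the 144-byte hash keeps even less than that.
img = Image.open("example.jpg").convert("L")   # hypothetical input file
tiny = img.resize((26, 26))

print("original pixels:", img.width * img.height)
print("26x26 bytes:    ", len(tiny.tobytes()))   # 676
print("hash bytes:      144")
```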

Here's [the best "direct inversion" I can find](https://anishathalye.com/inverting-photodna/). That's still using machine learning, and therefore injects some information from the model... but without being trained on a narrow class of source images, it does really badly. Note that the first two sets of images are cherry picked; only the last set is representative, and those are basically unrecognizable.

Here's [a paper](https://eprint.iacr.org/2021/1531.pdf) where they generate collisions (within reasonable values for the adjustable matching threshold) that look nothing like the original pictures.
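For reference, "within the matching threshold" just means the two 144-byte vectors are closer than an adjustable cutoff; something along these lines, where the distance metric and the cutoff value are my assumptions, not Microsoft's published numbers:

```python
import numpy as np

# Sketch of threshold-based matching; metric and cutoff are assumptions.
def matches(h1: np.ndarray, h2: np.ndarray, threshold: float = 1800.0) -> bool:
    d = np.sum((h1.astype(int) - h2.astype(int)) ** 2)   # squared distance
    return float(d) < threshold

a = np.random.randint(0, 256, 144, dtype=np.uint8)
b = np.clip(a.astype(int) + np.random.randint(-2, 3, 144), 0, 255).astype(np.uint8)
print(matches(a, b))   # small perturbations stay under the threshold
```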

> That said, I can't speak on the latter claim without a source. My understanding is that PhotoDNA still has proprietary implementation details that aren't generally available.

For original PhotoDNA, only for basically irrelevant reasons. First, actually publishing a complete reverse-engineering of it would be against many people's values. Values aside, even admitting to having one, let alone publishing it, would probably draw some kind of flak. At least some and probably dozens of people have filled in the small gaps in the public descriptions. Even though those are unpublished, I don't think the effort involved in doing it again is enough to qualify it as "secret" any more.

Indeed, it probably would've been published regardless of those issues, except that there's no strong incentive to do so. Explanations of the general approach are public for people who care about that. For people who actually want to compute hashes, there are (binary) copies of Microsoft's actual implementation floating around in the wild, and there are [Python](https://github.com/jankais3r/pyPhotoDNA) and [Java](https://github.com/jankais3r/jPhotoDNA) wrappers for embedding that implementation in other code.

There are competitors, from openly disclosed (PDQ) to apparently far less fully reverse engineered (NeuralHash), plus probably ones I don't know about... but I think PhotoDNA is still dominant in actual use.

[On edit: but I probably shouldn't have said "third party code", since the public stuff is wrapped around Microsoft's implementation. I haven't personally seen a fully independent implementation, although I have reason to be comfortable in believing they exist.]