I originally learned about image hashing and similarity comparison for product image search, and decided to apply it to magazine covers.
Thrasher covers: https://shoplurker.com/labs/thrasher-covers/
This is awesome! Using a CLIP or DINOv2 model to produce image embeddings would probably improve the similarity search a lot - kind of similar to http://same.energy/.
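For anyone curious, here's a minimal sketch of what that could look like with the Hugging Face transformers CLIP API (the model name is a common public checkpoint; the cover filenames are hypothetical placeholders):

    # Rough sketch: CLIP image embeddings + cosine similarity.
    # Assumes the transformers and Pillow libraries are installed;
    # cover_a.jpg / cover_b.jpg are made-up filenames.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    images = [Image.open(p) for p in ("cover_a.jpg", "cover_b.jpg")]
    inputs = processor(images=images, return_tensors="pt")

    with torch.no_grad():
        emb = model.get_image_features(**inputs)  # shape (2, 512)

    emb = emb / emb.norm(dim=-1, keepdim=True)    # L2-normalize
    print(f"cosine similarity: {(emb[0] @ emb[1]).item():.3f}")

Unlike a perceptual hash, embeddings like these tend to capture semantic similarity (subject, composition, style) rather than just pixel-level resemblance, which is what makes same.energy-style browsing work.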
I like it! It would be nice to have some visual indication of "what" was similar, or why, or even how much - or an example like "this is similar to that because ...". Maybe a visualizer for the algorithms?
Hey! Cool! Does your code use some of the public libs available (pHash, hmsearch, …) or did you start coding from scratch based on research papers? Is there a git repo one can fork? (For reference, a pHash sketch follows below.)
Anyway, KUTGW
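Not speaking for the author, but this is roughly what a pHash-based comparison looks like with the public imagehash Python library (pip install imagehash); the filenames and the threshold mentioned in the comments are illustrative assumptions:

    # Rough sketch of a perceptual-hash comparison with imagehash.
    # cover_a.jpg / cover_b.jpg are hypothetical placeholders.
    from PIL import Image
    import imagehash

    h1 = imagehash.phash(Image.open("cover_a.jpg"))  # 64-bit perceptual hash
    h2 = imagehash.phash(Image.open("cover_b.jpg"))

    # Subtracting two hashes gives the Hamming distance:
    # 0 means identical hashes, larger means more different.
    # A cutoff around 10 is a commonly cited starting point.
    distance = h1 - h2
    print(f"Hamming distance: {distance}")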
I don't understand the UI at all. When I click All or something within brackets, what am I supposed to see? Covers similar to what I clicked? But the covers I see don't seem similar to me at all, no matter what I click. What am I missing? Or maybe I am expecting a different kind of similarity.