Nice approach! Merging metadata from multiple sources is tricky, especially handling conflicts like titles and covers. Curious how you plan to handle scalability as your database grows—caching helps, but will the naive field strategies hold with thousands of books?
Right now the merging happens on the fly and the result is cached. In the future I imagine the finished merge will be saved as JSON to the database, depending on which turns out to be more expensive: the merging or a database call.
Merging on the fly kinda works for the future too, for when the data changes or the merging process itself changes. Roughly what I have in mind, as a sketch below.
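(The field strategy, cache shape, and "database" here are all placeholders, not the real code, just the general idea of merge-on-the-fly plus a persisted JSON result:)

```python
import json

# Stand-ins for real tables: raw per-source records and persisted merges.
RAW_SOURCES: dict[str, list[dict]] = {}
MERGED_JSON: dict[str, str] = {}

def merge_metadata(sources: list[dict]) -> dict:
    """Naive field strategy: first non-empty value wins per field."""
    merged: dict = {}
    for source in sources:
        for field, value in source.items():
            if value and field not in merged:
                merged[field] = value
    return merged

def merged_for(book_id: str) -> dict:
    # Cheap path: a previously saved merge, stored as JSON.
    if book_id in MERGED_JSON:
        return json.loads(MERGED_JSON[book_id])
    # Expensive path: merge on the fly, then persist the result so the
    # next request is just a lookup. If the merging process changes,
    # clearing MERGED_JSON forces everything to be re-merged lazily.
    merged = merge_metadata(RAW_SOURCES.get(book_id, []))
    MERGED_JSON[book_id] = json.dumps(merged)
    return merged
```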
No idea what the future will hold. The idea is to pre-warm the database after the schema has been refactored, and once we have thousands of books from that, I’ll know for sure what to do next.
TLDR, there is a lot of “think and learn” as I go here, haha.