Great minds think alike :D https://hanzirama.com/character/%E5%AD%A6
It also lets me see all relevant associations easily when revealing a card in the built-in SRS. You add cards to the SRS as you browse, so they're related to what you already know or are currently exploring.
Mind you, all visible data is collected from various reputable sources. When you click "explain" there's a clearly marked LLM explanation, but my explanation-generation pipeline pushed every generated explanation through five different models, including all the top Chinese-first ones, for verification; on average it took a few iterations back and forth to iron out any information that could potentially mislead the learner.
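At a very high level, that kind of cross-model verification loop looks something like the sketch below. The reviewer and revision functions here are stand-in stubs (the real pipeline calls five LLM APIs), so treat this as an illustration of the iterate-until-clean idea, not the actual implementation:

```python
# Sketch of a multi-model verification loop: each "reviewer" checks the
# explanation and returns flagged issues; the draft is revised until every
# reviewer passes or an iteration cap is hit. Model calls are stubbed out
# with simple phrase checks purely for illustration.

def make_reviewer(banned_phrase):
    # Stand-in for an LLM reviewer: flags a potentially misleading phrase.
    def review(text):
        return [banned_phrase] if banned_phrase in text else []
    return review

def revise(text, issues):
    # Stand-in for a revision step: remove the flagged phrases.
    for phrase in issues:
        text = text.replace(phrase, "")
    return text.strip()

def verify(text, reviewers, max_rounds=5):
    for _ in range(max_rounds):
        issues = [issue for r in reviewers for issue in r(text)]
        if not issues:
            return text, True   # all reviewers passed
        text = revise(text, issues)
    return text, False          # gave up after max_rounds

reviewers = [make_reviewer("always"), make_reviewer("never")]
draft = "学 always means study and never anything else"
final, ok = verify(draft, reviewers)
```

The key design point is that no single model's output is trusted directly: a draft only ships once every reviewer stops flagging it.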
You can actually see the thousands of words I typed just working on that pipeline here: https://hanzirama.com/making-of
This looks incredible and exactly like something I've been wanting. Is there the same amount of depth for the 9k+ characters? If this is open source, I'd love to build on it; I was wondering if OP had posted it on GitHub.