Thanks for reporting on your experience! Those are good questions, and I will think about your valence idea for the future.
On a shorter horizon, I can tune the probability that on-path terms appear in the cloud. We store a larger pool of words than are displayed, and calculate lookaheads (and lookbacks from the target).
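For what it's worth, the pool-plus-tunable-probability idea could look something like this. This is only a sketch under my own assumptions (the function name, the boolean on-path flag, and the single `on_path_weight` knob are all mine, not the app's actual code): each candidate word is weighted, and on-path words are sampled into the displayed cloud more often as the weight rises.

```python
import random

def pick_cloud(pool, on_path_weight=2.0, size=40, rng=random):
    """Fill the displayed cloud from a larger pool.

    pool: list of (word, on_path) pairs, where on_path marks words that
    the lookahead/lookback calculation says lie on a route to the target.
    on_path_weight: >1.0 favors on-path words; 1.0 makes the draw uniform.
    """
    candidates = list(pool)
    weights = [on_path_weight if on_path else 1.0 for _, on_path in candidates]
    chosen = []
    while candidates and len(chosen) < size:
        # Weighted draw without replacement: pick one index, then drop it.
        i = rng.choices(range(len(candidates)), weights=weights, k=1)[0]
        chosen.append(candidates.pop(i)[0])
        weights.pop(i)
    return chosen
```

Raising `on_path_weight` would be the "tune the probability" dial: it nudges the cloud toward solvable paths without guaranteeing them.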
Thanks for your response. Getting feedback like "hot or cold" in the algorithm's mind is exactly what I'm thinking of. It's a tricky issue and reminds me a lot of this: https://www.datcreativity.com/
I had tried hard to pick a set of fairly simple words, thinking I had a uniquely intricate association in my head, only to find that the reported connections were merely average. My partner, of course, landed in an extremely high percentile by instantly picking the first words that came to her, without much thought.
Maybe the user could type in their own words, and the app could approve/disapprove based on the 40 word list.
But maybe that adds an entirely new normalization step: if the user types 'runs' or 'ran', the app has to normalize it to 'run'.
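That normalization step might not need a full NLP library. A toy version (the irregular table and suffix rules here are my own illustration, not anything the app does) could cover common inflections before checking the typed word against the 40-word list; a real implementation would likely use a proper lemmatizer such as NLTK's WordNetLemmatizer.

```python
# Small table for irregular forms that suffix rules can't catch.
IRREGULAR = {"ran": "run", "went": "go", "mice": "mouse"}

def normalize(word: str) -> str:
    """Reduce a typed word to a rough base form for list lookup."""
    w = word.lower()
    if w in IRREGULAR:
        return IRREGULAR[w]
    # Strip a few regular suffixes, keeping at least a two-letter stem.
    for suffix, repl in (("ies", "y"), ("ing", ""), ("ed", ""), ("s", "")):
        if w.endswith(suffix) and len(w) - len(suffix) >= 2:
            return w[: -len(suffix)] + repl
    return w
```

It's deliberately crude ('running' would come out as 'runn'), but it shows the shape of the problem: a lookup table for irregulars plus rules for the regular cases.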
The app could just have a 'more words' button, loading the next 17.