Hacker News

supern0va · yesterday at 11:51 PM

>I can for example iterate on the sequence in my initial post and make it novel by writing down more and more disparate concepts and deleting the concepts that are closely associated. This will in the end create a more novel sequence that is not associated in my brain I think.

This seems like something that LLMs can do pretty easily via chain-of-thought (CoT) prompting.

As a fun test, I asked ChatGPT to reflexively give me four random words that are not connected to each other, without thinking. It provided: lantern, pistachio, orbit, thimble

I then asked it to think carefully about whether there were any hidden relations between them, and to make any changes or substitutions to improve the randomness.

The result: fjord, xylophone, quasar, baklava
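The "delete the closely associated concepts" loop from the quoted comment can be sketched mechanically: generate candidates, then greedily keep only words whose pairwise association stays below a threshold. This is just an illustration, not what ChatGPT does internally; a real version would need a semantic similarity measure (e.g. word embeddings), and the toy letter-overlap Jaccard score below is a stand-in I've assumed purely for demonstration.

```python
def similarity(a: str, b: str) -> float:
    """Toy stand-in for semantic similarity: Jaccard overlap of letter sets.
    A real implementation would compare word embeddings instead."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def prune_associated(pool: list[str], k: int, threshold: float) -> list[str]:
    """Greedily keep words whose similarity to every already-kept word
    stays below the threshold, stopping once k words survive."""
    kept: list[str] = []
    for word in pool:
        if all(similarity(word, w) < threshold for w in kept):
            kept.append(word)
        if len(kept) == k:
            break
    return kept

# "lantana" is too close to "lantern" under the toy metric and gets pruned.
pool = ["lantern", "lantana", "orbit", "pistachio", "thimble", "quasar"]
print(prune_associated(pool, 4, 0.4))  # → ['lantern', 'orbit', 'pistachio', 'thimble']
```

With a proper embedding-based similarity, the same greedy filter would catch semantic relations (e.g. two kitchen-related words) rather than just spelling overlap.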