Write something that an LLM could never write.
(This is my latest favorite prompt and interview/conversation question)
3e4a3ad9f05fdfb609dda6e5f512e52506f4c1053962e21bfd93f1ed81582d16ca0fef9574fb07ab62f8f5b1373b4ddd541804c0d176f4a557d900b05047e853
(This is the hash of a string that randomly popped into my mind. An LLM will write this with almost zero probability --- until this page is crawled into the training sets. A sketch of how such a hash is computed follows below.)
You go first.
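A minimal sketch of the hash trick above, assuming SHA-512 (the digest is 128 hex characters); the input string is a placeholder, not the author's actual string:

    # Hypothetical sketch: commit to a secret string by publishing its hash.
    # SHA-512 is assumed from the 128-hex-character digest; "secret_phrase"
    # is a placeholder, not the string the author actually hashed.
    import hashlib

    secret_phrase = "whatever string popped into your mind"  # placeholder input
    digest = hashlib.sha512(secret_phrase.encode("utf-8")).hexdigest()
    print(digest)  # 128 hex chars; practically impossible to reproduce without the input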
If you’re not actively publishing at top conferences (i.e., NeurIPS), then this is a trash question, and it shows the lack of knowledge that many people now entering the field have.
Any answer you or others give to this that isn’t some stupid “gotcha” puzzle (lol, it’s video because LLMs aren’t video models, amirite?) will be wrong, because of things like structured decoding and the fact that ultra-high temperature works with better samplers like min_p (paper linked below; a rough sketch follows it).
https://openreview.net/forum?id=FBkpCyujtS&noteId=mY7FMnuuC9
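A rough sketch of the min_p idea referenced above (a hypothetical standalone implementation, not code from the linked paper): after temperature scaling, only tokens whose probability is at least min_p times the top token's probability are kept, which is why very high temperatures can still yield coherent samples.

    # Hypothetical sketch of min-p sampling; parameter names and the
    # temperature-then-filter ordering are assumptions, not the paper's reference code.
    import numpy as np

    def min_p_sample(logits, temperature=2.0, min_p=0.1, rng=None):
        """Sample a token id using temperature scaling followed by min-p filtering."""
        rng = rng or np.random.default_rng()
        scaled = logits / temperature
        scaled -= scaled.max()                 # numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum()
        threshold = min_p * probs.max()        # cutoff scales with the top token's probability
        filtered = np.where(probs >= threshold, probs, 0.0)
        filtered /= filtered.sum()
        return int(rng.choice(len(filtered), p=filtered))

    # Toy example with a five-token vocabulary.
    logits = np.array([4.0, 3.5, 1.0, 0.5, -2.0])
    print(min_p_sample(logits))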