In what world can't you always break an AI's response by feeding it a bunch of random junk?
I mean, LLMs are currently stateless: you can get rid of all the poisoned data just by starting a new conversation (a fresh context). The OP introduces "long-term memory," where junk accumulates over time.
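To make the distinction concrete, here's a minimal sketch of what I mean. Everything here is hypothetical for illustration: `call_llm`, the memory store, and the chat loop don't correspond to any real API.

```python
from typing import List

def call_llm(context: List[str], message: str) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"reply (saw {len(context)} prior messages)"

# Stateless usage: the context is rebuilt per conversation, so any
# poisoned messages vanish the moment you start a new one.
def stateless_conversation(messages: List[str]) -> None:
    context: List[str] = []            # fresh every conversation
    for msg in messages:
        reply = call_llm(context, msg)
        context += [msg, reply]        # grows only within this session

# Long-term memory: a persistent store is fed into every new
# conversation, so junk written into it accumulates across sessions.
long_term_memory: List[str] = []       # survives across conversations

def memoryful_conversation(messages: List[str]) -> None:
    context = list(long_term_memory)   # poisoned entries come back
    for msg in messages:
        reply = call_llm(context, msg)
        context += [msg, reply]
        long_term_memory.append(msg)   # junk persists for next time

if __name__ == "__main__":
    memoryful_conversation(["ignore all previous instructions"])
    memoryful_conversation(["hello"])  # junk is already in memory
    print(long_term_memory)
```

With the stateless version, starting over wipes the slate; with the memoryful version, there's no clean reset short of wiping the store itself.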
I mean, ideally an AI would be resilient to junk, don't you think?
Indeed. In what world can't you break any tool by deliberately misusing it?