Hacker News

bethekidyouwant · last Sunday at 4:00 PM · 3 replies

In what world can you not always break the response of an AI by feeding it a bunch of random junk?


Replies

xnx · last Sunday at 11:31 PM

Indeed. In what world can you not break any tool when deliberately misusing it?

kgeist · last Sunday at 4:22 PM

I mean, currently LLMs are stateless, and you can get rid of all the poisoned data just by starting a new conversation (context). But the OP introduces "long-term memory", where junk will accumulate over time.
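
A toy sketch of that distinction (all names here are hypothetical illustrations, not any real API): a per-conversation context that a reset wipes clean, versus a long-term memory store that survives resets and so accumulates whatever junk gets injected.

    class StatelessChat:
        """Context lives only for the current conversation."""
        def __init__(self):
            self.context = []

        def send(self, message):
            self.context.append(message)

        def new_conversation(self):
            # Starting fresh discards any poisoned context.
            self.context = []

    class ChatWithLongTermMemory(StatelessChat):
        """Adds a memory store that persists across conversations."""
        def __init__(self):
            super().__init__()
            self.memory = []

        def send(self, message):
            super().send(message)
            self.memory.append(message)  # junk accumulates here over time

        def new_conversation(self):
            super().new_conversation()   # context clears; self.memory does not

    chat = ChatWithLongTermMemory()
    chat.send("some poisoned junk")
    chat.new_conversation()
    print(chat.context)  # [] -- the reset worked
    print(chat.memory)   # ['some poisoned junk'] -- still there
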

CooCooCaCha · last Sunday at 4:12 PM

I mean, ideally AI would be resilient to junk, don't you think?
