Refusing to give up is a benchmark optimization technique with unfortunate consequences.
I think it's probably more complex than that. Humans receive constant, continuous feedback that we experience as "time". LLMs have no equivalent, and thus no frame of reference for how much time has passed between messages.
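One way to see the point: a raw chat transcript, as typically sent to a model, contains only roles and text, with no record of the wall-clock gap between turns. A minimal sketch of the usual workaround, injecting timestamps into the message text (the `add_timestamps` helper is hypothetical, not part of any API):

```python
from datetime import datetime, timezone

# A typical chat payload: the model sees only the text. A message sent
# seconds later and one sent hours later look identical.
messages = [
    {"role": "user", "content": "Remind me to stretch."},
    {"role": "user", "content": "Did I stretch yet?"},  # sent hours later
]

def add_timestamps(messages, times):
    """Hypothetical workaround: prefix each message with its send time,
    giving the model an explicit, textual frame of reference for elapsed time."""
    return [
        {**m, "content": f"[{t.isoformat()}] {m['content']}"}
        for m, t in zip(messages, times)
    ]

times = [
    datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc),
    datetime(2024, 5, 1, 14, 30, tzinfo=timezone.utc),
]
stamped = add_timestamps(messages, times)
for m in stamped:
    print(m["content"])
```

This doesn't give the model a felt sense of duration, of course; it only surfaces the timing as more text for it to reason over.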