T3.chat supports conversation forking, and in my experience it works really well.
The fundamental issue is that LLMs don't currently have real long-term memory, and until they do, forking conversations is about the best we can do.