Unfortunately I don't think that's a good solution. Memories are an excellent feature, and you see them on most similar services now.
Yes, projects have their uses. But as an example: I do Python across many projects and non-projects alike. I don't want to have to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it were just one or two items like that, fine, I could update the custom instructions in personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler?" it knows I use Home Assistant, I've done XYZ projects, I prefer Python, and I like DIY projects to a certain extent but am willing to buy, in which case go prosumer. Etc. Etc. It's more like a real human assistant than a dumb bot.
I know what you mean, but the issue the parent comment brought up is real: "bad" chats can contaminate future ones. Before switching off memories, I found I had to censor myself in case I messed up the system's memory.
I've found a good balance with the global system prompt (with info about me and general preferences) plus project-level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
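For what it's worth, the project prompts don't need to be elaborate. A hypothetical sketch of what a "Python" project prompt could contain (the specific preferences here are made up, purely to illustrate the idea):

    Context for my Python work.
    - Prefer modern Python (3.11+), type hints, and f-strings.
    - Default to the standard library; only suggest dependencies when they clearly pay off.
    - Tests in pytest; keep examples runnable as single files.

The global prompt holds who I am and general tone; the project prompt holds only what's specific to that domain, so one doesn't bleed into the other.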
> Memories are an excellent feature
Maybe if they worked correctly they would be. I've had answers needlessly influenced by past chats, and I had to tell it to answer the question at hand and not use knowledge from a previous chat that was completely unrelated, other than also being a programming question.
> The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful.
I could not disagree more. A major failure mode of LLMs in my experience is their getting stuck on a specific train of thought. Being forced to re-explain context each time is a very useful sanity check.
This idea that it is so much better for OpenAI to have all this information about you because it can make some suggestions seems ludicrous. How has humanity survived thus far without this? This seems like you just need more connections with real people.
I have not really seen ChatGPT learn who I "am", what I "like", etc. With memories enabled, it seems to mostly remember random one-off things from one chat that are definitely irrelevant for all future chats. I much prefer writing a system prompt where I can decide what's relevant.