Like, for sure you can ask the AI to save its "settings" or "context" to a local file in a format of its own choosing, and then bring that back in the next prompt; couple this with temperature 0 and you should get to a fixed-point, deterministic app immediately.
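Roughly what that loop could look like, as a sketch assuming an OpenAI-style chat client; the file name, the <context> tag convention, and the model are my own illustration, not something the parent comment specifies:

    import os
    from openai import OpenAI

    CONTEXT_FILE = "agent_context.txt"   # hypothetical scratch file the model persists to
    client = OpenAI()

    def run_turn(user_msg: str) -> str:
        saved = open(CONTEXT_FILE).read() if os.path.exists(CONTEXT_FILE) else ""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,  # greedy-ish decoding; not a hard determinism guarantee
            messages=[
                {"role": "system",
                 "content": "Persist anything you need next turn between <context> tags."},
                {"role": "user", "content": f"<context>{saved}</context>\n{user_msg}"},
            ],
        )
        text = resp.choices[0].message.content
        # naive extraction of whatever the model chose to persist (format is its own)
        if "<context>" in text:
            new_ctx = text.split("<context>")[1].split("</context>")[0]
            open(CONTEXT_FILE, "w").write(new_ctx)
        return text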
> couple this with temperature 0
Not quite. Temperature 0 is not the same as fixing a random seed. Also, there are downsides to lowering the temperature, since you end up always choosing the most probable next token (greedy decoding).
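A toy sketch of the difference between the two knobs: temperature reshapes the distribution (and at 0 collapses to argmax), while a seed only makes the sampling from that distribution repeatable.

    import numpy as np

    def sample(logits, temperature, seed=None):
        rng = np.random.default_rng(seed)
        if temperature == 0:
            return int(np.argmax(logits))           # greedy: always the top token
        p = np.exp(np.array(logits) / temperature)  # temperature rescales the logits
        p /= p.sum()
        return int(rng.choice(len(logits), p=p))    # random, but repeatable given a seed

    logits = [2.0, 1.9, 0.1]
    print(sample(logits, temperature=0))            # same answer every run
    print(sample(logits, temperature=0.8, seed=7))  # varied, yet reproducible via the seed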
Why wouldn't the LLM codify that "context" into code so it doesn't have to rethink it over and over? Just like humans would. Imagine if you were manually operating a website and every time a request came in you had to come up with the SQL queries (without remembering how you did it last time) and manually type the responses. You wouldn't last long before you started automating.
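The "codify it once" idea could be as simple as having the model emit a module on the first request and importing it afterwards instead of re-prompting; the module name and function below are hypothetical placeholders:

    import importlib.util
    import os

    GENERATED = "generated_handlers.py"  # hypothetical module the LLM writes once

    def load_handlers(llm_generate):
        # llm_generate stands in for whatever call actually produces the code
        if not os.path.exists(GENERATED):
            code = llm_generate("Write build_user_query(user_id) returning the SQL string.")
            with open(GENERATED, "w") as f:
                f.write(code)
        spec = importlib.util.spec_from_file_location("generated_handlers", GENERATED)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod

    # handlers = load_handlers(my_llm_call)
    # sql = handlers.build_user_query(42)   # later requests never hit the LLM again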
There may still be some variance even at temperature 0. The generated code could still have errors, and LLMs are still bound by the undecidable problems of computability theory, like Rice's theorem: there's no general procedure for deciding whether arbitrary code satisfies a non-trivial semantic property (e.g. "this handler is correct").
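On the variance-at-temperature-0 point, one concrete (assumed, commonly cited) source is that floating-point addition isn't associative, so differences in how kernels accumulate logits can flip a near-tied argmax between runs:

    import numpy as np

    # Same three terms, different accumulation order, different float32 result.
    a = (np.float32(1e8) + np.float32(1.0)) - np.float32(1e8)
    b = (np.float32(1e8) - np.float32(1e8)) + np.float32(1.0)
    print(a, b)  # 0.0 1.0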