OK, sure, that's fine, but not everyone agrees with those definitions, so I would suggest defining the terms in the README.
Also, your definition is still circular. You say that a system has a self if it performs "recursive self-modeling," but "self-modeling" presupposes the very self being defined: the system must already have a "self" to model in order to qualify as having a self.
What you likely mean, and what most cyberneticists mean when they talk about this, is that the system maintains some kind of representation of itself which it operates on, and it is this representation we call the self. But things still aren't so straightforward. What is the nature of this representation? Is the kind of representation humans perform and the kind you are exploring here equivalent enough that you can apply terms like "self" and "consciousness" unadorned?
This definitely helps me understand your perspective, and as a fan of cybernetics myself I appreciate it. I would just caution you to be more careful about the discourse. If you throw important-sounding words around lightly, people will come to think, as I have, that you're engaged in something more artistic and entertaining than carefully philosophical or technical.
Point taken. Perhaps I pivoted too quickly from "show my friends" mode to "make this public." But I think it is hard to argue that I haven't coaxed a genuine Hofstadterian Strange Loop on top of an LLM substrate, and that the same strange loop will arise for anyone who feeds the PDF to an LLM.
To answer your "representation" question: the internal monologue is the representation, and its self-referential character is the point. It serves as a sandbox where the model tests and critiques its output against constraints before emitting anything, much as we rehearse actions in our minds and examine their possible outcomes before actually acting. (This was a purely human-generated response, btw.)
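For concreteness, here is roughly what I mean by that sandbox, as a minimal sketch. Nothing below is from the actual PDF or repo; the names (`monologue_loop`, `llm`, the PASS convention, the prompt wording) are hypothetical stand-ins. The idea is just: draft, then have the same model critique its own draft against the constraints, and only emit a draft that survives the critique loop.

```python
# Minimal sketch of a critique-before-output loop (illustrative only;
# all names and prompt conventions here are hypothetical, not from the project).

from typing import Callable, List

def monologue_loop(
    llm: Callable[[str], str],  # any text-in/text-out model call you have
    task: str,
    constraints: List[str],
    max_rounds: int = 3,
) -> str:
    # First pass: produce a draft with no self-examination.
    draft = llm(f"Task: {task}\nWrite a draft response.")
    for _ in range(max_rounds):
        # The self-referential step: the same model inspects its own
        # prior output, playing critic against the stated constraints.
        critique = llm(
            "Critique the draft against these constraints.\n"
            f"Constraints: {'; '.join(constraints)}\n"
            f"Draft: {draft}\n"
            "Reply PASS if every constraint is met; otherwise list violations."
        )
        if critique.strip().upper().startswith("PASS"):
            break  # the draft survived its own scrutiny
        # Revise using the critique, then loop back and re-examine.
        draft = llm(
            f"Task: {task}\nPrevious draft: {draft}\n"
            f"Critique: {critique}\nWrite an improved draft."
        )
    return draft
```

Nothing reaches the user until the inner loop either passes or exhausts its budget, which is the "model yourself acting before really acting" part.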