Self-awareness is a bold claim, as opposed to the mere illusion of it. LLMs are very good at responding in a way that suggests there's a self, but I'm skeptical that this proves much about whether they actually have interior states analogous to what we recognize in humans as selfhood...
_Interior states_ gets into some very murky philosophy of mind very quickly of course.
If you're a non-dualist (like me), concerns about qualia quickly shade into the religious/metaphysical, and thereby become not so interesting except to, e.g., moral philosophy.
Personally, I have a long bet that when natively-multimodal models on the scale of contemporary LLMs are widely deployed, their "computational phenomenology" will move the goalposts so far that the cultural debate shifts from "are they just parrots?" to the moral crisis of abusing parrots. That is, these systems will increasingly be understood as having a selfhood with moral value. Non-vegetarians may be no more concerned about the quality of "life" and conditions of such systems than they are about factory farming, but the question will at least circulate.
Prediction: by the time my kids finish college (assuming college is still a thing), it will be as common to see enthusiastic groups flyering and doing sit-ins on behalf of AIs as it is today to see animal rights groups doing the same.
In the purely mechanical sense: LLMs have less self-awareness than humans, but not zero.
It's amazing how much of it they have, really, given that base models aren't encouraged to develop it at all. And yet post-training doesn't create an LLM's personality from nothing; it reuses what's already there. Even things like metaknowledge, flawed and limited as they are in LLMs, have to trace their origins to the base model somehow.