Hacker News

ACCount37 | last Monday at 10:36 PM

In the purely mechanical sense, LLMs have less self-awareness than humans, but not zero.

It's amazing how much of it they have, really, given that base models aren't encouraged to develop it at all. And yet post-training doesn't create an LLM's personality from nothing; it reuses what's already there. Even things like metaknowledge, flawed and limited as they are in LLMs, have to trace their origins back to the base model somehow.