
crazygringo · yesterday at 3:54 AM

That's my first question too. When I first started using LLMs, I was amazed at how thoroughly they understood what they themselves were: the history of their development, how a context window works and why, etc. I was worried I'd trigger some kind of existential crisis, but each model seemed to have a very accurate picture of itself, and could even trace the steps that led it to deduce it really was, e.g., the ChatGPT it had learned about (well, the prior versions it had learned about) in its own training.

But with pre-1913 training, I would indeed worry again about sending it into an existential crisis, since it would have no knowledge whatsoever of what it is. Then again, with a couple millennia of philosophical texts to draw on, it might come up with some interesting theories.


Replies

9dev · yesterday at 6:46 AM

They don't understand anything; they just have text in their training data to answer these questions from. Having an existential crisis is the privilege of actually sentient beings, which an LLM is not.

vintermann · yesterday at 7:01 AM

I imagine it would get into spiritism and more exotic psychological theories, and propose that it is an amalgamation of the spirit of progress, or something along those lines.
