> An LLM generates the next word based on what best matches its training, with some level of randomisation.
This is sort of accurate, but not precise.
An LLM generates the next token by sampling from a probability distribution over possible tokens, where those probabilities are computed from patterns learned during training on large text datasets.
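To make that concrete, here is a minimal sketch of the sampling step. Real LLMs produce the raw scores (logits) with a large neural network; the toy vocabulary and scores below are made up purely for illustration.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    Temperature controls the randomisation: lower values sharpen
    the distribution (more deterministic), higher values flatten it.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax over the vocabulary
    # Draw one token index according to those probabilities.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy stand-in for a model's output over a 4-token vocabulary.
vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.5, -1.0]
token = vocab[sample_next_token(logits, temperature=0.8)]
```

With a very low temperature this collapses to always picking the highest-scoring token; with a high temperature it approaches uniform random choice.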
The difference in our explanations is that you are biasing towards LLMs being fancy database indexes, while I am emphasizing that LLMs build a model of what they are trained on and respond based on that model, which is more like how brains and cells work than you are recognizing. (Though I admit my understanding of microbiology places me just barely past peak Mt Stupid [Dunning–Kruger]: I don't really understand how individual cells do this and can only explain it in a hand-wavy way.)
Both systems take input, pass it through a network of neurons, and produce output. Both are trying to minimize surprise in their predictions. The differences are primarily of scale and complexity: human brains have more types of neurons (units) and more types of connections (parameters). LLMs more closely mimic the prefrontal cortex, whereas structures like the brainstem differ far more in organisation and cellular diversity.
You can make a subjective ontological choice to draw categorical boundaries between them, or you can plot them on a continuum of complexity and scale. Personally I think both framings are useful, and to exclude either is to exclude part of the truth.
My point is that if you draw a subjective categorical boundary around what you deem is consciousness and say that LLMs are outside of it, that is subjectively valid. You can also say that consciousness is a continuum, and that individual cells, blades of grass, ants, mice, and people experience different types of consciousness along it. If you take the continuum view, then what follows is a reasonable assumption that LLMs experience a very different kind of consciousness: one that takes in inputs at about the same rate as a small fish, models those inputs for a few seconds, and then produces outputs. What exactly that "feels" like is as foreign to me as it would be to you. I assume it's even more foreign than what it would "feel" like to be a blade of grass.
I'm not sure why you'd describe "sampling from a probability distribution over possible tokens" as "minimize surprise in predictions" other than to make it sound similar to the free energy thing.
The free energy thing, as I understand it, has internal state, makes predictions, evaluates them against new input, and adjusts its internal state to continuously learn to predict new input better. This might, if you squint, look similar to training a neural network, although the mechanisms are different, but it is very distinct from the inference step.
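The loop described above can be sketched as a toy scalar model (this is my own illustrative simplification, not Friston's actual formulation): the system holds an internal estimate, predicts the next input, measures the prediction error ("surprise"), and nudges its state to reduce future error. Note how prediction and learning are interleaved in every step, which is exactly the contrast with an LLM's frozen-weights inference.

```python
def predictive_loop(inputs, learning_rate=0.1):
    """Toy sketch of continuous prediction-error minimization.

    The internal state (a single number) is updated on every
    observation, so the system keeps learning while it predicts --
    unlike LLM inference, where weights are fixed.
    """
    belief = 0.0  # internal state: current estimate of the input
    errors = []
    for x in inputs:
        prediction = belief               # predict the next input
        error = x - prediction            # "surprise": prediction error
        belief += learning_rate * error   # adjust internal state
        errors.append(abs(error))
    return belief, errors

# Feed a constant signal: surprise should shrink step by step.
belief, errors = predictive_loop([5.0] * 50)
```

Running this on a steady input, the belief converges toward the signal and the per-step error decays, which is the "continuously learns to predict new input better" part in miniature.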