Neural networks are universal approximators. The function being approximated in an LLM is the mental process required to write like a human. Thinking of it as an averaging devoid of meaning is not really correct.
I don't think of it as "devoid of meaning". It's just curious to me that minimizing a loss function somehow results in sentences that look right but still... aren't. Like the one I quoted.
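For what it's worth, the "looks right but isn't" behavior falls straight out of the objective: the loss only rewards matching the training distribution, not truth. A toy illustration (made-up corpus, with a maximum-likelihood bigram model standing in for the LLM, since that's the exact minimizer of cross-entropy on this data):

```python
import math
from collections import Counter, defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = ("paris is the capital of france . "
          "london is the capital of england . "
          "berlin is the capital of germany .").split()

# Maximum-likelihood bigram model: empirical next-token frequencies,
# which is what minimizing cross-entropy loss on this corpus gives you.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def logprob(tokens):
    """Log-probability of a sentence, conditioned on its first token."""
    lp = 0.0
    for a, b in zip(tokens, tokens[1:]):
        lp += math.log(counts[a][b] / sum(counts[a].values()))
    return lp

true_s = "paris is the capital of france .".split()
false_s = "paris is the capital of germany .".split()
print(logprob(true_s), logprob(false_s))  # identical: the model can't tell
```

The loss-minimizing model assigns the false sentence exactly the same probability as the true one, because both are equally typical of the corpus. Whether the fix for that is more data, a different objective, or something else entirely is the open question.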
> Thinking of it as an averaging devoid of meaning is not really correct.
To me, this sentence contradicts the sentence before it. What would you say neural networks are then? Conscious?
> The function being approximated in an LLM is the mental process required to write like a human.
Quibble: That can be read as "it's approximating the process humans use to make data", which I think is a bit of a reach compared to "it's approximating the data humans emit... using its own process, which might turn out to be extremely alien."