We know how neural nets work. We don't know how a specific combination of weights in the net is capable of coherently answering questions asked in natural language, though. If we did, we could replicate what it does without training it.
> We know how neural nets work. We don't know how a specific combination of weights in the net is capable of coherently answering questions asked in natural language, though.
these are the same thing. the neural network is trained to predict the most likely next word (or rather, the next token) — that's how it works. that's it. you train a neural network on data, it learns the function you trained it on, and it "acts" like the data. have you actually studied neural networks? do you know how they work? I'm confused why you and so many others seem so confused by this. what, fundamentally, are you asking for to meet the criteria of "knowing how LLMs work"? some algorithm that can look at the weights and predict whether the net will output "coherent" text?
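to make the "predict the most likely next token, learned from data" point concrete: here's a minimal sketch of the same idea at toy scale — a character-level bigram model built from counts instead of a trained neural net. the corpus, function names, and greedy decoding choice are all my own illustration, not anything from an actual LLM; a real model replaces the count table with learned weights and a softmax, but the objective ("act like the data") is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count which character follows which — the toy analogue of
    # "the model learns the function you trained it on".
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, ch):
    # Greedy next-token prediction: the continuation seen most often
    # after `ch` in the training data.
    return counts[ch].most_common(1)[0][0]

corpus = "the theme of the thesis"
model = train_bigram(corpus)
print(predict_next(model, "t"))  # in this corpus, 't' is always followed by 'h'
```

the "coherence" people are surprised by is just this mechanism scaled up: enough parameters and enough data that the learned next-token function captures grammar, facts, and style at once.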
> If we did, we could replicate what it does without training it.
not sure what this is supposed to mean