We do not write the code that makes it do what it does. We write the code that trains it to figure out how to do what it does. There's a big difference.
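To make that concrete, here's a toy sketch (entirely my own illustration, not anyone's real training code): the rule "double the input" never appears anywhere in the program; we only write the loop that learns it from examples.

    # The code we write is the learning procedure, not the behaviour itself.
    import random

    examples = [(x, 2 * x) for x in range(1, 6)]  # data, not rules
    w = random.random()                            # a single learnable weight

    for _ in range(1000):
        x, target = random.choice(examples)
        prediction = w * x
        gradient = 2 * (prediction - target) * x   # d(error^2)/dw
        w -= 0.01 * gradient                       # nudge the weight, not the logic

    print(w)  # ends up near 2.0 without us ever coding "multiply by 2"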
I think the discrepancy is this:
1. We trained it on a fraction of the world's information (e.g. the text and media that humans have explicitly put online)
2. It carries all of the biases we humans have and, worse, the biases present in the information we chose to share online (which may or may not match what humans experience in everyday life)
and then the code to give it context. AFAIU, there is a lot of post-training "setup" in the context and variables to get the trained model to "behave as we instruct it to" (roughly the kind of thing sketched below)
Am I wrong about this?
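By "setup" I mean something like this (the names and prompt here are placeholders I made up, not any particular vendor's API): the trained weights never change; behaviour is steered by wrapping the user's text in instructions before inference runs.

    SYSTEM_PROMPT = (
        "You are a helpful assistant. Refuse harmful requests. "
        "Answer concisely."
    )

    def build_context(user_message: str, history: list[str]) -> str:
        # At this stage, everything that shapes behaviour is just more input text.
        parts = [SYSTEM_PROMPT, *history, f"User: {user_message}", "Assistant:"]
        return "\n".join(parts)

    prompt = build_context("Summarise this thread for me.", history=[])
    # completion = run_inference(model_weights, prompt)  # hypothetical call
    print(prompt)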
The code that builds the models and performs inference from them is code we have written. The data in the model is obviously the big trick. But what I'm saying is that running inference alone does not give it super-powers over your computer. You can write some agentic framework where it WOULD have power over your computer, but that's not what I'm referring to.
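Rough sketch of the distinction I mean (generate() is a stand-in, not a real API): inference only ever returns text; the machine is only touched because we wrote a wrapper that executes the text it returned.

    import subprocess

    def generate(prompt: str) -> str:
        # Stand-in for an inference call; pretend the model suggested a command.
        return "ls -la"

    def agent_step(goal: str) -> str:
        command = generate(f"Suggest one shell command to: {goal}")
        # THIS is the "agentic framework" part: our code, not the model,
        # chooses to run the generated text as a real command.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout

    print(agent_step("list the files in the current directory"))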
It's not a living thing inside the computer; it's just inference building text token by token, using probabilities from the pre-computed model.
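Again just a toy picture, with made-up scores instead of a real forward pass, but this is the whole loop: turn scores for every possible next token into probabilities, sample one, append it, repeat.

    import math, random

    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def fake_logits(context: list[str]) -> list[float]:
        # Stand-in for the forward pass through the pre-computed weights.
        return [random.uniform(-1, 1) for _ in vocab]

    def softmax(logits: list[float]) -> list[float]:
        exps = [math.exp(v) for v in logits]
        total = sum(exps)
        return [e / total for e in exps]

    tokens = ["the"]
    for _ in range(6):
        probs = softmax(fake_logits(tokens))
        tokens.append(random.choices(vocab, weights=probs, k=1)[0])

    print(" ".join(tokens))  # no agency, just one more sampled token each step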