Okay, the goalpost has instantly moved from "seeing" to "seeing and touching". Once you feed in touch-sensor data, where will you move the goalpost next?
Models see when photons hit camera sensors; you see when photons hit your retina. Both are some kind of sight.
The difference between photons hitting a camera sensor and photons hitting the retina is immense. With a camera sensor, the process ends in data: voltages in an array of photodiodes are quantized into digital values. There is no subject to whom the image appears. The sensor records, but it does not see.
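To make the "ends in data" point concrete, here is a minimal sketch of what a sensor pipeline actually produces (the bit depth, voltage range, and random voltages are hypothetical, not any real sensor's spec): analog photodiode readouts quantized by an ADC into an array of integers, and nothing more.

```python
import numpy as np

# Hypothetical 8-bit sensor: photodiode voltages in [0.0, 1.0] volts.
rng = np.random.default_rng(0)
voltages = rng.uniform(0.0, 1.0, size=(4, 4))  # stand-in for an analog readout

# The ADC step: quantize each voltage into a digital value in 0..255.
pixels = np.clip(np.round(voltages * 255), 0, 255).astype(np.uint8)

# This array of integers is the entire output; there is no subject anywhere in it.
print(pixels)
```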
When photons hit the retina, a similar transduction happens, but the signal does not stop at measurement. It flows through a living system that integrates it with memory, emotion, context, and self-awareness. The brain does not just register and store the light; it constructs an experience of seeing, a subjective phenomenon: qualia.
Once models start continuously learning from subjective visual experience, hit me up and I'll agree that they "see objects". Until a direct, raw stream of photo-sensor data about the world around them, with no labelling, can actually make a model learn anything, they are not even close to "seeing".