yes, except intelligence isn't like a car; there's no way to break the complicated emergent behaviors of these models into simple abstractions. you can understand an LLM by training one about as well as you can understand a brain by dissecting it.
I think making one would help you understand that they're not intelligent.