My counterargument is that manual training, while beneficial, won't deliver the scaling factors being thrown around. It won't lead to the single-person unicorn that keeps being talked about so excitedly.
For that, the model would need to learn all of this architecture and structure on its own from the huge repositories of human knowledge, like the internet.
Until then, reality will fall short of expectations, and the bubble will keep heading towards popping.