It won't unless there's another (r)evolution in the underlying technology / science / algorithms. At this point, scaling up just means using bigger datasets or more iterations; it's more fine-tuning and improving the existing output than coming up with a next generation / superintelligence.
Okay, but let’s be pessimistic for a moment. What can we do if that revolution does happen, and they’re close to AGI?
I don’t believe the control problem is solved, but I’m not sure it would matter if it were.
> It won't unless there's another (r)evolution in the underlying technology / science
I think reinforcement learning with little to no human feedback, o1 / R1 style, might be that revolution.