Maybe, but due to the physics of signal integrity, socketed RAM will always be slower than RAM integrated onto the same PCB as whatever processing element is using it, so by the time CAMM / LPCAMM catches up, some newer integrated RAM solution will be faster yet.
This is a matter of physics. It can't be "fixed." Signal integrity is why classic GPU cards have GiBs of RAM soldered directly onto the board: non-upgradeable memory that people have been happily buying for years now.
Today, the RAM requirements of GPUs and their applications have grown so large that the extra low-cost, slow, socketed RAM is a false economy. Naturally, it's being eliminated as PCs evolve into big GPUs with one flavor or another of traditional-ISA processing elements attached.
Is the problem truly down to physics, or down to the stovepiped and conservative attitudes of PC parts manufacturers and their trade groups like JEDEC? (Not that consumers don't play a role here too.)
The only essential difference between sockets and solder is the metal-to-metal contacts. Module size and distance from the CPU/GPU are adjustable parameters, if the will exists to change them.
Perhaps it's time to introduce an L4 cache and a new slot-CPU design where the RAM/L4 is incorporated into the CPU package? The original slot CPUs that Intel and AMD shipped in the late '90s addressed similar issues with L2 cache.
How much higher bandwidth, percentage wise, can one expect from integrated DRAM vs socketed DRAM? 10%?
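Far more than 10% in practice, though mostly because soldering makes much wider buses economical, not just because each pin runs faster. A back-of-envelope sketch in Python, assuming a typical dual-channel DDR5-5600 desktop versus the 256-bit LPDDR5X-8000 bus in the Framework Desktop mentioned below (the part choices are my assumptions, not measurements):

    # Back-of-envelope peak-bandwidth comparison: socketed vs. soldered DRAM.
    # Part numbers below are illustrative assumptions, not measurements.

    def peak_gb_per_s(mega_transfers_per_s: int, bus_width_bits: int) -> float:
        """Peak theoretical bandwidth in GB/s: transfer rate x bytes per transfer."""
        return mega_transfers_per_s * (bus_width_bits / 8) / 1000

    # Typical socketed desktop: dual-channel (128-bit) DDR5-5600 DIMMs.
    socketed = peak_gb_per_s(5600, 128)   # ~90 GB/s

    # Soldered LPDDR5X-8000 on a 256-bit bus (e.g. the Framework Desktop's Strix Halo).
    soldered = peak_gb_per_s(8000, 256)   # ~256 GB/s

    print(f"socketed: {socketed:.0f} GB/s, soldered: {soldered:.0f} GB/s")
    print(f"soldered advantage: {(soldered / socketed - 1) * 100:.0f}%")

That works out to roughly 90 GB/s vs. 256 GB/s, about 186% higher. The per-pin speed gain from soldering is real but comparatively modest; most of the multiplier comes from the much wider bus, which is hard to route through socketed modules while holding signal integrity.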
It’s possible that Apple really did soldered RAM a disservice by making it a key profit-increasing option, exploiting buyers’ inability to source RAM elsewhere or upgrade later. That made soldered RAM look like a scam, even though it has fundamental advantages, as you point out.
Going from 64 GB to 128 GB of soldered RAM on the Framework Desktop costs €470, which doesn’t seem that much more expensive than fast socketed RAM. Going from 64 GB to 128 GB on a Mac Studio costs €1000.
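Worked out per gigabyte from those numbers: €470 / 64 GB ≈ €7.3/GB on the Framework versus €1000 / 64 GB ≈ €15.6/GB on the Mac Studio, so Apple charges a bit over 2x for the same capacity bump.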