Unlike offloading weights from VRAM to system RAM, I just can't see a situation where you'd want to offload to an SSD. The bandwidth gap is too large, and any model too big to fit in system RAM is probably unusable unless it fits in VRAM anyway.
Unusable for anything like realtime response, yes. But it might be usable, and even quite sensible, for powering less-than-realtime workloads on much cheaper inference hardware, as long as the slow storage bandwidth doesn't bottleneck compute too badly.
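To make the tradeoff concrete: for a dense model, generating each token has to stream roughly the full set of weights past the compute units, so a bandwidth-bound upper limit on throughput is bandwidth divided by model size. Here's a rough sketch of that arithmetic; the tier bandwidths and model size are illustrative assumptions, not measurements of any particular hardware.

```python
# Back-of-envelope estimate of token throughput when inference is
# bandwidth-bound (dense model: every token reads all weights once):
#   tokens/sec ~= effective_bandwidth / model_size
# All figures are illustrative assumptions, not benchmarks.

MODEL_SIZE_GB = 70  # e.g. a 70B-parameter model quantized to 8 bits

# Assumed rough bandwidths for each storage tier, in GB/s.
TIERS = {
    "VRAM (HBM)": 1000,
    "System RAM (DDR5)": 60,
    "NVMe SSD": 5,
}

def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float = MODEL_SIZE_GB) -> float:
    """Upper bound on tokens/sec if each token streams all weights once."""
    return bandwidth_gb_s / model_size_gb

for name, bw in TIERS.items():
    print(f"{name:>18}: ~{tokens_per_sec(bw):6.2f} tokens/sec")
```

With these numbers an SSD tops out well under one token per second, which is hopeless for chat but arguably fine for an overnight batch job, which is exactly the less-than-realtime case above.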