Virtualization has pushed back the need for a while, but at some point we are going to have to look at pointers larger than 64 bits. It's also not just about the raw size of datasets: we get a lot of utility out of various memory-mapping tricks, so we consume more address space than the strict minimum the dataset requires. And if we move up to 128 bits, a lot more security mitigations become possible.
Please keep in mind that doubling isn't the only option. There are lots of numbers between 64 and 128.
By virtualization, are you referring to virtual memory? For years we haven't even been able to mmap() the direct-attached storage on some AWS instances, due to limits on the virtual address space.
Even with larger virtual memory addresses, there is still the issue that the ratio of storage to physical memory in large systems would be so high that cache replacement algorithms stop working for most applications. You can switch to cache admission for locality at scale (strictly better in the limit, albeit much more difficult to implement), but that effectively segments the data model into chunks that won't come close to overflowing 64-bit addressing. 128-bit addresses would be convenient, but a lot of space is saved by keeping them 64-bit.
Space considerations aside, 128-bit addresses would open up a lot of pointer-tagging possibilities, e.g. the security features you allude to.