Why run your K8S cluster on IPv6 when IPv4 with 10.0.0.0/8 works perfectly with less hassle? You can always support IPv6 at the perimeter for ingress/egress. If your cluster is so big it can't fit in 10.0.0.0/8, maybe the right answer is multiple smaller clusters; your service mesh (e.g. Istio) can route inter-cluster traffic just based on names, not IPs.
And if 10.0.0.0/8 is not enough, there is always the old Class E range, 240.0.0.0/4. It's likely never going to be acceptable for use on the public Internet, but it sees growing use as an additional private IPv4 range, and it gives you over 268 million more IPv4 addresses.
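For the arithmetic behind those numbers, here's a quick sketch using Python's ipaddress module (nothing Kubernetes-specific, just the prefix sizes):

    import ipaddress

    # Size of the 10.0.0.0/8 private range suggested above for the cluster
    ten_slash_8 = ipaddress.ip_network("10.0.0.0/8")
    print(ten_slash_8.num_addresses)        # 16777216 (~16.7 million)

    # Size of the old Class E block, 240.0.0.0/4
    class_e = ipaddress.ip_network("240.0.0.0/4")
    print(class_e.num_addresses)            # 268435456 (~268 million)

    # Carving 10.0.0.0/8 into the usual /24-per-node pod ranges
    # leaves room for 65536 node-sized blocks
    print(sum(1 for _ in ten_slash_8.subnets(new_prefix=24)))   # 65536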
Are you responding to the right comment?
My point was that NAT is for IPv4 address exhaustion.
There’s no point to using NAT for IPv6.
If your software doesn't work with IPv6 and you need IPv4, then you're the one subject to IP address exhaustion. So yeah, you need NAT for IPv4.
> Why run your K8S cluster on IPv6 when IPv4 with 10.0.0.0/8 works perfectly with less hassle? You can always support IPv6 at the perimeter for ingress/egress.
How is it "less hassle"? You've got to use a second, fiddlier protocol and you've got to worry about collisions and translations. Why not just use normal IPv6 and normal addresses for your whole network, how is that more hassle?
> You can always support IPv6 at the perimeter for ingress/egress. If your cluster is so big it can't fit in 10.0.0.0/8, maybe the right answer is multiple smaller clusters; your service mesh (e.g. Istio) can route inter-cluster traffic just based on names, not IPs.
You can work around the problems, sure. But why not just avoid them in the first place?
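To make the collision point concrete, here's a small sketch with Python's ipaddress module. The specific prefixes are invented for illustration:

    import ipaddress

    # Two networks that both independently picked space out of 10.0.0.0/8,
    # e.g. your cluster and an acquired company's VPC you now have to peer with
    cluster_pods = ipaddress.ip_network("10.0.0.0/8")
    partner_vpc = ipaddress.ip_network("10.4.0.0/16")    # hypothetical partner range
    print(cluster_pods.overlaps(partner_vpc))             # True -> NAT or renumbering

    # Globally unique IPv6 prefixes simply don't collide
    # (2001:db8::/32 is the documentation prefix, used here as a stand-in)
    cluster_v6 = ipaddress.ip_network("2001:db8:a::/48")
    partner_v6 = ipaddress.ip_network("2001:db8:b::/48")
    print(cluster_v6.overlaps(partner_v6))                 # False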