Yes, multiple GPUs absolutely help with inference, even for a single model instance. Some models are simply too big to fit on the largest available GPU.
Check out tensor parallelism.
Tensor parallelism is not useful on consumer platforms with slow interconnects, unless compute is really low and you prioritize decreasing latency over throughput. Pipeline parallelism (and potentially expert parallelism) is more workable there, since it only communicates activations at stage boundaries instead of syncing inside every layer.
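To see why the interconnect matters so much, here's a rough back-of-the-envelope sketch (my own illustrative cost model, not a benchmark): Megatron-style tensor parallelism does roughly two all-reduces of the activation per transformer layer in the forward pass, while pipeline parallelism only ships the activation once at each stage boundary.

```python
# Illustrative communication-cost sketch (assumed numbers: a 32-layer model
# with hidden size 4096 in fp16). Not a real benchmark, just counting bytes.

def tp_bytes_per_token(n_layers, hidden, dtype_bytes=2, allreduces_per_layer=2):
    # Tensor parallelism: each all-reduce moves roughly one activation's
    # worth of data per GPU, and it happens inside every layer.
    return n_layers * allreduces_per_layer * hidden * dtype_bytes

def pp_bytes_per_token(n_stages, hidden, dtype_bytes=2):
    # Pipeline parallelism: one activation transfer at each of the
    # (n_stages - 1) stage boundaries, regardless of layer count.
    return (n_stages - 1) * hidden * dtype_bytes

tp = tp_bytes_per_token(n_layers=32, hidden=4096)
pp = pp_bytes_per_token(n_stages=2, hidden=4096)
print(tp // pp)  # -> 64: TP pushes ~64x more data over the interconnect here
```

Over NVLink that traffic is cheap; over PCIe or a consumer motherboard it quickly becomes the bottleneck, which is why pipeline splitting is usually the better fit on home rigs.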