Tangent: is anyone using a 7900 XTX for local inference/diffusion? I finally installed Linux on my gaming PC, and about 95% of the time it's just sitting off collecting dust. I would love to put this card to work in some capacity.
I tested some image and text generation models, and generally things just worked after replacing the default torch libraries with AMD's ROCm variants.
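For anyone wanting to do the same swap, PyTorch publishes ROCm builds on its own wheel index; a minimal sketch (the ROCm version tag in the URL changes between releases, so check pytorch.org's install selector for the current one):

```shell
# In a fresh virtualenv: drop any CUDA build of torch, then install
# the ROCm build from PyTorch's dedicated wheel index.
pip uninstall -y torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.1

# Quick sanity check that the HIP (ROCm) backend is active; on a ROCm
# build, torch.version.hip is set and the usual torch.cuda API works.
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```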
I've done it with a 6800XT, which should be similar. It's a little trickier than with an Nvidia card (because everything is designed for CUDA) but doable.
You'd be much better off with any decent Nvidia card than with the 7900 series.
AMD doesn't have a unified architecture across GPU and compute the way Nvidia does.
AMD's compute cards are sold under the Instinct line and are vastly more powerful than their consumer GPUs.
Supposedly, they are moving back to a unified architecture in the next generation of GPU cards.
Try it with ramalama[1]. Worked fine here with a 7840U and a 6900 XT.
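For context, ramalama runs llama.cpp inside a container that ships the matching GPU libraries, so you don't have to manage a ROCm install yourself. Roughly (the model name below is just an illustrative shortname; check `ramalama --help` and its model catalog for what's actually available):

```shell
# Chat with a model; ramalama detects the AMD GPU and pulls a
# ROCm-enabled container image automatically.
ramalama run tinyllama

# Or expose an OpenAI-compatible HTTP endpoint instead of a chat REPL:
ramalama serve tinyllama
```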
I bought one when they were pretty new and I had issues with rocm (iirc I was getting kernel oopses due to GPU OOMs) when running LLMs. It worked mostly fine with ComfyUI unless I tried to do especially esoteric stuff. From what I've heard lately though, it should work just fine.