I've been kicking this around in my head for a while. If I want to run LLMs locally, a decent GPU is really the only important thing. At that point the question becomes, roughly: what is the cheapest computer to tack on the side of the GPU? Of course, that assumes everything does in fact work; unlike OP I am barely in a position to understand e.g. BAR problems, let alone fix them, so what I actually did was build a cheap-ish x86 box with a half-decent GPU and call it a day :) But it's still stuck in my brain: there must be a more efficient way to do this, especially if all you need is just enough computer to shuffle data to and from the GPU and serve it over a network connection.
We're not yet at the point where a single PCIe device will get you anything meaningful; IMO 128 GB of RAM available to the GPU is essential.
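The 128 GB figure is easy to sanity-check with back-of-envelope math (rough numbers only; real usage adds KV cache and activation overhead, and depends on context length):

```python
def model_vram_gb(params_billions: float, bits_per_param: int) -> float:
    """Rough VRAM needed just to hold the weights.
    Ignores KV cache and activations, which add more on top."""
    bytes_per_param = bits_per_param / 8
    # 1e9 params * bytes/param ~= that many GB (decimal)
    return params_billions * bytes_per_param

# A 70B model at fp16 needs ~140 GB of weights alone -- over a 128 GB budget.
print(model_vram_gb(70, 16))  # 140.0
# Quantized to 4-bit it fits with room to spare: ~35 GB.
print(model_vram_gb(70, 4))   # 35.0
```

So a single 24 GB consumer card won't cut it for the larger models, which is why multi-GPU (or big unified memory) keeps coming up.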
So while you don't need a ton of compute on the CPU, you do need the ability to drive multiple PCIe lanes. A relatively low-spec AMD EPYC processor is fine if the motherboard exposes enough lanes.
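The lane budget is simple arithmetic (the 128-lane figure is for a single-socket EPYC; exact slot layouts vary by board, and the NVMe counts here are just illustrative):

```python
def lanes_needed(n_gpus: int, lanes_per_gpu: int = 16,
                 nvme_drives: int = 1, lanes_per_nvme: int = 4) -> int:
    """Total PCIe lanes a build consumes; compare against what the
    CPU exposes (e.g. 128 on a single-socket EPYC)."""
    return n_gpus * lanes_per_gpu + nvme_drives * lanes_per_nvme

EPYC_LANES = 128  # single-socket EPYC exposes 128 PCIe lanes

# 4 GPUs at x16 plus one NVMe drive: 68 lanes, fits easily.
print(lanes_needed(4))  # 68
# 8 GPUs at full x16 plus an NVMe drive overshoots the budget (132 > 128),
# so you'd drop some cards to x8 -- fine for inference, where the bus
# mostly matters at model-load time.
print(lanes_needed(8) <= EPYC_LANES)  # False
```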
And you don’t want to go the M4 Max/M3 Ultra route? It works well enough for most mid-sized LLMs.
Get the DGX Spark computers? They’re exactly what you’re trying to build.
This problem was already solved 10 years ago: crypto-mining motherboards, which have a large number of PCIe slots, a CPU socket, one memory slot, and not much else.
> Asus made a crypto-mining motherboard that supports up to 20 GPUs
https://www.theverge.com/2018/5/30/17408610/asus-crypto-mini...
For LLMs you'll probably want a slightly different setup, with more memory and some M.2 storage too.
There is a whole section in here on how to spec out a cheap rig and what to look for:
I run a crowd-sourced website to collect data on the best and cheapest hardware setups for local LLMs here: https://inferbench.com/
Source code: https://github.com/BinSquare/inferbench