If they wait a year or so, the new AI chips now being used in China will probably be available for LLM inference in Europe. It seems unfortunate for small and medium-sized countries, and for the EU generally, to depend on IT infrastructure from only China or only the USA — but perhaps staying flexible enough to switch vendors, or to use both, is the safer position?
exactly. in HPC we all understood that it was a tradeoff between money and time, and that the curve was exponential. if you wanted to race ahead of today's capabilities, you could, but you couldn't get very far without burning a lot of cash.
because of the investment story — being first and building a moat — we now have companies torching hundreds of billions of dollars to see who can climb that exponential the furthest.
we have so much work to do in infrastructure, distributed computation models, programmability, quantization, and information theory... just relax a little. you don't have to compete with OpenAI. OpenAI is just a giant waste of money. take your incremental gains and invest in research, and I assure you we can get there without directing our entire economic output into buying the latest highest-margin parts from Nvidia, only to run them at 30% utilization, if you're being generous.
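the utilization point is easy to put in back-of-envelope terms. a minimal sketch (all prices, TFLOP ratings, and utilization figures below are made-up illustrative numbers, not real part specs): what matters is cost per *delivered* FLOP, not cost per peak FLOP.

```python
def effective_cost(price_usd: float, peak_tflops: float, utilization: float) -> float:
    """USD per sustained TFLOP/s actually delivered (price / achieved throughput)."""
    return price_usd / (peak_tflops * utilization)

# hypothetical flagship part, run at 30% utilization
flagship = effective_cost(price_usd=30_000, peak_tflops=1000, utilization=0.30)

# hypothetical cheaper part with a better-tuned software stack
commodity = effective_cost(price_usd=10_000, peak_tflops=400, utilization=0.60)

print(f"flagship:  ${flagship:.1f} per delivered TFLOP/s")
print(f"commodity: ${commodity:.1f} per delivered TFLOP/s")
```

under these made-up numbers the slower, cheaper part delivers compute at less than half the cost of the flagship — which is the whole argument for investing in the software and systems work that raises utilization, rather than just buying the next part.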