I'm not sure the direction should be to fine-tune a small local model for each country or language. These models are already not particularly good at information retrieval, so I doubt anyone would use them for questions like the author suggests (i.e., who was the president between X and Y). They are also a little too lightweight to be used for translation.
If the budget really is that modest (5.5 million euros!), I would focus entirely on preparing datasets and making sure every open cultural artifact we can find is well documented in them. That way every model trained in the future, private or open, could better represent the culture and language of your country.
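To illustrate what I mean by "well documented": a rough sketch of one metadata record per artifact, written out as a JSON Lines corpus. The field names and the example values are just placeholders I made up, not any existing standard.

```python
# Sketch of per-artifact documentation, assuming a JSON Lines corpus.
# Field names are placeholders, not a real metadata standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class CulturalArtifact:
    title: str
    language: str      # ISO 639-1 code, e.g. "nl"
    country: str       # ISO 3166-1 alpha-2 code, e.g. "NL"
    license: str       # e.g. "CC-BY-4.0"; must be open for reuse
    source_url: str    # where the original artifact lives
    description: str   # short summary of content and cultural context
    provenance: str    # who digitised/curated it, and when

def write_corpus(artifacts: list[CulturalArtifact], path: str) -> None:
    """Write one JSON object per line so any training pipeline can ingest it."""
    with open(path, "w", encoding="utf-8") as f:
        for a in artifacts:
            f.write(json.dumps(asdict(a), ensure_ascii=False) + "\n")

if __name__ == "__main__":
    sample = CulturalArtifact(
        title="Example folk tale",
        language="nl",
        country="NL",
        license="CC-BY-4.0",
        source_url="https://example.org/folk-tale",
        description="A short folk tale transcribed from a public-domain collection.",
        provenance="Digitised in 2024 by a national library volunteer project.",
    )
    write_corpus([sample], "corpus.jsonl")
```

The point is less the exact schema and more that every record carries license, provenance, and cultural context, so anyone training a model later can filter and attribute the data properly.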
This is the way.
Sovereign SOTA models might also be possible with nation-state involvement. But this is a good stopgap.
Yeah, I think India is going the better route with Sarvam, which is trained from scratch and still relatively cheap.
I agree; the research is complex enough as it is without having to worry about splitting it, Babel-like, into multiple languages.