I've spent many years moving away from relying on third parties: I got my own servers, do everything locally, and run almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.
However, I recently got a 100 EUR/month LLM subscription. That's the most I've ever spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back in the lap of US companies. I must say I enjoyed my autonomy while it lasted.
One day AI will be democratized/cheap enough that people can self-host what are now leading-edge models, but it will take a while.
Out of curiosity, what use case or difference caused the 180?
Have you tried out Gemma 3? The 4B-parameter model runs surprisingly well on a MacBook, about as quickly as ChatGPT 4o. Of course the results are a bit worse and other product features (search, Codex, etc.) don't come along for the ride, but wow, it feels very close.
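If you want to poke at it, here's a minimal sketch of one way to talk to it locally, assuming you've installed Ollama, pulled the model with `ollama pull gemma3:4b`, and installed the ollama Python client (`pip install ollama`); the prompt is just a placeholder:

```python
# Minimal sketch: chat with a locally hosted Gemma 3 4B through the Ollama
# Python client. Assumes the Ollama server is running on this machine and
# that `ollama pull gemma3:4b` has already been done.
from ollama import chat

response = chat(
    model="gemma3:4b",
    messages=[
        {"role": "user", "content": "Summarize why self-hosting LLMs is getting easier."}
    ],
)
print(response.message.content)
```

Everything stays on your own hardware, so it fits the self-hosting setup you described, just with noticeably weaker results than the hosted frontier models for now.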