I kind of felt like this was coming, so mid last year I built a local rig with the best parts I could afford at the time lol (RTX 5090 / Ryzen 9, etc.). Now I just need to build out my inference setup (sadly, M3 Ultras are insanely expensive now). I have a feeling they will try to lock down usage of open-source LLMs too. I don't get how a token moat can exist if local inference rigs can be built out to serve open-source models for nothing besides power cost.
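For what it's worth, here's a rough sketch of what "nothing besides power cost" works out to. Every number below is an assumption for illustration, not a measurement from my rig:

```python
# Back-of-envelope electricity cost of local inference.
# All figures are assumed, not measured: adjust for your hardware and rates.
gpu_draw_watts = 500      # assumed average system draw under inference load
tokens_per_second = 50    # assumed throughput for a mid-size open model
price_per_kwh = 0.15      # assumed residential electricity price (USD)

seconds_per_m_tokens = 1_000_000 / tokens_per_second
kwh_per_m_tokens = (gpu_draw_watts / 1000) * (seconds_per_m_tokens / 3600)
cost_per_m_tokens = kwh_per_m_tokens * price_per_kwh
print(f"~${cost_per_m_tokens:.2f} per million tokens in electricity")
```

Under those assumptions it lands well under a dollar per million tokens, which is the point: once the hardware is paid off, the marginal cost is basically just power.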