The moral argument in favor of piracy was that it didn’t cost the companies anything and the uses were noncommercial. Neither applies to the AI scrapers: they’re aggressively overusing freely provided services (listen to some of the other folks in this thread about how the scrapers behave), and they’re doing so to build competing commercial products.
I’m not arguing you shouldn’t be annoyed by these changes; I’m arguing you should be mad at the right people. The scrapers violated the implicit contract of the open internet, and now that contract is being made more explicit. GitHub’s not actually a charity, but it’s been able to provide a free service because the goodwill and community that come with it drive enough business to cover the cost of providing it. The scrapers have changed that math, just as they have for every other site on the internet. You can’t loot a store and expect them not to upgrade the locks; as the saying goes, the enemy gets a vote on your strategy, too.
There are plenty of commercial pirates, and those commercial uses were lumped in with noncommercial sharing in much the same way you are doing with scraping. Am I wrong in assuming most of this scraping comes from people using AI agents for things like AI-assisted coding? If an AI agent scrapes a page at a user's request (say, the 1 billionth git commit scraped today), do you consider that "loot[ing] a store"? What got looted? The bandwidth? The CPU? Or does this require the assumption that the author of that commit wouldn't be excited that their work is being used?
I'd like to focus on your strongest point: the cost to the companies. I would love to know what that increase in cost actually looks like. You can install nginx on a tiny server and serve 10k requests per second of static content, or maybe 50 (not 50k) rps from a typical web framework generating the same content dynamically. Any increase in cost has to be weighed against how efficiently the software serving that content is written.
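To make the comparison concrete, here's a minimal sketch of the kind of setup I mean: nginx serving prerendered pages straight off disk, with no application server in the request path. The domain and paths are hypothetical, not anything from GitHub's post.

```nginx
# Minimal sketch: nginx serving prerendered static files.
# server_name and root are placeholder values.
server {
    listen 80;
    server_name example.com;

    root /var/www/site;

    location / {
        # Serve directly from disk; each hit is essentially a
        # sendfile() out of the OS page cache.
        try_files $uri $uri/ =404;

        # Let well-behaved clients (and scrapers) cache for a while,
        # so repeat fetches don't even reach the server.
        expires 1h;
        add_header Cache-Control "public";
    }
}
```

A small VPS can sustain tens of thousands of requests per second against a config like this, because no templates are rendered and no database is queried per request; that's the gap the "10k vs. 50 rps" numbers are gesturing at.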
If this GitHub post included numbers and details demonstrating that they have reached the end of the line on optimizing their web frontend, that they have run out of things to cache, and that the increase in costs is a real concern for the company (not just a quick shave to the bottom line, not a bigger net/compute check written from GitHub to their owners), I'd throw my hands up with them and start rallying against the (unquestionably inefficient and borderline hostile) AI agent scrapers causing the increase in traffic.
Because they did not provide that information, I have to assume that GitHub and Microsoft are doing this purely out of profit motives and have abandoned any commitment to open access to software. In fact, they have much to gain from building the walls of their garden as high as they can get away with, and I'm skeptical that their increase in costs is material at all.
I would rather support services that don't pose as proponents of free and open software one day and as robbery victims the next. I still think this question is important and valid: there is a ton of software on GitHub written by users who want their work to remain open access. Is that the class of software and people you believe should be shuffled into smaller and smaller services that haven't yet abandoned the commitments that made them popular?