I understand the need to seed future debates early.
My hesitation is that most proposals implicitly assume a “fixed physical capability” for AI systems, something we don’t actually have yet.
In practice, social impact won’t be determined by abstractions but by power budgets, GPU throughput, reliability of autonomous systems, and years of real-world operation.
If scaling hits physical or economic limits, the eventual policy debate may look more like progressive taxation on high-wattage compute or specialized hardware than anything being discussed today.
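For what it’s worth, “progressive taxation on compute” has a concrete shape: marginal brackets keyed to wattage instead of income. Here’s a toy sketch of that idea; every threshold, rate, and the price_per_watt parameter is invented purely for illustration, not drawn from any actual proposal:

```python
# Hypothetical illustration only: a progressive levy keyed to the sustained
# power draw of a compute cluster, by analogy with income-tax brackets.
# Bracket thresholds (watts) and marginal rates are made-up numbers.

BRACKETS = [
    (10_000, 0.00),       # below 10 kW: untaxed
    (100_000, 0.01),      # 10 kW - 100 kW: 1% marginal rate
    (1_000_000, 0.05),    # 100 kW - 1 MW: 5% marginal rate
    (float("inf"), 0.15), # above 1 MW: 15% marginal rate
]

def compute_levy(watts: float, price_per_watt: float = 1.0) -> float:
    """Levy owed on a cluster's sustained draw, applied marginally per bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if watts <= lower:
            break
        taxed_span = min(watts, upper) - lower  # portion falling in this bracket
        owed += taxed_span * price_per_watt * rate
        lower = upper
    return owed

print(compute_levy(250_000))  # a 250 kW cluster under these made-up brackets
```

The point of the marginal structure is the same as in income tax: small operators pay nothing, and crossing a threshold only taxes the wattage above it, so there’s no cliff at the bracket edge.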
And if fully automated systems ever run safely for several consecutive years, that milestone would still arrive early enough for the Overton window to shift.
I’m not dismissing long-term thinking.
I’m pointing out the opportunity cost: attention spent on hypothetical futures tends to displace attention from problems that exist right now. That tradeoff rarely appears in the discussion.
So for me it’s just a question of balance: how much time we allocate to tomorrow’s world versus today’s neighborhood.
From my own vantage point, the future talk feels disproportionately dominant, so the T-1000 analogy came naturally.
I think "tax AI" makes as little sense as "taxing Jacquard looms" or "taxing robot factory-arms"... Which are all part of a long-term trend, and attention to that trend is overdue, rather than premature.