Hacker News

Terr_ · last Monday at 9:05 AM

The merits of this particular proposal aside, it's tactically important to get the ideas out there and build consensus about "where we want to get to."

Otherwise you're ceding control of the Overton window to the folks aiming for techno-serfdom.


Replies

ursAxZA · last Monday at 9:36 AM

I understand the need to seed future debates early.

My hesitation comes from the fact that most proposals implicitly assume a “fixed physical capability” for AI systems — something we don’t actually have yet.

In practice, social impact won’t be determined by abstractions but by power budgets, GPU throughput, reliability of autonomous systems, and years of real-world operation.

If scaling hits physical or economic limits, the eventual policy debate may look more like progressive taxation on high-wattage compute or specialized hardware than anything being discussed today.

And if fully automated systems ever do run safely for several consecutive years, there would still be time for the Overton window to shift.

I’m not dismissing long-term thinking.

I’m pointing out the opportunity cost: attention spent on hypothetical futures tends to displace attention from problems that exist right now. That tradeoff rarely appears in the discussion.

So for me it’s just a question of balance — how much time we allocate to tomorrow’s world versus today’s neighborhood.

From my own vantage point, the future talk feels disproportionately dominant, so the T-1000 analogy came naturally.
