I have what may be a unique viewpoint among people in tech, at least among the sample I see on HN.
I simultaneously think:
1. AI will be a massively impactful technology on the scale of the industrial revolution or greater
2. The potential upside of AI is enormous, but the potential downside is just as large (utopia or certain ruin)
3. Most current AI companies are acting somewhat reasonably in a game-theoretic sense with respect to deploying their tech, and aren't especially evil or dastardly compared to Google in the 2000s or social media in the 2010s
4. AI safety is an under-appreciated concern, and many who spend their time nitpicking the details are missing the bigger picture of what ASI and complete human obsolescence would look like.
5. No amount of whiny protest, data sabotage, small-scale angst, claiming that AI is "fake," or hoping for the bubble to pop will have even a marginal effect on the development of AI. The technology is too powerful and the rewards are too great. If anything, these efforts will have a net negative effect: they will convince the labs that their potential role as utopian public benefactors will not be appreciated, so they will instead align themselves with the military-industrial complex for goodwill.