> it does not act independently of the individual leveraging the tool
This used to be true. As we scale out the notion of agents, it becomes less true.
> western liberal ideals of truth, liberty, and individual responsibility
It is said that psychology replicates best on WASP undergrads. Take that as you will, but the common aphorism is evidence against your claim that social science is removed from established Western ideals. This sounds more like a critique of the theories and writings of the humanities for allowing fields like philosophy to consider critical race theory or similar (a common boogeyman in the US, which is itself far removed from Western liberal ideals of truth and liberty, though 23% of the voting public do support someone with an overdeveloped ego, so perhaps one could claim individualism is still an ideal).
One should note there is a difference between the social sciences and humanities.
One should also note that the fear of AI, and the goal of alignment, stems from the prospect that humanity is on the cusp of creating tools with independent will. Whether we're discussing the ideas raised by *Person of Interest* or actual cases of libel produced by Google's AI summaries, there is quite a bit that the social sciences, law, and humanities do and will have to say about the beneficial application of AI.
We have ethics in war, governing treaties, etc. precisely because we know how crappy humans can be to each other with the tools under their control. I see little difference in adjudicating the ethics of AI use and application.
That said, I do think stopping all interaction, like what Anthropic is doing here, is short-sighted.
A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does?
Alignment efforts, and the belief that AI should itself prevent harm, shift us much closer to that dispersed-responsibility model, and I think history has shown that when responsibility is dispersed, no one is responsible.