Weird question. If a system you are using is designed to extract from you, nonconsensually taking your intellectual property because the unaccountable big companies behind it (the "AI" companies) do shady stuff, is it justified to use it for the productivity gains, on the grounds that it will 'eventually' get there anyway?
Does the potential gain as an early adopter make it morally OK?
Because that's how the tiered use of these AI systems, and how they've been getting better, works imo. They got lots of training data from juniors and seniors using them over the last 2 years, and they got better. They get more appealing and leverage human psychology and marketing to get higher-level engineers to train them, as the tools and the companies extract more data. They need, and get, more data from the people willfully complying and using them. Wondering if there's a game theory model for this conundrum - what typically happens in nature in these scenarios? A sketch of one possible framing follows.
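One common way to formalize this kind of conundrum is as an n-player prisoner's dilemma / tragedy of the commons: each engineer's adoption pays off individually, while everyone's adoption together strengthens the extractive system. Here is a minimal sketch in Python; the payoff numbers (gain, drain) are made-up assumptions purely for illustration, not measurements of anything.

```python
# A toy model of the adoption dilemma as an n-player prisoner's dilemma.
# All constants here are hypothetical, chosen only to show the structure.

def payoff(adopts: bool, adoption_rate: float) -> float:
    """Payoff for one engineer given the fraction of peers who adopt.

    gain:  immediate productivity boost from using the tool (adopters only)
    drain: value extracted from everyone as the tool improves on the
           pooled training data; grows with how many people feed it
    """
    gain = 1.0 if adopts else 0.0      # early-adopter productivity edge
    drain = 1.5 * adoption_rate        # collective cost of a better extractor
    return gain - drain

for rate in (0.0, 0.5, 1.0):
    print(f"adoption rate {rate:.0%}: "
          f"adopter {payoff(True, rate):+.2f}, "
          f"holdout {payoff(False, rate):+.2f}")
```

Under these toy numbers, adopting strictly dominates for the individual at every adoption rate (always +1.00 over holding out), yet at 100% adoption everyone nets -0.50 versus 0.00 if nobody had adopted. That is the standard tragedy-of-the-commons signature, and in evolutionary game theory such games typically end with defection (here, adoption) going to fixation unless some outside mechanism, like reputation, repeated interaction, or enforcement, changes the payoffs.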