Or, it could be like asbestos: the immediate benefits are just too appealing to listen to skeptical naysayers' arguments about vaguely defined problems that are decades away, if they happen at all.
I use AI tools daily (because they feel like they're helping me), but it's not exactly hard to imagine scenarios where an explosion of accumulating slop, plus the harm to learning from outsourcing all thinking, results in systemic damage that actually slows technological progress given enough time.
The history of new technologies tends to average out to a positive trend over a long enough time scale, but that doesn't mean there aren't individual ups and downs, including WTF moments when we look back, with the benefit of hindsight, at what now seems like baffling decision-making.
> Or, it could be like asbestos
If it is, the fallout will be way worse than if AI ends up living up to (reasonable) expectations.
If it doesn’t live up to them, we are going to see over a trillion dollars of capital leave the tech sector, which I think will hurt the livelihoods of tech workers more than if AI ends up panning out.
This is something the naysayers need to grapple with. We’ve crossed a line where this tech needs to work simply because of the amount of money depending on that fact.
Some of us are already experiencing that. For example, I handed off an initial version of something a few months ago, and the AI-generated code that came back was a huge, buggy mess of spaghetti that neither of us understood. Months later we've untangled it, cutting it down to a third of its size, making it far simpler to understand, and fixing several bugs in the process (one even got fixed by accident: we'd made a note of it, and when we later went to fix it, it was already gone).