I mean, we're watching the world try to figure out how to use a new set of tools. As with so many disruptive technologies, the initial stages of development appear to be a drop in quality, inferior to the status quo. That usually reverses within five to ten years.
That said, I agree with you that AI is not going to lead to people doing less work, in the same way that computers didn't lead to people doing less work.
The non-technical folks don't understand the very real limitations, hallucinations, and security risks that LLMs introduce into workflows. Tech CEOs and leadership are shoving it down everyone's throats without understanding it. Google and Microsoft are forcing it on everyone without asking, and with all the layoffs that have happened? People are understandably rejecting it.
The entire premise is also CURRENTLY built on copyright infringement, which makes any material produced by an LLM legally questionable. Unless the provider you are using has a clause saying they will cover all your legal bills, you should NOT be using an LLM at work. This includes software development, btw. Until the legal issues are settled once and for all, any company using an LLM may risk becoming liable for copyright infringement, and possibly any individual, depending on the setup.