Perhaps we can add that using LLMs for logical, creative or reasoning tasks (things the technology isn’t capable of doing) is an anti-pattern.
In my experience, LLMs work pretty nicely as rubber ducks for logical, creative or reasoning tasks. They'll make lots of mistakes, but if you know the field, the mistakes are often (though not always) easy to detect and brush off.
Whether that's worth the environmental or social cost, of course, remains open for debate.
I use LLMs for those purposes all the time and they seem to work for me.
> reasoning tasks
Please provide a definition of reasoning.
What would be examples of tasks of that type? I hate hype as much as the next guy, but frankly I don't think you can support that assertion.
I use LLMs as a sounding board for logical, creative or reasoning tasks all the time, since they can provide different points of view that make ME think about a problem differently.