Seems kinda like a first world problem to me.
The way I see it, when LLMs work, they're almost magical. When they don't, oh well — it didn't take that long anyway, and since I didn't have them until recently, I can just do things the old boring way if the magic fails.
The only time it ever seems like magic is when you don't really care about the problem or how it gets "solved" and are willing to ignore all the little things it got wrong.
Generative AI is neither magic, nor does it really solve any problems. The illusion of productivity is all in your head.
The problem with Zork is that you don't have a list of all the options in front of you, so you have to guess. You could have a menu that lists all the valid options, but that changes the game: it no longer requires imagination and open-ended thinking, and it becomes more of a point'n'click storybook.
But for tools, we should have a clear, up-front list of capabilities and menu options. Photoshop and VS Code give you menu after menu of options with explicit, well-defined behaviors because they are tools used to achieve a specific aim, not toys for open-ended exploration.
An LLM doesn't give you a menu because the LLM doesn't even know what it's capable of. And that's why I think we see such polarized responses: some people want an LLM that's a supercharged version of a tool, others want a toy for exploration.