So basically "you're holding it wrong"?
I'd say "skill issue," since this is a domain where there genuinely are plenty of ways to "hold it wrong," and lots of ink has been spilled on how to hold it better. "You're holding it wrong" connotes dismissal of user frustration, which is not my intent.
(I’m dismissive of calling the tool broken though.)
Remember when "Googling" was a skill?
LLMs are definitely in the same boat. It's even more model-specific: different models have different quirks, so the more time you spend with one, the better the results you get from it.
Do you think it's impossible to ever hold a tool incorrectly, or use a tool in a way that's suboptimal?
I found this a pretty apt, if terse, reply. I'd appreciate someone explaining why it deserves to be downvoted.
This is what I'm told every time. The gap between learning how to Google properly and the number of hoops and depth of understanding you need to get something useful out of these supposedly revolutionary tools is absurd. I'm pretty tired of people trying to convince me that AI, and specifically generative AI, is the great thing they say it is.
It's also a red flag to see anyone refer to these tools as intelligence. The marketing of calling this "AI" has finally worked its way into our discourse to the point that even tech forums treat the prediction machine as intelligent.