It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
You’re correct: you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my workflow. But it takes practice to learn how to use it correctly.
I'd consider hallucinations to be a fundamental flaw that sets hard limits on the current utility of LLMs in any context.