Anecdotally, I’m finding that, at least in the Spark ecosystem, AI-generated ideas and code are far from optimal. Some of this comes from misinterpreting the (sometimes poor) documentation, and some of it probably comes from there being far fewer open-source examples than there are for CRUD apps, which AI “influentists” (to borrow from TFA) often seem to be hyping up.
This matters a lot to us because the difference in performance of our workflows can be the difference between $10/day and $1,000/day in costs.
Just as TFA stresses, it’s the team’s expertise in pushing back against poor AI-generated ideas and code that keeps our business within reach of cash-flow positive. ~“Surely this isn’t the right way to do this?”
Most text worth paying for (code, contracts, research) requires:
- accountability
- reliability
- validation
- security
- liability
Humans can reliably produce text with all of these features. LLMs can reliably produce text with none of them.
If text lacks all of these, it could still be worth paying for if it's novel and entertaining. IMO, LLMs can't really do that either.