It’s not just on-the-job learning, though. I’m no AI expert, but the fact that you have “prompt engineers” and that AI doesn’t know what it doesn’t know gives me pause.
If you ask an expert, they know the bounds of their knowledge and can understand a question phrased in multiple ways. If they don’t know the answer, they can point you to someone who does, or just say “we don’t know”.
LLMs just lie to you, and we call it “hallucinating” as though they’ll eventually get it right once the drugs wear off.
An LLM comprehends, but does not understand. It’s interesting to see these two qualities separated; until now they were synonyms.
> I’m no AI expert, but the fact that you have “prompt engineers” [...] gives me pause.
Why? A group of human workers can get a lot more done with a capable leader who prompts them in the right direction, corrects oversights, etc.
And overall, prompt engineering seems like exactly the kind of skill AI will be able to develop by itself. You can already see a bit of this happening: when you ask Gemini to create a picture, the language part of Gemini takes your request and engineers a prompt for the image part of Gemini.
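The two-stage flow described there can be sketched in a few lines. This is a hypothetical illustration, not Gemini's actual implementation: both model calls are stand-in stubs (`rewrite_prompt`, `generate_image` are invented names), but the structure — one model engineering the prompt for another — is the point.

```python
def rewrite_prompt(user_request: str) -> str:
    """Stand-in for the language model: expand a terse request into a
    more detailed image prompt by appending style hints."""
    hints = "detailed, well-lit, high resolution"
    return f"{user_request.strip().rstrip('.')}, {hints}"

def generate_image(engineered_prompt: str) -> str:
    """Stand-in for the image model: just records which prompt it was
    actually given, so the hand-off is visible."""
    return f"<image rendered from: {engineered_prompt!r}>"

def ask_for_picture(user_request: str) -> str:
    # The user never writes the final prompt; the first model does.
    return generate_image(rewrite_prompt(user_request))

print(ask_for_picture("a cat on a skateboard."))
```

The user's terse request never reaches the image model directly — the "prompt engineering" happens in software, which is the argument above in miniature.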