> But the author just took pictures of food & expected a realistic response?
If someone sent me a picture of a meal and asked me what the macros were or how many carbs it had, I would say "I can't tell from a photo. Nobody can." The problem is that current LLM chatbots don't seem to have a concept of telling you "I don't know", "you can't do that", or even "you're wrong".
You can argue that people shouldn't trust an LLM for this, but LLMs giving nonsensical answers is going to be a problem regardless. What I find particularly amusing is that there are still technical people (generally, not anyone specifically) who seem unable to acknowledge that LLMs hallucinate and lie.
There was a post on here recently that I couldn't find with some quick searching, but the premise was basically that chatbots are trained to communicate like neurotypical people: a lot of affirmation and, essentially, lying. Separately, someone else characterized this NT style of communication as "tone poems" [1]. I keep thinking about that because it strikes me as so accurate.
Dunning-Kruger is a common refrain on HN, for good reason. Another way to put it: people are so often confidently wrong. I really wonder if this is an inevitable consequence of NT communication, because most neurodivergent ("ND") people I know are incredibly intentional about what they say and mean.