> But the author just took pictures of food & expected a realistic response?
There are very popular apps on the App Store right now that do exactly this, going viral among non-techie people who have no concept of how AI works. My wife was talking about one and I had to give her a reality check that the AI had no idea what ingredients were used to make the food. And she's a licensed nutritionist.
Studies like this give people who are confused something to point at, and they serve as a springboard for a conversation in the media.
To be fair, these kinds of apps also existed before LLMs. They just used OpenCV or similar instead of the LLM APIs.
To be fair, my expectation is that those apps have done the prompt engineering, the schema, the tools (to query a nutrition database), etc., and that although they're not 100% consistent, the margin of error should be narrow enough to barely matter. They should do at least a bit better than a random ChatGPT chat session.
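For what it's worth, here's a rough sketch of the kind of pipeline I'd expect such an app to run: a vision model identifies ingredients and portion sizes, but the calorie numbers come from a real nutrition database via tool calls rather than from the model's memory. This assumes an OpenAI-style chat completions API; `lookup_nutrition()` and its backing database are hypothetical stand-ins, not anything a specific app is known to use.

```python
import json
from openai import OpenAI

client = OpenAI()

# Tool the model can call instead of guessing calorie values from memory.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "lookup_nutrition",
        "description": "Look up per-100g calories for a single ingredient.",
        "parameters": {
            "type": "object",
            "properties": {"ingredient": {"type": "string"}},
            "required": ["ingredient"],
        },
    },
}]

def lookup_nutrition(ingredient: str) -> dict:
    # Hypothetical: query a curated nutrition database (e.g. USDA-style data)
    # here. Hard-coded value only so the sketch runs end to end.
    return {"ingredient": ingredient, "kcal_per_100g": 52}

def estimate_calories(image_url: str) -> dict:
    messages = [
        {"role": "system", "content": (
            "Identify the visible ingredients and estimate portion sizes in grams. "
            "Call lookup_nutrition for every ingredient; never guess calorie values. "
            "Reply with a JSON object: {\"ingredients\": [...], \"total_kcal\": number}."
        )},
        {"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
        ]},
    ]
    while True:
        resp = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=TOOLS,
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:
            # Final answer: the prompt asks for JSON, so parse it here.
            return json.loads(msg.content)
        # Feed each tool result back so the model works from database numbers.
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(lookup_nutrition(**args)),
            })
```

Even with that loop in place, the portion-size estimate is still the model eyeballing a photo, which is where I'd expect most of the error to come from.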
That's true - I suppose I'm just disappointed that this study doesn't seem to include those in any analysis. Being able to point out that the top 100 calorie-counting apps on the App Store return similar results to plain frontier models would be of interest.
I think I'm just disappointed that this study doesn't go deep enough, and stays at a surface-level statistical analysis of frontier models.