I am... unsure why anyone would think LLMs would be able to do this. They are not magic oracles. Like I think even most humans would be extremely bad at this.
Like, are people actually using LLMs for this? Please do not, it won't work.
> They are not magic oracles.
I came across a LinkedIn post a couple days ago where someone had asked ChatGPT, "What are the top things you get asked about $NICHE_INDUSTRY_THING_I_AM_SELLING?"
As if there is introspection like that at the meta level, where ChatGPT could actually provide hard numbers around its own usage and request patterns.
The fact that these products work with natural language beguiles people into thinking they are, indeed, magic oracles.
> They are not magic oracles.
Anthropic's trillion-dollar valuation hinges on the idea that it is just that: a magic oracle that can replace any worker for any type of task. Any programmer, any author, any musician, any kind of clerical work. All we've asked here is "sudo evaluate me a sandwich", the sort of estimation task that humans with internet resources might reasonably be expected to do, and it's given up?
(It would be fun to compare this to sending the picture out on Mechanical Turk and asking humans to eyeball the calorie count of said sandwich...)
You are severely overestimating the average, or even above-average, understanding of LLMs.
It’s because AI can debug a program, so people start thinking it can do fitness and health stuff too. But there is no “instant-reacting compiler" for health or fitness. Results change over a long time horizon, and by then the AI will have run out of context or lost the data from its cache, or the user may have gotten bored and deleted their account.
Most people are convinced LLMs can do this.
Cal AI, which claims to generate a nutritional breakdown from a photo, has $30 million in annual recurring revenue.
It’s worse: I bet there are apps in the App Store that do this, and the users just have no idea about the accuracy.
https://xkcd.com/1425/ strikes again.
As far as consumers know, LLMs can identify the towns where pictures were taken (without metadata), summarize entire movies, generate clips of your kid flying a rocket to the moon, and translate images from any language imaginable, but somehow they cannot estimate the calories in a cheese sandwich.
The supposed professional posting about an LLM deleting their prod database for their non-existent company asked the AI to explain itself. That's the level of LLM knowledge you should expect from most people that actually work with these tools.
They sold the idea that LLMs "have" information. That the LLM "is" intelligent.
Truth is the LLM is good at making intelligent decisions. But in order to make intelligent decisions, you need context.
If you give proper context -> ask the LLM -> get almost perfect result every time.
Anything else is rolling dice, a very special type of dice, but dice anyhow. Not magic.
If the LLM can correctly identify a food item some high percentage of the time, why would it be magic for it to guess the amount of calories in an object? It's perhaps a lookup and some simple math as an extra step.
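Concretely, the "lookup and some simple math" step is tiny. Here's a hypothetical sketch in Python — the item names and per-item calorie numbers are typical label values I'm assuming for illustration, not a real app's data:

```python
# Sketch of the lookup-plus-arithmetic step: once a vision model has
# labelled the items in the photo, estimating calories is just a table
# lookup and a sum. All names and numbers here are illustrative.

# Rough per-item calorie table (typical nutrition-label values, assumed)
CALORIE_TABLE = {
    "sandwich bread slice": 80,   # ~1 slice store-bought white bread
    "cheddar slice": 110,         # ~28 g slice
    "butter pat": 36,             # ~5 g pat
}

def estimate_calories(items):
    """Sum table values for recognized items; report unknowns separately."""
    total, unknown = 0, []
    for item in items:
        if item in CALORIE_TABLE:
            total += CALORIE_TABLE[item]
        else:
            unknown.append(item)
    return total, unknown

# A grilled cheese: two slices of bread, two of cheese, a pat of butter
items = ["sandwich bread slice", "sandwich bread slice",
         "cheddar slice", "cheddar slice", "butter pat"]
total, unknown = estimate_calories(items)
print(total, unknown)  # 416 []
```

The hard part is the recognition and portion-size estimation, not this; but that's exactly what these models are already marketed as doing.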
OpenAI etc. are, however, advertising them as though they are magical oracles on the verge of lifting humanity to the next phase of civilisation. The idea that the majority of users know what nondeterministic even means is a massive, massive ask.
They're marketed as AI. AI has a long standing image built up by movies and other media of being some omniscient computer capable of analyzing the world. These AI companies are very aware of this and leverage it.
And a person with sufficient knowledge could easily give a rough estimate of the calories. A slice of store-bought sandwich bread of a given thickness generally has calories within a certain range. So do cheese slices. It's elementary school health class material: we all learn how to calculate the calories in a meal. Packaging on food also always lists calories, so clearly people know how to estimate it fairly accurately.
If a fifth grader can calculate it but an AI can't, that says a lot about how bad these AIs are. We'll get another series of bought-and-paid-for articles saying "AI analyzed IMPOSSIBLE math problem beyond human comprehension and solved it with FACTS and LOGIC", while at the same time being told "bro no you can't expect an AI to calculate the calories in a sandwich bro that's impossible bro if you even try that then you're insane for even thinking AI should be used that way bro". These companies need to decide: is AI smart enough to solve hard questions, or is it too useless to calculate something any kid could do by googling the calories in a slice of bread and doing some basic arithmetic?
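That fifth-grade arithmetic, spelled out (the per-slice numbers are typical nutrition-label figures, assumed here for illustration):

```python
# Back-of-envelope calorie estimate for a plain cheese sandwich,
# using typical label values (assumptions, not measurements).
bread_kcal_per_slice = 80    # typical store-bought white sandwich bread
cheese_kcal_per_slice = 110  # typical cheddar slice

# Two slices of bread, one slice of cheese
total = 2 * bread_kcal_per_slice + cheese_kcal_per_slice
print(total)  # 270
```

That's the entire computation being asked for — a couple of lookups and one line of arithmetic.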
But nothing prevents LLMs from being RLed to do this, right?
But does training LLMs to be better at this improve their world model, or does it only make surface-level changes?
The vast majority of people using LLMs, in my experience, use them as though they are oracles.
They are surprised and upset when the oracle is not perfect.
Go ahead and search around on Hacker News; you’ll see precisely the same pattern with people who are ostensibly engineers and hackers.
It’s actually pretty mind-boggling, but then again, humans never fail to surprise and disappoint.
Some people are asking LLMs what's on the menu of restaurants they are actively sitting in, possibly with a menu on the table in front of them.
Some people have a very poor understanding of what LLMs are good for. Some people do see them as magic oracles.
I mean, people will shamelessly paste you a wall of text from an LLM while chatting with you to prove a point, probably thinking how they've outsmarted you now...
>I am... unsure why anyone would think LLMs would be able to do this.
Well, firstly, the average IQ is 100. And also because people market products to consumers that claim to be able to count carbs from images. If you don't know the limitations of LLMs, there is little reason to doubt such claims — for an uninformed or below-average-intelligence person, of which there are hundreds of millions.
Yes, people are using LLMs for this because that is how they've been marketed: as being able to solve everyday tasks like a personal assistant on one hand, but also as researchers able to solve old problems that humans couldn't crack.
Does the model say it can't do that when asked? No, it answers confidently.
Also, it's easy to trust it if you don't know how it works.