These are all pretty arbitrary metrics across wildly different fields. IMO LLMs are where computer vision was 20+ years ago in terms of real-world accuracy. Other people feel LLMs offer far more value to the economy, etc.
I understand the temptation to compare LLMs and computer vision, but I think it’s misleading to equate generative AI with feature-identification or descriptive AI systems like those in early computer vision. LLMs, which focus on generating human-like text and reasoning across diverse contexts, operate in a fundamentally different domain than descriptive AI, which primarily extracts patterns or features from data, like early vision systems did for images.
Comparing their 'real-world accuracy' oversimplifies their distinct goals and applications. While LLMs drive economic value through versatility in language tasks, their maturity shouldn’t be measured against the same metrics as descriptive systems from decades ago.