Maybe you didn't realise that LLMs have just wiped out an entire class of problems, maybe entire disciplines. Do you remember "natural language processing"? What, ehm, happened to it?
Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore, that maybe aren't even questions anymore, and almost from one day to the next.
How is NLP solved, exactly? Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data? Maybe if we ask them very nicely the precision will improve, right? I understand what we have now is a huge leap, but the problems in the field are far from solved, and honestly BERT still has more use cases in actual text analysis.
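For anyone unfamiliar with what "BERT for text analysis" means in practice, here is a minimal sketch of the kind of pipeline being alluded to: encode a corpus into fixed-size vectors that a stylometry or classification step can consume. The model name, pooling strategy, and toy corpus are my own illustrative assumptions, not anything from this thread.

    # Minimal sketch: turning raw text into fixed-size numeric features
    # ("tidy data") with a BERT-style encoder via Hugging Face transformers.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    corpus = ["Call me Ishmael.", "It was a pleasure to burn."]

    with torch.no_grad():
        batch = tokenizer(corpus, padding=True, truncation=True,
                          return_tensors="pt")
        hidden = batch_out = model(**batch).last_hidden_state  # (batch, seq, 768)
        # Mean-pool over real tokens only, using the attention mask.
        mask = batch["attention_mask"].unsqueeze(-1)
        embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

    print(embeddings.shape)  # torch.Size([2, 768]): one vector per document

The point is that this gives deterministic, reproducible vectors per document, which is what downstream "tidy data" analysis actually wants, as opposed to asking a chat model nicely and hoping the output parses.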
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
> do you remember "natural language processing"? What, ehm, happened to it?
There’s this paper[1] you should read; it sparked an entire new AI dawn, and it might answer your question.
1. https://arxiv.org/abs/1706.03762
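For those who haven't read it: [1] is "Attention Is All You Need", the Transformer paper. Its core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which the paper does define; the tensor shapes below are just illustrative.

    # Minimal sketch of scaled dot-product attention from [1]:
    # softmax(Q K^T / sqrt(d_k)) V.
    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, seq_q, seq_k)
        weights = F.softmax(scores, dim=-1)            # each row sums to 1
        return weights @ v                             # (batch, seq_q, d_v)

    q = torch.randn(2, 5, 64)  # (batch, seq, d_k)
    k = torch.randn(2, 5, 64)
    v = torch.randn(2, 5, 64)
    print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 64])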