Not to be the “ai” guy, but LLMs have helped me explore areas of human knowledge I would otherwise have kept postponing
I am of the age where the internet was pivotal to my education, but the teachers still said “don’t trust Wikipedia”
Said another way: I grew up on Google
I think many of us take free access to information for granted
With LLMs, we’ve essentially compressed humanity’s knowledge into a magic mirror
Depending on what you present to the mirror, you get some recombined reflection of the training set out
Is it perfect? No. Does it hallucinate? Yes. Is it useful? Extremely.
As a kid that often struggled with questions he didn’t have the words for, Google was my salvation
It allowed me to search with words I did know, to learn about words I didn’t know
These new words both held answers and opened new questions
LLMs are like Google, but you can ask your exact question (and then a follow-up)
Are they perfect? No.
One benefit of having expertise in an area is that I can see the limits of the technology.
LLMs are not great for novelty, and sometimes struggle with the state of the art (necessarily so).
Their biggest issue: if you walk in blind, LLMs will happily lead the unknowing junior astray.
But so will a blogpost about a new language, a new TS package with a bunch of stars on GitHub, or a new runtime that “simplifies devops”
The biggest tech from the last five years is undoubtedly the magic mirror
Whether it can evolve to Strong AI or not is yet to be seen (and I think unlikely!)