I find myself worrying the AI bubble will pop and we'll lose this aspect of AI without it ever being properly explored. Instead of doomscrolling, I now find myself firing up Claude and saying "explain ... to me", and it proceeds to tell me all about it. I can ask it questions and it seems fairly right - at least right enough for me to proceed. It's way better at this than at building code, in my experience anyway.
When people say the "bubble will pop", it's meant as an analogy to the dotcom era - businesses and investors lost money, but the internet (and its opportunities) didn't vanish.
Even open-weight local models are becoming good enough for teaching yourself quite a range of stuff, especially the beginner aspects. LLMs are not going to simply disappear because of a financial realignment. The worst case might be not being able to access a super-duper frontier model for free?