> This means that they should never be used in medicine, for evaluation in school or college, for law enforcement, for tax assessment, or a myriad of other similar cases.
If AI models can deliver measurably better accuracy than doctors, clearer evaluations than professors, and fairer prosecutions than courts, then they should be adopted. Waymo has already shown a measurable decrease in loss of life by eliminating humans from driving.
I believe modern LLMs are technically advanced enough to meaningfully disrupt the aforementioned professions, as Waymo has done for taxis. But Waymo's success relies on two non-LLM factors we've yet to see for other professions. The first is exhaustive collection and labelling of high-quality, in-domain data. The second is the destruction of the pro-human regulatory lobby (thanks to groundwork laid by Uber in the ZIRP era).
To me, an AI winter isn't a concern, because AI is not the bottleneck. The bottlenecks are regulatory opposition and sourcing human experts willing to train their own replacements. Both are significantly harder to overcome for high-status white-collar work. The great AI replacement may still fail, but it won't be because of the limitations of LLMs.
> My advice: unwind as much exposure as possible you might have to a forthcoming AI bubble crash.
Hedging when you have a lot at stake is always a good idea, bubble or no bubble.