
cyanydeez today at 7:02 AM

I've been watching the automation of things like flight control systems for the past decade, and the way the fallback to a real pilot in an emergency has evolved is what concerns me most about where LLMs are being embedded.

Right now, we have a lot of smart people who have trained for decades to understand where these things go wrong and how to nudge them back, but that pool is slowly going to be replaced by less knowledgeable people.

At some point, a Rubicon will be crossed where these systems can't fall back to a human operator, and they will fail spectacularly.


Replies

pbhjpbhj today at 10:09 AM

Watching a teenager approach their homework: instead of struggling with questions they don't know, they ask Gemini. Unfortunately, I think the mental struggle toward an answer is where much of the learning happens. They also miss out on the reward for persistence: seeing things fall together.

It is troubling. It suggests a plateauing of human understanding.

regularfry today at 10:30 AM

What that means practically is that we've got a generation - 25 years or less - to evolve these things not to need the fallback. If such a thing is possible.

leptons today at 8:40 AM

We're on the road to Idiocracy.