This feels right when you're looking forwards. The perfect AI bot is definitely not 6 months away. It'll take a lot longer than that to get something that doesn't get things wrong a lot of the time. That's not especially interesting or challenging though. It's obvious.
What's much more interesting is looking back 6, 12, 18, or 24 months. 6 months ago was ChatGPT 5, 12 months ago was GPT 4.5, 18 months ago was 4o, and 24 months ago ChatGPT 3.5 (the first one) was released. If you've been following closely, you'll have seen incredible changes between each of them. Not to get to perfect, because that's not really a reasonable goal, but definite big leaps forward each time. A couple of years ago, one-shotting a basic tic-tac-toe wasn't really possible. Now you can one-shot a fairly complex web app. It won't be perfect, or even good by a lot of measures compared to human-written software, but it will work.
I think the comparison to the internet is a good one. I wrote my first website in 1997 and saw the rapid iteration of websites and browsers back then. It felt amazing, and fast. AI feels the same to me. But given that browsers still aren't good in a lot of ways, I think it's fair to say AI will take a similarly long time to mature. That doesn't mean the innovations along the way aren't freaking cool, though.
Yeah, but humans still had to work to create those websites; the web increased jobs rather than replacing them (replacement is what's happening now). This will devalue all labor that has anything to do with I/O on computers, if not outright replace a lot of it. Who cares if it can't write perfect code? The owners of the software companies never cared about good code; they care about making money. They make plenty of money off slop, and they'll make even more if they don't need humans to create the slop.
The job market will get flooded with the unemployed (it already is), with fewer jobs to replace the ones that were automated, and those remaining jobs will get pushed down to minimum wage whenever and wherever possible. 25% of new college grads cannot find employment. Soon young people will be so poor that they'll beg to fight in a war. Give it 5-10 years.
This isn't a hard future to game-theory out, and it's not pretty if we maintain this fast track of progress in ML that minimally requires humans. Notice how the ruling class has increased the salaries for certain types of ML engineers; they know what's at stake. These businessmen make decisions based on expected value calculated from complex models; they aren't giving billion-dollar pay packages to engineers because it's trendy. We should use our own mental models to predict where this is going, and prevent it from happening however possible.
The word "Luddite" continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D. D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO's, up against whom us average poor bastards are completely outclassed, although Ike didn't put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time. If our world survives, the next great challenge to watch out for will come - you heard it here first - when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron's mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins: [0]
https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...
ChatGPT 3.5 was almost 40 months ago, not 24. GPT 4.5 was supposed to be 5 but was not noticeably better than 4o. GPT 5 was a flop. Remember the hype around Gemini 3? What happened to that? Go back and read the blog posts from November when Opus 4.5 came out; even the biggest boosters weren't hyping it up as much as they are now.
It's pretty obvious the pace of change is slowing, and there isn't a lot of evidence that shipping a better harness and post-training on using said harness is going to get us to the magical place all these CEOs have promised, where all SWE is automated.