What's your point, though? Let's grant your hypothesis: five years from now, everyone has access to an LLM that's as good as a typical staff engineer. Is it then acceptable for a junior engineer to submit LLM-generated PRs without having tested them?
> It was thought impossible for a computer to reach the point of being able to beat a grandmaster at chess.
This is oft-cited, but even cursory research shows it was never close to a universally held view.
In the scenario I'm hypothesizing, why would anyone need to "check" or "test" its work? Which chess players are checking to make sure Stockfish made the "right" move? The move is "right" precisely because Stockfish made it.
It's hard to imagine now, but the code won't matter. I think we'll have other methods of validating the product, much as we did before tech. There are many ways to validate something, and validation is an easier problem than creation (which these AI models are already somewhat solving).
All very demoralizing, but I can see the trend. In the end, all the "creative" parts of the job will disappear; AI gets to do the fun stuff.
We invented something that devalues human craft and contribution. If you weren't skilled in that, or saw skill as a barrier, you win and are excited by this (CEO types, sales/ideas people, influencers, etc.). If you put in the hard yards and did the work to build hard skills and ship product, you lose.
Be very clear: AI devalues intelligence and shifts value to what remains scarce (political capital, connections, nepotism, physical work, etc.). It mostly destroys meritocracy.