Our industry has never been serious about security. We all download and run unvetted code via package managers every day. At least now the insanity is out in the open. We won't change until Skynet fires off the nukes.
Yes, and the software industry has never been truly serious about security either: it's more implied table stakes than an advertised product feature.
Also, customers outsource the risk to their vendors, so as long as there's someone to sue, nobody worries about doing it right. Ship it now and pay the lawyers later.
This is never getting to the Skynet-launching-the-nukes stage. It's not that clever and never will be.
Humans will kill us with it by having it amplify their worst characteristics.
Thus we'll die of a pandemic because some idiot LLM'ed up positive-looking virology data when they were too lazy to verify something. Everyone will trust it because they don't really care, as long as it looks about right.
> We won't change until Skynet fires off the nukes.
And then we won't need to, because at that point it will be too late.
I keep getting so depressed thinking about the inevitable. Quite simply, humans can't scale or iteratively improve. We still need to eat, we still need to sleep, we can basically only think on one thread at a time, and we take 20 years to reach our prime, which is a fleeting moment, while most of our lifespan is spent in decline. An AI humanoid robot from the near future doesn't need to eat or sleep, can work 24/7, can run thousands of processes in parallel, and is fungible with any other humanoid robot, lasting forever with some maintenance. Why justify sustaining an inefficient human in that world? It is more profitable for the company to let humans go extinct and maximize planetary resource use to its fullest extent.
Seems we are digging our graves as a species and don't even realize it. I mean, Sam Altman is already saying that the 20 years it takes to train a human is a Big Problem.