The memo itself is an excellent walk through historical bubbles, debt, financing, technological innovation, and much more, all written in a way that folks with a cursory knowledge of economics can reasonably follow along with.
A+, excellent writing.
The real meat is in the postscript though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and challenge for growth, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many other "AI Doomers" smarter than myself, have been asking for quite some time, and nobody has been able or willing to answer it. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and we're brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact that nobody can or will speaks volumes.
The sad reality is that almost no one in tech, and in most of the sciences, is concerned with ethics. Our society has internalised the ideology that technological progress is always good and desirable in whatever form it comes, and this will be our undoing.
Yeah, I do not think AI as the tech industry knows it will bring this future, but as you say, the conversation ends immediately when you bring up the implications of their goals and claims.
Another huge issue that Anthropic and OpenAI in particular tend to avoid, despite AGI being their goal, is that they essentially want synthetic slaves. Again, I do not think they will achieve this, but it is pretty gross when AGI is a stated goal and the intended result is just using it to replace labor and put billionaires in control.
Right now I am pretty anti-AI, but if these companies get what they want, I might find myself on the side of the machines.
All you'll get is Jevons paradox this and horses that, while they continue to fundamentally undersell the potential upending of a not-insignificant part of the labour market.
FWIW, the only optimism I have is that humanity seemingly always finds a way to adapt, and that is, to me, our greatest superpower. But yeah, this feels like a big challenge this time.