One of the hardest-hitting George Carlin observations:
“Scratch any cynic and you will find a disappointed idealist.”
The title resonates with me. The post does not.
Cynicism is the mind's way of protecting itself from repeating unproductive loops that can be damaging. Anyone who has ever had a waking dream come crashing down more than once likely understands this.
It doesn't necessarily follow that you should wholesale reject entire categories of technology that have already shown multiple net-positive use cases just because some people are using them wastefully or destructively. There will always be someone who does. The severity of each situation is worth discussing, but I'm not a big fan of the thought-terminating cliché.
The actual title of the article is "The Left Doesn't Hate Technology, We Hate Being Exploited", and I think anyone can agree with that sentiment regardless of their political leanings.
LLMs are amazing math systems: give them enough input and they can replicate that input with a staggering number of variations. That in and of itself is remarkable.
If they were all trained on public domain material, or if the original authors of that material were compensated for having the corpus of their work tossed into the shredder, then the people who complain about it could easily be described as Luddites afraid of having their livelihood replaced by technology.
But add in the wholesale theft of almost every major, minor, great, and mediocre work of fiction and non-fiction alike, shredded and used as logical papier-mâché to wholesale replace the labor of living human beings for nickels on the dollar, and their complaints become much more valid and substantial in my opinion.
It's not that LLMs are bad. It's that the people running them are committing ethical crimes that have not yet been formally made illegal, so we can't use the justice system to properly punish the people who have effectively photocopied the soul of modern media for an enormous quick buck. The frustration and impotence the affected creators feel is real and valid, yet another constant wound in a life already full of them, and guarding the individual against that kind of injury is a lesser but still substantial part of why we created society in the first place.
It's a small group of amoral people injuring thousands of innocent people and making money from it: mind thieves selling access to their mimeographs of the human soul for $20/month, thank you very much.
If some parallel of this existed in ancient Egypt or Rome, surely the culprits would be cooked alive in a brazen bull or drawn and quartered in the town square, but in the modern era they are given the power and authority and wealth of kings. Can you not see how that might cause misery?
All that being said, if the 20-year outcome of this misery is that everyone ends up in a beautiful AGI-assisted world of happiness and delight, then surely the debt will be paid, but that is at best a 5% likely outcome.
More likely, the tech will crash and burn, or the financial stability of the world that it needs in order to last those 20 years will crash and burn, or WWIII will break out and in a matter of days we will go from the modern march toward glory to irradiated survivors struggling for daily survival on a dark, poisoned planet.
Either way, the manner in which we are allowing LLMs to be fed, trained, and handled is not one that works to the advantage of all humanity.
> Yes it is. It is still exactly as simple as it sounds. If I’m doing math billions of times that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, meaning that yes, they are just fancy pattern-matching machines.
I find this argument even stranger. Every system can be reduced to its parts and made to sound trivial thereby. My brain is still just neurons firing. The world is just made up of atoms. Humans are just made up of cells.
> There’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
This shows that the author is not very curious, because it's easy to take the worst examples from the cheapest models and extrapolate. It's like asking a baby some questions and judging humanity's potential on that basis. What's the point of this?
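For reference, the letter-counting check described in the quoted passage is trivial to state in ordinary code; here is a minimal, purely illustrative Python sketch (the word and the expected count of three come from the quote, the function name is made up for the example):

    # Deterministic version of the "how many Rs are in 'strawberry'" check
    # referenced in the quoted passage.
    def count_letter(word: str, letter: str) -> int:
        return word.lower().count(letter.lower())

    print(count_letter("strawberry", "r"))  # prints 3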
> The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no.
I'll spill the real tension here for all of you. There are people who really like their comfy jobs and have become attached to their routine. Their status, self-worth, and everything else is tied to it. Anything that disrupts this routine is obviously worth opposing. It's quite easy to see how AI can make a person's life better - I have so many examples. But that's not what "leftists" care about - it's about the security of their jobs.
The rest of the article is pretty low quality and full of errors.
You have to ask the question: what exactly is capitalism?
By putting capital ahead of everything else, capitalism of course gives you technological progress. If we didn't have capitalism we'd still be making crucible steel, and the bit would cost more than the horse [1] -- but if you can license the open-hearth furnace from Siemens and get a banker to front you the money for 1000 tons of firebricks, it is all different: you can afford to make buildings and bridges out of steel.
Similarly, a society with different priorities wouldn't have an arms race between entrepreneurs to spend billions training AI models.
[1] an ancient "sword" often looks like a moderately sized knife to our eyes
> I will spare you some misery: you do not have to read this blog. It is fucking stupid as hell, constantly creating ideas to shadowbox with then losing to them.
OK. Closed tab.
I liked the original title better: "The Left Doesn't Hate Technology, We Hate Being Exploited". I think that sums up my grievances towards AI - amazing technology, and certainly a booster to anyone's life, but at what cost? Why do AI companies get to download, consume, and transform all copyrighted works essentially for free (I think there were some lawsuits that resulted in the companies paying), while normal people would have to pay millions to access all that data and pay the original creators? I'm also not so ok with the workforce being displaced, but that's what happens with technological progress. What I am not ok with is that it's displacing writers while benefiting from their prior work without paying them a cent.