As usual with these, it helps to try to keep the metaphor used for downplaying AI, but flip the script. Let's grant the author's perception that AI is a "bag of words", which is already damn good at producing the "right words" for any given situation, and only keeps getting better at it.
Sure, this is not the same as being a human. Does that really mean, as the author seems to believe without argument, that humans need not fear it will usurp their role? In how many contexts is the utility of having a human, if you squint, not just that a human has so far been the best way to "produce the right words in any given situation", that is, to use the meat-bag only in its capacity as a word-bag? And in how many more contexts would a really good magic bag of words be better than a human, if it existed, even where the human is currently used somewhat differently? The author seems to rest assured that a human (long-distance?) lover will not be replaced by a "bag of words"; why, especially once the bag of words is also duct-taped to a bag of pictures and a bag of sounds?
I can just imagine someone - a horse breeder, or an anthropomorphised horse - dismissing all concerns on the eve of the automotive revolution, going on about how marketers and gullible marks are prone to hippomorphising anything that looks like it can be ridden, and sprinkling in anecdotes about kids riding broomsticks, legends of pegasi, and patterns of stars in the sky being interpreted as horses since ancient times.
So a human is just a really expensive, unreliable bag of words. And we get more expensive and more unreliable by the day!
There's a quote I love but have misplaced, from the 19th century I think. "Our bodies are just contraptions for carrying our heads around." Or in this instance... bag of words transport system ;)
Her argument really only works if you institute new economic systems where humans don’t need to labor in order to eat or pay rent.
I don't think the author's argument is that it won't replace any human labour - or at least, I wouldn't agree with that argument. The stronger case is that however much LLMs improve, they won't replace humans in general: in the furtherance of knowledge, because they are fundamentally parroting and synthesizing the already known rather than performing truly novel thought; and in creative fields, because people are fundamentally interested in the creations of other people, not of computers.
Neither of these holds in every case, but both could be expected to remain true in at least some (many) cases, and so a role for humans remains.