Hacker News

matheist · today at 7:00 AM · 8 replies

I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous one.

But now, with modern LLMs, it's just impossible to take seriously. It was a live possibility then; now it's just a wrong turn down a garden path.

A high-variance story! It could have been prescient; instead it's irrelevant.


Replies

sooheon · today at 7:30 AM

This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.

The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.

rcoveson · today at 7:10 AM

I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.

While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets rather than a simulation of a human brain, the way we might treat them is no less thought-provoking.

Joeri · today at 8:11 AM

That is the same categorical argument the story is about: scanned brains are not perceived as people, so they can be "tasked" without being afforded moral consideration. You are saying that because we have LLMs, which are categorically not people, we would never face the moral quandary of using uploaded humans that way, since we can just use LLMs instead.

But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.

For me this story has become even more relevant since the LLM revolution, because we could be making exactly the mistake humanity makes in the story.

penteract · today at 7:31 AM

Lena isn't about uploading. https://qntm.org/uploading

cwillu · today at 7:09 AM

“Irrelevant” feels a bit reductive while the practical question of what actually causes qualia remains unresolved.

harperlee · today at 7:48 AM

I actually think it was quite prescient and still raises important questions. Irrespective of whether the weights are uploaded from an actual human, if you dig just a little under the surface details you still get a story about the ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures let them develop an emergent sense of self? It's a fascinating text.

matkoniecz · today at 7:39 AM

I have not seen it as a prediction of actual technology, but mostly as a horror story.

And a warning, I guess, in the unlikely case that brain uploading becomes a thing.

andai · today at 7:32 AM

Found the guy who didn't play SOMA ;)