Hacker News

Mind-reading devices can now predict preconscious thoughts

89 points by srameshc today at 6:26 PM | 65 comments

Comments

pedalpete today at 9:03 PM

I believe that training a system to understand the electrical signals that define a movement is significantly different from a system that understands thought.

I work in neurotech, and I don't believe that the electrical signals of the brain define thought or memory.

When humans understood hydrodynamics, we applied that understanding to the body and thought we had it all figured out. The heart pumped blood, which brought nutrients to the organs, etc etc.

When humans discovered electricity, we slapped ourselves on the forehead and exclaimed "of course!! it's electric" and we have now applied that understanding on top of our previous understanding.

But we still don't know what consciousness or thought is, and the idea that it is a bunch of electrical impulses is not quite proven.

There is electrical firing of neurons, absolutely, but does it directly define thought?

I'm happy to say we don't know, and that "mind-reading" devices are as yet unproven.

A few start-ups are doing things like showing people images while reading brain activity and then trying to understand what areas of the brain "light up" on certain images, but I think this path will prove to be fruitless in understanding thought and how the mind works.
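
To make that concrete: the "show images, record activity, see what lights up" approach boils down to something like a supervised classifier over recorded activity. A minimal sketch with entirely made-up data (hypothetical shapes, assuming something like scikit-learn), just to show what is actually being measured:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    n_trials, n_channels = 200, 64                # e.g. 200 image presentations, 64 recording channels
    X = rng.normal(size=(n_trials, n_channels))   # stand-in for per-trial activity features
    y = rng.integers(0, 2, size=n_trials)         # stand-in labels: which image category was shown

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f}")

Whatever accuracy such a model reaches, it is a correlation between recorded activity and a stimulus label, which is a long way from a theory of thought.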

Terr_ today at 6:41 PM

From some dystopic device log:

    [alert] Pre-thought match blacklist: 7f314541-abad-4df0-b22b-daa6003bdd43
    [debug] Perceived injustice, from authority, in-person
    [info]  Resolution path: eaa6a1ea-a9aa-42dd-b9c6-2ec40aa6b943
    [debug] Generate positive vague memory of past encounter

Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine that argues with you or reminds you of things, but consider how much worse it'd be if it derails your chain of thought before you're even aware you have one.
guiand today at 7:00 PM

Split brain experiments show that a person rationalizes and accommodates their own behavior even when "they" didn't choose to perform an action[1]. I wonder if ML-based implants which extrapolate behavior from CNS signals may actually drive behavior that a person wouldn't intrinsically choose, yet the person accommodates that behavior as coming from their own free will.

[1]: "The interpreter" https://en.wikipedia.org/wiki/Left-brain_interpreter

zh3 today at 8:14 PM

AI following the Libet (1983) paper [0] about preconscious thought apparently preceding 'voluntary' acts (which really elevated the question of what 'free will' means).

* [0] https://pubmed.ncbi.nlm.nih.gov/6640273/
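
For rough context, the commonly cited figures from that paper (values vary by condition and are still debated): the readiness potential begins roughly 550 ms before a spontaneous act, while subjects reported awareness of the intention ("W") only about 200 ms before it. The arithmetic, spelled out:

    # Back-of-the-envelope timeline using commonly cited Libet (1983) figures;
    # exact values vary by condition and remain debated.
    movement_ms        = 0      # t = 0: the voluntary act
    awareness_W_ms     = -200   # reported time of conscious intention ("W")
    readiness_onset_ms = -550   # readiness-potential onset for spontaneous acts

    lead_ms = awareness_W_ms - readiness_onset_ms
    print(f"brain activity precedes reported intention by ~{lead_ms} ms")  # ~350 ms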

handedness today at 7:26 PM

> is it time to worry?

Shouldn't the device be the judge of that?

rpq today at 7:15 PM

I think the real danger lies in how many will accept that output as the unadulterated unmistakable truth for actions, for judgment. Talk about a sinister device.

mostertoaster today at 8:24 PM

Ok, does anyone else's mind just immediately go to "The Minority Report" soon no longer being just a sci-fi dystopia?

analog8374 today at 8:51 PM

Install one of these on every citizen!

ryandv today at 9:39 PM

You don't need a device to do this.

bryanrasmussen today at 8:38 PM

I guess I will start paying attention when it can predict word choice in my internal monologue.

fjfaase today at 7:14 PM

I wonder how much this experience is similar to Alien Hand Syndrome, where people experience part of their body, usually a hand, acting on its own.

amarant today at 8:04 PM

I find the take that a quirk in how state-of-the-art assistive technology works is reason for privacy fear-mongering to be tired, unimaginative, and typical of today's journalism, which cares more about clicks than reporting facts.

It's a very interesting quirk of an immensely useful device for those who need it, but it's not an ethical dilemma.

I for one am sick and tired of these so-called ethicists whose only work appears to be stirring up outrage over nothing and holding back medical progress.

Similar disingenuous articles appeared when stem-cell research was new, and still do from time to time. Saving lives and improving life for the least fortunate is not an ethical dilemma, it's an unequivocally good thing.

Quit the concern trolling, nature.com; you're supposed to be better than that.

cma today at 7:35 PM

Rather than the Karpathy thing about in-class essays for everything, maybe random selections of students will be asked to head to the school fMRI machine and remember the details of writing their essay homework away from school.

j45 today at 8:04 PM

Maybe skulls will need a Faraday cage.

keybored today at 7:33 PM

Unlike the vast sea of the subconscious, we can try to take direct control of technology. But we don’t. So we are left to fret about what technology will do to us (meaning: what people with power will use it for).

idiotsecant today at 7:08 PM

It's interesting that the path from 'decide to do something' to performing the action is hundreds of ms long. It's also interesting that grabbing the data early in the process and acting on it can perform the action before the conscious 'self' understands fully that the action will take place. It's just another reminder that the 'you' that you consider to be running the show is really just a thin translation layer on top of an ocean of instinct, emotion, and hormones that is the real 'you'.

thomastjeffery today at 8:50 PM

Seems like they are really jumping to conclusions here.

> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are some serious problems lurking in the narrative here.

Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
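
A toy sketch of that mechanism (a hypothetical signal, nothing from the study): if the activity preceding the labeled "attempt" moment was part of the trained pattern, any detector tuned to that pattern will fire before the label, by construction.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(-1000, 1000)                    # time in ms; 0 = the labeled "conscious attempt"
    signal = np.clip((t + 400) / 400, 0, 1)       # activity starts ramping 400 ms before the label
    signal = signal + rng.normal(scale=0.05, size=t.size)  # measurement noise

    threshold = 0.5                               # detector tuned to the trained pattern
    detection_ms = t[np.argmax(signal > threshold)]
    print(f"detector fires at {detection_ms} ms relative to the label")  # well before 0

"Detecting intention hundreds of milliseconds early" is exactly what this setup produces, whether or not anything you would call intention is involved.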

> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are two overconfident assumptions at play here:

1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.

2. Whatever brain patterns happened before this arbitrary moment are relevant to the patient's intention.

These two assumptions ought to be in tension, yet the narrative treats both as correct: the second assumption somehow does not invalidate the first. How? Because the circumstances of the second assumption get a special name: "precognition"... Tautological nonsense.

Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:

> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”

So the model is not responding to her intention. That's supposed to support your hypothesis how?

---

These are exactly the kind of narrative problems I expect to find buried in any "AI" research. How did we get here? I'll give you a hint:

> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.

This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.
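
The distinction in miniature, as a toy contrast (not the BCI system's code): a decoder applies a fixed symbolic mapping and either succeeds or rejects its input; a statistical model returns a weighted guess, and a wrong guess comes from the same operation as a right one.

    # Symbolic decoding: deterministic, rule-defined.
    MORSE = {".-": "A", "-...": "B", "-.-.": "C"}
    def decode(symbols: str) -> str:
        return MORSE[symbols]            # no probabilities; unknown input raises an error

    # Statistical modeling: a weighting of features (learned, in a real model);
    # the output is a best guess, not a deduction.
    def classify(features, weights, bias=0.0):
        score = sum(f * w for f, w in zip(features, weights)) + bias
        return 1 if score > 0 else 0

    print(decode("-.-."))                               # "C", by definition
    print(classify([0.2, -1.3, 0.7], [1.0, 0.4, 2.0]))  # a guess, by weighting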

By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:

LLMs are able to perform logical deduction. They solve riddles, math problems, and find bugs in your code. Until they don't, that is. When an LLM performs any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".

This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.

Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.
