
thomastjeffery, today at 8:50 PM

Seems like they are really jumping to conclusions here.

> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are some serious problems lurking in the narrative here.

Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
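
To spell out why that's unsurprising, here is a toy simulation of the argument (mine, not the study's actual pipeline; every variable and number in it is invented): if the task-related activity the model was trained on starts building up before the timestamp labeled as the conscious attempt, a simple template detector will fire before that timestamp, because that's the first moment the pattern becomes recognizable.

```python
import numpy as np

rng = np.random.default_rng(0)
LABELED_ONSET = 600   # sample index the experimenters call the "conscious attempt"
PATTERN_START = 400   # task-related (preparatory) activity begins 200 samples earlier

def make_trial(n=1200):
    """One simulated trial: noise, plus task-related activity from PATTERN_START on."""
    x = rng.normal(0, 1, n)
    x[PATTERN_START:] += 3.0
    return x

# "Training": average many aligned trials into a crude per-sample template.
template = np.mean([make_trial() for _ in range(50)], axis=0)

# "Decoding" a fresh trial: slide a short window and fire on the first match.
trial, win = make_trial(), 50
for t in range(len(trial) - win):
    score = np.dot(trial[t:t + win], template[t:t + win]) / win
    if score > 4.0:   # arbitrary threshold for this sketch
        print(f"detector fires at sample {t}; labeled onset is sample {LABELED_ONSET}")
        break
```

The "hundreds of milliseconds" head start falls straight out of where the label sits relative to the trained pattern, not out of anything the model knows about intention.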

> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are two overconfident assumptions at play here:

1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.

2. The brain patterns that happened before this arbitrary moment are relevant to the patient's intention.

These two assumptions should contradict each other: if brain activity relevant to her intention shows up before the measured moment of conscious attempt, then one of them has to give. Instead, the narrative treats both as correct at once, so the second somehow doesn't invalidate the first. How? Because the gap between them gets a special name: "precognition"... Tautological nonsense.

Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:

> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”

So the model is not responding to her intention. That's supposed to support your hypothesis how?

---

These are exactly the kind of narrative problems I expect to find buried in any "AI" research. How did we get here? I'll give you a hint:

> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.

This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.
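
To make the distinction concrete, a contrived comparison (my framing, unrelated to the article's system; the codebook and function names are made up): a symbolic decoder applies fixed rules and fails loudly when they don't apply, while a statistical model hands back its best-weighted guess no matter what you feed it.

```python
MORSE = {".-": "A", "-...": "B", "-.-.": "C"}   # tiny symbolic codebook

def symbolic_decode(token: str) -> str:
    """A decoder in the strict sense: the rule either applies or it doesn't."""
    if token not in MORSE:
        raise ValueError(f"not a valid codeword: {token!r}")
    return MORSE[token]

def statistical_guess(token: str) -> str:
    """Stand-in for a statistical model: score every codeword by similarity
    and return the closest match, even for input that decodes to nothing."""
    def overlap(a: str, b: str) -> int:
        return sum(x == y for x, y in zip(a, b)) - abs(len(a) - len(b))
    return MORSE[max(MORSE, key=lambda cw: overlap(token, cw))]

print(symbolic_decode(".-"))      # "A"
print(statistical_guess("..-."))  # "C": a confident answer for a codeword it has never seen
# symbolic_decode("..-.") would raise an error instead of guessing.
```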

By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:

LLMs are able to perform logical deduction. They solve riddles and math problems, and find bugs in your code. Until they don't, that is. When an LLM gets any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".

This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.
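
To caricature that point (deliberately crude, and not a claim about how any real LLM works internally; the corpus and function names are invented): a model that answers arithmetic prompts purely from the frequency of (prompt, answer) pairs it has seen will look like it does math, right up until you ask it something outside its training data.

```python
from collections import Counter
import random

# Pretend training data: (prompt, answer) pairs, mostly right, occasionally wrong.
corpus = [("2+2", "4")] * 97 + [("2+2", "5")] * 3 + [("17+25", "42")] * 2

tables: dict[str, Counter] = {}
for prompt, ans in corpus:
    tables.setdefault(prompt, Counter())[ans] += 1

def answer(prompt: str) -> str:
    """Answer by weighted chance over what was seen; never by computing anything."""
    counts = tables.get(prompt)
    if counts is None:
        # Never seen this prompt: fall back to an answer seen for *some* prompt.
        return random.choice([a for c in tables.values() for a in c])
    answers, weights = zip(*counts.items())
    return random.choices(answers, weights=weights)[0]

print(answer("2+2"))    # almost always "4", so it looks like it can add
print(answer("17+25"))  # "42": correct, but only because the corpus happened to contain it
print(answer("9+10"))   # never seen: an arbitrary guess, delivered just as confidently
```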

Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.