It may be that this tech produces clear, rational, chain-of-logic write-ups, but it's not clear that, just because we also produce those after thinking, only thinking can produce them.
It's possible there is much thinking that doesn't happen in the written word. It's also possible we think only the way LLMs do (by chaining together rationalizations from probable words), and we just aren't aware of it until the thought appears, fully formed, in our "conscious" mind. We don't know. We'll probably never know, not in any real way.
But it sure seems likely to me that we trained a system on the outputs to circumvent the underlying process/physics, precisely because we don't understand that process, just as we always do with ML systems. Never before have we looked at image classifications and decided that's how the eye works, or at protein folding and decided that's how biochemistry works. But here we are with LLMs - surely this is how thinking works?
Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds and machines, other problems arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.
> Never before have we looked at image classifications and decided that's how the eye works
Actually we have, several times. But the way we arrived at those conclusions is worth observing:
1. ML people figure out how the ML mechanism works.
2. Neuroscientists independently figure out how brains do it.
3. Observe what analogies, if any, exist between the two underlying mechanisms.
I can't help but notice how that's a scientific way of doing it. By contrast, the way people arrive at similar conclusions when talking about LLMs tends to consist of observing that two things are cosmetically similar, so they must be the same. That's not just pseudoscientific; it's the mode of reasoning that leads people to believe in sympathetic magic.
The contrast between your first and last paragraph is... unexpected
> It may be that this tech produces clear, rational, chain-of-logic write-ups, but it's not clear that, just because we also produce those after thinking, only thinking can produce them.
I appreciate the way you describe this idea; I'll likely start describing it the same way. But then you go on to write:
> Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds and machines, other problems arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.
Which I find to be the exact argument that you started by discarding.
It's not clear that equating organic and synthetic thought will have any meaningful outcome at all, let alone one worthy of the baseless anxiety that it must be bad. Equally, it seems absolutely insane to claim that anything is unknowable, or that, because humanity lacks a clear foundational understanding of something, we should pretend it's either divine or sacred. Anyone who has spent time watching the outcomes of human thought would conclude that neither divine nor sacred is a reasonable attribute to apply. More importantly, I'd submit that you shouldn't be afraid to explore things you don't know, and you shouldn't advocate for others to adopt your anxieties.