The contrast between your first and last paragraph is... unexpected
> It may be that this tech produces clear, rational, chain of logic writeups, but it's not clear that just because we also do that after thinking that it is only thinking that produces writeups.
I appreciate the way you describe this idea; I find it likely I'll start describing it the same way. But then you go on to write:
> Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equivocate human minds and machines there are other problems that arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.
Which I find to be exactly the argument you started by discarding.
It's not clear that equating organic and synthetic thought will have any meaningful outcome at all, let alone one so bad that it warrants baseless anxiety. Equally, it seems absolutely insane to claim that anything is unknowable, or that because humanity lacks a clear foundational understanding of thought we should pretend it's either divine or sacred. Having spent any time watching the outcomes of people's thoughts, neither "divine" nor "sacred" is a reasonable attribute to apply. More importantly, I'd submit that you shouldn't be afraid to explore things you don't know, and you shouldn't advocate for others to adopt your anxieties.
> It's not clear that equating organic and synthetic thought will have any meaningful outcome at all,
I agree! I'm saying that if we equate them, we shortcut all the good stuff, understanding included. This tech may produce what we can produce, but that doesn't mean we are the same, and holding onto that distinction keeps us learning rather than reducing all of "thinking" to just "whatever the latest ChatGPT does". We have to continue to believe there is more to thinking, if only because it pushes us to make the technology better and to keep "us" as the benchmark.
Perhaps I chose the wrong words, but in essence what I'm saying is that giving up agency to a machine that was built to mimic our agency (which is what an ML system is, by definition) should be avoided at all costs.