> What purpose does seeing everything through the lens of an algorithm serve?
To me, at least, it helps me understand how things work. What is the alternative? Because the alternative seems like some sort of magic I wouldn't understand.
> Is the movement of an electron an algorithm?
I think there's a lot of argument and complexity around what an electron exactly is or does, and what its properties actually mean, but I would imagine that in general, from the particle level on up, the whole universe could just be a deterministic algorithm, and so anything can be an algorithm. The universe has certain laws and rules which could in theory be simulated, but the simulation would need more capacity than the universe itself has, so we inside the universe likely cannot do it unless we find a mechanism to somehow bypass this.
> But for what purpose would we go around, assuming things that don't look like algorithms are actually algorithms that are just outside of our reach? This doesn't solve a practical problem, so of what use is that, and where does it lead?
If I try to think about what the algorithm behind something is, it helps me understand it better. Intuitively I think there's a complex algorithm behind everything, and it's just a matter of spending time and effort to figure out what exactly it is. It's unrealistic to get close to the real detail and nuance of the algorithm, but even trying to figure it out brings me closer to understanding.
> We end up in a rather cliche epiphenomenalism + causal determinism worldview
Wait -- what is wrong with that? And also, I don't think it's cliche; I think it's likely to be the case?
> - "You - the person who is reasoning - don't actually know what reasoning is like, it's really a very complex algorithm which we could never know or follow."
I mean, written down at the algorithmic level it's not something we can consciously follow easily. However, our brain is following those algorithms at a level of efficiency that we cannot match by consciously stepping through the instructions one at a time.
> The only way such an uninspiring outlook can subsist is because it jives well with some modern dreams:
I think my outlook is at least inspiring to me.
> - "If we're just a machine then maybe we can hack-reward-centers/optimize-drug-concoction/upload-to-mainframe-for-immortality" (cue quasi-immortality pitches and externally-enforced-happines pipe-dreams)
I do think it would theoretically be possible to hack humans to have a constant "heroin-like euphoria". I'm not sure I exactly care for it, but I do think these things could be done; I just don't know when -- in 50 years, 100 years, or 1000 years. Of course, while I say right now that I don't exactly care for it, if I tried it once I would be hooked on it forever, assuming it had no tolerance build-up or other negative effects that would make me consider quitting. But even real heroin is terribly hard to quit, and it does have tolerance build-up and side effects.
> - "If I'm just a machine then I'm not responsible for my shortcomings - they're just the outcome of my wiring I cannot influence." (a nice supplement for absolving oneself from responsibility - to oneself)
I'm inclined to think that the world is deterministic, yet I also happen to think that I have reward mechanisms that make me ambitious, in the sense that I want to achieve certain goals to feel rewarded, and in order to achieve those goals I have to overcome many challenges and improve certain shortcomings. If someone uses determinism as an excuse, they would likely be the type of person to find an excuse in anything anyway. And if they do have goals they want to reach, they will be affected by that, because there is certain behaviour that will get you to your desired goals and certain behaviour that will not. Taking responsibility and ownership is rewarded, and if you do that, you will reach your goals with higher probability. I don't buy into the idea that "something is bad because some people might use it as an excuse", because finding an excuse is usually about the mindset, not about what kinds of excuses are available to select from. An AI bot with good goals and a good reward system, despite it being very obvious that it is programmed and deterministic, wouldn't go about making those excuses. But an AI bot trained to make excuses and rewarded for it would keep making excuses no matter what.
Thanks for your thoughtful response.
The view you are elaborating has a logic to it. You could argue it's the zeitgeist, at least among educated and STEM-adjacent circles. Hence my comment about it being cliche: you see variants of it all the time, and it gets a bit wearing by virtue of being wholly unconvincing (to me).
In general, I think the utility of seeing everything through a mechanistic/algorithmic lens is overblown. When I'm doing technical work, I put my STEM hat on, and sometimes write algorithms. For the most part, I let it rest though. And I don't feel I understand the world any worse for dropping the mechanistic world-images I may have held years ago (I'm more at peace with it, if anything). In hindsight, the ROI on the skill of mechanistically dissecting anything you look at is rather low and hardly transferable ime. The ensuing "understanding" is a passing, regurgitative satisfaction.
If there's anything I really want to push back on, however, it's the notion that the views you hold do not influence, in an important way, the range of ways in which you develop yourself. If one truly holds the view that one is responsible for one's actions, and not the whims of determinism or chance, where is the space for the excuse "it's not up to me"? Views may not determine the course of action, but they can surely constrain it.
Disentangling one's views from one's behavior can go a long way in measured, written discussions like this, but I don't see it holding up in real life. It is the case, however, that we can be a hodge-podge of contradictory views and standards, and that we commit to one for a moment, then to another. This is a matter of strength and cohesiveness of character. And we are good at "saving face" in front of ourselves and others. For example, if you've met people who partake in a vice yet say things like "This is bad for me", the actual underlying view is "This has obvious drawbacks, but I still think the enjoyment is worth it." It's only when they can maintain the view that the drawbacks are not worth it that they can break out.