I think it can be an algorithm; it's just that the algorithm will be a very complex one composed of many different algorithms. It's not an algorithm anyone could practically follow in their lifetime. But there are plenty of algorithms people can't follow in real life.
If all one has is hammers, one can start seeing nails everywhere.
A screw? That's just a nail which will damage the wood.
A tomato? That's just a soft screw which splatters. Etc.
What purpose does seeing everything through the lens of an algorithm serve? Is the movement of an electron an algorithm? Is a polar planimeter an algorithm? [0]
We design algorithms to solve certain problems. It's part of our problem-solving activity. But for what purpose would we go around assuming that things which don't look like algorithms are actually algorithms just outside our reach? This doesn't solve a practical problem, so of what use is it, and where does it lead?
My long-winded answer is: We derive satisfaction from being in principle powerful. Our mechanistic/computational knowledge of nature allows us to bend certain parts of it to our will. If there are parts we cannot control, it's at least consoling that we in principle could know/control them. So we stretch computational/algorithmic terms as far as we possibly can. In the end, it envelops us as subjects. We end up in a rather cliche epiphenomenalism + causal determinism worldview:
- "Yeah, we have experiences, but they're just inefficacious artifacts of underlying chemistry."
- "You - the person who is reasoning - don't actually know what reasoning is like, it's really a very complex algorithm which we could never know or follow."
The only way such an uninspiring outlook can subsist is that it jibes well with some modern dreams:
- "We only need X more compute and Y more years to bend Z part of nature to our will and bring utopia." (cue all the AI hype, see relevant frontpage entry [1])
- "If we're just a machine then maybe we can hack-reward-centers/optimize-drug-concoction/upload-to-mainframe-for-immortality" (cue quasi-immortality pitches and externally-enforced-happines pipe-dreams)
- "If I'm just a machine then I'm not responsible for my shortcomings - they're just the outcome of my wiring I cannot influence." (a nice supplement for absolving oneself from responsibility - to oneself)
- "If all is mechanical, then I'm just a temporarily embarrassed sovereign over everything. After all, if I just knew the mechanism behind things, then I could bend it to my will."
- "I have to believe this because it is obviously true." (maybe the saddest of them all, since it promises nothing except the joy of being right and having others be wrong. it also seeds the others)
[0] http://psychsciencenotes.blogspot.com/2015/07/brains-dont-ha...
[1] https://news.ycombinator.com/item?id=41813268