It makes obvious sense to consider an array as a function with the index as its input argument and the element as its output, i.e. f(x) = A[x]... but this isn't the first time I've encountered this observation, and I still don't see the practical benefit of the perspective.
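To be concrete, the correspondence itself is trivial; a sketch in Python (the list A is just a stand-in):

    A = [10, 20, 30]

    def f(x):
        return A[x]  # the "function" view of the array

    assert f(1) == A[1] == 20  # same mapping, written two ways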
When I'm writing code and need to reach for an array-like data structure, the conceptual correspondence to a function is not even remotely on my radar. I'm considering algorithmic complexity of reads vs writes, managed vs unmanaged collections, allocation, etc.
I guess this is one of those things that's of primary interest to language designers?
There are some appealing-sounding arguments for designing languages around functions. Having given this a very fair shot with things like Erlang... it turns out this optimizes for rare use cases at the expense of common ones. There's no 100% general-purpose language; they all have use cases in mind, and I guess some people are still trying to find a way around that.
Similar conclusion for using a graph DB for something you'd typically do in a relational DB. Just because you can doesn't mean you should.
Well, for example, this insight explains memoization:
https://en.wikipedia.org/wiki/Memoization
If you know that Arrays are Functions, or equivalently that Functions are Arrays in some sense, then Memoization is obvious: "Oh, yeah, of course, we should just store the answers instead of recomputing them."
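A minimal sketch in Python, using functools.lru_cache as one off-the-shelf way to turn "recompute" into "store the answers" (fib is just the usual toy example):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # the cache is the "array" of answers, keyed by n
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(100))  # fast: each fib(k) is computed once, then looked up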
This goes both ways: modern CPUs keep getting faster at arithmetic while memory latency doesn't keep up, so sometimes, rather than use a pre-computed table and eat precious clock cycles waiting for the memory fetch, we should just recompute the answer each time we need it.
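As an illustration of that trade-off (a sketch in Python; the names and the sin example are mine, and which version wins is entirely machine- and workload-dependent):

    import math

    # "Function as array": pre-compute sin for 1000 quantized inputs.
    SIN_TABLE = [math.sin(i / 1000.0) for i in range(1000)]

    def sin_lookup(i):
        return SIN_TABLE[i]  # one memory access; can stall on a cache miss

    # "Array as function": recompute on demand, no table to fetch.
    def sin_recompute(i):
        return math.sin(i / 1000.0)  # pure arithmetic, no memory traffic

If the table stays hot in L1 cache the lookup usually wins; if every access misses out to DRAM, the recompute often does.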