
soulofmischief · 10/12/2024

Simply put, hubs in a scale-free network act as efficient intermediaries, minimizing the overall cost, in terms of action, for communication or interaction between nodes.

Scale-free networks are robust to random dropout (though not to targeted dropout of hubs) and this serves to stabilize the system. The interplay between stability and stationary action is the key here.
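
As a quick illustration of both points, here is a toy experiment of my own (networkx's Barabási–Albert model standing in for a generic scale-free network, with arbitrary sizes and seeds): paths through hubs are short, random dropout barely stretches them, and targeted removal of the hubs does real damage.

    # Toy sketch: path length (a proxy for communication cost) under random
    # vs. targeted dropout. Graph model, sizes, and seeds are arbitrary choices.
    import random
    import networkx as nx

    random.seed(0)

    def lcc_path_length(G):
        # mean shortest path on the largest connected component
        comp = max(nx.connected_components(G), key=len)
        return nx.average_shortest_path_length(G.subgraph(comp))

    G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

    # random dropout: remove 5% of nodes uniformly at random
    G_rand = G.copy()
    G_rand.remove_nodes_from(random.sample(list(G_rand.nodes), 50))

    # targeted dropout: remove the 50 highest-degree hubs
    G_targ = G.copy()
    hubs = sorted(G_targ.degree, key=lambda kv: kv[1], reverse=True)[:50]
    G_targ.remove_nodes_from([node for node, _ in hubs])

    print("intact:  ", lcc_path_length(G))       # short paths via hubs
    print("random:  ", lcc_path_length(G_rand))  # changes very little
    print("targeted:", lcc_path_length(G_targ))  # network stretches and fragments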

What follows is my own mathematical inquiry into a generalized stationary action principle, which might provide some intuition. Feel free to correct any mistakes.

We often define action as the integral of a Lagrangian [0] over time:

S = ∫ₜ₁ᵗ² L dt
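
For a concrete (and completely standard) example of that definition, here is a small numerical sketch in Python. The system, a 1D harmonic oscillator with the textbook Lagrangian L = ½mẋ² − ½kx², and all of the numbers are my own toy choices; it just checks that the true trajectory yields a smaller action than a perturbed path with the same endpoints.

    # numerical action S = ∫ L dt for a 1D harmonic oscillator (toy example)
    import numpy as np

    m, k = 1.0, 1.0
    t = np.linspace(0.0, np.pi, 2001)   # integrate from t1 = 0 to t2 = pi
    dt = t[1] - t[0]

    def action(x):
        v = np.gradient(x, dt)                        # xdot by finite differences
        L = 0.5 * m * v**2 - 0.5 * k * x**2           # Lagrangian at each step
        return np.sum(0.5 * (L[:-1] + L[1:])) * dt    # trapezoidal ∫ L dt

    x_true = np.sin(t)                       # solves the Euler-Lagrange equation
    x_pert = x_true + 0.1 * np.sin(2 * t)    # perturbation vanishing at endpoints

    print(action(x_true), action(x_pert))    # the true path has the smaller action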

Typically the Lagrangian is defined at a single layer of a hierarchical system. Yet Douglas Hofstadter famously introduced the concept of "strange loops" in Gödel, Escher, Bach: a strange loop is a cyclic structure that arises across several layers of a hierarchy due to inter-layer feedback. [1] Layers might be distinguished by their information dynamics; in a brain network, the layers might correspond to the quantum, chemical, mechanical, biological, psychological, etc. scales.

Thus, we could instead consider the total action of a hierarchical system, with each layer xᵢ assigned a Lagrangian ℒᵢ that best captures the dynamics of that layer. We could define total action as the sum of the time integrals of each layer's Lagrangian, plus the time integral of a coupling function C(x₁, x₂, ..., xₙ). The coupling function captures the dynamics between coupled layers and allows inter-layer feedback to affect global state.

So we end up with

S = ∫ₜ₁ᵗ² (∑ᵢ ℒᵢ(xᵢ, ẋᵢ) + C(x₁, x₂, ..., xₙ)) dt
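
To make that composite expression concrete, here is a minimal numerical sketch under heavy assumptions of my own: the two "layers" are just toy oscillators on different timescales, and the coupling C = g·x₁·x₂ is an arbitrary illustrative choice, not a claim about how real layers couple.

    # toy two-layer action: S = ∫ (L1 + L2 + C) dt
    import numpy as np

    t = np.linspace(0.0, 10.0, 5001)
    dt = t[1] - t[0]

    def integrate(y):
        # trapezoidal time integral
        return np.sum(0.5 * (y[:-1] + y[1:])) * dt

    x1 = np.sin(t)          # fast layer (e.g. a "chemical" timescale)
    x2 = np.sin(0.1 * t)    # slow layer (e.g. a "biological" timescale)
    v1 = np.gradient(x1, dt)
    v2 = np.gradient(x2, dt)

    L1 = 0.5 * v1**2 - 0.5 * x1**2            # layer-1 Lagrangian L1(x1, x1dot)
    L2 = 0.5 * v2**2 - 0.5 * (0.1 * x2)**2    # layer-2 Lagrangian L2(x2, x2dot)
    C  = 0.05 * x1 * x2                       # ad-hoc inter-layer coupling

    S = integrate(L1 + L2 + C)                # total action of the toy hierarchy
    print(S)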

Now, when S is stationary, it means that each layer in the system has minimized not necessarily its own local action, but the global action of the system with respect to that layer. It is often the case, however, that scale-free networks exhibit fractal-like behavior and thus tend to be both locally and globally efficient, as well as structurally invariant under scaling transformations. In a scale-free network, each subnetwork is often itself scale-free.
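
One rough way to probe the local/global efficiency claim numerically (again my own toy check, with a Barabási–Albert graph standing in for "scale-free" and a size-matched Erdős–Rényi graph as the baseline; it ignores real fractal structure entirely):

    # compare efficiency measures of a hub-heavy graph vs. a random baseline
    import networkx as nx

    G_sf = nx.barabasi_albert_graph(300, 3, seed=7)                    # hub-heavy
    G_er = nx.gnm_random_graph(300, G_sf.number_of_edges(), seed=7)    # same size/edges

    for name, G in [("scale-free", G_sf), ("random", G_er)]:
        print(name,
              "global:", round(nx.global_efficiency(G), 3),
              "local:", round(nx.local_efficiency(G), 3))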

We might infer that global stability is the result of stationary action (and thus energy/entropy) management across all scales. Strange loops are effectively paths of least action through the hierarchical system.

Personally, I think that minimization of action at certain well-defined layers might be able to predict the scales at which subsequent layers emerge, but that is beside the point.

By concentrating connections in a small number of hubs, the system minimizes the overall energy expenditure and communication cost. Scale-free networks can emerge as the most action-efficient structures for maintaining stable interactions between a large number of entities in a hierarchical system.
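
To put a rough number on that "communication cost" point, one can compare mean pairwise distance in a hub-heavy graph against a ring lattice with roughly the same number of edges (a toy comparison of my own; real systems obviously also pay a cost for maintaining the hubs themselves):

    # mean shortest-path distance: hub-heavy graph vs. hubless ring lattice
    import networkx as nx

    n = 500
    G_hubs = nx.barabasi_albert_graph(n, 2, seed=1)           # ~996 edges, strong hubs
    G_ring = nx.watts_strogatz_graph(n, 4, p=0.0, seed=1)     # 1000 edges, no hubs

    print("hub-heavy:   ", nx.average_shortest_path_length(G_hubs))   # short paths via hubs
    print("ring lattice:", nx.average_shortest_path_length(G_ring))   # long paths, grows with n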

A network can be analyzed in this fashion either intrinsically (with each node or subnetwork representing a hierarchical layer) or in the context of a larger network within which it is embedded (where the whole network is a single layer). When a network interacts with other networks to form a larger network, it is possible that other, non-scale-free architectures reduce global action more efficiently.

I imagine this is because the Lagrangian for each layer in the hierarchy becomes increasingly complex and, at some critical point, goal-oriented (defined here as tending toward a non-stationary local action in order to minimize global action, or the action of another layer). Seemingly anomalous behavior that does not locally follow the path of least action might be revealed to be part of a larger hierarchical loop that does, and this accounts for variation in structure within sufficiently complex networks that still exhibit overall fractal-like structure.

Let me know if any of that was confusing or unclear.

[0] https://en.wikipedia.org/wiki/Lagrangian_mechanics

[1] https://en.wikipedia.org/wiki/Strange_loop