Markov Centrality


Definition

The Markov centrality score uses the concept of a random walk through the graph to calculate the centrality of each vertex. The method uses the mean first-passage time from every vertex to every other vertex to produce a score for each vertex.
The mean first-passage time can be used as a measure of how closely each vertex is connected to every other vertex in a graph. The mean first-passage time from vertex a to vertex b is the mean number of steps a random walk starting at vertex a takes to reach vertex b for the first time. Random walks reach well-connected vertices more quickly, so first-passage times behave like distances that reflect how central a vertex is.
To calculate the Markov centrality of each vertex, the inverse of the mean first-passage time from all other vertices to that vertex is taken. Vertices with smaller average first-passage times have higher Markov centrality scores, indicating that they occupy a more central position within the graph. These values can be used to rank the vertices.
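To make the idea concrete, the following Python sketch estimates mean first-passage times by simulating weighted random walks on a small made-up graph and then scores each vertex by the inverse of its average first-passage time. The graph, the sample count, and the step cap are assumptions chosen only for illustration; this is not the implementation used by the tools cited below.

import random

# Adjacency list of a small directed, weighted example graph (assumed data).
graph = {
    "a": [("b", 1.0), ("c", 1.0)],
    "b": [("c", 2.0), ("d", 1.0)],
    "c": [("a", 1.0), ("d", 1.0)],
    "d": [("a", 1.0)],
}

def random_step(node):
    # Move to a neighbour chosen with probability proportional to edge weight.
    neighbours, weights = zip(*graph[node])
    return random.choices(neighbours, weights=weights, k=1)[0]

def estimate_mfpt(source, target, samples=2000, max_steps=10000):
    # Monte Carlo estimate of the mean first-passage time from source to target.
    total = 0
    for _ in range(samples):
        node, steps = source, 0
        while node != target and steps < max_steps:
            node = random_step(node)
            steps += 1
        total += steps
    return total / samples

def markov_centrality(nodes):
    # Inverse of the average first-passage time into each vertex,
    # here taking the root set to be all other vertices.
    scores = {}
    for v in nodes:
        mfpts = [estimate_mfpt(s, v) for s in nodes if s != v]
        scores[v] = len(mfpts) / sum(mfpts)
    return scores

print(markov_centrality(list(graph)))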


The centrality of a vertex is defined as the inverse of the mean first passage time (MFPT) in the Markov chain. The MFPT m_{st} from s to t is defined as the expected number of steps a walk starting at node s takes until its first arrival at node t:
m_{st} = \sum_{n=1}^{\infty} n f_{st}^{(n)}
where n denotes the number of steps taken, and f_{st}^{(n)} denotes the probability that the chain first reaches state t in exactly n steps. MFPTs not only have a natural Markov interpretation but also permit direct computation of a mean first passage matrix giving the MFPTs for all pairs of nodes. The mean first passage matrix is given by:
M = (I - Z + E Z_{dg}) D
where I is the identity matrix and E is a matrix containing all ones. D is the diagonal matrix with elements d_{vv} = 1/\pi(v), where \pi(v) is the stationary probability of node v in the Markov chain. Z is known as the fundamental matrix, and Z_{dg} agrees with Z on the diagonal but is 0 everywhere else. The fundamental matrix is defined as:
Z = (I - A + e \pi^T)^{-1}
where A is the Markov transition probability matrix, e is a column vector of all ones, and \pi is a column vector of the stationary probabilities of the Markov chain. The Markov centrality index C_M(v) uses the inverse of the average MFPT to define the importance of node v:
C_M(v) = n / \sum_{s \in R} m_{sv}
where n = |R|, R is a given root set, and m_{sv} is the MFPT from s to v. Markov centrality values show which vertices lie closest to the center of mass of the graph: more central nodes can be reached from all other nodes in a shorter average time.
[WHITE, S. 2003]
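A minimal numpy sketch of this closed-form computation, assuming a row-stochastic transition matrix A for an ergodic chain and taking the root set R to be all vertices, might look as follows. The matrix below is a made-up example, and this sketch is not the JUNG implementation.

import numpy as np

# Row-stochastic transition matrix A of an ergodic Markov chain (assumed data).
A = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [1.0, 0.0, 0.0, 0.0],
])
n = A.shape[0]

# Stationary distribution pi: left eigenvector of A for eigenvalue 1, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Fundamental matrix Z = (I - A + e pi^T)^-1.
I = np.eye(n)
e = np.ones((n, 1))
Z = np.linalg.inv(I - A + e @ pi.reshape(1, -1))

# Mean first passage matrix M = (I - Z + E Z_dg) D with D = diag(1/pi(v)).
E = np.ones((n, n))
Z_dg = np.diag(np.diag(Z))
D = np.diag(1.0 / pi)
M = (I - Z + E @ Z_dg) @ D  # M[s, t] is the MFPT from s to t

# Markov centrality C_M(v) = n / sum_{s in R} m_sv, here with R = all vertices.
# (The diagonal m_vv = 1/pi(v) are recurrence times; drop them if R should exclude v itself.)
C_M = n / M.sum(axis=0)
print(C_M)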

Requirements

Requires a directed and weighted network.

References

  • JUNG, the Java Universal Network/Graph Framework [http://jung.sourceforge.net]
  • WHITE, S. & SMYTH, P. Algorithms for estimating relative importance in networks. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2003, 266-275.

