Centrality Measures Based on Matrix Functions

Abstract

Networks arise naturally in a wide range of contexts, such as biological systems, social relationships and various technological scenarios. Investigating the dynamic phenomena taking place in a network, determining the structure of the network and its communities, and describing the interactions between its elements are key issues in network analysis. One major challenge is the identification of the node(s) with an outstanding structural position within the network. The popular way of doing this is to compute a measure of centrality. We examine node centrality measures such as degree, closeness, eigenvector, Katz and subgraph centrality for undirected networks. We show how Katz centrality can be turned into degree and eigenvector centrality by considering limiting cases. Some existing centrality measures are linked to matrix functions; we extend this idea and examine centrality measures based on general matrix functions, in particular the logarithmic, cosine, sine and hyperbolic functions. We also explore the concept of generalised Katz centrality. Various experiments are conducted on networks generated by random graph models. The results show that the logarithmic function in particular has potential as a centrality measure; similar results were obtained for real-world networks.


1. Introduction

Since its introduction by Euler in the eighteenth century, graph theory has proven important in many different scientific fields. Graphs and linear algebra have long been used to model social interactions, and network models are now commonplace not only in the hard sciences but also in various technological, social and biological scenarios. Networks are used to model a variety of highly interconnected systems, both in nature and in the man-made world of technology. These networks include protein-protein interaction networks, social networks, food webs, scientific collaboration networks, metabolic networks, lexical or semantic networks, neural networks, the World Wide Web and others. Network analysis is used in various situations: determining network structure and communities, describing the interactions between the elements of the network and investigating the dynamic phenomena taking place in it [1].

One of the foundational questions in network analysis is how to determine the “most important” nodes in a given network. Many centrality measures have been proposed, starting with the simplest of all, node degree centrality. This measure is considered too “local”, as it does not take into account the connectivity of the immediate neighbours of the node under consideration. A number of centrality measures have therefore been introduced that take into account the global connectivity properties of the network. These include various types of eigenvector centrality for both directed and undirected networks, Katz centrality, subgraph centrality and PageRank centrality [1]. Centrality scores provide rankings of the nodes in a network: the higher the ranking of a node, the more important the node is believed to be within the network. There are many different ranking methods in use, and many algorithms have been developed to compute these rankings.

The purpose of this paper is to discuss some of the centrality measures, to analyse the relationship between degree centrality, eigenvector centrality and Katz centrality, and to discuss measures of centrality based on matrix functions, including the logarithmic, sine, cosine, exponential and hyperbolic functions. The main aim is to determine which of the matrix functions are highly correlated with “standard” centrality measures. We will use the Kendall correlation coefficient [2] in the experimental work to determine the correlations.

2. Literature Review

Bavelas [3] introduced the application of centrality of networks to human communication by measuring the communication within a small group in terms of the relationship between structural centrality and influence in a group process. Afterwards, an application of centrality was made under the direction of Bavelas at the Group Networks Laboratory, M.I.T., in the late 1940s. Leavitt in 1949 and Smith in 1950 conducted a study on centrality measures, on which Bavelas in 1950 and Bavelas and Barrett in 1951 reported. These experiments all concluded that centrality was related to group efficiency in problem-solving and agreed with the subjective perception of leadership [4].

Various centrality measures in various contexts were then explored in the following decade. Cohn and Marriott in 1958 attempted to use the centrality to understand political integration in Indian social life [5]. Pitts examined the consequences of centrality in communication paths for urban development [6]. Later, Czepiel used the concept of centrality to explain the pattern of diffusion of a technological innovation in the steel industry [7].

More recently, Bolland analysed the stability of the degree (DC), closeness (CC), betweenness (BC) and eigenvector (EC) centralities under random and systematic variations of network structure, and found that betweenness centrality changes significantly with the variation of the network structure, while degree and closeness centrality are usually stable. He also found that eigenvector centrality is the most stable of all the indices analysed [8]. Borgatti and Frantz extended the studies on the stability of centrality indices by considering the addition and deletion of nodes and links [9] as well as by differentiating several types of network topology such as uniformly random, small-world, core-periphery, scale-free, and cellular. Landherr critically reviewed the role of centrality measures in social networks [10]. Estrada analysed examples of how particular centrality measures are applied in social networks [11].

Benzi and Klymko analysed centrality measures, such as degree, eigenvector, Katz and subgraph for both undirected and directed networks. They used the local and global influence of a given node in the network as measured by walks of different lengths through that node. They analysed the relationship between centrality measures based on the diagonal entries and row sums of the matrix exponential and resolvent of the adjacency matrix involving the degree and eigenvector centrality. They showed experimentally that the rankings produced by exponential subgraph centrality, total communicability and resolvent subgraph centrality converge to those produced by degree centrality [1].

Most of the centrality notions considered so far are combinatorial in nature and based on the discrete structure of the underlying networks. These studies can be extended by defining centrality measures using spectral techniques from linear algebra. Benzi and Klymko considered the diagonal entries of the matrix exponential $f(A) = e^A$ and of the resolvent (Katz) function $f(A) = (I - \alpha A)^{-1}$, where $\alpha > 0$ and $A$ is the adjacency matrix of the network [1]. However, none of the previous studies considered other matrix functions, such as the logarithmic, cosine, sine and hyperbolic functions, or the generalised Katz centrality, as centrality measures. In this work we develop notions of centrality based on such matrix functions, and we use the Kendall correlation coefficient [2] to determine the agreement between the node rankings produced by these matrix functions and those produced by the standard centrality measures.

3. Elements of Graph

Graphs are discrete structures which consist of vertices connected by edges. A graph can be written as $G = (V, E)$, where $V(G)$ is a non-empty set of vertices (also called nodes) and $E(G)$ is a set of edges. Each edge of the graph has two vertices associated with it, called its endpoints. An edge starting and ending at the same vertex is called a loop.

・ Graphs can be represented by using points as nodes (vertices) and joining them using line segments for edges. We write uv to denote an edge between nodes u and v.

・ We can assign numerical values to the edges of a graph in which case the graph is referred to as weighted. In an unweighted graph we assign to every edge the value 1.

・ If the edges of the graph are directed (Figure 1), then the graph is called a directed graph or digraph; otherwise it is called an undirected graph (Figure 2). An undirected unweighted graph without loops is simple if no two edges connect the same pair of vertices.

・ For undirected graphs, if there are multiple edges between a pair of nodes then the graph is called a multi-graph or pseudo-graph. In digraphs we can have two edges between a pair of vertices, one in each direction.

3.1. Basic Graph-Theoretic Terminology

If $uv \in E$ is an edge in an undirected graph $G$, then nodes $u$ and $v$ are incident to the edge $uv$ and we say that $u$ and $v$ are adjacent (neighbours) in $G$. It follows that $u$ and $v$ are the endpoints of the edge $uv$. If $G$ is a directed graph and $uv \in E$, then $u$ is said to be adjacent to $v$ and $v$ is said to be adjacent from $u$. We call $u$ the initial node of $uv$ and $v$ the terminal or end node of $uv$. For an undirected graph $G$, the degree $\deg(v_i)$ of a node $v_i$ is the total number of edges incident to $v_i$.

The degree $\deg(v_i)$ is the number of “immediate neighbours” of a node $v_i$ in $G$. In a regular graph all nodes have the same degree. Nodes in directed graphs have an in-degree and an out-degree. The in-degree of a node $v_i$, $\deg^-(v_i)$, is the total number of edges with $v_i$ as their terminal node. Similarly,

Figure 1. Digraph.

Figure 2. Undirected.

the out-degree of $v_i$, $\deg^+(v_i)$, is the total number of edges with $v_i$ as their initial node. Loops contribute 1 to both the in-degree and the out-degree.

A graph $H = (W, F)$ is a subgraph of $G = (V, E)$ if $W \subseteq V$ and $F \subseteq E$. We write $H \subseteq G$, meaning that $H$ is contained in $G$ (or $G$ contains $H$). If $H$ contains all edges of $G$ that join two vertices in $W$, then we say that $H$ is the subgraph induced or spanned by $W$.

The subgraphs of $G = (V, E)$ can be obtained by deleting edges and vertices of $G$. We denote by $G - W$, with $W \subseteq V$, the subgraph of $G$ with vertex set $V \setminus W$ obtained by deleting the vertices in $W$ and all the edges incident with those vertices. In a similar manner, we denote by $G - F := (V, E \setminus F)$, where $F \subseteq E$, the subgraph of $G$ obtained by deleting all the edges in $F$.

3.2. Walk, Trail and Path

A walk $W_k$ of length $k$ from node $v_0$ to node $v_k$ is a finite sequence $W_k = (V, E)$ of the form

$$V = \{v_0, v_1, \ldots, v_k\}, \qquad E = \{(v_0, v_1), (v_1, v_2), \ldots, (v_{k-1}, v_k)\},$$

where $v_i$ and $v_{i+1}$ are adjacent.

A trail in G is a walk in which no edges of G appear more than once (a walk with all different edges). A trail which begins and ends at the same node is known as a closed trail or circuit.

A path in $G$ is a walk in which no node appears more than once, with the exception that $v_k$ can be equal to $v_0$. A cycle or closed path is a path which begins and ends at the same node.

Two nodes $v_i$ and $v_j$ in $G$ are connected if there is a path between them. We say that the graph $G$ is connected if for every pair of nodes $v_i$ and $v_j$ there exists a path that starts at $v_i$ and ends at $v_j$. An edge $v_iv_j \in E$ in a connected graph $G = (V, E)$ is a bridge if $G$ becomes disconnected when $v_iv_j$ is deleted. A directed graph $G$ is said to be strongly connected if for any two nodes $v_i$ and $v_j$ we can find a path from $v_i$ to $v_j$ and from $v_j$ to $v_i$. It is weakly connected if there is a path between every two nodes in the underlying undirected graph, which is obtained by ignoring the directions of the edges of the directed graph. All strongly connected directed graphs are also weakly connected.

The digraph in Figure 3 is strongly connected because there is a path between any two ordered vertices in the directed graph. The digraph in Figure 4 is not strongly connected, since, for example, there is no directed path from S to T, nor from V to T, but it is weakly connected.

4. Matrices in Graphs

This section discusses ways of representing graphs using matrices. There are multiple ways to do this; any graph $G(V, E)$ can be represented using its adjacency, incidence or Laplacian matrix.

Figure 3. Strongly connected.

Figure 4. Weakly connected.

4.1. Matrices for Undirected Graph

Given an undirected graph $G$ with $n$ vertices and $m$ edges, the adjacency matrix of $G(V, E)$ is given by $A = \{a_{ij}\}_{n \times n}$, where

$$a_{ij} = \begin{cases} 1 & \text{if node } v_i \text{ is adjacent to node } v_j, \\ 0 & \text{otherwise.} \end{cases}$$

The incidence matrix is given by $M = \{m_{ij}\}_{n \times m}$, where

$$m_{ij} = \begin{cases} 1 & \text{if edge } e_j \text{ is incident with node } v_i, \\ 0 & \text{otherwise.} \end{cases}$$

The Laplacian matrix can be found by using the relation

$$L = D - A,$$

where $D$ is a diagonal matrix whose $i$th diagonal entry is the degree of the $i$th node and $A$ is the adjacency matrix.

In other words, $L = \{l_{ij}\}_{n \times n}$, where

$$l_{ij} = \begin{cases} -1 & \text{if node } v_i \text{ is adjacent to node } v_j, \\ \deg(v_i) & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$

Figure 5 shows an undirected graph with five nodes and six edges in which there is a path from every node to every other node. The adjacency matrix $A$, the incidence matrix $M$ and the diagonal degree matrix $D$ are

$$A = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \end{pmatrix}, \quad M = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \end{pmatrix},$$

Figure 5. Undirected Graph.

$$D = \begin{pmatrix} 2 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}.$$

The Laplacian matrix is then

$$L = D - A = \begin{pmatrix} 2 & 0 & 0 & -1 & -1 \\ 0 & 3 & -1 & -1 & -1 \\ 0 & -1 & 2 & -1 & 0 \\ -1 & -1 & -1 & 3 & 0 \\ -1 & -1 & 0 & 0 & 2 \end{pmatrix}.$$
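As a small sketch, these matrices can be produced in Python (NumPy) from the edge list of Figure 5; the node labels 1 to 5 and the edge ordering are assumed from the matrices above:

```python
import numpy as np

# Edge list of Figure 5 (assumed from the columns of M above), nodes 1..5.
edges = [(1, 5), (1, 4), (2, 5), (2, 4), (3, 4), (2, 3)]
n = 5

A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u - 1, v - 1] = A[v - 1, u - 1] = 1   # undirected: symmetric entries

D = np.diag(A.sum(axis=1))                  # diagonal degree matrix
L = D - A                                   # Laplacian, as in the display above
```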

4.2. Matrices for Directed Graph

Given a directed graph $G$ with $n$ nodes and $m$ edges, the adjacency matrix of $G(V, E)$ is given by $A = \{a_{ij}\}_{n \times n}$, where

$$a_{ij} = \begin{cases} 1 & \text{if nodes } v_i \text{ and } v_j \text{ are connected by an edge directed from } v_i \text{ to } v_j, \\ -1 & \text{if nodes } v_i \text{ and } v_j \text{ are connected by an edge directed from } v_j \text{ to } v_i, \\ 0 & \text{otherwise.} \end{cases}$$

The incidence matrix is given by $M = \{m_{ij}\}_{n \times m}$, where

$$m_{ij} = \begin{cases} 1 & \text{if node } v_i \text{ is the starting node of edge } e_j, \\ -1 & \text{if node } v_i \text{ is the end node of edge } e_j, \\ 0 & \text{otherwise.} \end{cases}$$

Figure 6 shows a digraph in which there is a path from node A to every other node of the graph, but no path from node C to any other node of the network. The adjacency matrix $A$ and the incidence matrix $M$ are

$$A = \begin{pmatrix} 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \end{pmatrix},$$

Figure 6. Directed Graph.

$$M = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}.$$

4.3. Distance in Graphs

The geodesic distance $d(v_i, v_j)$ between two nodes $v_i$ and $v_j$ in the graph $G$ is defined as the length of the shortest path between $v_i$ and $v_j$. The diameter of the graph $G = (V, E)$ is given as

$$\mathrm{diam}(G) = \max_{v_i, v_j \in V} d(v_i, v_j).$$

Examples of geodesic distances in the digraph $Q$ of Figure 3: $d(D, C) = 4$, $d(A, E) = 3$ and $d(D, D) = 0$.

The distance matrix of a graph, denoted $D$, is the square matrix $\{d(i,j)\}_{n \times n}$, where $d(i,j)$ is the length of the shortest path from node $v_i$ to node $v_j$.

Consider the undirected unweighted graph in Figure 7 as an example. Its distance matrix is

$$D = \begin{pmatrix} 0 & 1 & 2 & 2 & 2 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 2 & 1 & 0 & 1 & 2 & 2 \\ 2 & 1 & 1 & 0 & 1 & 2 \\ 2 & 1 & 2 & 1 & 0 & 1 \\ 1 & 1 & 2 & 2 & 1 & 0 \end{pmatrix}.$$

4.4. Perron―Frobenius Theorem

We state (without proof) the Perron-Frobenius theorem, which will be used later in our work.

Theorem 1 (Perron-Frobenius theorem). Let $A \in \mathbb{R}^{n \times n}$ be an irreducible non-negative matrix. Then [12]:

・ $A$ has a principal eigenvalue $\lambda_1$ such that all other eigenvalues $\lambda_i$, for $i = 2, 3, \ldots, n$, satisfy $|\lambda_1| \ge |\lambda_i|$.

Figure 7. Unweighted-Undirected Graph.

・ The principal eigenvalue $\lambda_1$ has algebraic and geometric multiplicity 1, and has a right eigenvector $x$ with all positive elements, i.e. $x_i > 0$, $i = 1, 2, \ldots, n$, and a left eigenvector $v$ with all positive elements, i.e. $v_i > 0$, $i = 1, 2, \ldots, n$.

・ Any non-negative right eigenvector is a multiple of $x$, and any non-negative left eigenvector is a multiple of $v$.

Furthermore, if $A$ is the adjacency matrix of a directed network with a strongly connected component, then:

・ $A$ has a principal eigenvalue $\lambda_1$ such that all other eigenvalues $\lambda_i$, for $i = 2, 3, \ldots, n$, satisfy $|\lambda_1| \ge |\lambda_i|$.

・ The principal eigenvalue $\lambda_1$ has algebraic and geometric multiplicity equal to 1, and has a left eigenvector $x$ with non-negative elements, i.e. $x_i > 0$ if node $i$ belongs to the strongly connected component of the network or to its out-component, and $x_i = 0$ if node $i$ belongs to the in-component of the strongly connected component of the network.

5. Centrality Measures

Centrality of a given node is a measure of the importance and influence of that node in the corresponding network. The identification of which nodes are more important or central than the others is a key issue in network analysis. We can ask the following questions:

・ Which are the most central nodes in a network?

・ Which are the most important nodes in a network?

・ Which are the most influential nodes in a network?

These types of questions can have different interpretations in different networks. For instance;

・ when dealing with a social network, the most central node can be the most popular person,

・ when dealing with a web portal network, the most central node can be a web page with the best quality of content in a specific field,

・ in terms of the internet network, the most central node might be a network gateway (router) with the highest bandwidth.

These ideas can be used to characterise different types of centrality measures for finding the most important nodes in a network in a given context; that is, there are many different centrality measures. When measuring the centrality of a node, we should be sure that:

・ we know what each centrality measure means;

・ what they measure well; and

・ why a particular centrality measure is the most appropriate for the kind of network we are investigating.

The most common centrality measures include degree centrality, betweenness centrality, eigenvector centrality, Katz centrality, PageRank centrality, closeness centrality and subgraph centrality [11][13].

5.1. Degree Centrality

The degree centrality of a node $v_i$ in a given network is given by the total degree $d_i$ of $v_i$. Degree centrality measures the ability of a node to communicate directly with other nodes.

In an undirected network, the degree $d_i$ of a node is given as

$$d_i = \sum_{j=1}^{n} a_{ij} = e_i^T A e, \qquad (1)$$

where $A$ is the adjacency matrix, $e_i$ is the $i$th standard basis vector (the $i$th column of the identity matrix) and $e$ is the vector of all ones.

In a directed network, we can consider the in-degree of a node, given as

$$d_i^{\mathrm{in}} = \sum_{j=1}^{n} a_{ij} = e_i^T A e, \qquad (2)$$

or the out-degree of the node, given as

$$d_i^{\mathrm{out}} = \sum_{j=1}^{n} a_{ji} = e^T A e_i. \qquad (3)$$

In a directed graph, a source is a node with zero in-degree and a sink is a node with zero out-degree.

As an example, consider the undirected graph in Figure 8. We are interested in finding the central node using degree centrality.

The adjacency matrix $A$ of Network-1 in Figure 8 is given as

$$A = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}.$$

The degree centralities are contained in the vector $d = Ae$:

$$d = [\,6\ \ 3\ \ 3\ \ 4\ \ 2\ \ 3\ \ 4\ \ 2\ \ 4\ \ 3\ \ 2\ \ 2\,]^T.$$

Figure 8. Network-1.

Using the degree centrality measure, node 1 is the most central node, since it has the highest degree.
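As a hedged illustration, this computation can be reproduced in a few lines of Python with NumPy (a sketch; `A` is the adjacency matrix of Network-1 given above):

```python
import numpy as np

# Adjacency matrix of Network-1 (Figure 8); rows/columns are nodes 1..12.
A = np.array([
    [0,1,0,0,0,0,0,1,1,1,1,1],
    [1,0,1,0,0,0,1,0,0,0,0,0],
    [0,1,0,1,0,0,1,0,0,0,0,0],
    [0,0,1,0,1,1,1,0,0,0,0,0],
    [0,0,0,1,0,1,0,0,0,0,0,0],
    [0,0,0,1,1,0,1,0,0,0,0,0],
    [0,1,1,1,0,1,0,0,0,0,0,0],
    [1,0,0,0,0,0,0,0,1,0,0,0],
    [1,0,0,0,0,0,0,1,0,1,1,0],
    [1,0,0,0,0,0,0,0,1,0,0,1],
    [1,0,0,0,0,0,0,0,1,0,0,0],
    [1,0,0,0,0,0,0,0,0,1,0,0],
])

d = A @ np.ones(A.shape[0])      # degree centralities d = A e
print(d)                         # [6. 3. 3. 4. 2. 3. 4. 2. 4. 3. 2. 2.]
print(np.argmax(d) + 1)          # node 1 is the most central
```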

5.2. Closeness Centrality

Closeness centrality is based on the average shortest-path length from a node to all other nodes. It uses the neighbours and the neighbours of neighbours of a node $v_i$ to determine its centrality; thus, nodes that are not directly connected to $v_i$ are also taken into consideration, in contrast to the degree centrality case.

Letting $d(i,j)$ be the length of the shortest path from node $i$ to node $j$, the mean distance from node $i$ to the other nodes of a network is given by

$$l_i = \frac{1}{N-1} \sum_{j \in V,\, j \ne i} d(i,j) = \frac{1}{N-1}\, e_i^T D e, \qquad (4)$$

where $N$ is the total number of nodes and $D$ denotes the distance matrix.

In general, we want to associate high centrality scores with important nodes, so we use the reciprocal of $l_i$ as the value of the centrality. Thus, the closeness centrality $c_i$ of a node $v_i$ is given by

$$c_i = l_i^{-1} = \frac{N-1}{e_i^T D e}. \qquad (5)$$

For example, consider Network-1 in Figure 8.

The distance matrix is given by $D = (d(i,j))$:

$$D = \begin{pmatrix}
0 & 1 & 2 & 3 & 4 & 3 & 2 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & 2 & 3 & 2 & 1 & 2 & 2 & 2 & 2 & 2 \\
2 & 1 & 0 & 1 & 2 & 2 & 1 & 3 & 3 & 3 & 3 & 3 \\
3 & 2 & 1 & 0 & 1 & 1 & 1 & 4 & 4 & 4 & 4 & 4 \\
4 & 3 & 2 & 1 & 0 & 1 & 2 & 5 & 5 & 5 & 5 & 5 \\
3 & 2 & 2 & 1 & 1 & 0 & 1 & 4 & 4 & 4 & 4 & 4 \\
2 & 1 & 1 & 1 & 2 & 1 & 0 & 3 & 3 & 3 & 3 & 3 \\
1 & 2 & 3 & 4 & 5 & 4 & 3 & 0 & 1 & 2 & 2 & 2 \\
1 & 2 & 3 & 4 & 5 & 4 & 3 & 1 & 0 & 1 & 1 & 2 \\
1 & 2 & 3 & 4 & 5 & 4 & 3 & 2 & 1 & 0 & 2 & 1 \\
1 & 2 & 3 & 4 & 5 & 4 & 3 & 2 & 1 & 2 & 0 & 2 \\
1 & 2 & 3 & 4 & 5 & 4 & 3 & 2 & 2 & 1 & 2 & 0
\end{pmatrix}.$$

Then

$$l_i = \frac{e_i^T D e}{N-1} = \frac{e_i^T d}{N-1},$$

where $d = De$ is the vector of row sums of $D$:

$$De = [\,20\ \ 20\ \ 24\ \ 29\ \ 38\ \ 30\ \ 23\ \ 29\ \ 27\ \ 28\ \ 29\ \ 29\,]^T.$$

Thus, $c_i = 1/l_i$ gives

$$c_1 = \tfrac{11}{20} = 0.55, \quad c_2 = \tfrac{11}{20} = 0.55, \quad c_3 = \tfrac{11}{24} = 0.458, \quad c_4 = \tfrac{11}{29} = 0.379,$$
$$c_5 = \tfrac{11}{38} = 0.289, \quad c_6 = \tfrac{11}{30} = 0.367, \quad c_7 = \tfrac{11}{23} = 0.478, \quad c_8 = \tfrac{11}{29} = 0.379,$$
$$c_9 = \tfrac{11}{27} = 0.407, \quad c_{10} = \tfrac{11}{28} = 0.393, \quad c_{11} = \tfrac{11}{29} = 0.379, \quad c_{12} = \tfrac{11}{29} = 0.379.$$

Now, nodes 1 and 2 are identified as the most important nodes in the network.
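A sketch of the same computation in Python, assuming the adjacency matrix `A` of Network-1 from Section 5.1 is available; the distance matrix is obtained with SciPy's shortest-path routine:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# D[i, j] = length of the shortest path between nodes i and j (BFS on 0/1 matrix).
D = shortest_path(A, directed=False, unweighted=True)

N = A.shape[0]
c = (N - 1) / (D @ np.ones(N))   # closeness c_i = (N - 1) / sum_j d(i, j)
print(np.round(c, 3))            # c_1 = c_2 = 0.55 are the largest
```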

5.3. Eigenvector Centrality

In a connected undirected network we can measure centrality using the eigenvector centrality. Eigenvector centrality takes into consideration the importance of the neighbours of a node. In degree centrality a node is awarded one centrality point for each neighbour; eigenvector centrality instead gives each node a score proportional to the sum of the scores of its neighbours. In eigenvector centrality, a node is important if it is linked to other important nodes: the larger the entry for a node, the more important the node is considered to be.

By the Perron-Frobenius theorem, the eigenvector associated with the principal eigenvalue of the adjacency matrix $A$ is unique (up to scaling) if the network is connected, so that $A$ is irreducible. We define the centrality of a node iteratively using the sum of its neighbours' centralities. We initially assume that each node $j$ has centrality $x_j^{(0)} = 1$. Then we calculate a new iterate $x_i^{(1)}$ as the sum of the centralities of $i$'s neighbours.

That is,

$$x_i^{(1)} = \sum_j a_{ij}\, x_j^{(0)}, \qquad (6)$$

where $a_{ij}$ are the entries of the adjacency matrix.

In matrix form we write this as:

$$x^{(1)} = A x^{(0)}. \qquad (7)$$

After $k$ steps, we have

$$x^{(k)} = A^k x^{(0)}. \qquad (8)$$

Note that the eigenvector centrality is defined by the limiting behaviour of $x^{(k)}$ as $k \to \infty$.

We can write $x^{(0)}$ as a linear combination of the eigenvectors $v_i$ of the adjacency matrix $A$, that is,

$$x^{(0)} = \sum_i c_i v_i, \qquad (9)$$

where the $c_i$ are constants.

Then, from Equation (8), we have

$$x^{(k)} = A^k \sum_i c_i v_i = \sum_i c_i A^k v_i = \sum_i c_i \lambda_i^k v_i = \sum_i c_i \lambda_1^k \left(\frac{\lambda_i}{\lambda_1}\right)^{\!k} v_i = \lambda_1^k \sum_i c_i \left(\frac{\lambda_i}{\lambda_1}\right)^{\!k} v_i. \qquad (10)$$

Therefore,

$$\frac{x^{(k)}}{\lambda_1^k} = \sum_i c_i \left(\frac{\lambda_i}{\lambda_1}\right)^{\!k} v_i, \qquad (11)$$

where $\lambda_i$ is the eigenvalue associated with the eigenvector $v_i$ and $\lambda_1$ is the principal eigenvalue.

Since $\left|\frac{\lambda_i}{\lambda_1}\right| < 1$ for $i = 2, 3, \ldots, n$, we have

$$\lim_{k \to \infty} \frac{x^{(k)}}{\lambda_1^k} = \lim_{k \to \infty} \sum_i c_i \left(\frac{\lambda_i}{\lambda_1}\right)^{\!k} v_i = \sum_i c_i \lim_{k \to \infty} \left(\frac{\lambda_i}{\lambda_1}\right)^{\!k} v_i = c_1 v_1,$$

since $\lim_{k \to \infty} \left(\frac{\lambda_i}{\lambda_1}\right)^{k} = 0$ for $i > 1$.

This implies that the limiting centralities are proportional to the principal eigenvector $v_1$ of the adjacency matrix.

Therefore, in matrix form, the eigenvector centrality $x$ satisfies

$$A x = \lambda_1 x \iff x = \frac{1}{\lambda_1} A x, \qquad (12)$$

i.e.

$$x_i = \frac{1}{\lambda_1} \sum_j a_{ij} x_j. \qquad (13)$$

Note that in eigenvector centrality, the higher the centralities of the neighbours of a node, the more important the node is.
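The limiting argument above translates directly into the power iteration. The following is a minimal sketch (the tolerance and iteration cap are illustrative choices, not values from the paper); starting from $x^{(0)} = e$ guarantees $c_1 > 0$ for a connected undirected network, and the iterate is normalised at each step to avoid overflow:

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
    """Power iteration: x(k+1) = A x(k), normalised at each step."""
    x = np.ones(A.shape[0])            # x(0) = e, so the v1-component is positive
    for _ in range(max_iter):
        x_new = A @ x
        x_new = x_new / np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```

Applied to Network-2 below, this reproduces the principal eigenvector (up to normalisation).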

For example, consider the network in Figure 9.

The adjacency matrix of Network-2 in Figure 9 is given by

$$A = \begin{pmatrix}
0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0
\end{pmatrix}.$$

Figure 9. Network-2.

The eigenvalues of the adjacency matrix are

$$\rho(A) = \{-2.464,\ -1.931,\ -1.543,\ -0.666,\ -0.477,\ -0.332,\ 0.370,\ 1.398,\ 2.116,\ 3.531\}.$$

The principal eigenvector corresponding to the principal eigenvalue $\lambda_1 = 3.531$ is

$$v = [\,0.198\ \ 0.181\ \ 0.261\ \ 0.255\ \ 0.443\ \ 0.243\ \ 0.468\ \ 0.416\ \ 0.313\ \ 0.221\,]^T.$$

Since the eigenvector centralities (EVC) are given by the principal eigenvector, the eigenvector centralities for Network-2 are

$$EVC = [\,0.198\ \ 0.181\ \ 0.261\ \ 0.255\ \ 0.443\ \ 0.243\ \ 0.468\ \ 0.416\ \ 0.313\ \ 0.221\,]^T.$$

Using the eigenvector centrality, we conclude that node 7 is the most important node.

5.4. Katz Centrality

Katz centrality takes into consideration both the number of direct neighbours of a node and its further connections in the network. That is, a node is important in Katz centrality if it is connected, directly or indirectly, to many other nodes of the network: Katz centrality takes into account all walks of arbitrary length from a node $i$ to the other nodes in the network.

The Katz centrality $k$ is given by

$$k = \left(\sum_{k=0}^{\infty} \alpha^k A^k\right) e = \sum_{k=0}^{\infty} (\alpha A)^k e, \qquad (14)$$

where $e$ is the column vector of ones, $\alpha$ is called the attenuation factor and $A$ is the adjacency matrix of the network. We can expand Equation (14) as

$$k = \sum_{k=0}^{\infty} (\alpha A)^k e = \left(I + \alpha A + (\alpha A)^2 + \cdots\right) e, \qquad (15)$$

and, if the sum converges, then

$$k = (I - \alpha A)^{-1} e, \qquad (16)$$

where $I$ is the $n \times n$ identity matrix and $e$ is the column vector of ones.

To ensure the convergence of the series defining $(I - \alpha A)^{-1}$, and hence a well-defined Katz centrality, the attenuation factor $\alpha$ must lie in the range

$$\alpha \in \left(0, \frac{1}{\lambda_1}\right).$$

For example, we compute the Katz centrality of Network-2 in Figure 9. Since the principal eigenvalue is $\lambda_1 = 3.531$, we need $\alpha \in \left(0, \frac{1}{3.531}\right)$. Choosing $\alpha = 0.25$, the Katz centrality is

$$k = (I - 0.25 A)^{-1} e = [\,5.826\ \ 5.223\ \ 7.086\ \ 6.992\ \ 11.065\ \ 6.316\ \ 11.526\ \ 10.201\ \ 7.895\ \ 5.855\,]^T.$$

Therefore, node 7 is the most important node.
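Rather than forming the inverse explicitly, the Katz vector is usually obtained by solving the linear system $(I - \alpha A)k = e$. A sketch, assuming the $10 \times 10$ adjacency matrix `A` of Network-2 given above is available as a NumPy array:

```python
import numpy as np

alpha = 0.25                     # must satisfy alpha < 1 / lambda_1 = 1 / 3.531
n = A.shape[0]
k = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
print(np.round(k, 3))            # largest entry at node 7
```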

5.5. Subgraph Centrality

Subgraph centrality measures the centrality of a node by taking into consideration the participation of the node in all subgraphs of the network. It does this indirectly by counting the closed walks in the network which start and end at the given node: a relationship can be established between subgraphs and these walks.

If $A$ is the adjacency matrix of an unweighted network, we know that:

・ $(A^k)_{ii}$ is the number of closed walks of length $k$ starting and ending at node $v_i$;

・ $(A^k)_{ij}$ is the number of walks of length $k$ that start at node $v_i$ and end at node $v_j$.

We define $\mu_k(i) = (A^k)_{ii}$ as the local spectral moment of node $i$.

In a similar way to Katz centrality, the subgraph centrality of a node $i$ is a weighted sum of the closed walks of different lengths which start and end at node $i$. The shorter the closed walk, the more it influences the centrality of the node.

The subgraph centrality of node $i$ in the network is given by

$$SC(i) = \sum_{k=0}^{\infty} \frac{\mu_k(i)}{k!} = \sum_{k=0}^{\infty} \frac{(A^k)_{ii}}{k!} = \left[I + \frac{A}{1!} + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots\right]_{ii}, \qquad (17)$$

that is,

$$SC(i) = (e^A)_{ii}. \qquad (18)$$

Considering the exponential of the adjacency matrix,

$$e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots + \frac{A^k}{k!} + \cdots,$$

we observe that the closed walks of length 2 counted by $A^2$ are counted twice for every link in the network, while the closed walks of length 3 counted by $A^3$ are counted $2 \times 3 = 3! = 6$ times for every triangle. To avoid this over-counting, the closed walks of length 2 are penalised by $2!$ and the closed walks of length 3 by $3!$. In general, any circuit of length $k$ can be traversed in 2 directions starting from any of its $k$ points, so it is counted $2k$ times; for $k \ge 4$ the penalisation $k!$ in the exponential series no longer coincides with the number of repeated counts.

For instance, let us consider Network-2 in Figure 9 and find its subgraph centrality. We have

$$e^A = \begin{pmatrix}
3.962 & 2.475 & 3.393 & 3.532 & 2.914 & 0.991 & 2.345 & 1.569 & 0.944 & 0.687 \\
2.475 & 2.79 & 1.858 & 2.227 & 3.306 & 1.423 & 2.188 & 1.997 & 1.055 & 0.682 \\
3.393 & 1.858 & 4.283 & 3.696 & 3.498 & 1.369 & 4.021 & 2.631 & 2.037 & 1.562 \\
3.532 & 2.227 & 3.696 & 4.345 & 4.107 & 1.691 & 3.297 & 2.564 & 1.508 & 1.044 \\
2.914 & 3.306 & 3.498 & 4.107 & 7.713 & 4.312 & 6.554 & 6.404 & 3.924 & 2.556 \\
0.991 & 1.423 & 1.369 & 1.691 & 4.312 & 3.423 & 3.448 & 4.131 & 2.24 & 1.294 \\
2.345 & 2.188 & 4.021 & 3.297 & 6.554 & 3.448 & 8.37 & 6.791 & 5.762 & 4.327 \\
1.569 & 1.997 & 2.631 & 2.564 & 6.404 & 4.131 & 6.791 & 7.02 & 4.948 & 3.131 \\
0.944 & 1.055 & 2.037 & 1.508 & 3.924 & 2.24 & 5.762 & 4.948 & 5.016 & 3.563 \\
0.687 & 0.682 & 1.562 & 1.044 & 2.556 & 1.294 & 4.327 & 3.131 & 3.563 & 3.32
\end{pmatrix}.$$

The subgraph centralities are then the diagonal entries of $e^A$:

$$SC = [\,3.962\ \ 2.79\ \ 4.283\ \ 4.345\ \ 7.713\ \ 3.423\ \ 8.37\ \ 7.02\ \ 5.016\ \ 3.32\,]^T.$$

We observe that node 7 has the highest subgraph centrality, thus, node 7 is the most central node.
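SciPy computes the matrix exponential directly, so a sketch of this computation (again assuming the adjacency matrix `A` of Network-2) is only a few lines:

```python
import numpy as np
from scipy.linalg import expm

SC = np.diag(expm(A))            # subgraph centralities (e^A)_ii
print(np.round(SC, 3))           # node 7 has the largest value
```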

6. Relationship between Centrality Measures

Among the challenges that arise in determining the importance of a node using centrality is that it is not always clear which of the centrality measures should be used, and it is not obvious whether two centrality measures will give the same ranking of the nodes in a given network. The necessity of choosing the attenuation factor $\alpha$ in Katz centrality adds another challenge, since different choices of $\alpha$ may lead to different rankings. Experimentally, it has been seen that different centrality measures provide highly correlated rankings [1]. The ranking becomes more stable as $\alpha$ approaches its limits, i.e. as $\alpha \to 0$ and $\alpha \to \frac{1}{\lambda_1}$.

We will prove these correlations and the stability of the rankings; this relates the degree and eigenvector centralities to the Katz centrality.

Theorem 2. Let $G = (V, E)$ be an undirected connected network with adjacency matrix $A$, and let the Katz centrality $k$ be given by

$$k = (I - \alpha A)^{-1} e.$$

Then,

・ as $\alpha \to 0$, the ranking produced by $k$ converges to that produced by degree centrality, and

・ as $\alpha \to \frac{1}{\lambda_1}$, the ranking produced by $k$ converges to that produced by eigenvector centrality.

Proof. The Katz centrality $k$ is given as

$$k = (I - \alpha A)^{-1} e, \qquad (19)$$

which can be written as

$$k = (I - \alpha A)^{-1} e = \left(I + \alpha A + (\alpha A)^2 + (\alpha A)^3 + \cdots\right) e = e + \alpha A e + (\alpha A)^2 e + (\alpha A)^3 e + \cdots \qquad (20)$$

$$k = e + \alpha d + (\alpha A)^2 e + (\alpha A)^3 e + \cdots, \qquad (21)$$

where $d$ is the vector of the degree centralities of the nodes.

Consider the relation

$$\psi = \frac{1}{\alpha}(k - e). \qquad (22)$$

It is clear that the ranking produced by $\psi$ will be exactly the same as that produced by $k$, because the score of each node has been shifted and scaled in the same way. Thus,

$$\psi = \frac{1}{\alpha}(k - e) = \frac{1}{\alpha}\left(\alpha d + (\alpha A)^2 e + (\alpha A)^3 e + \cdots\right) = d + \alpha A^2 e + \alpha^2 A^3 e + \cdots. \qquad (23)$$

Then

$$\lim_{\alpha \to 0} \psi = d, \qquad (24)$$

where d is the vector of the degree centralities of the nodes.

Therefore, the ranking produced by the Katz centrality reduces to that produced by degree centrality.

To show the second relation, we write the column vector $e$ as

$$e = \sum_{i=1}^{n} \beta_i v_i, \qquad (25)$$

where the $\beta_i$ are constants and the $v_i$ are eigenvectors of the matrix $A$.

Then we can write the Katz centrality as

$$k = (I - \alpha A)^{-1} e = \sum_{k=0}^{\infty} (\alpha A)^k e = \sum_{k=0}^{\infty} (\alpha A)^k \sum_{i=1}^{n} \beta_i v_i = \sum_{k=0}^{\infty} \alpha^k \sum_{i=1}^{n} \beta_i A^k v_i = \sum_{k=0}^{\infty} \alpha^k \sum_{i=1}^{n} \beta_i \lambda_i^k v_i = \sum_{i=1}^{n} \frac{1}{1 - \alpha \lambda_i}\, \beta_i v_i \qquad (26)$$

$$k = \frac{1}{1 - \alpha \lambda_1}\, \beta_1 v_1 + \sum_{i=2}^{n} \frac{1}{1 - \alpha \lambda_i}\, \beta_i v_i = \frac{1}{1 - \alpha \lambda_1}\left(\beta_1 v_1 + \sum_{i=2}^{n} \frac{1 - \alpha \lambda_1}{1 - \alpha \lambda_i}\, \beta_i v_i\right). \qquad (27)$$

Consider the relation

$$\phi = (1 - \alpha \lambda_1)\, k. \qquad (28)$$

The ranking produced by $\phi$ is exactly the same as that produced by $k$, because the score of each node has been scaled in the same way. This implies that

$$\phi = \beta_1 v_1 + \sum_{i=2}^{n} \frac{1 - \alpha \lambda_1}{1 - \alpha \lambda_i}\, \beta_i v_i. \qquad (29)$$

Then

$$\lim_{\alpha \to \frac{1}{\lambda_1}} \phi = \beta_1 v_1. \qquad (30)$$

This implies that the limiting centralities are proportional to the principal eigenvector $v_1$ of the adjacency matrix. Thus, the ranking produced by the Katz centrality reduces to that produced by eigenvector centrality.
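The two limits in Theorem 2 can be checked numerically. The following sketch (with illustrative values of $\alpha$, not values from the paper) compares the Katz ranking with the degree and eigenvector rankings as $\alpha$ approaches $0$ and $1/\lambda_1$, using Kendall's $\tau$ as discussed in Section 8.1:

```python
import numpy as np
from scipy.stats import kendalltau

def katz(A, alpha):
    """Solve (I - alpha*A) k = e for the Katz centrality vector."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

lam, vecs = np.linalg.eigh(A)        # A symmetric (undirected network)
lam1 = lam[-1]                       # principal eigenvalue
d = A @ np.ones(A.shape[0])          # degree centralities
v1 = np.abs(vecs[:, -1])             # principal eigenvector (sign-fixed)

tau_deg, _ = kendalltau(katz(A, 1e-4 / lam1), d)     # expected close to 1
tau_vec, _ = kendalltau(katz(A, 0.999 / lam1), v1)   # expected close to 1
print(tau_deg, tau_vec)
```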

7. Matrix Functions

This section discusses some matrix functions developed using Taylor series.

Matrix functions have applications throughout applied mathematics and scientific computing. Matrix functions are used in various fields, for example, in control theory and electromagnetism and can also be used to study complex networks like social networks.

Let $f(z)$ be a complex-valued function of $z \in \mathbb{C}$ that is analytic in the disc $|z| < R$. Using Taylor's theorem, we can represent $f(z)$ as a convergent power series

$$f(z) = a_0 + a_1 z + a_2 z^2 + \cdots, \qquad (31)$$

for $|z| < R$, where the $a_k$, $k \ge 0$, are complex constants [12].

Let $A \in \mathbb{C}^{n \times n}$ be a complex matrix. Then we define the matrix function of $A$ as

$$f(A) = a_0 I + a_1 A + a_2 A^2 + \cdots. \qquad (32)$$

The matrix series in Equation (32) converges to the $n \times n$ matrix $f(A)$ if all $n^2$ scalar series that make up $f(A)$ converge. It turns out that the series for $f(A)$ converges if all eigenvalues of $A$ lie in the region of convergence of $f(z)$ in Equation (31), as the following theorem shows.

Theorem 3. Suppose that $f(z)$ has a power series representation

$$f(z) = \sum_{k=0}^{\infty} a_k z^k \qquad (33)$$

in an open disc $|z| < R$. Then the series

$$\sum_{k=0}^{\infty} a_k A^k \qquad (34)$$

converges if and only if the eigenvalues of $A$ lie in $|z| < R$ [12].

Proof. We prove this theorem only for diagonalisable matrices $A$; the general case uses the Jordan form of $A$.

Let $Q$ be a transformation matrix which diagonalises $A$. Then we can write

$$D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) = Q^{-1} A Q. \qquad (35)$$

Thus,

$$f(A) = Q\, \mathrm{diag}\big(f(\lambda_1), \ldots, f(\lambda_n)\big)\, Q^{-1} = Q\, \mathrm{diag}\!\left(\sum_{k=0}^{\infty} a_k \lambda_1^k, \ldots, \sum_{k=0}^{\infty} a_k \lambda_n^k\right) Q^{-1} = Q\left(\sum_{k=0}^{\infty} a_k D^k\right) Q^{-1} = \sum_{k=0}^{\infty} a_k \left(Q D Q^{-1}\right)^k = \sum_{k=0}^{\infty} a_k A^k. \qquad (36)$$

If $A$ has an eigenvalue with $|\lambda_i| \ge R$, then the series in Equation (33) diverges when evaluated at $\lambda_i$, and it follows that the series in Equation (34) also diverges. That is, if any eigenvalue of the matrix $A$ falls outside $|z| < R$, then the series in Equation (34) diverges. Therefore $f(A)$ converges if and only if $|\lambda_i| < R$ for all eigenvalues $\lambda_i$, $1 \le i \le n$, of the matrix $A$.

In general, if the function $f(z)$ can be expressed as a Taylor series converging in a disc $|z| < R$ which contains the eigenvalues of $A$, then $f(A)$ can be computed by substituting the matrix $A$ for the variable $z$ in the function $f(z)$. For instance,

$$f(z) = \frac{1+z}{1-z} \implies f(A) = (I - A)^{-1}(I + A).$$

The most important matrix functions which can be expressed by using the Taylor series are the following:

・ Exponential function

$$f(z) = e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{z^k}{k!} \qquad (37)$$

$$f(A) = e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{A^k}{k!} \qquad (38)$$

・ Cosine function

$$f(z) = \cos(z) = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \cdots = \sum_{k=0}^{\infty} (-1)^k \frac{z^{2k}}{(2k)!}$$

$$f(A) = \cos(A) = I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots = \sum_{k=0}^{\infty} (-1)^k \frac{A^{2k}}{(2k)!} \qquad (39)$$

・ Sine function

$$f(z) = \sin(z) = z - \frac{z^3}{3!} + \frac{z^5}{5!} - \cdots = \sum_{k=0}^{\infty} (-1)^k \frac{z^{2k+1}}{(2k+1)!}$$

$$f(A) = \sin(A) = A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots = \sum_{k=0}^{\infty} (-1)^k \frac{A^{2k+1}}{(2k+1)!}. \qquad (40)$$

・ Logarithmic function

$$f(z) = \log(1+z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \frac{z^4}{4} + \cdots = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{z^k}{k} \qquad (41)$$

$$f(A) = \log(I+A) = A - \frac{A^2}{2} + \frac{A^3}{3} - \frac{A^4}{4} + \cdots = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{A^k}{k}. \qquad (42)$$

・ Hyperbolic function

1) sinh function

$$f(z) = \sinh(z) = z + \frac{z^3}{3!} + \frac{z^5}{5!} + \cdots = \sum_{k=0}^{\infty} \frac{z^{2k+1}}{(2k+1)!}$$

$$f(A) = \sinh(A) = A + \frac{A^3}{3!} + \frac{A^5}{5!} + \cdots = \sum_{k=0}^{\infty} \frac{A^{2k+1}}{(2k+1)!}$$

2) cosh function

$$f(z) = \cosh(z) = 1 + \frac{z^2}{2!} + \frac{z^4}{4!} + \cdots = \sum_{k=0}^{\infty} \frac{z^{2k}}{(2k)!}$$

$$f(A) = \cosh(A) = I + \frac{A^2}{2!} + \frac{A^4}{4!} + \cdots = \sum_{k=0}^{\infty} \frac{A^{2k}}{(2k)!}.$$

Each of these functions can (in theory) be used to define a centrality measure on a network with adjacency matrix $A$. For example, to obtain the centralities of all the nodes we can compute $f(A)e$, where $e$ is the vector of ones, or $\mathrm{diag}(f(A))$.

We need to be careful with these raw centrality measures, as $f(A)$ may contain negative (or even complex) entries. For instance, when computing the logarithmic function we must take care of complex entries, since it is not possible to produce a ranking from complex numbers. To avoid them, we compute $\log(\gamma I + A)$, where $\gamma$ is a real constant chosen so that $\gamma I + A$ has positive eigenvalues. The constant $\gamma$ differs for different networks.
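A sketch of this shifted logarithmic centrality; the particular choice of `gamma` below is an assumption for illustration, since any value making $\gamma I + A$ positive definite will do:

```python
import numpy as np
from scipy.linalg import logm

lam_min = np.min(np.linalg.eigvalsh(A))   # A symmetric: real eigenvalues
gamma = 1.0 - lam_min                     # shift so gamma*I + A has eigenvalues >= 1
L = logm(gamma * np.eye(A.shape[0]) + A).real
log_centrality = np.diag(L)               # rank nodes by the diagonal entries
```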

We can also define centrality measures by applying analytic continuations of f ( z ) outside its radius of convergence.

Recall that, if the attenuation factor satisfies $\alpha < \frac{1}{\lambda_1}$, where $\lambda_1$ is the principal eigenvalue of $A$, then

$$\sum_{k=0}^{\infty} \alpha^k A^k = (I - \alpha A)^{-1}.$$

But

$$f(A) = (I - \alpha A)^{-1}$$

is also defined for $|\alpha| \ge \frac{1}{\lambda_1}$, as long as $\alpha \ne \frac{1}{\lambda_i}$ for every eigenvalue $\lambda_i$ of $A$. We can therefore generalise Katz centrality by the following definition:

$$k = \mathrm{diag}\left((I - \alpha A)^{-1}\right) \quad \text{or} \quad k = (I - \alpha A)^{-1} e, \quad \text{with } \alpha > \frac{1}{\rho(A)},$$

where $e$ is the column vector of ones and $\rho(A)$ is the spectral radius of $A$.
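In code, the generalised Katz centrality is the same linear-algebra computation with $\alpha$ taken beyond $1/\lambda_1$; a sketch (the factor 10 is an illustrative choice):

```python
import numpy as np

def generalized_katz(A, alpha):
    """Diagonal of (I - alpha*A)^{-1}; valid whenever 1/alpha is not an eigenvalue of A."""
    n = A.shape[0]
    R = np.linalg.inv(np.eye(n) - alpha * A)
    return np.diag(R)

lam1 = np.max(np.linalg.eigvalsh(A))      # spectral radius for symmetric A
k_gen = generalized_katz(A, 10.0 / lam1)  # alpha well beyond 1/lambda_1
```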

To determine which of the matrix functions can be used to assess centrality in a network, we carry out experimental work on a variety of networks in the following section. The experiments compare the rankings based on the common centrality measures discussed in Section 5 with the rankings based on these matrix functions.

8. Experimental Work and Discussion

In this section we aim to analyse experimentally the agreement between the centrality measures discussed in Section 5, and to establish whether the matrix functions discussed in Section 7 can be used to determine the important nodes in a network. The experimental work compares the matrix functions with the common centrality measures.

A variety of techniques can be used to compute the centrality measures discussed in Section 5 and the matrix functions. To compute the exponential of a matrix, the logarithm of a matrix and the other matrix functions we use the SciPy matrix-function routines [14]. For our new measures involving matrix functions and generalisations of Katz centrality, we calculate centralities using the diagonal entries of these functions. We use the Kendall correlation coefficient in our experiments to compare the agreement between centrality measures.

8.1. Correlation (Kendall, Pearson, Spearman) Coefficient

Correlation is a bivariate analysis that measures the strength of association between two variables. The value of a correlation coefficient varies between 1 and −1. A positive correlation signifies that the ranks of both variables increase together, while a negative correlation signifies that as the rank of one variable increases, the rank of the other decreases. The correlation between the two variables is a perfect association when the coefficient equals ±1 [15]. The closer the value of the correlation coefficient is to 1 or −1, the stronger the relationship between the two variables; as the value approaches 0, the relationship becomes weaker. In statistics, the strength of association is usually measured by the Pearson correlation, the Kendall rank correlation or the Spearman correlation.

The Kendall coefficient of correlation measures the degree of correspondence between two sets of ranks given to the same set of objects. It can be interpreted as the difference between the probability of the objects being in the same order under both rankings and the probability of their being in a different order [2].

Let $X$ and $Y$ be two random variables with joint observations $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, where the values $x_i$ and the values $y_i$ are all distinct.

・ A pair of observations $(x_i, y_i)$ and $(x_j, y_j)$ is said to be concordant if

$$(x_i - x_j)(y_i - y_j) > 0,$$

which implies that either $x_i > x_j$ and $y_i > y_j$, or $x_i < x_j$ and $y_i < y_j$.

・ The pair is said to be discordant if

$$(x_i - x_j)(y_i - y_j) < 0,$$

which implies that either $x_i > x_j$ and $y_i < y_j$, or $x_i < x_j$ and $y_i > y_j$.

・ The pair is neither concordant nor discordant if $x_i = x_j$ or $y_i = y_j$.

The Kendall ($\tau$) correlation coefficient is defined as

$$\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\frac{1}{2}\, n (n-1)}.$$

The Kendall rank coefficient can be interpreted as follows: values of $\tau$ greater than zero indicate an agreement, and values close to one indicate a strong agreement. Values less than zero indicate a disagreement, and values close to negative one a strong disagreement [2]. Indeed, if all pairs are concordant then $\tau = 1$, which implies that the variables are in exactly the same ranking (order); if all pairs are discordant then $\tau = -1$, which implies that the variables are in exactly opposite rankings.
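SciPy implements this coefficient directly (handling ties via the tau-b variant). A small sketch using, as example values, the degree centralities and total distances of Network-1 from Section 5:

```python
from scipy.stats import kendalltau

x = [6, 3, 3, 4, 2, 3, 4, 2, 4, 3, 2, 2]                # degree centralities
y = [20, 20, 24, 29, 38, 30, 23, 29, 27, 28, 29, 29]    # total distances De

tau, p_value = kendalltau(x, y)
print(tau)   # negative here: high degree tends to go with small total distance
```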

The Pearson correlation measures the degree of linear relationship between two variables and is denoted by $r$. Pearson correlation essentially fits a line of best fit through the data of the two variables, and $r$ indicates how far the data points lie from this line.

The following formula is used to calculate the Pearson r correlation:

$$r = \frac{N \sum xy - \sum x \sum y}{\sqrt{\left(N \sum x^2 - \left(\sum x\right)^2\right)\left(N \sum y^2 - \left(\sum y\right)^2\right)}} \qquad (43)$$

where:

・ $r$ = Pearson correlation coefficient;

・ $N$ = number of observations in each data set;

・ $\sum xy$ = sum of the products of paired scores;

・ $\sum x$ = sum of the scores of variable $x$;

・ $\sum y$ = sum of the scores of variable $y$;

・ $\sum x^2$ = sum of the squared scores of variable $x$;

・ $\sum y^2$ = sum of the squared scores of variable $y$.

To use the Pearson correlation $r$, the two variables must be measured on an interval or ratio scale; however, they do not need to be measured on the same scale (for instance, one variable can be ratio and the other interval). We cannot use the Pearson correlation for ordinal data; instead we use Spearman's rank correlation or Kendall's correlation.

The Spearman rank correlation is the nonparametric version of the Pearson correlation coefficient, used to measure the degree of association between two continuous or ordinal variables.

The following formula is used to calculate the Spearman rank correlation:

$$\rho = 1 - \frac{6 \sum D_i^2}{N(N^2 - 1)}, \qquad (44)$$

where $D_i = R(x_i) - R(y_i)$ is the difference between the two ranks of each observation and $N$ is the number of observations.

We use the Spearman correlation coefficient when the relationship between variables is not linear.

Although both the Spearman and Kendall correlations measure monotonic relationships and have nice interpretations, in this paper we opt to use the Kendall correlation coefficient for the following reasons [16] [17]:

・ The distribution of Kendall's $\tau$ has better statistical properties.

・ The interpretation of Kendall's $\tau$ in terms of the probabilities of observing agreeable (concordant) and non-agreeable (discordant) pairs is very direct.

・ The Kendall correlation has a smaller gross error sensitivity (GES), making it more robust, and a smaller asymptotic variance (AV), making it more efficient, although computing it naively has $O(n^2)$ complexity compared with the $O(n \log n)$ of the Spearman correlation, where $n$ is the sample size.

8.2. Network Models

Networks have been studied for many years; graph theorists and mathematicians have long tried to make sense of complex networks. Out of this grew random network theory, in which the nodes of a graph are connected to each other by randomly placed links. In this paper we consider three significant network models (a code sketch for generating all three follows this list):

・ Erdös-Rényi model: these networks are formed by completely random interactions between the nodes. Each node chooses its neighbours at random, constrained either by an overall number of relationships that may be assigned in the graph, or by a probability of connecting to a given neighbour [18]. Mathematically, the degree distribution of such a network follows a Poisson distribution: the vast majority of nodes have roughly the same number of links and it is almost impossible to find outliers.

・ Barabási-Albert model: these are scale-free networks, formed by two simple mechanisms: growth and preferential attachment. The main prediction a scale-free network makes is the presence of a few outlier nodes with very many connections, also known as hubs. Preferential attachment is a probabilistic mechanism in which a new node is free to connect to any node in the network, whether it is a hub or has a single link [18] [19].

・ Watts-Strogatz model: this model is important because it shows how the “small-world effect” in networks can coexist with other commonly observed features of social networks, such as a high clustering coefficient. More specifically, the model shows how adding a small fraction of random long-range links to an otherwise regular network leads to slow, logarithmic scaling of the typical distance between nodes with network size [20].
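All three generators are available in NetworkX; the following sketch shows how the random networks used in the experiments below can be produced (the parameter values here are illustrative, not the exact settings of the experiments):

```python
import networkx as nx

n = 200
G_er = nx.erdos_renyi_graph(n, p=0.05)          # Erdos-Renyi: edge probability p
G_ba = nx.barabasi_albert_graph(n, m=5)         # Barabasi-Albert: m links per new node
G_ws = nx.watts_strogatz_graph(n, k=10, p=0.1)  # Watts-Strogatz: ring of k neighbours

A = nx.to_numpy_array(G_ba)   # adjacency matrix, usable with the measures above
```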

8.2.1. First Experiment

We begin our experiments with a small network of 20 nodes. The network was generated at random in a text editor and drawn using Sage. The aim is to determine which of the matrix functions give rankings similar to the common centrality measures. We have many functions to choose from and we want to limit our choice; note that we will not consider the exponential function, since it reproduces subgraph centrality.

The experiment shows that the diagonal entries of the logarithmic and cosine functions rank the nodes in reverse order compared with the other rankings, and that the sine function does not match any other centrality measure. The network in Figure 10, having 20 nodes and 42 edges, gives us a realistic picture of node ranking.

The rankings of the nodes obtained for the graph in Figure 10 by using different centrality measures including matrix functions are shown in Table 1.

Table 1. Rankings of nodes using centrality measures and matrix functions for the network in Figure 10.

Figure 10. Network with 20 nodes and 42 edges.

Note that the ranking is from the most to the least important/central node with respect to the centrality measure used.

To avoid making many Kendall-coefficient comparisons between the common centrality measures and those produced by matrix functions, we choose one representative among the common centrality measures. To do this we need to check that the chosen centrality measure agrees with the other centrality measures. Here we compare closeness centrality (CC) with degree centrality (DC), eigenvector centrality (EC), Katz centrality (KC) and subgraph centrality (SC) for the graph in Figure 10.

Table 2 shows that there is an agreement between closeness centrality and other centrality measures.

The graph in Figure 11 likewise shows the agreement between closeness centrality and the other centrality measures.

In Table 1, we have to adjust the cosine and logarithmic functions so that they match the other rankings; the simplest way of doing this is to reverse the order of their rankings.

To be more confident about the rankings of nodes produced by matrix functions, we use the Kendall correlation coefficient to compare closeness centrality with the matrix functions. We chose closeness centrality among the standard centrality measures for comparison with matrix functions inasmuch as it takes into account the neighbours and the neighbours of neighbours of a node when determining its centrality. In the comparisons, we denote by $\tau(\mathrm{CC}, f)$ the Kendall coefficient between closeness centrality and the centrality measure induced by $f(A)$.

In Table 3, we reversed the rankings given by the cosine and the logarithmic functions before calculating the Kendall coefficients. We observe in Table 3 that the Kendall coefficients $\tau(\mathrm{CC}, \log)$, $\tau(\mathrm{CC}, \cosh)$, $\tau(\mathrm{CC}, \sinh)$ and $\tau(\mathrm{CC}, \cos)$ are all positive. This implies the agreement of these matrix functions with the

Table 2. Kendall coefficients between closeness centrality and other centrality measure applied to graph in Figure 10.

Table 3. Kendall coefficients between closeness centrality and matrix functions applied to graph in Figure 10.

Figure 11. Agreement.

closeness centrality measure, and the agreement is quite strong for the logarithmic, cosh and cosine functions, since their Kendall coefficients are close to 1. On the other hand, the Kendall coefficient between closeness centrality and the sine function is negative, which implies that there is no agreement between their rankings.

8.2.2. Second Experiment

We compare the agreement of the centrality measures on 10 random networks generated using the Barabási-Albert preferential attachment model.

The Barabási-Albert model is a simple scale-free random graph generator. The network begins with an initial set of $m_0 \ge 2$ nodes; the degree of each node in the initial network should be at least 1, otherwise the network will always end up disconnected. At each step, a new node is created and connected to $m \le m_0$ existing nodes with a probability proportional to the number of links the existing nodes already have. To use this method we specify the number of nodes in the network ($n$) and the number of links each new node forms as it appears ($m$), in such a manner that nodes with higher degree have a higher chance of being selected for attachment [21].

The comparisons involve rankings of nodes by closeness centrality, degree centrality, eigenvector centrality, Katz centrality and subgraph centrality. We use $k = (I - \alpha A)^{-1} e$ in this experiment to compute the Katz centrality, in all cases taking $\alpha < \frac{1}{\lambda_1}$. For each choice of $n$ and $m$, we generate 5 networks and record the mean values of the Kendall coefficients. We denote the Kendall correlation coefficients by $\tau_i$ according to Table 4.

It is evident from Table 5 that all Kendall coefficients are positive, which indicates an agreement between the rankings. The Kendall coefficients show that

Table 4. The notations of Kendall correlation coefficients.

Table 5. Kendall coefficients for centrality measures applied to different random networks generated by using the Barabási-Albert method.

n: Number of nodes in the network. m: Number of neighbours to attach to each new node.

closeness centrality is highly correlated with the centrality measures corresponding to $\tau_2$, $\tau_3$ and $\tau_4$. The eigenvector, Katz and subgraph centralities are also highly related, as indicated by $\tau_8$, $\tau_9$ and $\tau_{10}$. The experiment shows that the agreement becomes stronger as the network becomes more connected. In general, for sufficiently dense (i.e., highly connected) networks, the measures provide almost identical rankings, producing Kendall correlation coefficients close to 1.

8.2.3. Third Experiment

We generate 10 random networks as in the second experiment, but this time we fix the value of $n$ at 200 and vary $m$. We calculate the Katz centrality $k = (I - \alpha_i A)^{-1} e$ for each network using different choices of $\alpha$:

$$\alpha_1 = \frac{0.1}{\lambda_{max}} \text{ for } k_1, \quad \alpha_2 = \frac{0.8}{\lambda_{max}} \text{ for } k_2, \quad \alpha_3 = \frac{1.5}{\lambda_{max}} \text{ for } k_3, \quad \alpha_4 = \frac{10}{\lambda_{max}} \text{ for } k_4.$$

We calculate the Kendall correlation coefficients $\tau(k_i, k_j)$ between all pairs of measures, as denoted in Table 6.

Note that the Kendall coefficients involving $k_3$ and $k_4$ are computed by taking the absolute values of the entries of the inverse, that is, $|(I - \alpha A)^{-1}|\, e$; the other coefficients are computed using the usual formula.

We observe in Table 7 that the Kendall coefficient between $k_3$ and $k_4$ is exactly 1, and that the coefficients between $k_1$ and each of $k_3$ and $k_4$, and between $k_2$ and each of $k_3$ and $k_4$, are the same. Since $k_3$ and $k_4$ correspond to the generalised Katz centrality (i.e. $\alpha \ge \frac{1}{\lambda_{max}}$), we conjecture that the rankings provided by the generalised Katz centrality are always the same regardless of the choice of $\alpha$, provided $\alpha \ge \frac{1}{\lambda_1}$.

8.2.4. Fourth Experiment

We repeat the second experiment, generating 10 random networks with different numbers of nodes by using the Erdös-Rényi method. We use the same notation for the Kendall coefficients as in the second experiment; see Table 4.

The Erdös-Rényi model generates random networks in which edges are placed between pairs of nodes with equal probability. The model can be used to prove

Table 6. The notations of Kendall correlation coefficients.

Table 7. Kendall coefficients for generalized Katz centrality applied to random networks generated by using the Barabási-Albert method.

the existence of networks satisfying various properties; it can also be used to provide a rigorous definition of what it means for a property to hold for almost all networks [22]. To generate random networks using the Erdös-Rényi model, we need to specify two parameters: the number of nodes in the network, denoted by $n$, and the probability $p$ that a link is formed between any two nodes [22].

The Kendall coefficients in Table 8 are all positive and close to 1. In terms of Kendall coefficients, the rankings of nodes given by the centrality measures for random networks generated with the Erdös-Rényi method are highly correlated, which implies a strong agreement between the rankings.

We then repeat the third experiment, this time generating 10 random networks with 200 nodes using the Erdös-Rényi method, with the same definition of Katz centrality and the same notation as in Table 6.

Table 9 shows that there is a high agreement between the rankings of $k_1$ and $k_2$. On the other hand, we observe that the rankings of the nodes produced by the Katz centrality when $\alpha < \frac{1}{\lambda_1}$ and those produced by the generalised Katz centrality when $\alpha \ge \frac{1}{\lambda_1}$ disagree, as we see from Table 9 that the Kendall coefficients $\tau_{k_1 k_3}$ and $\tau_{k_2 k_3}$ are approximately zero.

8.2.5. Fifth Experiment

We repeat the second experiment, now with 10 random networks generated by the Watts-Strogatz method. The same notation is used for the Kendall coefficients as in the second experiment; see Table 4.

Table 8. Kendall coefficients for centrality measures applied to different random networks generated by using the Erdös-Rényi method.

n: Number of nodes in the network. p: Probability of edge creation.

Table 9. Kendall coefficients for generalised Katz centrality applied to 10 random networks with 200 nodes generated by using the Erdös-Rényi method.

n: Number of nodes in the network. p: Probability of edge creation.

The Watts-Strogatz model was developed as a way to impose a high clustering coefficient onto classical random graphs, and it produces networks with the small-world property. To generate these networks we use watts_strogatz_graph(n, k, p) in Sage. Here $n$ denotes the number of nodes in the network, which are arranged in a ring and connected to their $k$ nearest neighbours in the ring. Each node is then considered independently and, with probability $p$, a link is added between the node and one of the other nodes in the network, chosen uniformly at random, in accordance with the experiments detailed in [1].

In our experiment we varied $n$, $k$ and $p$, and in each case five networks were created. The averages of the Kendall coefficients over these 5 networks for the different centrality measures are given in Table 10; the coefficients are computed for the complete set of rankings. They show that the agreement between the centrality measures is much weaker than for the networks produced by the Barabási-Albert and Erdös-Rényi methods. The experiment also shows that as the network becomes denser, the correlation between measures becomes stronger.

We then repeat the third experiment using the Watts-Strogatz method, with the same definition of Katz centrality and the same notation as in Table 6.

In Table 11 we see a high agreement between the rankings of $k_1$ and $k_2$, and the rankings of $k_3$ and $k_4$ are exactly the same. We also see the same pattern as in Table 9, so the same conclusions apply.

8.2.6. Sixth Experiment

The aim of this experiment is to use the three methods (Barabási-Albert, Erdös-Rényi and Watts-Strogatz) for generating random networks and to use

Table 10. Kendall coefficients for centrality measures applied to random networks generated by using the Watts-Strogatz method.

n: Number of nodes in the network. k: Each node is connected to k nearest neighbours. p: Probability of rewiring each edge.

Table 11. Kendall coefficients for generalised Katz centrality applied to 10 random networks with 200 nodes generated by using the Watts-Strogatz method.

the Kendall correlation coefficient to see whether there is agreement between closeness centrality and matrix functions such as the logarithmic, cosh, sinh, cosine and sine functions. Using each method, we generate 10 random networks; for each network configuration, five networks are created, and the reported Kendall coefficient is the average over these five networks. The aim is to see whether the pattern observed in our first experiment is repeated.
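A minimal sketch of one such comparison (scipy.linalg provides the matrix functions; the scaling log(I + αA) with α < 1/λ₁ is our assumption to keep the matrix logarithm well defined, and may differ from the paper's exact definition):

```python
import networkx as nx
import numpy as np
from scipy.linalg import coshm, logm
from scipy.stats import kendalltau

G = nx.barabasi_albert_graph(200, 5, seed=1)
A = nx.to_numpy_array(G)
nodes = list(G)
lam1 = max(np.linalg.eigvalsh(A))

# Diagonal entries of matrix functions as centrality scores.
# log(I + alpha*A) with alpha < 1/lam1 keeps the spectrum of the
# argument positive, so the principal logarithm is real; this
# scaling is one plausible choice, not necessarily the paper's.
alpha = 0.9 / lam1
log_c = np.diag(logm(np.eye(len(A)) + alpha * A))
cosh_c = np.diag(coshm(A))

close = [nx.closeness_centrality(G)[v] for v in nodes]
tau_log, _ = kendalltau(log_c, close)
tau_cosh, _ = kendalltau(cosh_c, close)
print(tau_log, tau_cosh)
```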

Table 12 shows that there is an agreement between closeness centrality and matrix functions (logarithmic, cosh and sinh) in their ranking measures for

Table 12. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Barabási-Albert method.

networks generated by using the Barabási-Albert method. On the other hand, the agreement between closeness centrality and other matrix functions such as cosine and sine is weak.

We now generate 10 random networks by using the Erdös-Rényi and Watts-Strogatz methods.

Table 13 shows that, for networks generated by using the Erdös-Rényi method, the rankings given by closeness centrality agree with those given by the logarithmic, cosh and sinh matrix functions.

Table 14 shows that the agreement between closeness centrality and the matrix functions (cosh, sinh, cosine and sine) is weak in many cases. When we use the Watts-Strogatz method to generate the networks, the logarithmic function gives a ranking similar to closeness centrality when m ≥ 10.

In general, the agreement between closeness centrality and the hyperbolic functions (cosh and sinh) is not strong for networks generated by using the Watts-Strogatz method. The agreement between closeness centrality and the cosine function, as well as the sine function, is weak for networks generated by all three methods (Barabási-Albert, Erdös-Rényi and Watts-Strogatz). Among the tested matrix functions, the logarithmic function agrees best in ranking with the other centrality measures.

8.2.7. Real-World Network Experiments

In this experiment, we study the Kendall correlation coefficient for real-world networks. The networks in this experiment come from a variety of sources: some of the data were obtained from the Gephi sample datasets [23] and others from the Pajek data sets [24]. We compare only some of the centrality measures (closeness, subgraph and Katz) and the logarithmic matrix function. Note that we are not interested in the meaning of each node within the

Table 13. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Erdös-Rényi method.

Table 14. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Watts-Strogatz method.

network; what we really want to know is whether there is agreement between these centrality measures in real-world networks. We chose the logarithmic function over the other matrix functions since it showed high agreement with the other centralities when applied to random networks.
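A sketch of such a comparison for a network stored in Pajek format (the file name below is a hypothetical placeholder; networkx can read such files with read_pajek):

```python
import networkx as nx
from scipy.stats import kendalltau

# 'network.net' is a placeholder for any of the Pajek files
# available from [24].
G = nx.Graph(nx.read_pajek("network.net"))
nodes = list(G)

close = [nx.closeness_centrality(G)[v] for v in nodes]
sub = [nx.subgraph_centrality(G)[v] for v in nodes]

tau, _ = kendalltau(close, sub)
print(tau)
```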

We can clearly see from Table 15 that all Kendall coefficients are positive. This implies that there is agreement between the rankings of the nodes. We also observe that the agreement between the centrality measures (closeness, Katz, subgraph) and the logarithmic function is high, irrespective of the connectivity of the underlying network. In general, we can say that the logarithmic function is the best of the matrix functions tried and can be used as a centrality measure, since it gives rankings similar to those of the other centrality measures.

Table 15. Kendall coefficients for the logarithmic function and some standard centrality measures as applied to 10 real-world networks.

9. Conclusions

In this work we examined centrality measures such as closeness, degree, eigenvector, Katz and subgraph centrality. We showed the relationship between Katz centrality and both eigenvector and degree centrality. We developed our notion of a centrality measure by considering rankings of nodes based on matrix functions such as the logarithmic, cosine, sine and hyperbolic functions, and on the generalised Katz centrality. We showed experimentally, using various classes of graphs, that the rankings of the nodes given by closeness, degree, eigenvector, Katz and subgraph centrality are highly correlated. Moreover, we showed experimentally that the rankings of nodes given by different choices of the attenuation factor α for the generalised Katz centrality, with α ≥ 1/λ₁, where λ₁ is the principal eigenvalue of the network adjacency matrix A, are exactly the same. In terms of matrix functions, the experiments show that there is high agreement between the rankings of nodes given by the logarithmic function and the other common centrality measures discussed in Section 5.

Similar results were found to hold for real-world networks: the rankings given by the logarithmic function and those given by closeness, Katz and subgraph centrality are highly correlated, irrespective of the connectivity of the network. In general, we concluded that, of the matrix functions we considered, the logarithmic function is the best and can be used as a centrality measure.

In this work, we considered only the diagonal entries of the matrix functions, with some modifications to the calculation of their Kendall correlation coefficients. We also found that the logarithmic function gives a relatively good ranking compared to the rankings given by the other centrality measures. In future work, one could consider the row sums of these matrix functions, with or without modification, and examine whether they give rankings similar to those of other centrality measures. The paper did not analyse the significance and uses of centrality measures based on matrix functions; this is left for future work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Benzi, M. and Klymko, C. (2013) Total Communicability as a Centrality Measure. Journal of Complex Networks, 1, 124-149.
https://doi.org/10.1093/comnet/cnt007
[2] Kendall, M.G. (1938) A New Measure of Rank Correlation. Biometrika, 30, 81-93.
https://doi.org/10.1093/biomet/30.1-2.81
[3] Bavelas, A. (1948) A Mathematical Model for Group Structures. Human Organization, 7, 16-30. https://doi.org/10.17730/humo.7.3.f4033344851gl053
[4] Bavelas, A. and Barrett, D. (1951) An Experimental Approach to Organizational Communication. American Management Association, New York, 57-62.
[5] Cohn, B.S. and Marriott, M. (1958) Networks and Centres of Integration in Indian Civilization. Journal of Social Research, 1, 1-9.
[6] Pitts, F.R. (1965) A Graph Theoretic Approach to Historical Geography. The Professional Geographer, 17, 15-20.
https://doi.org/10.1111/j.0033-0124.1965.015_m.x
[7] Czepiel, J.A. (1974) Word-of-Mouth Processes in the Diffusion of a Major Technological Innovation. Journal of Marketing Research, 11, 172-180.
https://doi.org/10.2307/3150555
[8] Bolland, J.M. (1988) Sorting out Centrality: An Analysis of the Performance of Four Centrality Models in Real and Simulated Networks. Social Networks, 10, 233-253.
https://doi.org/10.1016/0378-8733(88)90014-7
[9] Borgatti, S.P. and Everett, M.G. (2006) A Graph-Theoretic Perspective on Centrality. Social Networks, 28, 466-484.
https://doi.org/10.1016/j.socnet.2005.11.005
[10] Landherr, A., Friedl, B. and Heidemann, J. (2010) A Critical Review of Centrality Measures in Social Networks. Business & Information Systems Engineering, 2, 371-385.
https://doi.org/10.1007/s12599-010-0127-3
[11] Estrada, E. (2012) The Structure of Complex Networks: Theory and Applications. Oxford University Press, Oxford.
[12] Higham, N.J. (2008) Functions of Matrices: Theory and Computation. SIAM, Philadelphia.
https://doi.org/10.1137/1.9780898717778
[13] Boldi, P. and Vigna, S. (2014) Axioms for Centrality. Internet Mathematics, 10, 222-262.
https://doi.org/10.1080/15427951.2013.865686
[14] Higham, N.J. and Deadman, E. (2016) A Catalogue of Software for Matrix Functions. Version 2.0.
[15] Bethea, R.M. (2018) Statistical Methods for Engineers and Scientists. Routledge, Abingdon-on-Thames.
[16] Fredricks, G.A. and Nelsen, R.B. (2007) On the Relationship between Spearman’s Rho and Kendall’s Tau for Pairs of Continuous Random Variables. Journal of Statistical Planning and Inference, 137, 2143-2150.
https://doi.org/10.1016/j.jspi.2006.06.045
[17] Xu, W., Hou, Y., Hung, Y.S. and Zou, Y. (2010) Comparison of Spearman’s Rho and Kendall’s Tau in Normal and Contaminated Normal Models.
[18] Prettejohn, B.J., Berryman, M.J. and McDonnell, M.D. (2011) Methods for Generating Complex Networks with Selected Structural Properties for Simulations: A Review and Tutorial for Neuroscientists. Frontiers in Computational Neuroscience, 5, 11. https://doi.org/10.3389/fncom.2011.00011
[19] Bayati, M., Kim, J.H. and Saberi, A. (2010) A Sequential Algorithm for Generating Random Graphs. Algorithmica, 58, 860-910.
https://doi.org/10.1007/s00453-009-9340-1
[20] Newman, M.E.J. (2010) Networks: An Introduction. Oxford University Press, Oxford.
[21] Albert, R. and Barabási, A.L. (2002) Statistical Mechanics of Complex Networks. Reviews of Modern Physics, 74, 47-97.
https://doi.org/10.1103/RevModPhys.74.47
[22] Newman, M.E., Strogatz, S.H. and Watts, D.J. (2001) Random Graphs with Arbitrary Degree Distributions and Their Applications. Physical Review E, 64, Article ID: 026118.
https://doi.org/10.1103/PhysRevE.64.026118
[23] Gephi Wiki: Datasets.
https://wiki.gephi.org/index.php/Datasets
[24] Old Pajek Data Sets.
http://pajek.imfm.si/doku.php?id=data:pajek:vlado

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.