Centrality Measures Based on Matrix Functions

Networks arise naturally in a wide range of contexts, such as biological systems, social relationships and various technological settings. Investigating the dynamic phenomena taking place in a network, determining the structure of the network and its communities, and describing the interactions between its elements are key issues in network analysis. A major challenge in the study of large networks is the identification of the node(s) with an outstanding structural position within the network. The most popular way of doing this is to compute a measure of centrality. We examine node centrality measures such as degree, closeness, eigenvector, Katz and subgraph centrality for undirected networks, and we show how Katz centrality can be turned into degree and eigenvector centrality by considering limiting cases. Some existing centrality measures are linked to matrix functions. We extend this idea and examine centrality measures based on general matrix functions, in particular the logarithmic, cosine, sine, and hyperbolic functions. We also explore the concept of generalised Katz centrality. Various experiments are conducted on networks generated by random graph models. The results show that the logarithmic function in particular has potential as a centrality measure, and similar results were obtained for real-world networks.


Introduction
Since its introduction by Euler in the eighteenth century, graph theory has found important applications in many different scientific fields. Graphs and linear algebra are now used to study networks throughout science and technology. These networks include protein-protein interaction networks, social networks, food webs, scientific collaboration networks, metabolic networks, lexical or semantic networks, neural networks, the World Wide Web and others. Network analysis is used in a variety of situations: determining network structure and communities, describing the interactions between various elements of the network, and investigating the dynamic phenomena taking place in the network [1].
One of the foundational questions in network analysis is how to determine the "most important" nodes in a given network. Many centrality measures have been proposed, starting with the simplest of all, node degree centrality. This measure has been considered too "local", as it does not take into account the connectivity of the immediate neighbours of the node under consideration. A number of centrality measures have therefore been introduced that take into account the global connectivity properties of the network. These include various types of eigenvector centrality for both directed and undirected networks, Katz centrality, subgraph centrality and PageRank centrality [1]. Centrality scores provide rankings of the nodes in a network: the higher the ranking of a node, the more important the node is believed to be within the network. There are many different ranking methods in use, and many algorithms have been developed to compute these rankings.
The purpose of this paper is to discuss some of the centrality measures, to analyse the relationship between degree centrality, eigenvector centrality and Katz centrality, and to discuss measures of centrality based on matrix functions, including the logarithmic, sine, cosine, exponential and hyperbolic functions. The main aim is to determine which of the matrix functions are highly correlated with the "standard" centrality measures. We will use the Kendall correlation coefficient [2] in the experimental work to determine the correlations.

Literature Review
Bavelas [3] introduced the application of centrality to human communication, measuring communication within a small group in terms of the relationship between structural centrality and influence in group processes. Afterwards, applications of centrality were pursued under the direction of Bavelas at the Group Networks Laboratory, M.I.T., in the late 1940s. Leavitt in 1949 and Smith in 1950 conducted studies of centrality measures, on which Bavelas in 1950 and Bavelas and Barrett in 1951 reported. These experiments all concluded that centrality was related to group efficiency in problem-solving and agreed with the subjective perception of leadership [4].
Various centrality measures were then explored in various contexts in the following decades. Cohn and Marriott in 1958 attempted to use centrality to understand political integration in Indian social life [5]. Pitts examined the consequences of centrality in communication paths for urban development [6]. Later, Czepiel used the concept of centrality to explain the pattern of diffusion of a technological innovation in the steel industry [7].
More recently, Bolland analysed the stability of degree (DC), closeness (CC), betweenness (BC) and eigenvector (EC) centrality under random and systematic variations of network structure, and found that betweenness centrality changes significantly with the variation of the network structure, while degree and closeness centrality are usually stable. He also found that eigenvector centrality is the most stable of all the indices analysed [8]. Borgatti and Frantz extended the studies on the stability of centrality indices by considering the addition and deletion of nodes and links [9], as well as by differentiating several types of network topology such as uniformly random, small-world, core-periphery, scale-free and cellular. Landherr critically reviewed the role of centrality measures in social networks [10]. Estrada analysed examples of how particular centrality measures are applied in social networks [11].
Benzi and Klymko analysed centrality measures such as degree, eigenvector, Katz and subgraph centrality for both undirected and directed networks. They measured the local and global influence of a given node in the network by counting walks of different lengths through that node. They analysed the relationship between centrality measures based on the diagonal entries and row sums of the matrix exponential and the resolvent of the adjacency matrix on the one hand, and degree and eigenvector centrality on the other. They showed experimentally that the rankings produced by exponential subgraph centrality, total communicability and resolvent subgraph centrality converge to those produced by degree centrality [1].
Most of the centrality measures considered so far are combinatorial in nature and based on the discrete structure of the underlying networks. We can extend these studies by defining centrality measures using spectral techniques from linear algebra. Benzi and Klymko considered the diagonal entries of the matrix exponential and of the resolvent (I − αA)^{−1} underlying Katz centrality, where α > 0 and A is the adjacency matrix of the network [1]. However, none of the previous studies considered other matrix functions, such as the logarithmic, cosine, sine and hyperbolic functions, or the generalized Katz centrality, as centrality measures. In this work we develop notions of centrality based on matrix functions, and we use the Kendall correlation coefficient [2] to determine the agreement between the node rankings produced by these matrix functions and those produced by the standard centrality measures.

Elements of Graph
Graphs are discrete structures which consist of vertices connected by edges. A graph can be written as G = (V, E), where V(G) is a non-empty set of vertices (also called nodes) and E(G) is a set of edges. Each edge of the graph has two vertices associated with it, called its endpoints. An edge starting and ending at the same vertex is called a loop.
• Graphs can be represented by using points for the nodes (vertices) and joining them with line segments for the edges. We write uv to denote an edge between nodes u and v.
• We can assign numerical values to the edges of a graph, in which case the graph is referred to as weighted. In an unweighted graph we assign to every edge the value 1.
• If the edges of the graph are directed (Figure 1), then the graph is called a directed graph or digraph; otherwise it is called an undirected graph. An undirected, unweighted graph (Figure 2) without loops is simple if no two edges connect the same pair of vertices.
• For undirected graphs, if there are multiple edges between a pair of nodes then the graph is called a multigraph or pseudograph. In digraphs we can have two edges connecting the same pair of vertices, one in each direction.

Basic Graph-Theoretic Terminology
If uv ∈ E is an edge in an undirected graph G, then nodes u and v are incident to the edge uv and we say that u and v are adjacent (neighbours) in G. It follows that u and v are the endpoints of the edge uv. If G is a directed graph and uv ∈ E, then u is said to be adjacent to v and v is said to be adjacent from u. We call u the initial node of uv and v the terminal or end node of uv. For an undirected graph G, the degree d_i of a node v_i is the number of edges incident with v_i. For a directed graph, the in-degree of v_i is the number of edges with v_i as their terminal node, and the out-degree of v_i is the total number of edges with v_i as their initial node. Loops contribute 1 to both the in-degree and out-degree.
A graph H = (W, F) is a subgraph of G = (V, E) if W ⊆ V and F ⊆ E. If H contains all edges of G that join two vertices in W, then we say that H is the subgraph induced or spanned by W.
The subgraphs of G = (V, E) can be obtained by deleting edges and vertices of G. We denote by G − W, with W ⊂ V, the subgraph of G obtained by deleting the vertices in W and all the edges incident with those vertices. In a similar manner, we denote by G − F, where F ⊂ E, the subgraph of G obtained by deleting all the edges in F.

Walk, Trail and Path
A walk in G is an alternating sequence of nodes and edges v_0, e_1, v_1, e_2, …, e_k, v_k, where each edge e_i joins v_{i−1} and v_i. A trail in G is a walk in which no edge of G appears more than once (a walk with all different edges). A trail which begins and ends at the same node is known as a closed trail or circuit.
A path in G is a walk in which no node appears more than once, with the exception that v_k can be equal to v_0. A cycle or closed path is a path which begins and ends at the same node. Two nodes v_i and v_j in G are connected if there is a path between them. We say that the graph G is connected if for every pair of nodes v_i and v_j there exists a path that starts at v_i and ends at v_j. If v_i and v_j are nodes of a directed graph G, then G is said to be strongly connected if for any two nodes v_i and v_j we can find a path from v_i to v_j and from v_j to v_i. It is weakly connected if there is a path between every two nodes in the underlying undirected graph, which is obtained by ignoring the directions of the edges in the directed graph. All strongly connected directed graphs are also weakly connected.
The digraph in Figure 3 is strongly connected because there is a path between any two ordered vertices in the directed graph.The digraph in Figure 4 is not strongly connected, since, for example, there is no directed path from S to T, nor from V to T, but it is weakly connected.

Matrices in Graphs
This section discusses ways of representing graphs using matrices. There are multiple ways to do this, and any graph G = (V, E) can be represented by using an adjacency, incidence or Laplacian matrix.

Matrices for Undirected Graph
Given an undirected graph G with n vertices and m edges, the adjacency matrix of G is the n × n matrix A = (a_ij), where a_ij = 1 if nodes v_i and v_j are joined by an edge and a_ij = 0 otherwise. The incidence matrix is the n × m matrix M = (m_ij), where m_ij = 1 when edge e_j is incident with node v_i and m_ij = 0 otherwise. The Laplacian matrix can be found by using the relation L = D − A, where D is the diagonal matrix whose ith diagonal entry is the degree of the ith node and A is the adjacency matrix.

The degree matrix D and the Laplacian matrix L = D − A can then be written down directly from the adjacency matrix.
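As a small sketch of these definitions, the adjacency, degree and Laplacian matrices can be built with NumPy for a hypothetical 4-node graph (this example graph is our own, not the one in the figure):

```python
import numpy as np

# A hypothetical undirected graph on 4 nodes with edge set
# {01, 02, 12, 23} (0-based node labels).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: a_ij = 1 if nodes v_i and v_j are joined by an edge.
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = 1
    A[v, u] = 1

# Degree matrix D (diagonal of node degrees) and Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A
print(L)
```

Note that every row of L sums to zero, a standard sanity check on the construction.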

Matrices for Directed Graph
Given a directed graph G with n nodes and m edges, the adjacency matrix of G is the n × n matrix A = (a_ij), where a_ij = 1 if there is an edge directed from node v_i to node v_j and a_ij = 0 otherwise. The incidence matrix is the n × m matrix M = (m_ij), where m_ij = 1 if node v_i is the starting node of edge e_j, m_ij = −1 if node v_i is the end node of edge e_j, and m_ij = 0 otherwise. Figure 6 shows a digraph in which there is a path from node A to every other node of the graph, but no path from node C to any other node of the network. The adjacency matrix A and the incidence matrix M can be written down directly from the figure.

Distance in Graphs
The geodesic distance, denoted d(i, j), between nodes v_i and v_j in a graph G is defined as the length of the shortest path between v_i and v_j.
The diameter of a graph G is the largest geodesic distance between any pair of its nodes. The distance matrix of a graph, denoted D, is the square matrix whose (i, j) entry is the geodesic distance d(i, j). For example, geodesic distances can be read off the digraph Q in Figure 3, and the distance matrix of the undirected unweighted graph in Figure 7 is formed from its pairwise shortest path lengths.
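For an unweighted graph, the distance matrix can be computed from the adjacency matrix by breadth-first search; a sketch using SciPy's `shortest_path` on the same hypothetical 4-node graph as above (not the Figure 7 example):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Hypothetical 4-node undirected graph with edges {01, 02, 12, 23}.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# unweighted=True performs BFS, so D[i, j] is the geodesic distance d(i, j).
D = shortest_path(A, unweighted=True)
print(D.astype(int))
```

The diameter is then simply `D.max()`.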

Perron-Frobenius Theorem
We will state (without proof) the Perron-Frobenius theorem, which will be used later in our work.
• If A is the adjacency matrix of a connected undirected network, then the principal eigenvalue λ_1 has algebraic and geometric multiplicity 1, and has a right eigenvector x with all positive elements, i.e. x_i > 0 for i = 1, 2, …, n, and a left eigenvector v with all positive elements, i.e. v_i > 0 for i = 1, 2, …, n.
• Any non-negative right eigenvector is a multiple of x, and any non-negative left eigenvector is a multiple of v. Furthermore, if A is the adjacency matrix of a directed network with a strongly connected component, then λ_1 has algebraic and geometric multiplicity equal to 1, and has a left eigenvector x with non-negative elements, where x_i > 0 if node i belongs to the strongly connected component of the network or to its out-component, and x_i = 0 if node i belongs to the in-component of the strongly connected component of the network.

Centrality Measures
Centrality of a given node is a measure of the importance and influence of that node in the corresponding network. The identification of which nodes are more important or central than others is a key issue in network analysis. We can ask the following questions:
• Which are the most central nodes in a network?
• Which are the most important nodes in a network?
• Which are the most influential nodes in a network?
These types of questions can have different interpretations in different networks. For instance:
• when dealing with a social network, the most central node can be the most popular person;
• when dealing with a web portal network, the most central node can be the web page with the best quality of content in a specific field;
• in an internet network, the most central node might be a network gateway (router) with the highest bandwidth.
These ideas can be used to characterize types of centrality measure that find the most important nodes in a network in a given context. That is, there are many different centrality measures. When measuring the centrality of a node, we should be sure that:

• we know what each centrality measure means;
• we know what it measures well; and
• we know why a particular centrality measure is the most appropriate for the kind of network we are investigating.
The most common centrality measures include degree centrality, betweenness centrality, eigenvector centrality, Katz centrality, PageRank centrality, closeness centrality and subgraph centrality [11] [13].

Degree Centrality
The degree centrality of a node v_i in a given network is its total degree d_i. Degree centrality measures the ability of a node to communicate directly with other nodes.
In an undirected network, the degree d_i of a node is given by d_i = e_i^T A e, where A is the adjacency matrix, e_i is the ith standard basis vector (the ith column of the identity matrix) and e is the vector with all entries equal to one.
In a directed network, we can consider the in-degree of a node, given by d_i^{in} = e^T A e_i, or the out-degree of the node, given by d_i^{out} = e_i^T A e.
In a directed graph, a source is a node with zero in-degree and a sink is a node with zero out-degree.
As an example, consider the undirected graph in Figure 8.We are interested in finding the central node using degree centrality.

From the adjacency matrix A of this network, the degree centralities are contained in the vector d = Ae. Using the degree centrality measure, node 1 is the most central node, since it has the highest degree.
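In code, the formula d = Ae is a single matrix-vector product. The adjacency matrix below is a hypothetical 4-node example, not the Network-1 of Figure 8 (whose edge list is not reproduced here):

```python
import numpy as np

# Hypothetical undirected graph with edges {01, 02, 03, 12, 23}.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])

e = np.ones(4)
d = A @ e                      # d_i = e_i^T A e = ith row sum of A
most_central = int(np.argmax(d))
print(d, most_central)
```

Ties are possible: here nodes 0 and 2 share the highest degree, and `argmax` simply reports the first of them.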

Closeness Centrality
Closeness centrality measures the average shortest path length from a node to all other nodes. It uses the neighbours, and the neighbours of neighbours, of a node v_i to determine its centrality. Thus, nodes that are not directly connected to v_i are taken into consideration, in contrast to the degree centrality case.
Letting d(i, j) be the length of the shortest path from node i to node j, the mean distance from node i to the other nodes in a network is given by l_i = (1/N) Σ_j d(i, j) = (1/N) e_i^T D e, where N is the total number of nodes and D denotes the distance matrix.
In general, we want to associate a high centrality score with important nodes, so we use the reciprocal of l_i as the value of the centrality. Thus, the closeness centrality c_i of a node v_i is given by c_i = 1/l_i = N / (e_i^T D e). (5) For example, consider Network-1 in Figure 8. Computing c_i from its distance matrix, nodes 1 and 2 are identified as the most important nodes in the network.
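Equation (5) can be sketched on the hypothetical 4-node graph used earlier, computing l_i as the mean row sum of the distance matrix and c_i as its reciprocal:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Hypothetical graph with edges {01, 02, 03, 12, 23} (not Network-1).
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])

D = shortest_path(A, unweighted=True)  # geodesic distance matrix
N = A.shape[0]
l = D.sum(axis=1) / N                  # mean distance l_i = (1/N) e_i^T D e
c = 1.0 / l                            # closeness c_i = N / (e_i^T D e)
print(c)
```

Nodes with small average distance to the rest of the network receive the largest closeness scores.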

Eigenvector Centrality
In a connected undirected network we can measure centrality by using the eigenvector centrality. Eigenvector centrality takes into consideration the importance of the neighbours of a node. In degree centrality a node is awarded one centrality point for each neighbour; eigenvector centrality instead gives each node a score proportional to the sum of the scores of its neighbours. In eigenvector centrality, a node is important if it is linked to other important nodes. The larger a node's entry in the eigenvector, the more important the node is considered to be.
From the Perron-Frobenius theorem, the eigenvector associated with the principal eigenvalue of the adjacency matrix A is unique (up to scaling) if the network is connected. We define the centrality of a node iteratively as the sum of its neighbours' centralities. We initially assume that every node j has centrality x_j^(0) = 1. Then we calculate a new iterate x^(1), whose ith entry is the sum of the centralities of node i's neighbours.
That is, x_i^(k) = Σ_j a_ij x_j^(k−1), where the a_ij are the entries of the adjacency matrix.
In matrix form we write this as x^(k) = A x^(k−1). After k steps we have x^(k) = A^k x^(0). (8) Note that to analyse the limit of this iteration, we expand the initial vector in the eigenvectors v_i of A as x^(0) = Σ_i c_i v_i, where the c_i are constants.
Then, from Equation (8), we have x^(k) = A^k x^(0) = Σ_i c_i λ_i^k v_i = λ_1^k Σ_i c_i (λ_i/λ_1)^k v_i, where λ_i is the eigenvalue associated with the eigenvector v_i and λ_1 is the principal eigenvalue. Since |λ_i/λ_1| < 1 for i ≥ 2, we have λ_1^{−k} x^(k) → c_1 v_1 as k → ∞. This implies that the limiting centralities are proportional to the principal eigenvector v_1 of the adjacency matrix.
Therefore, in matrix form, the eigenvector centrality x satisfies A x = λ_1 x. Note that in eigenvector centrality, the higher the centrality of the neighbours of a node, the more important the node is.
For example, consider the network in Figure 9.

Computing the principal eigenvector (EVC) for this network, we conclude that node 7 is the most important node.
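The limiting argument above is exactly the power iteration; a minimal sketch on a hypothetical 4-node graph (not Network-2 of Figure 9):

```python
import numpy as np

# Hypothetical connected, non-bipartite graph, so the iteration converges.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.ones(A.shape[0])        # x^(0): every node starts with centrality 1
for _ in range(200):
    x = A @ x                  # x^(k) = A x^(k-1)
    x = x / np.linalg.norm(x)  # rescale so the iterates stay bounded

# x now approximates the principal eigenvector, and the Rayleigh
# quotient approximates the principal eigenvalue: A x ≈ λ1 x.
lam1 = x @ A @ x
print(x, lam1)
```

The rescaling step plays the role of the λ_1^{−k} factor in the derivation.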

Katz Centrality
Katz centrality takes into consideration both the number of direct neighbours of a node and its further connections in the network. That is, a node is important in Katz centrality if it is broadly connected to other nodes in the network. Katz centrality takes into account all walks of arbitrary length from a node i to the other nodes in the network.
The Katz centrality k is given by k = Σ_{j=0}^{∞} α^j A^j e, (14) where e is the column vector of ones, α is called the attenuation factor and A is the adjacency matrix of the network. We can expand Equation (14) as k = e + αAe + α²A²e + ⋯ and, if the sum converges, then k = (I − αA)^{−1} e, where I is the n × n identity matrix and e is the column vector of ones.
To ensure the convergence of this series, and an accurate definition of Katz centrality, the attenuation factor α must lie in the range 0 < α < 1/λ_1. For example, we compute the Katz centrality of Network-2 in Figure 9. Since the principal eigenvalue is λ_1 ≈ 3.531, we choose α < 1/3.531.

Computing k = (I − αA)^{−1} e for this network, node 7 is again the most important node.
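In practice one solves the linear system (I − αA)k = e rather than forming the inverse explicitly; a sketch, again on a hypothetical 4-node graph rather than Network-2:

```python
import numpy as np

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

lam1 = max(np.linalg.eigvalsh(A))   # principal eigenvalue
alpha = 0.5 / lam1                  # attenuation factor inside (0, 1/λ1)

n = A.shape[0]
e = np.ones(n)
# Katz centrality: solve (I - αA) k = e.
k = np.linalg.solve(np.eye(n) - alpha * A, e)
print(k)
```

Since α λ_1 = 0.5 < 1 here, the defining series converges and the solve is well conditioned.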

Subgraph Centrality
Subgraph centrality attempts to measure the centrality of a node by taking into consideration the participation of that node in all subgraphs of the network. It does this indirectly by counting the number of closed walks in the network which start and end at the given node: a relationship can be shown between subgraphs and these walks.
If A is the adjacency matrix of an unweighted network, we know that

• (A^k)_{ii} corresponds to the number of closed walks of length k starting and ending at node v_i;
• (A^k)_{ij} corresponds to the number of walks of length k that start at node v_i and end at node v_j.
We define μ_k(i) = (A^k)_{ii} as the local spectral moment of node v_i. In a similar way to Katz centrality, the subgraph centrality of a node i is a weighted sum of the closed walks of different lengths which start and end at node i.
The shorter the closed walk, the more it influences the centrality of the node.
The subgraph centrality of node i in the network is given by SC(i) = Σ_{k=0}^{∞} (A^k)_{ii}/k! = (e^A)_{ii}. Considering the exponential of the adjacency matrix, we observe that the closed walks of length 2 associated with A² are counted twice for every link in the network, while a closed walk of length 3 associated with A³ is counted 2 × 3 = 3! = 6 times for every triangle. To avoid this over-counting, the closed walks of length 2 are penalized by 2! and the closed walks of length 3 by 3!. In general, any circuit of length k can be traversed in 2 directions and there are k points at which it can start, so such a circuit is counted 2k times. In the exponential of the adjacency matrix, when k ≥ 4, the penalization k! is no longer the same as the factor 2k that counts the repeated closed walks.
For instance, let us consider Network-2 in Figure 9 and find its subgraph centrality. The subgraph centralities are the diagonal entries of e^A.

We observe that node 7 has the highest subgraph centrality; thus node 7 is the most central node.
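The quantity SC(i) = (e^A)_{ii} can be computed directly with SciPy's matrix exponential; a sketch on the hypothetical 4-node graph used in the earlier examples:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

SC = np.diag(expm(A))          # subgraph centrality SC(i) = (e^A)_{ii}
most_central = int(np.argmax(SC))
print(SC, most_central)
```

Every diagonal entry exceeds 1, since the k = 0 term of the series contributes 1 and every node here lies on at least one closed walk.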

Relationship between Centrality Measures
Among the challenges that arise in determining the importance of a node using centrality is that it is not always clear which centrality measure should be used. It is not obvious whether two centrality measures will give the same ranking of the nodes in a given network. The necessity of choosing the attenuation factor α in Katz centrality adds another challenge: different choices of α may lead to different rankings.
Experimentally, it has been seen that different centrality measures provide highly correlated rankings [1]. The ranking becomes more stable when α approaches its limits, i.e. as α → 0 or α → 1/λ_1.
We will prove these correlations and the stability of the rankings, relating Katz centrality to degree and eigenvector centrality. Proof. The Katz centrality k is given as k = (I − αA)^{−1} e, which can be written as k = e + αAe + α²A²e + ⋯ = e + αd + O(α²), where d is the vector of the degree centralities of the nodes.
Consider the rescaled vector ψ = (k − e)/α. It is clear that the ranking produced by ψ will be exactly the same as that produced by k, due to the fact that the score of each node has been scaled and shifted in the same way. Then ψ = d + O(α) → d as α → 0, where d is the vector of the degree centralities of the nodes.
Therefore, the ranking produced by the Katz centrality reduces to that produced by degree centrality.
To show the second relation, we write the column vector e as e = Σ_i β_i v_i, where the β_i are constants and the v_i are the eigenvectors of the matrix A.
Then we can write the Katz centrality as k = (I − αA)^{−1} e = Σ_i β_i (1 − αλ_i)^{−1} v_i. Consider the rescaled vector φ = (1 − αλ_1) k. The ranking produced by φ is exactly the same as that produced by k, due to the fact that the score of each node has been scaled in the same way. This implies that φ = β_1 v_1 + Σ_{i≥2} β_i (1 − αλ_1)(1 − αλ_i)^{−1} v_i → β_1 v_1 as α → 1/λ_1. This implies that the limiting centralities are proportional to the principal eigenvector v_1 of the adjacency matrix. Thus, the ranking produced by the Katz centrality reduces to that produced by eigenvector centrality.
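These two limits can be checked numerically. The sketch below (on a hypothetical 4-node graph) verifies that the Katz ordering is consistent with the degree ordering for small α, and with the principal-eigenvector ordering for α near 1/λ_1:

```python
import numpy as np

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = A.shape[0]
e = np.ones(n)

deg = A @ e                           # degree centralities
w, V = np.linalg.eigh(A)
lam1, v1 = w[-1], np.abs(V[:, -1])    # principal eigenpair

def katz(alpha):
    """Katz centrality k = (I - alpha A)^{-1} e."""
    return np.linalg.solve(np.eye(n) - alpha * A, e)

k_small = katz(1e-4 / lam1)   # alpha -> 0: ranking approaches degree
k_large = katz(0.999 / lam1)  # alpha -> 1/lam1: ranking approaches eigenvector
print(np.argsort(-k_small), np.argsort(-k_large))
```

Because this small graph has symmetric (exactly tied) nodes, the check below only asks that each Katz ordering be non-increasing in the corresponding limiting score.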

Matrix Functions
This section discusses some of the matrix functions developed using Taylor series.
Matrix functions have applications throughout applied mathematics and scientific computing.Matrix functions are used in various fields, for example, in control theory and electromagnetism and can also be used to study complex networks like social networks.

Let f(z) be a complex-valued function with a convergent power series representation f(z) = Σ_{k=0}^{∞} a_k z^k (31) for |z| < R, where the a_k, k ≥ 0, are complex-valued constants [12]. Let A be an n × n complex-valued matrix. Then we define the matrix function of A as f(A) = Σ_{k=0}^{∞} a_k A^k. (32) The matrix series in Equation (32) converges to an n × n matrix if all n² scalar series that make up f(A) are convergent. It turns out that the series for f(A) converges if all eigenvalues of A lie in the region of convergence of f(z) in Equation (31). This can be proved by the following theorem.
Theorem 3. Suppose that f(z) has a power series representation f(z) = Σ_{k=0}^{∞} a_k z^k (33) in an open disc |z| < R. Then the series f(A) = Σ_{k=0}^{∞} a_k A^k (34) converges if all eigenvalues of A lie in this disc. Proof. We prove this theorem only for diagonalisable matrices, using the Jordan form of the matrix A.
Let Q be a transformation matrix which diagonalizes A, so that A = QΛQ^{−1} with Λ the diagonal matrix of the eigenvalues. Then f(A) = Q f(Λ) Q^{−1}, and each scalar series f(λ_i) on the diagonal of f(Λ) converges when |λ_i| < R.
If A has an eigenvalue with |λ_i| ≥ R, then the series in Equation (33) diverges when evaluated at λ_i. It follows that the series in Equation (34) also diverges.
That is, if any eigenvalue of the matrix A falls outside |z| < R, then the series in Equation (34) diverges.
Therefore f(A) converges if and only if all eigenvalues of A lie in the disc of convergence. In general, if the function f(z) can be expressed as a Taylor series which converges in the disc |z| < R containing the eigenvalues of A, then f(A) can be computed by substituting the matrix A for the variable z in the function f(z). The most important matrix functions which can be expressed by Taylor series include the exponential, logarithmic, sine, cosine and hyperbolic functions. Each of these functions can (in theory) be used to define a centrality measure on a network with adjacency matrix A. For example, to obtain the centralities of all the nodes we can compute f(A)e, where e is the vector of ones, or the diagonal entries of f(A). We may need to be careful with these raw centrality measures, as f(A) may contain negative (or even complex) entries. For instance, to compute the logarithmic function f(A) we need to take care of the complex entries, since it is not possible to rank complex entries. To avoid complex entries, we compute log(γI + A), where γ is a real constant chosen so that γI + A has positive eigenvalues. The constant γ differs for different networks.
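A sketch of the shifted logarithm and of one hyperbolic function as raw centrality scores; the particular choice γ = 1 − λ_min and the 4-node graph are our own illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = A.shape[0]

# Shift so that gamma*I + A has all eigenvalues >= 1 (hence positive)
# before taking the logarithm; gamma is network-dependent.
lam_min = np.linalg.eigvalsh(A)[0]
gamma = 1.0 - lam_min
log_c = np.diag(logm(gamma * np.eye(n) + A)).real

# A hyperbolic function applied to A directly: cosh(A) via exponentials.
cosh_c = np.diag(0.5 * (expm(A) + expm(-A)))
print(log_c, cosh_c)
```

With the shift in place the logarithm is real, so its diagonal entries can be ranked like any other score.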
We can also define centrality measures by applying analytic continuations of ( ) f z outside its radius of convergence.
Recall that the attenuation factor must satisfy 0 < α < 1/λ_1, where λ_1 is the principal eigenvalue of A. We can generalize Katz centrality by the following definition: k = (I − αA)^{−1} e for any α with 1/α ∉ ρ(A), where e is the column vector of ones and ρ(A) is the spectrum of A.
To determine which of the matrix functions can be used to assess centrality in a network, we carry out experimental work on a variety of networks in the following section. We will perform the experiments by comparing the rankings based on the common centrality measures discussed in Section V with the rankings based on these matrix functions.

Experimental Work and Discussion
In this section we analyse experimentally the agreement between the centrality measures discussed in Section V, and investigate whether the matrix functions discussed in Section VII can be used to determine the important nodes in a network. The experimental work compares the matrix functions to the common centrality measures.
A variety of techniques can be used to compute the centrality measures (those discussed in Section V) and the matrix functions. To compute the exponential, logarithm and other functions of a matrix we will use SciPy matrix functions [14]. For our new measures involving matrix functions and generalisations of Katz centrality, we calculate centralities by using the diagonal entries of these functions. We use the Kendall correlation coefficient in our experiments to compare the agreement between centrality measures.

Correlation (Kendall, Pearson, Spearman) Coefficient
Correlation is a bivariate analysis that measures the strength of association between two variables. The value of the correlation coefficient varies between 1 and −1. A positive correlation signifies that the ranks of both variables increase together, while a negative correlation signifies that as the rank of one variable increases, the rank of the other decreases. The correlation is said to be a perfect association if the coefficient equals ±1 [15]. The closer the value of the correlation coefficient is to 1 or −1, the stronger the relationship between the two variables; as the value approaches 0, the relationship becomes weaker. In statistics, the strength of association is usually measured by the Pearson correlation, the Kendall rank correlation or the Spearman correlation.
The Kendall coefficient of correlation measures the degree of correspondence between two sets of ranks given to the same set of objects. It can be interpreted as the difference between the probability of the objects being in the same order in both rankings and the probability of their being in a different order [2].
Let X and Y be two sets of observations (x_1, y_1), …, (x_n, y_n).
• A pair of observations (x_i, y_i) and (x_j, y_j) is said to be concordant if (x_i − x_j)(y_i − y_j) > 0.
• The pair is said to be discordant if (x_i − x_j)(y_i − y_j) < 0.
• The pair is neither concordant nor discordant if x_i = x_j or y_i = y_j.
The Kendall rank coefficient is defined as τ = (n_c − n_d) / (n(n − 1)/2), where n_c and n_d are the numbers of concordant and discordant pairs, respectively. The Kendall rank coefficient can be interpreted as follows: values of τ greater than zero show agreement, and values close to one indicate strong agreement; values less than zero show disagreement, and values close to negative one indicate strong disagreement [2]. Indeed, if all pairs are concordant then τ = 1, which implies that the variables are in exactly the same ranking (order); if all pairs are discordant then τ = −1, which implies that one ranking is the exact reverse of the other.
Pearson correlation is a measure of the degree of linear relationship between two variables and is denoted by r. Pearson correlation is basically used to draw a line of best fit through the data of two variables; r indicates how far these data points lie from the line of best fit.
The Pearson correlation is calculated as r = Σ_i (x_i − x̄)(y_i − ȳ) / √(Σ_i (x_i − x̄)² Σ_i (y_i − ȳ)²). To use the Pearson correlation r, the two variables must be measured on either an interval or a ratio scale. However, the two variables do not need to be measured on the same scale (for instance, one variable can be ratio and one can be interval).
We cannot use the Pearson correlation for ordinal data; instead we use Spearman's rank correlation or Kendall's correlation.
Spearman rank correlation is the nonparametric version of the Pearson correlation coefficient, used to measure the degree of association between two continuous or ordinal variables.
The Spearman rank correlation is calculated as ρ = 1 − 6 Σ_i d_i² / (N(N² − 1)), where d_i is the difference between the two ranks of each observation and N is the number of observations.
We use the Spearman correlation coefficient when the relationship between variables is not linear.
Although both the Spearman and Kendall correlations measure monotone relationships and have a nice interpretation, in this paper we opt to use the Kendall correlation coefficient for the following reasons [16] [17]:
• The distribution of Kendall's τ has better statistical properties.
• The interpretation of Kendall's τ in terms of the probabilities of observing concordant (agreeable) and discordant (non-agreeable) pairs is very direct.
• The Kendall correlation has a smaller gross error sensitivity (GES), making it more robust, and a smaller asymptotic variance (AV), making it more efficient.
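The definition above translates directly into code; a minimal sketch that counts concordant and discordant pairs (the function name is our own, and ties are assumed absent for simplicity):

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau by direct concordant/discordant pair counting."""
    n = len(x)
    s = 0
    for i, j in combinations(range(n), 2):
        # +1 for a concordant pair, -1 for a discordant pair
        s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return s / (n * (n - 1) / 2)

x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 4, 5]        # one swapped pair -> one discordant pair
tau = kendall_tau(x, y)
print(tau)                 # 9 concordant, 1 discordant: (9 - 1)/10 = 0.8
```

For production use, `scipy.stats.kendalltau` provides the same statistic with proper tie handling.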

Network Models
Networks have been around us for many years and their study is not new.
Graph theorists and mathematicians have long been confronted with problems in which they try to make sense of complex networks. As a result, random network theory was developed, in which the nodes and links of a graph are connected to each other at random. In this paper we consider three network models because of their significance: • Erdös-Rényi model: these networks are formed by completely random interactions between the nodes. Each node chooses its neighbours at random, constrained either by an overall number of relationships that may be assigned in the graph, or by a probability of connecting to a certain neighbour [18]. Mathematically, the degrees in such a network follow a Poisson distribution; the vast majority of nodes have a similar number of links and it is almost impossible to find outliers.
• Barabási-Albert model: these are scale-free networks, formed by two simple mechanisms: growth and preferential attachment. The main prediction of a scale-free network is the presence of a few outlier nodes with many connections; these nodes are also known as hubs. Preferential attachment is a probabilistic mechanism in which a new node is free to connect to any node in the network, whether it is a hub or has a single link [18] [19].
• Watts-Strogatz model: this model is important because it shows how the "small-world effect" in networks can coexist with other commonly observed features of social networks, such as a high clustering coefficient. More specifically, it shows how adding a small fraction of random long-range links to an otherwise regular network leads to slow, logarithmic scaling of the typical distance between nodes with network size [20].
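The first and third of these models can be sampled in a few lines; the sketch below (pure Python, helper names are our own rather than any library API) draws a G(n, p) Erdös-Rényi graph and a Watts-Strogatz ring with random rewiring:

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p): each of the n*(n-1)/2 possible edges is present
    independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def watts_strogatz(n, k, p, seed=None):
    """Start from a ring lattice where each node is linked to its k
    nearest neighbours (k even), then rewire the far end of each edge
    with probability p, creating random long-range shortcuts."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            u, v = i, (i + d) % n
            if rng.random() < p:   # rewire this edge to a random endpoint
                v = rng.choice([w for w in range(n) if w != u])
            edges.add(frozenset((u, v)))
    return edges

print(len(erdos_renyi(5, 1.0)))         # -> 10, the complete graph K5
print(len(watts_strogatz(10, 4, 0.0)))  # -> 20 ring-lattice edges
```

Because rewired edges are stored as a set, duplicate draws are merged, so the edge count can drop slightly for p > 0; this is acceptable for a sketch of the mechanism.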

First Experiment
We begin our experiments by considering a small network with 20 nodes. The network was randomly generated in a text editor and drawn using Sage. The aim is to determine which of the matrix functions give similar rankings to the common centrality measures. We have many functions to choose from, so we want to limit our choice. Note that we will not consider the exponential function, since it coincides with the subgraph centrality.
The experiment shows that the diagonal entries of the logarithmic and cosine functions rank the nodes in reverse order compared with the other rankings. We also observe that the sine function does not match any other centrality measure. The network in Figure 10, having 20 nodes and 42 edges, gives a realistic picture of node ranking.
The rankings of the nodes obtained for the graph in Figure 10 by using different centrality measures, including matrix functions, are shown in Table 1.

Figure 10. Network with 20 nodes and 42 edges.
Note that the ranking is from the most to the least important/central node with respect to the centrality measure used.
To avoid making many comparisons using the Kendall correlation coefficient between the common centrality measures and those produced by matrix functions, we will choose one centrality measure among the others. To do this, we need to investigate whether the chosen centrality measure agrees with the other centrality measures. In this case, we make a comparison between the closeness centrality (CC) and the degree centrality (DC), the eigenvector centrality (EC) and the remaining measures for the graph in Figure 10.
Table 2 shows that there is agreement between closeness centrality and the other centrality measures. We observe the same agreement in the graph of Figure 11.
In Table 1, we have to modify the rankings given by the cosine and logarithmic functions so that they match the other rankings. The best way to do this seems to be to reverse the order of their rankings.
To be more confident about the rankings of nodes produced by matrix functions, we use the Kendall correlation coefficient to compare them with closeness centrality. We chose closeness centrality among the standard centrality measures for this comparison since it takes into account the neighbours, and the neighbours of the neighbours, of a node when determining its centrality. In the comparisons, we denote by τ(CC, f) the Kendall coefficient between closeness centrality and the centrality measure induced by f(A). In Table 3, we reversed the rankings given by the cosine and logarithmic functions before calculating the Kendall coefficients. We observe in Table 3 that the matrix functions agree with the closeness centrality measure, and the agreement is quite strong for the logarithmic, cosh and cosine functions, since their Kendall coefficients are close to 1. On the other hand, the Kendall coefficient between closeness centrality and the sine function is negative, which implies that there is no agreement between their rankings.
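To illustrate how such rankings are produced: for a symmetric adjacency matrix with eigendecomposition A = QΛQᵀ we have f(A) = Q f(Λ) Qᵀ, and node i can be scored by the i-th diagonal entry of f(A), exactly as subgraph centrality does with f = exp. A minimal NumPy sketch (the helper name is our own; f must be defined on the spectrum of A, which rules out the real logarithm when A has non-positive eigenvalues):

```python
import numpy as np

def matrix_function_centrality(adj, f):
    """Centrality scores diag(f(A)) for a symmetric adjacency matrix,
    computed via the eigendecomposition A = Q diag(lam) Q^T."""
    lam, q = np.linalg.eigh(adj)   # symmetric matrix -> real spectrum
    fa = (q * f(lam)) @ q.T        # Q f(Lambda) Q^T
    return np.diag(fa).copy()

# path graph on 3 nodes: the middle node should be most central
a = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
scores = matrix_function_centrality(a, np.cosh)
print(scores.argmax())  # -> 1 (the middle node)
```

With f = cosh the middle node of the path scores cosh(√2) while the two end nodes tie below it, matching the intuition that it participates in more closed walks.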

Second Experiment
We compare the agreement of centrality measures by generating 10 random networks using the Barabási-Albert preferential attachment model.
The Barabási-Albert model is a simple scale-free random graph generator. The network begins with an initial set of m_0 ≥ 2 nodes. The degree of each node in the initial network should be at least 1; if not, the network will always end up disconnected. At each step, a new node is created and connected to m ≤ m_0 existing nodes with a probability proportional to the number of links that the existing nodes already have. To use this method, we specify the number of nodes in the network (n) and the number of links each new node forms as it appears (m), in such a manner that nodes with higher degree have a higher chance of being selected for attachment [21].
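The attachment rule just described can be sketched in a few lines of Python (our own simplified version: the seed network is a small clique, and attachment is made degree-proportional by sampling from a list in which each node appears once per incident edge):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Simplified Barabási-Albert sketch: start from a clique on m nodes,
    then attach each new node to m existing nodes chosen with probability
    proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]
    # each node appears in this list once per incident edge, so uniform
    # sampling from it realises preferential (degree-biased) attachment
    repeated = [u for e in edges for u in e]
    if not repeated:          # m == 1: a single seed node
        repeated = [0]
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        for t in targets:
            edges.append((t, new))
            repeated.extend((t, new))
    return edges

g = barabasi_albert(50, 3, seed=1)
print(len(g))   # -> 144 edges: 3 seed edges plus 47 * 3
```

Each new node contributes exactly m edges, so the edge count is C(m, 2) + (n − m)m; high-degree nodes accumulate further links, producing the hubs described above.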
The comparisons involve rankings of nodes using centrality measures such as closeness centrality, degree centrality, eigenvector centrality, Katz centrality and subgraph centrality. We use (I − αA)^{-1} e in this experiment to compute the Katz centrality. For each choice of n and m, we generate 5 networks and record the mean values of the Kendall coefficients, denoted τ_i according to the notation in Table 4. All Kendall coefficients in Table 5 are positive, which indicates agreement between the rankings: closeness centrality is highly correlated with the centrality measures corresponding to τ_2, τ_3 and τ_4, and the eigenvector, Katz and subgraph centralities are also highly related, as indicated by τ_8, τ_9 and τ_10. The experiment shows that the agreement becomes stronger as the network becomes more connected. In general, for sufficiently dense (i.e., highly connected) networks, the measures provide almost identical rankings, producing Kendall correlation coefficients close to 1.

Third Experiment
We generate 10 random networks as in the second experiment. This time, we fix the value of n to be 200 and vary m. We calculate the Katz centrality for each network using different choices of α. Recall that the Katz centrality of the nodes is given by x = (I − αA)^{-1} e, where e is the vector of all ones. We compute the Kendall coefficients τ(k_i, k_j) between all pairs of measures, as denoted in Table 6.
Note that the Kendall coefficients involving k_3 and k_4 are computed by taking the absolute value of the inverse function, that is |(I − αA)^{-1}| e, while the other coefficients are computed by using the normal formula.
We observe in Table 7 that the Kendall coefficients between k_3 and k_4 are close to 1; we therefore conjecture that the rankings provided by any generalised Katz centrality are always the same regardless of the choice of α, provided that 0 < α < 1/λ_max, where λ_max is the largest eigenvalue of the adjacency matrix A.
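The conjecture is easy to probe numerically; the sketch below (NumPy, helper name ours) computes x = (I − αA)^{-1} e for two admissible values of α on a small star graph and checks that the top-ranked node does not change:

```python
import numpy as np

def katz_centrality(adj, alpha):
    """Katz scores x = (I - alpha*A)^{-1} e, valid for alpha < 1/lambda_max."""
    n = adj.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * adj, np.ones(n))

# star graph on 4 nodes: node 0 is the hub, lambda_max = sqrt(3)
a = np.zeros((4, 4))
a[0, 1:] = 1.0
a[1:, 0] = 1.0
for alpha in (0.1, 0.5):     # both below 1/sqrt(3) ~ 0.577
    x = katz_centrality(a, alpha)
    print(x.argmax())        # -> 0 (the hub) for each alpha
```

Solving the linear system rather than inverting the matrix is the standard numerically preferable route; the convergence condition on α guarantees that I − αA is nonsingular.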

Fourth Experiment
We repeat the second experiment, generating 10 random networks with different numbers of nodes by using the Erdös-Rényi method. We use the same notation for the Kendall coefficients as in the second experiment; see Table 4.
The Erdös-Rényi model is used to generate random networks in which each possible edge is included independently with the same probability.

Table 6. The notations of the Kendall correlation coefficients.

Table 7. Kendall coefficients for generalised Katz centrality applied to random networks generated by using the Barabási-Albert method.
L. L. Njotto, Open Journal of Discrete Mathematics, DOI: 10.4236/ojdm.2018.84008





Table 1. Rankings of nodes using centrality measures and matrix functions for the network in Figure 10.

Table 2. Kendall coefficients between closeness centrality and the other centrality measures applied to the graph in Figure 10.

Table 3. Kendall coefficients between closeness centrality and matrix functions applied to the graph in Figure 10.


Table 4. The notations of the Kendall correlation coefficients.

Table 5. Kendall coefficients for centrality measures applied to different random networks generated by using the Barabási-Albert method. n: number of nodes in the network; m: number of neighbours to attach to each new node.