Sparsity and Its Impact on Financial Networks with Complete Core-Periphery Structure
1. Introduction
A reduction of network sparsity, or in other words an increase in the interconnectedness of financial institutions, underscores the critical role of financial network architecture in financial market integration. For example, reducing the sparsity of a banking network can offer significant advantages. Upgrading a local bank branch to a regional headquarters improves the connectedness among regional branches. Similarly, reducing the overall sparsity of a banking network by strategically merging two financial networks into one can have substantial effects on banking operations. These financial strategies highlight the importance of examining changes in interconnectedness and their broader impacts. Yet the change in financial network sparsity and its impact remain poorly understood, presenting a significant gap in current research. To the best of our knowledge, this study is among the first to explore theoretically the impact of sparsity in financial networks with complete core-periphery structures.
Pioneering studies have examined financial contagion and risk transmission through a given financial network structure (Allen and Gale [1], Acemoglu, Ozdaglar, and Tahbaz-Salehi [2], Glasserman and Young [3]). Our paper fills this gap by investigating how a change in network sparsity influences the relationship between an explanatory variable for one agent and the dependent variables of all other agents in financial networks with core-periphery structures.
We derive closed-form solutions that quantify the links between sparsity and its impact on financial networks with complete core-periphery components. Simulation results are also provided to validate our analytical solutions. Jie and Ma [4] further extend our analysis to financial networks with incomplete and random core-periphery structures.
Core-periphery networks, which consist of a fully connected core and a sparse periphery, have been widely recognized in the financial sector (Di Maggio, Kermani, and Song [5], Li and Schürhoff [6]). We focus on two strategies that reduce network sparsity whilst keeping the core-periphery topology unchanged: 1) increasing the number of core agents, and 2) merging core-periphery components. The first strategy promotes periphery agents to core agents, who connect with each other and also with the remaining periphery agents. For example, JPMorgan upgraded its periphery branch in Washington D.C. in 2022 to a core branch serving as a regional headquarters, which strengthened its connections to the remaining periphery branches of the banking network of Washington D.C. and the Greater Washington region.
The second sparsity reduction strategy is to merge two or more components into a single larger component whilst the core-periphery topology remains unchanged. This can be achieved by adding links between core and periphery agents across components. Finance examples include bank mergers and acquisitions that combine two financial sub-networks into one (see, e.g., Levine, Lin, and Wang [7]). We prove that sparsity reduction by either strategy increases the network impact. In real-world applications, these two sparsity reduction strategies may be combined in various ways to achieve greater sparsity reduction and network impact.
The remainder of the paper is organized as follows. Section 2 introduces basic definitions and the model setup. Sections 3 and 4 examine, respectively, the two alternative sparsity reduction strategies for financial networks with complete core-periphery components. Finally, Section 5 concludes. Proofs are provided in the Appendix.
2. Definition and Model Setup
Based on Jackson [8], the definition of a network is given as follows:
Definition 1 (Network) A network (𝒩, W) consists of a set of agents 𝒩 = {1, …, N} and an N × N network adjacency matrix W, where each element w_ij represents the relation between agents i and j.
In this paper, we focus on unweighted, symmetric networks, where w_ij = 1 if a link exists between i and j, and w_ij = 0 otherwise. This means the network adjacency matrix W is a symmetric matrix with binary elements. Its diagonal entries are set to zero, following conventional definitions (LeSage and Pace [9]).
Following Diestel [10] (see p. 164), we define network sparsity in terms of the edge density of the network matrix W.
Definition 2 (Network Sparsity) Given the initial structure of W, network sparsity is defined as the proportion of zero entries in W, excluding the diagonal elements. In other words, sparsity refers to the degree of looseness in the connections within a network. It serves as an inverse measure of network interconnectedness.
According to this definition, an increase in links among agents reduces the number of zeros in W, thereby decreasing network sparsity. Fixing the initial structure of W is essential because networks with the same sparsity level (i.e., the same percentage of zeros in non-diagonal elements) may exhibit different network impacts if their initial structures differ. Therefore, we need to fix the initial structure of W and the network topology to examine the impact of network sparsity.
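As an illustration of Definition 2, the following minimal Python sketch (using NumPy; the function name and the example network are ours, for illustration only) computes the sparsity of a given adjacency matrix:

```python
import numpy as np

def sparsity(W: np.ndarray) -> float:
    """Proportion of zero entries of W, excluding the diagonal (Definition 2)."""
    N = W.shape[0]
    off_diagonal_zeros = np.sum(W == 0) - N  # diagonal entries are zero by convention
    return off_diagonal_zeros / (N * (N - 1))

# A 4-agent ring: each agent is linked to its two neighbours,
# so 4 of the 12 off-diagonal entries are zero.
W_ring = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
print(sparsity(W_ring))  # 1/3
```

Adding a link replaces a pair of zeros by ones and therefore lowers this measure, consistent with the discussion above.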
In this paper, we focus on the network topology of sparse networks with core-periphery components, a widely studied financial network structure. Empirical research has demonstrated that financial networks often exhibit a core-periphery structure (e.g., Bech and Atalay [11], Di Maggio, Kermani, and Song [5], Hollifield, Neklyudov, and Spatt [12], Li and Schürhoff [6], Craig and Ma [13]). We follow Elliott, Golub, and Jackson [14] and Craig and Ma [13] to define the core-periphery structure.
Definition 3 (Core-periphery Component) A component (𝒞, W_c) of size p has a core-periphery structure if it has a set of core agents of size s and a set of periphery agents of size p − s. The corresponding p × p adjacency matrix W_c can be arranged into the following block structure:

$W_c = \begin{pmatrix} CC & CP \\ CP' & PP \end{pmatrix}$ (1)

The block CC defines the interconnections among core agents: CC = $\mathbf{1}\mathbf{1}' - I$ has dimension s × s and consists of all ones except for zeros on the diagonal. The block PP is a zero matrix (PP = 0) of dimension (p − s) × (p − s), indicating sparsity (no interaction) among periphery agents. The block CP = R has size s × (p − s), and the transposed block CP' = R' follows from the symmetry of W. R is both row-regular and column-regular. Row-regularity means that each row of R contains at least one element equal to 1, so that each core agent is connected to at least one periphery agent; column-regularity means that each column of R contains at least one element equal to 1, so that each periphery agent is connected to at least one core agent.
In other words, a core-periphery component consists of two tiers: a fully connected core, where all core agents are directly linked, and a sparse periphery, where periphery agents are not directly connected to each other. Additionally, each core agent is linked to at least one periphery agent, and vice versa.
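A complete core-periphery component of this kind is easy to construct explicitly. The sketch below (NumPy; the function name is ours) builds the p × p adjacency matrix with s fully interconnected core agents, a zero periphery block, and all core-periphery links present:

```python
import numpy as np

def complete_core_periphery(p: int, s: int) -> np.ndarray:
    """p x p adjacency matrix: s fully linked core agents, p - s periphery
    agents with no links among themselves, and all core-periphery links present."""
    W = np.zeros((p, p), dtype=int)
    W[:s, :] = 1           # core rows: a core agent is linked to everyone else
    W[:, :s] = 1           # symmetry: everyone is linked back to the core
    W[s:, s:] = 0          # PP block: periphery agents are not linked to each other
    np.fill_diagonal(W, 0) # no self-links
    return W

Wc = complete_core_periphery(p=6, s=1)  # star component
assert np.array_equal(Wc, Wc.T)         # symmetric, as required
assert np.all(Wc[1:, 1:] == 0)          # sparse periphery
```

Setting s = 1 yields the star component and s = p − 1 the complete component discussed next.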
A complete core-periphery component is defined by CP = R = $\mathbf{1}$, i.e., all entries of the CP block are ones. In other words, every core agent in a complete core-periphery component is connected to all periphery agents. The complete core-periphery topology has been widely studied in the existing finance literature and has a rich family of varieties, ranging from the star component with the highest sparsity to the complete component with the lowest sparsity. The star component represents the highest-sparsity case of a core-periphery structure, where the block CC = (0) is a 1 × 1 matrix with a zero entry. This implies that there is only one core agent, connected to multiple periphery agents, while the periphery agents have no direct links among themselves. Star networks are a widely studied structure in network analysis. For example, Cerdeiro, Dziubinski and Goyal [15] examine investment strategies in a cybersecurity network with a star component.
The complete component, on the other hand, represents the lowest-sparsity case of a core-periphery structure, where the block PP = (0) is a 1 × 1 matrix with a zero entry. In this structure, there is only one periphery agent, with all other agents serving as core agents. Complete components are a commonly studied structure in financial network analysis (e.g., see Acemoglu, Ozdaglar, and Tahbaz-Salehi [2]).
General complete core-periphery components with moderate sparsity, i.e., with a number of core agents strictly between these two extremes (1 < s < p − 1, where p is the total number of agents in the component), have recently been applied to the interbank market network (in ’t Veld and van Lelyveld [16], in ’t Veld, van der Leij, and Hommes [17]). In this paper, we expand our analysis to cover the whole family of complete core-periphery components, including both the star component and the complete component as the two extreme cases.
Jie and Ma [4] further examine the incomplete core-periphery component, which arises when at least one zero appears in the CP block, so that not all core agents are connected to all periphery agents. In this paper, we focus on financial networks with complete core-periphery components.
To set the stage for our theoretical analysis, we consider a network W consisting of B homogeneous, independent core-periphery components of size p. Thus, the total number of agents in the network is N = Bp, and the adjacency matrix W takes a block-diagonal form:

$W = \mathrm{diag}(W_c, \ldots, W_c) = \begin{pmatrix} W_c & & \\ & \ddots & \\ & & W_c \end{pmatrix}$ (2)

The size of the matrix W is N × N and the size of each component matrix W_c is p × p.
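The block-diagonal network above can be assembled from B copies of a component matrix, for instance via a Kronecker product. A small sketch (the sizes B, p, and s are illustrative choices of ours):

```python
import numpy as np

# Assemble W for B homogeneous components of size p with s core agents each.
B, p, s = 3, 5, 2
Wc = np.zeros((p, p), dtype=int)
Wc[:s, :] = 1
Wc[:, :s] = 1
Wc[s:, s:] = 0          # zero periphery block
np.fill_diagonal(Wc, 0) # zero diagonal
W = np.kron(np.eye(B, dtype=int), Wc)  # block-diagonal, N x N with N = B * p
assert W.shape == (B * p, B * p)
assert np.all(W[:p, p:] == 0)          # no links across components
```

The Kronecker product with an identity matrix places one copy of W_c in each diagonal block and zeros elsewhere, matching the independence of the components.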
After defining the core-periphery network structure, we introduce the concept of network impacts. Following Bramoullé, Djebbari, and Fortin [18], our financial network analytical model is inspired by the spatial econometrics literature. We define our financial network model, i.e., the spatial autoregressive (SAR) model, as follows:
$y = \rho W y + X\beta + \varepsilon$ (3)

where y and X are the dependent variable and the set of explanatory variables, respectively, W is a network adjacency matrix, ρ and β are parameters, and ε is the error term.
In the literature, it is conventional to assume that $\rho \in (1/\lambda_{\min},\, 1/\lambda_{\max})$, where $\lambda_{\min}$ and $\lambda_{\max}$ denote the smallest and largest eigenvalues of W, respectively. This assumption ensures both that the estimation of the SAR network model is consistent and that the matrix $I_N - \rho W$ is nonsingular (LeSage and Pace [9], p. 47).
Based on the above SAR network model, we have

$y = S(W)(X\beta + \varepsilon)$ (4)

where $S(W) = (I_N - \rho W)^{-1}$.
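The reduced form can be illustrated numerically. The sketch below (NumPy; the star example, seed, and variable names are ours) builds S(W) for a small star component with ρ inside the admissible range and checks that y = S(W)(Xβ + ε) solves the structural SAR equation:

```python
import numpy as np

# Reduced form of the SAR model: y = S(W)(X beta + eps), S(W) = (I - rho W)^{-1}.
N = 6
W = np.zeros((N, N))
W[0, 1:] = 1
W[1:, 0] = 1                          # a 6-agent star component
lam_max = np.max(np.linalg.eigvalsh(W))
rho = 0.5 / lam_max                   # keeps I - rho W nonsingular
S = np.linalg.inv(np.eye(N) - rho * W)
rng = np.random.default_rng(42)
x, eps, beta = rng.normal(size=N), rng.normal(size=N), 1.0
y = S @ (x * beta + eps)
# y indeed solves the structural equation y = rho W y + x beta + eps
assert np.allclose(rho * W @ y + x * beta + eps, y)
```

Because S(W) is dense even when W is sparse, a shock to one agent propagates, directly and indirectly, to every agent in its component.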
In a linear regression model, the parameter $\beta_k$ represents the partial derivative of the dependent variable y with respect to the explanatory variable $x_k$. In the SAR network model, the impact of $x_k$ on the dependent variable y varies across the observations of all agents. To summarize these varying effects, LeSage and Pace [9] defined three measures, the average direct impact, the average indirect impact, and the average total impact, to quantify network impacts.
Definition 4 (Network Impacts) For any explanatory variable $x_k$, define:

Average direct impact $= \frac{1}{N}\,\mathrm{tr}\!\left(S(W)\right)\beta_k$ (5)

Average indirect impact $= \frac{1}{N}\left(\mathbf{1}' S(W)\, \mathbf{1} - \mathrm{tr}\!\left(S(W)\right)\right)\beta_k$ (6)

Average total impact $= \frac{1}{N}\,\mathbf{1}' S(W)\, \mathbf{1}\,\beta_k$ (7)

where $\mathbf{1}$ is the N × 1 vector of ones.
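Following the standard LeSage and Pace construction (average diagonal entry of S(W)β_k, average row sum, and their difference), these measures can be sketched as follows; the function name and the small star example are ours:

```python
import numpy as np

def network_impacts(W, rho, beta_k=1.0):
    """Average direct, indirect, and total impacts of x_k on y."""
    N = W.shape[0]
    S = np.linalg.inv(np.eye(N) - rho * W) * beta_k
    direct = np.trace(S) / N    # average own-derivative (diagonal of S)
    total = S.sum() / N         # average cumulative effect over all agents
    return direct, total - direct, total

# Star component: one core agent linked to three periphery agents.
W = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
direct, indirect, total = network_impacts(W, rho=0.2)
assert direct > 1 and indirect > 0
assert abs(total - direct - indirect) < 1e-12
```

The direct impact exceeds β_k = 1 because feedback loops (an agent affecting its neighbours, who in turn affect it back) enter the diagonal of S(W).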
These network impact measures have been widely applied in financial network research. For example, Grieser, Hadlock, LeSage, and Zekhnini [19] applied these measures to examine causal peer effects in capital structure decisions using a peer network. The economic mechanism underlying the relationship between sparsity and network impact can be illustrated by the example of investment hubs in vom Lehn and Winberry [20]. Analyzing disaggregated asset-level data on purchases of 33 types of capital assets across sectors, they constructed a 37-sector investment network for the U.S. for each year. Their findings reveal that the investment network is highly sparse and dominated by just four key investment hubs: construction, machinery manufacturing, automobile manufacturing, and professional/technical services. Together, these hubs account for nearly 70% of total investment. Consequently, production and employment in these sectors are significantly more sensitive to business cycle shocks than those in other sectors.
Without loss of generality, we normalize the parameter $\beta_k$ to one in our subsequent analyses to focus primarily on the financial network impacts.
3. Sparsity Reduction by Increasing the Number of Core Agents
One strategy to reduce network sparsity while keeping both the component size p and the number of components B fixed is to increase the number of core agents by promoting some of the periphery agents to core agents in each component.
For example, Figure 1(a) provides a network view of the process of increasing the number of core agents, transforming a star component with just one core agent (s = 1) eventually into a complete component with five core agents (s = 5), whilst always keeping the size of each component constant at p = 6. As each periphery agent is turned into a core agent, new links are formed between the newly designated core agent and the remaining periphery agents, thereby reducing network sparsity. When s = 5, the component reaches its complete form with the lowest sparsity. This demonstrates that increasing the number of core agents in each component directly reduces network sparsity.

Figure 1. Sparsity reduction by increasing the number of core agents within complete core-periphery components. (a) Network view of sparsity reduction; (b) Changes in the adjacency matrix of a component.
Figure 1(b) shows an example of the changes in the adjacency matrix W_c during the transformation of the component from s = 1 to s = 3. The decrease in the number of zero entries in W_c confirms the reduction in sparsity. In general, for any given size p of a complete core-periphery component and any number of components B in a sparse network of size N = Bp, the network sparsity is:

Sparsity $= 1 - \dfrac{Bs(2p - s - 1)}{N(N - 1)}$ (8)

This formula shows that sparsity decreases as the number of core agents s increases, since its derivative with respect to s, $-B(2p - 2s - 1)/\left[N(N - 1)\right]$, is negative for s < p. Therefore, the number of core agents s can serve as an inverse measure of network sparsity. The next two propositions provide the theoretical findings on the reduction of sparsity and its network impact. Their proofs can be found in Appendix A.2.
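As a sanity check, a closed-form sparsity of 1 − Bs(2p − s − 1)/[N(N − 1)] (each component contributes s(s − 1) core-core links plus 2s(p − s) core-periphery links) can be compared against a direct count of zero entries; a small sketch with illustrative sizes of ours:

```python
import numpy as np

def count_sparsity(B, p, s):
    """Sparsity by direct count of zero off-diagonal entries."""
    Wc = np.zeros((p, p), dtype=int)
    Wc[:s, :] = 1; Wc[:, :s] = 1
    Wc[s:, s:] = 0; np.fill_diagonal(Wc, 0)
    W = np.kron(np.eye(B, dtype=int), Wc)  # block-diagonal network
    N = B * p
    return (np.sum(W == 0) - N) / (N * (N - 1))

B, p = 4, 10
N = B * p
for s in range(1, p):
    closed_form = 1 - B * s * (2 * p - s - 1) / (N * (N - 1))
    assert abs(count_sparsity(B, p, s) - closed_form) < 1e-12
```

The two quantities agree for every value of s, and the closed form is visibly decreasing in s.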
Proposition 1. Let W be a block-diagonal matrix consisting of complete core-periphery components, and denote by W_c the submatrix representing each individual complete core-periphery component, which is given by:

$W_c = \begin{pmatrix} \mathbf{1}_s\mathbf{1}_s' - I_s & \mathbf{1}_{s\times(p-s)} \\ \mathbf{1}_{(p-s)\times s} & \mathbf{0} \end{pmatrix}$ (9)

If $\rho \in (1/\lambda_{\min}, 1/\lambda_{\max})$, then $S(W) = (I_N - \rho W)^{-1}$ is also a block-diagonal matrix, with submatrices $S(W_c)$ of the following form:

(10)

where I is an identity matrix and $\mathbf{1}\mathbf{1}' - I$ consists of all ones except for zeros on the diagonal. Detailed formulae for the coefficients in (10) can be found in the Appendix.
Based on Proposition 1, we can calculate the network impacts following Definition 4 in Section 2 and prove the following proposition in Appendix A.2.
Proposition 2. Given the size p of the complete core-periphery components, if the number of core agents s increases, leading to a decrease in network sparsity, then the average direct, indirect, and total impacts of the network all increase.
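Proposition 2 can also be checked numerically on a small network. In the sketch below (the sizes and the value of ρ are illustrative choices of ours, kept well inside the nonsingular range), all three impact measures rise monotonically in s:

```python
import numpy as np

def impacts(B, p, s, rho):
    """Average direct, indirect, and total impacts for B components of size p with s cores."""
    Wc = np.zeros((p, p))
    Wc[:s, :] = 1; Wc[:, :s] = 1
    Wc[s:, s:] = 0; np.fill_diagonal(Wc, 0)
    W = np.kron(np.eye(B), Wc)
    N = B * p
    S = np.linalg.inv(np.eye(N) - rho * W)
    direct = np.trace(S) / N
    total = S.sum() / N
    return direct, total - direct, total

B, p, rho = 2, 8, 0.05
results = [impacts(B, p, s, rho) for s in range(1, p)]
for prev, cur in zip(results, results[1:]):
    # direct, indirect, and total impacts all rise as s grows
    assert all(c > v for v, c in zip(prev, cur))
```

Intuitively, promoting a periphery agent only adds links, so every entry of S(W), and hence every impact measure, can only grow.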
We also run two sets of simulations to verify the signs of these network impacts. In the first set of simulations, we use the analytical solutions for the average direct, indirect, and total impacts from Proposition 2, derived in Appendix A.2, to compute their derivatives with respect to the number of core agents s, which is an inverse measure of network sparsity. The simulation results are provided in Panels A, B, and C of Table 1. We find that all the derivatives are positive, which aligns with our findings. As the number of core agents s increases, leading to a reduction of
Table 1. Simulation results of adding core agents in the sparse network with complete core-periphery components. Panels A to C report simulation results for the derivatives of the network impacts with respect to s (the number of core agents), where s is an inverse measure of the network sparsity. The component size p is assigned the values 10, 50, 100, 500, 1000, 5000, 10,000, 50,000, 100,000, and 200,000. The number of cores in each component, s, is chosen at the quintiles of p. The parameter ρ is set to the median of its admissible range, which keeps ρ within the consistent and nonsingular range. Note that our network matrix W is an unnormalized matrix, hence ρ is unnormalized and relatively small; if W is normalized, say, by dividing all its elements by its largest eigenvalue, then the normalized ρ is rescaled accordingly (see Corollary 1 in Appendix A.1). In Panel A, all numbers are multiplied by 10^12. In Panels B and C, all numbers are multiplied by 10^6. Panel D reports the simulated relationship between sparsity and network impacts, where p = 1000, B = 200, and N = 200,000 are fixed and ρ is the same as in Panels A to C. The formula for sparsity is given in Equation (8).
Panel A. Derivatives of direct impact with respect to s (the number of core agents). Columns report s at the five quintiles of p.

p | s: 1st quintile | 2nd quintile | 3rd quintile | 4th quintile | 5th quintile
10 | 9.3752 | 6.8753 | 4.3752 | 1.8751 | 0.6251
50 | 9.8758 | 7.3762 | 4.8761 | 2.3757 | 0.1251
100 | 9.9391 | 7.4398 | 4.9398 | 2.439 | 0.0625
500 | 9.9951 | 7.4988 | 4.9988 | 2.495 | 0.0125
1000 | 10.009 | 7.5163 | 5.0163 | 2.5088 | 0.0063
5000 | 10.075 | 7.6126 | 5.1128 | 2.5749 | 0.0013
10,000 | 10.152 | 7.7298 | 5.2306 | 2.654 | 0.0007
50,000 | 10.831 | 8.7732 | 6.2991 | 3.3824 | 0.0002
100,000 | 11.852 | 10.43 | 8.0775 | 4.6426 | 0.0001
200,000 | 14.711 | 15.805 | 14.661 | 9.8214 | 0.0002
Panel B. Derivatives of indirect impact with respect to s (the number of core agents). Columns report s at the five quintiles of p.

p | s: 1st quintile | 2nd quintile | 3rd quintile | 4th quintile | 5th quintile
10 | 3.7501 | 2.7501 | 1.7501 | 0.7500 | 0.2500
50 | 3.9504 | 2.9504 | 1.9503 | 0.9502 | 0.0500
100 | 3.9758 | 2.9758 | 1.9757 | 0.9754 | 0.0250
500 | 3.999 | 2.9991 | 1.9985 | 0.9971 | 0.0050
1000 | 4.0055 | 3.0058 | 2.0045 | 1.0018 | 0.0025
5000 | 4.0399 | 3.0413 | 2.0351 | 1.0211 | 0.0005
10,000 | 4.0813 | 3.0845 | 2.0721 | 1.0438 | 0.0003
50,000 | 4.4408 | 3.4761 | 2.4154 | 1.2569 | 0.0001
100,000 | 4.9796 | 4.1176 | 3.0119 | 1.6423 | 0.0000
200,000 | 6.4900 | 6.3002 | 5.3711 | 3.3482 | 0.0001
Panel C. Derivatives of total impact with respect to s (the number of core agents). Columns report s at the five quintiles of p.

p | s: 1st quintile | 2nd quintile | 3rd quintile | 4th quintile | 5th quintile
10 | 3.7501 | 2.7501 | 1.7501 | 0.7500 | 0.2500
50 | 3.9504 | 2.9504 | 1.9503 | 0.9502 | 0.0500
100 | 3.9758 | 2.9758 | 1.9757 | 0.9754 | 0.0250
500 | 3.999 | 2.9991 | 1.9985 | 0.9971 | 0.0050
1000 | 4.0055 | 3.0058 | 2.0045 | 1.0018 | 0.0025
5000 | 4.0399 | 3.0413 | 2.0351 | 1.0211 | 0.0005
10,000 | 4.0813 | 3.0845 | 2.0721 | 1.0438 | 0.0003
50,000 | 4.4408 | 3.4761 | 2.4154 | 1.2569 | 0.0001
100,000 | 4.9796 | 4.1176 | 3.0119 | 1.6423 | 0.0000
200,000 | 6.4900 | 6.3002 | 5.3711 | 3.3482 | 0.0001
Panel D. Relationship between sparsity and network impacts.

s | Sparsity | Direct impact | Indirect impact | Total impact
1 | 0.999990010 | 1.000000000 | 0.000005001 | 1.000005001
100 | 0.999050495 | 1.000000001 | 0.000475432 | 1.000475433
200 | 0.998200991 | 1.000000002 | 0.000900951 | 1.000900954
300 | 0.997451487 | 1.000000003 | 0.001276522 | 1.001276525
400 | 0.996801984 | 1.000000004 | 0.001602105 | 1.001602109
500 | 0.996252481 | 1.000000005 | 0.001877663 | 1.001877667
600 | 0.995802979 | 1.000000005 | 0.002103158 | 1.002103163
700 | 0.995453477 | 1.000000006 | 0.002278553 | 1.002278559
800 | 0.995203976 | 1.000000006 | 0.002403810 | 1.002403816
900 | 0.995054475 | 1.000000006 | 0.002478891 | 1.002478898
1000 | 0.995004975 | 1.000000006 | 0.002503759 | 1.002503766
sparsity, all measures of network impact increase.
In the second set of simulations, we calculate the sparsity and the network impacts and present the results in Panel D of Table 1 and Figure 2, which illustrate the negative relationship between sparsity and network impacts. The results from both sets of simulations further support Proposition 2. We note that in our simulations the sparsity reduction is relatively small and most of the total impact comes from the direct impact (Panel D of Table 1 and Figure 2). This is because our simulated networks are designed to exhibit very high sparsity, so the indirect network impacts are relatively weak. However, the derivatives of the indirect impacts dominate those of the total impacts: the derivatives of the total impacts in Panel C are almost identical to those of the indirect impacts in Panel B of Table 1. This implies that the indirect network impacts are more sensitive to sparsity reduction than the direct impacts. Taken together, these results illustrate the importance of investigating both direct and indirect impacts in order to understand the total network impacts.
4. Sparsity Reduction by Merging Components
As an alternative strategy to reduce network sparsity while preserving the network topology, we merge complete core-periphery components.
We first discuss a sparse network composed of B independent complete components, a special case of the complete core-periphery component in which the number of core agents is s = p − 1, where p is the size of the complete component. The merging process is illustrated in Figure 3. Initially, the network consists of B complete components, each of size p with p − 1 core agents. From (a) to (b), every two complete components merge into a larger complete component with 2p − 1 core agents, reducing the network’s sparsity. As a result, the number of components B is halved, while the component size p doubles. Given that the size of the network is N = Bp, the network matrix W is:

$W = \begin{pmatrix} W_c & & \\ & \ddots & \\ & & W_c \end{pmatrix}$ with $W_c = \mathbf{1}_p\mathbf{1}_p' - I_p$ (11)
The overall sparsity of the network is given by:

Sparsity $= 1 - \dfrac{p - 1}{N - 1}$ (12)

From this equation, we observe that sparsity is a decreasing function of the component size p, as $\partial\,\text{Sparsity}/\partial p = -1/(N - 1) < 0$. This confirms that increasing the component size p reduces the overall sparsity of the network: network sparsity is reduced by merging two or more small components into larger homogeneous components.

Figure 2. Sparsity and its impact on networks with complete core-periphery components.
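A closed form of 1 − (p − 1)/(N − 1) for networks of complete components can be checked directly against the sparsity values later reported in Panel D of Table 2; a quick arithmetic sketch:

```python
# Sparsity of a network of complete components at fixed N = B * p:
# 1 - (p - 1)/(N - 1), a decreasing function of p.
N = 200_000
for p, reported in [(10, 0.999955000), (1000, 0.995004975), (200_000, 0.000000000)]:
    assert abs(1 - (p - 1) / (N - 1) - reported) < 5e-10
```

At p = N the whole network is one complete component and the sparsity reaches zero, the lowest possible value.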
Next, we examine the merging process for general complete core-periphery components in which the number of core agents satisfies s < p − 1, as illustrated in Figure 4. Initially, in (a), the network consists of B complete core-periphery components, each of size 3 with only one core agent (s = 1). During the transition from (a) to (b), every two complete core-periphery components are merged into a larger complete core-periphery component, doubling the number of core agents per component (s = 2). The number of components B is again reduced by half, while the component size p doubles. New links are introduced between core and periphery agents within the new components, thereby reducing network sparsity.
Figure 3. Network view of sparsity reduction by merging complete components.
Figure 4. Sparsity reduction by merging complete core-periphery components.
Suppose the initial adjacency matrix has s core agents per component and a component size of p, so that the ratio s/p is fixed. After merging two small components into a single larger component, the number of core agents in each component doubles to 2s and the component size increases to 2p. This leaves the ratio of core agents to component size unchanged: s has a linear relationship with p. In general, the network sparsity becomes

Sparsity $= 1 - \dfrac{s(2p - s - 1)}{p(N - 1)}$ (13)

Holding the ratio s/p fixed, we can derive $\partial\,\text{Sparsity}/\partial s < 0$. This confirms that sparsity is a decreasing function of the number of core agents s. Therefore, the number of core agents s in each component may serve as an inverse measure of network sparsity. The proof of the following Proposition 3 is given in Appendix A.3.
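With the core share fixed at s/p = 1/2, as in Table 3 below, a sparsity of the form 1 − s(2p − s − 1)/[p(N − 1)] reduces to 1 − (3s − 1)/[2(N − 1)], which is strictly decreasing in s; a quick check against the values reported in Panel D of Table 3:

```python
# Sparsity under merging with the core share held at s/p = 1/2 and N fixed.
N = 200_000

def merged_sparsity(s):
    p = 2 * s  # component size implied by the fixed core share
    return 1 - s * (2 * p - s - 1) / (p * (N - 1))

assert abs(merged_sparsity(5) - 0.999965000) < 1e-9   # matches Table 3, Panel D
values = [merged_sparsity(s) for s in (5, 50, 500, 5000, 100_000)]
assert all(a > b for a, b in zip(values, values[1:]))  # decreasing in s
```

Each merger doubles s and p while N stays fixed, so repeated mergers trace out a strictly falling sparsity path.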
Proposition 3. Let W be a block-diagonal matrix composed of complete core-periphery components, where each submatrix W_c represents a complete core-periphery component of the following form:

$W_c = \begin{pmatrix} \mathbf{1}_s\mathbf{1}_s' - I_s & \mathbf{1}_{s\times(p-s)} \\ \mathbf{1}_{(p-s)\times s} & \mathbf{0} \end{pmatrix}$ (14)

(i) (Complete Component) If W is a block-diagonal matrix with complete components (s = p − 1) and $\rho \in (1/\lambda_{\min}, 1/\lambda_{\max})$, then $S(W) = (I_N - \rho W)^{-1}$ is also a block-diagonal matrix, with submatrices of the following form:

(15)

(ii) (General Complete Core-periphery Component) If W is a block-diagonal matrix with general complete core-periphery components (s < p − 1) and $\rho \in (1/\lambda_{\min}, 1/\lambda_{\max})$, then $S(W) = (I_N - \rho W)^{-1}$ is also a block-diagonal matrix, with submatrices of the following form:

(16)

where I is an identity matrix and $\mathbf{1}\mathbf{1}' - I$ consists of all ones except for zeros on the diagonal. Detailed formulae for the coefficients in (15) and (16) can be found in the Appendix.
From Proposition 3, we can calculate the network impacts based on Definition 4 in Section 2 and prove the following proposition in Appendix A.3.
Proposition 4. Given the network size N, if a merger of complete core-periphery components leads to a reduction of network sparsity, then the average direct, indirect, and total impacts of the network all increase.
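Proposition 4 can also be illustrated on a toy example: merging two complete core-periphery components into one, at fixed N, raises all three impact measures. The sketch below uses illustrative sizes and a ρ of our choosing, kept inside the nonsingular range of both networks:

```python
import numpy as np

def impacts(B, p, s, rho):
    """Average direct, indirect, and total impacts for B components of size p with s cores."""
    Wc = np.zeros((p, p))
    Wc[:s, :] = 1; Wc[:, :s] = 1
    Wc[s:, s:] = 0; np.fill_diagonal(Wc, 0)
    W = np.kron(np.eye(B), Wc)
    N = B * p
    S = np.linalg.inv(np.eye(N) - rho * W)
    direct = np.trace(S) / N
    total = S.sum() / N
    return direct, total - direct, total

rho = 0.1
before = impacts(B=2, p=4, s=2, rho=rho)  # two components, N = 8
after = impacts(B=1, p=8, s=4, rho=rho)   # merged into one component, N = 8
assert all(a > b for b, a in zip(before, after))  # every impact measure increases
```

Up to a relabeling of agents, the merged network contains every link of the original one plus the new cross-component links, which is why each measure can only rise.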
We also run simulations to verify the signs of all the network impacts; the results are reported in Table 2 and Table 3. They confirm that the merger of complete core-periphery components
Table 2. Simulation results of merging complete components (s = p − 1). Panels A to C report simulation results for the derivatives of the network impacts with respect to s (the number of core agents), where s is an inverse measure of the network sparsity. The component size p is assigned the values 10, 50, 100, 500, 1000, 5000, 10,000, 50,000, 100,000, and 200,000, respectively. The number of core agents is s = p − 1. The parameter ρ is chosen at the three quartiles of its admissible range, which keeps ρ within the consistent and nonsingular range. Note that our network matrix W is an unnormalized matrix, hence ρ is unnormalized and relatively small; if W is normalized, say, by dividing all its elements by its largest eigenvalue, then the normalized ρ is rescaled accordingly (see Corollary 1 in Appendix A.1). In Panel A, all numbers are multiplied by 10^12. In Panels B and C, all numbers are multiplied by 10^6. Panel D reports the simulated relationship between sparsity and network impacts, where N = 200,000 is fixed and ρ is the median of its admissible range. The formula for sparsity is given in Equation (12).
Panel A. Derivatives of direct impact with respect to s (the number of core agents) (N = 200,000). Columns report ρ at its three quartiles.

p | ρ: 1st quartile | 2nd quartile | 3rd quartile
10 | 1.5625 | 6.2503 | 14.0640
50 | 1.5627 | 6.2516 | 14.0680
100 | 1.5629 | 6.2531 | 14.0730
500 | 1.5645 | 6.2657 | 14.1150
1000 | 1.5664 | 6.2814 | 14.1690
5000 | 1.5822 | 6.4092 | 14.6050
10,000 | 1.6023 | 6.5746 | 15.1800
50,000 | 1.7778 | 8.1633 | 21.3020
100,000 | 2.0408 | 11.1110 | 36.0000
200,000 | 2.7778 | 25.0000 | 225.0000
Panel B. Derivatives of indirect impact with respect to s (the number of core agents) (N = 200,000). Columns report ρ at its three quartiles.

p | ρ: 1st quartile | 2nd quartile | 3rd quartile
10 | 1.2500 | 2.5001 | 3.7503
50 | 1.2502 | 2.5006 | 3.7514
100 | 1.2503 | 2.5012 | 3.7528
500 | 1.2516 | 2.5063 | 3.7641
1000 | 1.2531 | 2.5125 | 3.7783
5000 | 1.2658 | 2.5637 | 3.8947
10,000 | 1.2818 | 2.6298 | 4.0479
50,000 | 1.4222 | 3.2653 | 5.6804
100,000 | 1.6327 | 4.4444 | 9.6000
200,000 | 2.2222 | 10.0000 | 60.0000
Panel C. Derivatives of total impact with respect to s (the number of core agents) (N = 200,000). Columns report ρ at its three quartiles.

p | ρ: 1st quartile | 2nd quartile | 3rd quartile
10 | 1.2500 | 2.5001 | 3.7503
50 | 1.2502 | 2.5006 | 3.7514
100 | 1.2503 | 2.5013 | 3.7528
500 | 1.2516 | 2.5063 | 3.7641
1000 | 1.2531 | 2.5125 | 3.7783
5000 | 1.2658 | 2.5637 | 3.8947
10,000 | 1.2818 | 2.6298 | 4.0479
50,000 | 1.4222 | 3.2653 | 5.6805
100,000 | 1.6327 | 4.4445 | 9.6000
200,000 | 2.2222 | 10.0000 | 60.0000
Panel D. Relationship between sparsity and network impacts.

p | B | Sparsity | Direct impact | Indirect impact | Total impact
10 | 20,000 | 0.999955000 | 1.000000000 | 0.000022501 | 1.000022501
50 | 4,000 | 0.999754999 | 1.000000000 | 0.000122515 | 1.000122516
100 | 2,000 | 0.999504998 | 1.000000001 | 0.000247562 | 1.000247563
500 | 400 | 0.997504988 | 1.000000003 | 0.001249061 | 1.001249064
1000 | 200 | 0.995004975 | 1.000000006 | 0.002503759 | 1.002503766
5000 | 40 | 0.975004875 | 1.000000032 | 0.012655697 | 1.012655728
10,000 | 20 | 0.950004750 | 1.000000064 | 0.025638463 | 1.025638527
50,000 | 4 | 0.750003750 | 1.000000357 | 0.142854337 | 1.142854694
100,000 | 2 | 0.500002500 | 1.000000833 | 0.333330278 | 1.333331111
200,000 | 1 | 0.000000000 | 1.000002500 | 0.999997500 | 2.000000000
Table 3. Simulation results of merging general complete core-periphery components (s = p/2). Panels A to C report simulation results for the derivatives of the network impacts with respect to s (the number of core agents), where s is an inverse measure of the network sparsity. The component size p is assigned the values 10, 50, 100, 500, 1000, 5000, 10,000, 50,000, 100,000, and 200,000, respectively. At each step two components are merged, and the core share s = p/2 is held fixed. The parameter ρ is chosen at the three quartiles of its admissible range, which keeps ρ within the consistent and nonsingular range. Note that our network matrix W is an unnormalized matrix, hence ρ is unnormalized and relatively small; if W is normalized, say, by dividing all its elements by its largest eigenvalue, then the normalized ρ is rescaled accordingly (see Corollary 1 in Appendix A.1). In Panel A, all numbers are multiplied by 10^12. In Panels B and C, all numbers are multiplied by 10^6. Panel D reports the simulated relationship between sparsity and network impacts, where N = 200,000 is fixed and ρ is the median of its admissible range. The formula for sparsity is given in Equation (13).
Panel A. Derivatives of direct impact with respect to s (the number of core agents) (N = 200,000). Columns report ρ at its three quartiles.

p | B | s | ρ: 1st quartile | 2nd quartile | 3rd quartile
10 | 20,000 | 5 | 1.4063 | 5.6252 | 12.6570
50 | 4,000 | 25 | 1.5314 | 6.1262 | 13.7852
100 | 2,000 | 50 | 1.5472 | 6.1899 | 13.9298
500 | 400 | 250 | 1.5609 | 6.2492 | 14.0740
1000 | 200 | 500 | 1.5639 | 6.2673 | 14.1278
5000 | 40 | 2500 | 1.5769 | 6.3674 | 14.4627
10,000 | 20 | 5000 | 1.5920 | 6.4898 | 14.8830
50,000 | 4 | 25,000 | 1.7188 | 7.5899 | 18.9354
100,000 | 2 | 50,000 | 1.8975 | 9.3801 | 26.7407
200,000 | 1 | 100,000 | 2.3451 | 15.5992 | 72.4054
Panel B. Derivatives of indirect impact with respect to s (the number of core agents) (N = 200,000). Columns report ρ at its three quartiles.

p | B | s | ρ: 1st quartile | 2nd quartile | 3rd quartile
10 | 20,000 | 5 | 1.1250 | 2.2501 | 3.3752
50 | 4,000 | 25 | 1.2251 | 2.4504 | 3.6759
100 | 2,000 | 50 | 1.2377 | 2.4758 | 3.7142
500 | 400 | 250 | 1.2485 | 2.4989 | 3.7513
1000 | 200 | 500 | 1.2507 | 2.5053 | 3.7639
5000 | 40 | 2500 | 1.2596 | 2.5392 | 3.8391
10,000 | 20 | 5000 | 1.2697 | 2.5803 | 3.9336
50,000 | 4 | 25,000 | 1.3555 | 2.9579 | 4.8740
100,000 | 2 | 50,000 | 1.4790 | 3.5967 | 6.7877
200,000 | 1 | 100,000 | 1.7983 | 5.9504 | 18.9803
Panel C. Derivatives of total impact with respect to s (the number of core agents) (N = 200,000). Columns report ρ at its three quartiles.

p | B | s | ρ: 1st quartile | 2nd quartile | 3rd quartile
10 | 20,000 | 5 | 1.1250 | 2.2501 | 3.3752
50 | 4,000 | 25 | 1.2251 | 2.4504 | 3.6759
100 | 2,000 | 50 | 1.2377 | 2.4758 | 3.7142
500 | 400 | 250 | 1.2485 | 2.4989 | 3.7513
1000 | 200 | 500 | 1.2507 | 2.5053 | 3.7639
5000 | 40 | 2500 | 1.2596 | 2.5392 | 3.8391
10,000 | 20 | 5000 | 1.2697 | 2.5803 | 3.9336
50,000 | 4 | 25,000 | 1.3555 | 2.9579 | 4.8740
100,000 | 2 | 50,000 | 1.4790 | 3.5967 | 6.7877
200,000 | 1 | 100,000 | 1.7983 | 5.9504 | 18.9803
Panel D. Relationship between sparsity and network impacts.

p | B | s | Sparsity | Direct impact | Indirect impact | Total impact
10 | 20,000 | 5 | 0.999965000 | 1.000000000 | 0.000017500 | 1.000017500
50 | 4,000 | 25 | 0.999814999 | 1.000000000 | 0.000092510 | 1.000092510
100 | 2,000 | 50 | 0.999627498 | 1.000000000 | 0.000186289 | 1.000186289
500 | 400 | 250 | 0.998127491 | 1.000000002 | 0.000937227 | 1.000937229
1000 | 200 | 500 | 0.996252481 | 1.000000005 | 0.001877663 | 1.001877667
5000 | 40 | 2500 | 0.981252406 | 1.000000024 | 0.009472385 | 1.009472409
10,000 | 20 | 5000 | 0.962502313 | 1.000000048 | 0.019147335 | 1.019147383
50,000 | 4 | 25,000 | 0.812501563 | 1.000000256 | 0.104601218 | 1.104601475
100,000 | 2 | 50,000 | 0.625000625 | 1.000000568 | 0.236362414 | 1.236362982
200,000 | 1 | 100,000 | 0.249998750 | 1.000001477 | 0.636363399 | 1.636364876
reduces network sparsity and increases all network impacts. Based on Panel D of both Table 2 and Table 3, Figure 5 and Figure 6 further depict the negative relationship between sparsity and network impacts, which supports the results of Proposition 4. Similar to the findings in the previous section, we observe that whilst the derivatives of the indirect impacts contribute significantly to those of the total impacts (see Panels B and C in Table 2 and Table 3, respectively), the indirect impacts themselves are considerably weaker in their overall contributions to the level of the total impacts (see Panel D in Table 2 and Table 3, respectively). This highlights the importance of analyzing both direct and indirect impacts to gain a comprehensive understanding of the total network impacts.
5. Conclusions
Complete core-periphery components are a commonly investigated structure in the study of financial networks. Their prominence stems from their ability to effectively represent the hierarchical and interconnected nature of many real-world financial systems. Researchers often analyze these structures to better understand systemic risks, network resilience, and the distribution of influence across agents in financial networks (e.g., see Acemoglu, Ozdaglar, and Tahbaz-Salehi [2]).
This paper explores the theoretical relationship between sparsity and its impacts on financial networks with a core-periphery structure. It focuses on networks with complete core-periphery components, investigating two strategies to reduce network sparsity: increasing the number of core agents within the components and merging multiple components into larger structures. Within this analytical framework, this study derives closed-form solutions to quantify the relationship between sparsity and its influence on financial networks. Supported by numerical simulations, the findings reveal that as sparsity decreases, the network’s impacts intensify.
Our research focuses on a static linear network model based on the spatial autoregressive framework. However, this approach has its limitations. For example, it may fail to capture the dynamic, nonlinear, and heterogeneous effects present in real-world networks. A promising direction for future research is to extend our study by incorporating a fully dynamic nonlinear general equilibrium model. This model would integrate heterogeneous sectors within a financial network, similar to the framework proposed by vom Lehn and Winberry [20]. Such an extension would enable us to analyze the impacts of the complex, dynamic evolution of connections in real financial networks.
Figure 5. Sparsity and its impact on networks with complete components.
Figure 6. Sparsity and its impact on networks with general complete core-periphery components.
Another limitation of our current research is the assumption of network stability. Namely, the matrix
in Equation (3) remains nonsingular even when the network sparsity of W changes. Whether changes in network sparsity might affect its stability remains an open question, warranting further investigation in future studies.
Acknowledgements
We thank two anonymous referees, Zeyun Bei, John Chu, Andrew Grant, Michael Hanke (discussant), Qianqian Huang, Yunying Huang, Miguel Oliveira, Stephen Teng Sun, Gertjan Verdickt (discussant), Junbo Wang, Yinggang Zhou, and conference and seminar participants at the 2024 Sydney Banking and Financial Stability Conference (SBFC), the 37th Australasian Finance and Banking Conference (AFBC, 2024), City University of Hong Kong, and Xiamen University for their helpful comments. We are responsible for any remaining errors.
Appendix
A1. Proof for Section 2
Corollary 1.
is the range of the consistency and nonsingularity condition for
, regardless of variations in the size of the core-periphery component or changes in the number of core agents, where
is the total number of agents in network
.
Proof:
From Corollary 6.1.5 in Horn and Johnson [21], all the eigenvalues
should be no greater than the minimum of the maximum row sum and the maximum column sum in
, i.e.
(1’)
As
only has entries 0 or 1, the maximum eigenvalue of
is no greater than
. Thus
could serve as the range of the consistency and nonsingularity condition for
, regardless of variations in the size of the core-periphery component or changes in the number of core agents.
From Section 1.2 in Horn and Johnson [21], the trace of
is equal to the sum of the eigenvalues of
, i.e.
(2’)
From (2’), if the maximum eigenvalue of
is larger than 0, the minimum eigenvalue of
would be less than 0.
From Theorem 1.12 in Magnus and Neudecker [22], there exists an orthogonal
matrix
and a diagonal matrix
whose diagonal elements are eigenvalues of
, such that
(3’)
From
, we have
. If all the eigenvalues of
are zeros, then
must be the zero matrix. Thus, at least one eigenvalue is nonzero whenever
is not a zero matrix. Therefore, the lower bound of
is negative and the upper bound of
is positive.
Since we only consider
, we assume
in all our analyses. Note that since our adjacency matrix
is an unnormalized
matrix, the
is unnormalized and relatively small. If we normalize
, say, by dividing all its elements by
, then the normalized
. This result applies to our simulations conducted in Table 1 to Table 3.
QED
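Both matrix facts used in this proof can be checked numerically on a small example. The sketch below builds one complete core-periphery component (illustrative sizes p = 8, s = 3) and verifies the eigenvalue bound and the trace identity with numpy:

```python
import numpy as np

# One complete core-periphery component with illustrative sizes:
# s core agents (fully interconnected, linked to every periphery agent)
# and p - s periphery agents.
p, s = 8, 3
W = np.zeros((p, p))
W[:s, :s] = 1.0            # core block: all ones ...
np.fill_diagonal(W, 0.0)   # ... except the diagonal
W[:s, s:] = 1.0            # core-periphery links
W[s:, :s] = 1.0            # periphery-core links

eigs = np.linalg.eigvalsh(W)   # W is symmetric, so eigenvalues are real

# Corollary 6.1.5 of Horn and Johnson: every eigenvalue is bounded in
# modulus by min(max row sum, max column sum).
bound = min(W.sum(axis=1).max(), W.sum(axis=0).max())
assert np.abs(eigs).max() <= bound + 1e-12

# trace(W) equals the sum of the eigenvalues; with a zero diagonal the
# eigenvalues sum to zero, so a positive maximum forces a negative minimum.
assert abs(eigs.sum() - np.trace(W)) < 1e-9
assert eigs.min() < 0 < eigs.max()
print("eigenvalue bound and trace identity hold")
```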
A2. Proofs for Section 3
Proof for Proposition 1:
(4’)
The adjacency matrix
of each core-periphery component
is as follows:
In this case,
is a
matrix with all ones except the diagonal terms.
is a
zero matrix.
is a
matrix with all ones.
is the transpose matrix of
. Thus we have
(5’)
(6’)
where
I is an identity matrix.
QED
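The block form of W and the structure of the resulting inverse can be confirmed numerically. The sketch below (illustrative p = 8, s = 3, and an unnormalized delta = 0.1 chosen so that I - delta*W is nonsingular) verifies that the component submatrix of (I - delta*W)^{-1} has exactly five distinct entry values, the parameters denoted a to e in the proof of Proposition 2:

```python
import numpy as np

# Illustrative sizes; delta * (p - 1) < 1 keeps I - delta*W nonsingular.
p, s, delta = 8, 3, 0.1

# Adjacency of one complete core-periphery component.
W = np.zeros((p, p))
W[:s, :s] = 1.0            # core block: all ones ...
np.fill_diagonal(W, 0.0)   # ... except the diagonal
W[:s, s:] = 1.0            # core-periphery links
W[s:, :s] = 1.0            # periphery-core links

M = np.linalg.inv(np.eye(p) - delta * W)

# The component submatrix of the inverse has five distinct entry values.
a = M[0, 0]        # core diagonal
b = M[0, 1]        # core-core off-diagonal
c = M[0, s]        # core-periphery
d = M[s, s]        # periphery diagonal
e = M[s, s + 1]    # periphery-periphery off-diagonal
for i in range(p):
    for j in range(p):
        expected = (a if i == j and i < s else
                    d if i == j else
                    b if max(i, j) < s else
                    e if min(i, j) >= s else c)
        assert abs(M[i, j] - expected) < 1e-12
print("five-parameter structure confirmed:", a, b, c, d, e)
```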
Proof for Proposition 2
Proof:
First, we show that parameters a to e in the component submatrix
increase as the network sparsity decreases when
. As
, each element of
is non-negative, as the following formula shows:
(7’)
As a result, all parameters a to e are non-negative too.
(8')
where
, since
and
.
Therefore,
.
We also have:
(9')
where
As
, we have
. We also have
as
.
Hence
Thus
.
We have
(10')
where
.
As
, we have
.
Thus
and
.
Therefore, if
is fixed and
, then an increase in
will lead to a decrease in the network sparsity and an increase in all parameters a to e.
Second, we derive the formulae of network impacts with normalized
according to definition 4 in Section 2 and take their derivative with respect to
.
(11')
We take the derivative of average direct impact with respect to
:
(12')
where
Let
, then we have
.
Substituting
by
in
, we have:
For the first bracket, we have
, and hence
, therefore:
(13’)
For the second bracket inside
above, we have
, as
(14’)
Thus we have
and
.
Next, we have the average indirect impact with normalized
:
(15')
We take the derivative of average indirect impact with respect to
:
(16')
where
Let
again, then we have
.
Substituting
by
in
, we have:
For the first parenthesis:
as
(17’)
For the fourth parenthesis related to
, we have:
, (18’)
since
and
.
Thus, we have
and
. (19’)
Finally, we have the average total impact with normalized
:
(20')
Since
, we know that
if
. Thus we have
.
QED
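The monotonicity just proved can be illustrated numerically. Promoting periphery agents to the core only adds links (the entries of W weakly increase elementwise), so every term of the Neumann series of (I - delta*W)^{-1}, and hence every impact measure, increases. The sketch below uses an unnormalized W with illustrative p = 10 and delta = 0.05, and assumes the standard spatial-autoregressive impact measures (average diagonal, average off-diagonal sum, and their total) as stand-ins for Definition 4:

```python
import numpy as np

def component_adjacency(p: int, s: int) -> np.ndarray:
    """Unnormalized 0/1 adjacency of one complete core-periphery component."""
    W = np.zeros((p, p))
    W[:s, :s] = 1.0
    np.fill_diagonal(W, 0.0)
    W[:s, s:] = 1.0
    W[s:, :s] = 1.0
    return W

def impacts(p: int, s: int, delta: float):
    """Average direct, indirect, and total impacts of one component."""
    M = np.linalg.inv(np.eye(p) - delta * component_adjacency(p, s))
    direct = np.trace(M) / p
    total = M.sum() / p
    return direct, total - direct, total

p, delta = 10, 0.05   # illustrative; delta * (p - 1) < 1 keeps the network stable
lo, hi = impacts(p, 2, delta), impacts(p, 5, delta)

# Promoting periphery agents to the core (s: 2 -> 5) lowers sparsity and
# raises the direct, indirect, and total impacts.
assert all(h > l for l, h in zip(lo, hi))
print("s=2:", lo)
print("s=5:", hi)
```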
A3. Proofs for Section 4
Proof for Proposition 3:
(i) (Complete Component) If
is a block-diagonal matrix with complete components
, the adjacency matrix
of a complete component
is as follows:
(21’)
In this case,
is a
matrix with all ones except the diagonal terms.
is a
zero matrix.
is a
matrix with all ones.
is the transpose matrix of
.
Thus, we have
(22’)
Since ,
is also a block-diagonal matrix with a submatrix for each component
with the form as follows:
(23’)
where
and
. (24’)
It is straightforward to prove that
and
.
QED
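For a complete component, the submatrix in (23') has only two distinct entries, and these can be cross-checked against a direct numerical inverse. The closed form below is our own reconstruction, not necessarily the paper's notation: the adjacency of a complete component of size m is J - I (J the all-ones matrix), so (I - delta*(J - I))^{-1} = alpha*I + beta*J by a Sherman-Morrison-type calculation.

```python
import numpy as np

m, delta = 6, 0.1   # illustrative; delta < 1/(m - 1) keeps the inverse well-defined

# Complete component of size m: adjacency J - I (J the all-ones matrix).
W = np.ones((m, m)) - np.eye(m)
M = np.linalg.inv(np.eye(m) - delta * W)

# Two-parameter closed form (our reconstruction): M = alpha*I + beta*J with
# alpha = 1/(1 + delta) and beta = delta / ((1 + delta)(1 + delta - delta*m)).
alpha = 1.0 / (1.0 + delta)
beta = delta / ((1.0 + delta) * (1.0 + delta - delta * m))
closed = alpha * np.eye(m) + beta * np.ones((m, m))
assert np.allclose(M, closed)

# Sanity check: every row of M sums to 1 / (1 - delta*(m - 1)).
assert np.allclose(M.sum(axis=1), 1.0 / (1.0 - delta * (m - 1)))
print("diagonal:", alpha + beta, "off-diagonal:", beta)
```

The diagonal entry alpha + beta and off-diagonal entry beta correspond to the two parameters of the component submatrix in (23') and (24').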
(ii) (General Complete Core-Periphery Component) Similarly, if
is a block-diagonal matrix with general complete core-periphery components
, suppose before merging, the number of cores in a component is
and the size of a component is
. Then after merging, the number of cores in each component is
, where
denotes the number of small components merged into a larger component of size
. That is,
where
is fixed. Thus,
has a form of
(25’)
Since ,
is also a block-diagonal matrix with a submatrix of
with the form as follows:
(26’)
where
QED
Proof for Proposition 4:
(i) (Complete Component) First, we prove that for the network with complete components
, the diagonal/nondiagonal element of
increases as the network sparsity decreases when the size of component
increases.
For the network with complete components
, recall
. We take the first derivative of
with respect to
to obtain:
(27’)
Thus if
, the diagonal element of
increases as the network sparsity decreases due to the increase of the size of each component
.
Recall
. We take the first derivative of
with respect to
to obtain:
(28')
Thus if
, then the non-diagonal element of
increases as the network sparsity decreases due to the increase of the size of each component
.
For the network with complete components
, the average direct impact with normalized
becomes
(29’)
From equation (27’),
increases as
enlarges if
. Thus we conclude that average direct impact increases as the network sparsity decreases due to the increase of the size of each component
.
We have the average indirect impact with normalized
as follows:
(30’)
Thus the derivative of average indirect impact with respect to
is
(31’)
We conclude that average indirect impact increases as the network sparsity decreases due to the increase of the size of each component
if
.
Finally, we have the average total impact with normalized
:
(32’)
Thus the derivative of the average total impact with respect to
is
(33’)
We conclude that average total impact increases as the network sparsity decreases due to the increase of the size of each component
if
.
QED
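The comparison in part (i) can be illustrated numerically: merging two complete components of size m into one of size 2m only adds links, so by elementwise monotonicity of the Neumann series every impact measure rises. A sketch with an unnormalized W and illustrative m = 4, delta = 0.05:

```python
import numpy as np

def avg_impacts(W: np.ndarray, delta: float):
    """Average direct, indirect, and total impacts for adjacency W."""
    n = W.shape[0]
    M = np.linalg.inv(np.eye(n) - delta * W)
    direct = np.trace(M) / n
    total = M.sum() / n
    return direct, total - direct, total

m, delta = 4, 0.05   # illustrative; delta * (2m - 1) < 1 keeps both networks stable
K = lambda n: np.ones((n, n)) - np.eye(n)   # complete component of size n

# Before: two disjoint complete components of size m (block-diagonal W).
before = np.zeros((2 * m, 2 * m))
before[:m, :m] = K(m)
before[m:, m:] = K(m)

# After: the two components merged into one complete component of size 2m.
after = K(2 * m)

# The merge only adds links, so every impact measure rises.
assert all(a > b for b, a in zip(avg_impacts(before, delta),
                                 avg_impacts(after, delta)))
print("before:", avg_impacts(before, delta))
print("after: ", avg_impacts(after, delta))
```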
(ii) (General Complete Core-Periphery Component) Similarly, for the network with general complete core-periphery components
, we first show that the parameters a to e of the component submatrix
increase as the network sparsity decreases when
.
As
, (34’)
Thus each element of
is non-negative. As
, we have:
(35’)
(36’)
(37’)
Therefore, if
is fixed and
, then parameters a to e increase as the network sparsity decreases when
.
We then compute the formulae of the network impacts with normalized
and derive their derivatives with respect to
.
(38’)
We take the derivative of average direct impact with respect to
:
(39’)
since
. Thus
.
Next, we have the average indirect impact with normalized
:
(40’)
We take the derivative of average indirect impact with respect to
:
(41’)
where
Let
and replace
by
in
to obtain:
(42’)
The first three terms of
are all positive. For the third parenthesis, we have:
as
and
(43’)
For the fourth parenthesis, we have:
as
(44’)
For the last parenthesis, we have:
(45')
as
due to
.
Thus we have
and
. (46’)
Finally, we have the average total impact with normalized
:
(47’)
We take the derivative of average total impact with respect to
:
(48’)
as
. Thus
.
QED
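Part (ii) admits the same numerical illustration. Ordering agents as [all cores, all peripheries] makes the merger of k core-periphery components a pure addition of links, so all impact measures again increase by elementwise monotonicity. A sketch with an unnormalized W and illustrative p = 6, s = 2, k = 2, delta = 0.05:

```python
import numpy as np

def cp_adjacency(p: int, s: int) -> np.ndarray:
    """0/1 adjacency of one complete core-periphery component (s cores out of p)."""
    W = np.zeros((p, p))
    W[:s, :s] = 1.0
    np.fill_diagonal(W, 0.0)
    W[:s, s:] = 1.0
    W[s:, :s] = 1.0
    return W

def avg_impacts(W: np.ndarray, delta: float):
    """Average direct, indirect, and total impacts for adjacency W."""
    n = W.shape[0]
    M = np.linalg.inv(np.eye(n) - delta * W)
    direct = np.trace(M) / n
    total = M.sum() / n
    return direct, total - direct, total

# k components of size p with s cores each, merged into one component of
# size k*p with k*s cores (illustrative values, delta * (k*p - 1) < 1).
p, s, k, delta = 6, 2, 2, 0.05
n, ks = k * p, k * s

# Order agents as [all cores, all peripheries] so merging only adds links.
before = np.zeros((n, n))
for i in range(k):
    cores = slice(i * s, (i + 1) * s)
    periph = slice(ks + i * (p - s), ks + (i + 1) * (p - s))
    before[cores, cores] = 1.0
    before[cores, periph] = 1.0
    before[periph, cores] = 1.0
np.fill_diagonal(before, 0.0)

after = cp_adjacency(n, ks)   # the merged component

# Every impact measure rises after the merge.
assert all(a > b for b, a in zip(avg_impacts(before, delta),
                                 avg_impacts(after, delta)))
print("before:", avg_impacts(before, delta))
print("after: ", avg_impacts(after, delta))
```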