Sparsity and Its Impact on Financial Network with Complete Core-Periphery Structure

Abstract

This paper theoretically investigates network sparsity and its impact. We focus on a specific network topology, namely the network with core-periphery structures, a widely studied financial network structure. It is crucial to understand how changes in sparsity influence network impact. For example, reducing the sparsity of a banking network can offer significant advantages. Strategically upgrading a local bank branch to a regional headquarters improves the connectedness among regional branches. Similarly, reducing the overall sparsity of a banking network by strategically merging two financial networks into one can create substantial potential impacts on banking operations. We derive closed-form solutions that quantify the links between sparsity and its impact on financial networks with complete core-periphery structures. We prove that reducing sparsity through either of two sparsity reduction strategies increases the impacts on networks with complete core-periphery components. We further validate these impact measures through simulations.

Citation: Jie, E.Y. and Ma, Y. (2025) Sparsity and Its Impact on Financial Network with Complete Core-Periphery Structure. Journal of Mathematical Finance, 15, 244-276. doi: 10.4236/jmf.2025.152011.

1. Introduction


A reduction of network sparsity, or in other words an increase in the interconnectedness of financial institutions, underscores the critical role of financial network architecture in financial market integration. For example, reducing the sparsity of a banking network can offer significant advantages. Upgrading a local bank branch to a regional headquarters improves the connectedness among regional branches. Similarly, reducing the overall sparsity of a banking network by strategically merging two financial networks into one can create substantial potential impacts on banking operations. These financial strategies highlight the importance of examining changes in interconnectedness and their broader impacts. Yet the relationship between changes in financial network sparsity and their impact remains poorly understood, presenting a significant gap in current research. To the best of our knowledge, this study is among the first to explore theoretically the impact of financial networks with complete core-periphery structures.

Pioneering studies have examined financial contagion and risk transmission through a given financial network structure (Allen and Gale [1], Acemoglu, Ozdaglar, and Tahbaz-Salehi [2], Glasserman and Young [3]). Our paper fills this gap by investigating how a change in network sparsity influences the relationship between an explanatory variable for one agent and the dependent variable of all other agents in financial networks with core-periphery structures.

We derive closed-form solutions that quantify links between sparsity and its impact on financial networks with complete core-periphery components. Simulation results are also provided to validate our analytical solutions. Jie and Ma [4] further extend our analysis to investigate financial networks with the incomplete and random core-periphery structures.

Core-periphery networks, which consist of a fully connected core and a sparse periphery, have been widely recognized in the financial sector (Di Maggio, Kermani, and Song [5], Li and Schürhoff [6]). We focus on two strategies to reduce network sparsity whilst keeping the core-periphery topology unchanged: 1) increasing the number of core agents, and 2) merging core-periphery components. The first strategy is to promote periphery agents to core agents, who then connect with each other and also with the remaining periphery agents. For example, JPMorgan upgraded its periphery branch in Washington D.C. in 2022 to a core branch as a regional headquarters, which strengthened its connections to the remaining periphery branches of the banking network of Washington D.C. and the Greater Washington region.

The second sparsity reduction strategy is to merge two or more components into a larger component whilst the core-periphery topology remains unchanged. This can be achieved by adding links between core and periphery agents across components. Finance examples include bank mergers and acquisitions that combine two financial sub-networks into one (see, e.g., Levine, Lin, and Wang [7]). We prove that sparsity reduction by both strategies increases the network impact. In real-world applications, these two sparsity reduction strategies may be combined in various ways to achieve sparsity reduction and greater network impact.

The remainder of the paper is organized as follows. Section 2 introduces basic definitions and the model setup. Section 3 and Section 4 examine, respectively, the two alternative sparsity reduction strategies for financial networks with complete core-periphery components. Finally, Section 5 concludes. Proofs are provided in the Appendix.

2. Definition and Model Setup

Based on Jackson [8], the definition of a network is given as follows:

Definition 1 (Network) A network (N, W) consists of a set of agents N = {1, 2, ..., N} and an N × N network adjacency matrix W, where each element w_ij represents the relation between agents i and j.

In this paper, we focus on unweighted, symmetric networks where w_ij = w_ji = 1 if a link exists between i and j, and w_ij = w_ji = 0 otherwise. This means the network adjacency matrix W is a symmetric matrix with binary elements. Its diagonal entries are set to zero, following conventional definitions (LeSage and Pace [9]).

Following Diestel [10] (see p. 164), we define network sparsity in terms of the edge density of the network matrix W.

Definition 2 (Network Sparsity) Given the initial structure of W, network sparsity is defined as the proportion of zero entries in W, excluding the diagonal elements. In other words, sparsity refers to the degree of looseness in the connections within a network. It serves as an inverse measure of network interconnectedness.

According to this definition, an increase in links among agents reduces the number of zeros in W, thereby decreasing network sparsity. Fixing the initial structure of W is essential because networks with the same sparsity level (i.e., the same percentage of zeros in non-diagonal elements) may exhibit different network impacts if their initial structures differ. Therefore, we need to fix the initial structure of W and the network topology to examine the impact of network sparsity.

In this paper, we focus on the network topology of sparse networks with core-periphery components, a widely studied financial network structure. Empirical research has demonstrated that financial networks often exhibit a core-periphery structure (e.g., Bech and Atalay [11], Di Maggio, Kermani, and Song [5], Hollifield, Neklyudov, and Spatt [12], Li and Schürhoff [6], Craig and Ma [13]). We follow Elliott, Golub, and Jackson [14] and Craig and Ma [13] to define the core-periphery structure.

Definition 3 (Core-periphery Component) A component (N_c, W_c) of size p has a core-periphery structure if it has a set of core agents S_c of size s and a set of periphery agents of size p − s. The corresponding p × p adjacency matrix W_c can be arranged into the following structure:

$$ W_c = \begin{pmatrix} CC & CP \\ CP' & PP \end{pmatrix} = \begin{pmatrix} \mathbf{1}^0 & R \\ R' & \mathbf{0} \end{pmatrix} \qquad (1) $$

The block CC describes the interconnections among core agents: CC = 1⁰ is an s × s matrix of all ones except that the diagonal terms are zeros. The block PP is a zero matrix (PP = 0) of dimension (p − s) × (p − s), indicating sparsity (no interaction) among periphery agents. The block CP = R has size s × (p − s), and the transposed block CP' = R' by the symmetry of W. R is both row-regular and column-regular. Row-regularity means that each row of R contains at least one entry equal to 1, indicating that each core agent is connected to at least one periphery agent; column-regularity means that each column contains at least one entry equal to 1, indicating that each periphery agent is connected to at least one core agent.

In other words, a core-periphery component consists of two tiers: a fully connected core, where all core agents are directly linked, and a sparse periphery, where periphery agents are not directly connected to each other. Additionally, each core agent is linked to at least one periphery agent, and vice versa.
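As a concrete illustration, the adjacency matrix of Definition 3 can be assembled block by block (a minimal numpy sketch; the helper name `core_periphery` is ours, not from the paper):

```python
import numpy as np

def core_periphery(p, s, R=None):
    """Adjacency matrix of a core-periphery component (Definition 3).

    CC is an s x s block of ones with a zero diagonal, PP is a zero block,
    and R is the s x (p - s) core-periphery block (all ones by default,
    i.e. the complete case).
    """
    if R is None:
        R = np.ones((s, p - s))
    assert R.any(axis=1).all(), "R must be row-regular"
    assert R.any(axis=0).all(), "R must be column-regular"
    CC = np.ones((s, s)) - np.eye(s)   # fully connected core
    PP = np.zeros((p - s, p - s))      # no periphery-periphery links
    return np.block([[CC, R], [R.T, PP]])

Wc = core_periphery(6, 2)              # p = 6 agents, s = 2 core agents
assert np.allclose(Wc, Wc.T)           # symmetric, binary, zero diagonal
```

Passing a custom R containing a zero entry would produce the incomplete case discussed later.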

A complete core-periphery component is one with CP = R = 1, i.e., all entries of R are ones. In other words, every core agent in a complete core-periphery component is connected to all periphery agents. The complete core-periphery topology has been widely studied in the existing finance literature and has a rich family of varieties. Ranked by sparsity, the star component has the highest sparsity and the complete component the lowest. The star component represents the highest-sparsity case of a core-periphery structure, where the block CC = 1⁰ = (0) is a 1 × 1 matrix with a zero entry. This implies that there is only one core agent connected to multiple periphery agents, while the periphery agents have no direct links among themselves. Star networks are a widely studied structure in network analysis. For example, Cerdeiro, Dziubinski and Goyal [15] examine the investment strategy in a cybersecurity network with a star component.

The complete component, on the other hand, represents the lowest-sparsity case of a core-periphery structure, where the block PP = 0 = (0) is a 1 × 1 matrix with a zero entry. In this structure, there is only one periphery agent, with all other agents serving as core agents. Complete components are a commonly studied structure in financial network analysis (e.g., see Acemoglu, Ozdaglar, and Tahbaz-Salehi [2]).

General complete core-periphery components with moderate sparsity, i.e., with the number of core agents s ∈ [2, p − 2], where p is the total number of agents in the component, have recently been applied to investigate the interbank market network (in 't Veld and van Lelyveld [16], in 't Veld, van der Leij, and Hommes [17]). In this paper, we expand our analysis to cover the whole family of complete core-periphery components, including both the star component and the complete component as the two extreme cases.

Jie and Ma [4] further examined the incomplete core-periphery component, which exists when at least one zero appears in the CP block. This implies that not all core agents are connected to all periphery agents in an incomplete core-periphery component. In this paper, we focus on the financial networks with complete core-periphery components.

To set the stage for our theoretical analysis, we consider a network W consisting of B homogeneous independent core-periphery components W_i of size p. Thus, the total number of agents in the network is N = Bp. The adjacency matrix W then takes a block-diagonal form:

$$ W = \begin{pmatrix} W_1 & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & W_B \end{pmatrix}_{N \times N} \quad \text{with } W_1 = W_2 = \cdots = W_B. \qquad (2) $$

The size of the matrix W is N × N and the size of each component matrix W_i is p × p.

After defining the core-periphery network structure, we introduce the concept of network impacts. Following Bramoullé, Djebbari, and Fortin [18], our financial network analytical model is inspired by spatial econometrics literature. We define our financial network model, or, the spatial autoregressive (SAR) model, as follows:

y = ρWy + Xβ + ε   (3)

where y and X are the dependent and the set of explanatory variables respectively, W is a network adjacency matrix, ρ and β are parameters, and ε is the error term.

In the literature, it is conventional to assume that ρ ∈ (1/ω_min, 1/ω_max), where ω_min and ω_max denote the smallest and largest eigenvalues of W, respectively. This assumption ensures both that the estimation of the SAR network model is consistent and that the matrix I − ρW is nonsingular (LeSage and Pace [9], p. 47).

Based on the above SAR network model, we have

y = (I − ρW)⁻¹(Xβ + ε) = S(Xβ + ε)   (4)

where S := (I − ρW)⁻¹.
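The reduced form (4) is straightforward to evaluate numerically. The following minimal numpy sketch (helper names are ours) builds the block-diagonal W of Equation (2) from complete core-periphery components and forms S = (I − ρW)⁻¹:

```python
import numpy as np

def cp_component(p, s):
    """Complete core-periphery component: core agents 0..s-1 linked to all others."""
    W = np.zeros((p, p))
    W[:s, :] = 1.0
    W[:, :s] = 1.0
    np.fill_diagonal(W, 0.0)
    return W

def sar_reduced_form(B, p, s, rho):
    """Block-diagonal W (Equation (2)) and S = (I - rho W)^{-1} (Equation (4))."""
    W = np.kron(np.eye(B), cp_component(p, s))
    N = B * p
    S = np.linalg.inv(np.eye(N) - rho * W)
    return W, S

B, p, s = 3, 6, 2
rho = 0.5 / (B * p - 1)       # small, well inside the admissible interval for rho
W, S = sar_reduced_form(B, p, s, rho)
```

Because W is block-diagonal, S inherits the same block-diagonal pattern, which is the structure exploited in Propositions 1 and 3 below.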

In a linear regression model, the parameter β_k represents the partial derivative of the dependent variable y with respect to the explanatory variable x_k. In the SAR network model, the impact of x_k on the dependent variable y varies across observations of all agents. To summarize these varying effects, LeSage and Pace [9] defined three measures to quantify network impacts: average direct impact, average indirect impact, and average total impact.

Definition 4 (Network Impacts)

For any explanatory variable x_k ∈ X, define:

$$ \text{Average Direct Impact} = \frac{1}{N}\sum_{i=1}^{N} \frac{\partial y_i}{\partial x_{ik}} = \left(\frac{1}{N}\sum_{i=1}^{N} s_{ii}\right)\beta_k \qquad (5) $$

$$ \text{Average Indirect Impact} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j \ne i} \frac{\partial y_i}{\partial x_{jk}} = \left(\frac{1}{N}\sum_{i=1}^{N}\sum_{j \ne i} s_{ij}\right)\beta_k \qquad (6) $$

$$ \text{Average Total Impact} = \text{Average Direct Impact} + \text{Average Indirect Impact} = \left(\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N} s_{ij}\right)\beta_k \qquad (7) $$
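Definition 4 translates directly into code. A sketch assuming numpy, with β_k left as a parameter (the function name is ours):

```python
import numpy as np

def network_impacts(S, beta_k=1.0):
    """Average direct, indirect, and total impacts (Equations (5)-(7))."""
    N = S.shape[0]
    direct = beta_k * np.trace(S) / N   # mean of the diagonal entries s_ii
    total = beta_k * S.sum() / N        # mean over i of the row sums of S
    return direct, total - direct, total

# Example: a single star component (p = 4, s = 1).
W = np.zeros((4, 4))
W[0, 1:] = 1.0
W[1:, 0] = 1.0
rho = 0.2                               # below 1/sqrt(3), this star's largest eigenvalue
S = np.linalg.inv(np.eye(4) - rho * W)
direct, indirect, total = network_impacts(S)
```

By construction the total impact is always the sum of the direct and indirect impacts.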

These network impact measures have been widely applied in financial network research. For example, Grieser, Hadlock, LeSage, and Zekhnini [19] applied these measures to examine causal peer effects in capital structure decisions using a peer-network. The economic mechanism underlying the relationship between sparsity and network impact may be illustrated through an example of investment hubs provided by vom Lehn and Winberry [20]. By analyzing disaggregated asset-level data that detail purchases of 33 types of capital assets across various sectors, they constructed a 37-sector investment network for the U.S. each year. Their findings reveal that the investment network is highly sparse, dominated by just four key investment hubs: construction, machinery manufacturing, automobile manufacturing, and professional/technical services. Together, these hubs account for nearly 70% of total investment. Consequently, the production and employment in these sectors are significantly more sensitive to business cycle shocks compared to others.

Without loss of generality, we normalize the parameter β_k = 1 in our subsequent analyses to focus primarily on the financial network impacts.

3. Sparsity Reduction by Increasing the Number of Core Agents

One strategy to reduce network sparsity while keeping both the component size p and the number of components B fixed is to increase the number of core agents by promoting some of the periphery agents to core agents in each component.

For example, Figure 1(a) provides a network view of the process of increasing the number of core agents, transforming a star component with just one core agent (s = 1) into a complete component with five core agents (s = 5), whilst keeping the size of each component constant at p = 6. As each periphery agent is turned into a core agent, new links are formed between the newly designated core agent and the remaining periphery agents, thereby reducing network sparsity. When s = 5, the component reaches its complete form with the lowest sparsity. This demonstrates that increasing the number of core agents in each component directly reduces network sparsity.

Figure 1. Sparsity reduction by increasing the number of core agents within complete core-periphery components. (a) Network view of sparsity reduction; (b) Changes in the adjacency matrix of a component.

Figure 1(b) shows an example of the changes in the adjacency matrix W_i during the transformation of the component from s = 1 to s = 3. The decrease in the number of zero entries in W_i confirms the reduction in sparsity. In general, for any given size p of a complete core-periphery component and number of components B in a sparse network of size N = Bp, the network sparsity is:

$$ \text{Sparsity} = \frac{N(N-1) - [\,s(s-1) + 2s(p-s)\,]B}{N(N-1)} = 1 - \frac{(2p-1)s - s^2}{p(N-1)} \qquad (8) $$

This formula shows that sparsity decreases as the number of core agents s increases, since its derivative ∂Sparsity/∂s = (2s − 2p + 1)/(p(N − 1)) is negative for s ≤ p − 1. Therefore, the number of core agents s can serve as an inverse measure of network sparsity. The next two propositions provide the theoretical findings on the reduction of sparsity and its network impact. Their proofs can be found in Appendix A.2.
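Equation (8) can be checked against a direct count of the zero entries of W (a numpy sketch; helper names are ours):

```python
import numpy as np

def cp_component(p, s):
    """Complete core-periphery component: core agents 0..s-1 linked to all others."""
    W = np.zeros((p, p))
    W[:s, :] = 1.0
    W[:, :s] = 1.0
    np.fill_diagonal(W, 0.0)
    return W

def sparsity_formula(N, p, s):
    """Equation (8)."""
    return 1 - ((2 * p - 1) * s - s ** 2) / (p * (N - 1))

def sparsity_count(W):
    """Proportion of zero entries in W, excluding the diagonal (Definition 2)."""
    N = W.shape[0]
    off_diag_zeros = np.count_nonzero(W == 0) - N   # the N diagonal zeros do not count
    return off_diag_zeros / (N * (N - 1))

B, p = 3, 6
N = B * p
vals = []
for s in range(1, p):
    W = np.kron(np.eye(B), cp_component(p, s))
    assert abs(sparsity_count(W) - sparsity_formula(N, p, s)) < 1e-12
    vals.append(sparsity_formula(N, p, s))
assert all(a > b for a, b in zip(vals, vals[1:]))   # sparsity falls as s grows
```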

Proposition 1. Let W be a block-diagonal matrix consisting of complete core-periphery components. Denote by W_i the submatrix representing each individual complete core-periphery component, given by:

$$ W_i = \begin{pmatrix} \mathbf{1}^0_{s \times s} & \mathbf{1}_{s \times (p-s)} \\ \mathbf{1}_{(p-s) \times s} & \mathbf{0}_{(p-s) \times (p-s)} \end{pmatrix} \qquad (9) $$

If S := (I − ρW)⁻¹, then S is also a block-diagonal matrix with submatrices S_i of the following form:

$$ S_i = \begin{pmatrix} A_{s \times s} & B_{s \times (p-s)} \\ B'_{(p-s) \times s} & D_{(p-s) \times (p-s)} \end{pmatrix}_{p \times p} \qquad (10) $$

where

A_{s×s} = a I_{s×s} + c 1⁰_{s×s}

B_{s×(p−s)} = e 1_{s×(p−s)}

D_{(p−s)×(p−s)} = b I_{(p−s)×(p−s)} + d 1⁰_{(p−s)×(p−s)}

I is an identity matrix and 1⁰ consists of all ones except that the diagonal terms are zeros.

Detailed formulae for a to e can be found in the Appendix.

Based on Proposition 1, we can calculate the network impacts following Definition 4 in Section 2 and prove the following proposition in Appendix A.2.

Proposition 2. Given the size p of complete core-periphery components, if the number of core agents s increases, leading to a decrease in network sparsity, then the average direct, indirect, and total impacts of the network all increase.
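Proposition 2 can be illustrated at small scale (a numpy sketch; the paper's simulations in Table 1 use far larger networks):

```python
import numpy as np

def cp_component(p, s):
    """Complete core-periphery component: core agents 0..s-1 linked to all others."""
    W = np.zeros((p, p))
    W[:s, :] = 1.0
    W[:, :s] = 1.0
    np.fill_diagonal(W, 0.0)
    return W

def impacts(B, p, s, rho):
    """Average direct, indirect, and total impacts of B identical components."""
    N = B * p
    W = np.kron(np.eye(B), cp_component(p, s))
    S = np.linalg.inv(np.eye(N) - rho * W)
    direct = np.trace(S) / N
    total = S.sum() / N
    return direct, total - direct, total

B, p = 2, 8
rho = 0.5 / (B * p - 1)          # within the admissible interval for rho
results = [impacts(B, p, s, rho) for s in range(1, p)]
# every impact measure rises as s increases (i.e. as sparsity falls)
for (d0, i0, t0), (d1, i1, t1) in zip(results, results[1:]):
    assert d1 > d0 and i1 > i0 and t1 > t0
```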

We also run two sets of simulations to verify the signs of these network impacts. In the first set, we use the analytical solutions of the average direct, indirect, and total impacts from Proposition 2, derived in Appendix A.2, to compute their derivatives with respect to the number of core agents s, an inverse measure of network sparsity. The results are reported in Panels A, B, and C of Table 1. All the derivatives are positive, which aligns with our findings: as the number of core agents s increases, leading to a reduction of sparsity, all measures of network impact increase.

Table 1. Simulation results of adding core agents in the sparse network with complete core-periphery components. Panels A to C report simulation results of the derivatives of the network impacts with respect to s (the number of core agents), where s is an inverse measure of network sparsity. The component size p is assigned the values 10, 50, 100, 500, 1000, 5000, 10,000, 50,000, 100,000, and 200,000. The number of core agents s in each component is chosen as the quintiles of (0, p), i.e., [p/5], [2p/5], [3p/5], [4p/5], and p − 1. ρ is the median of (0, 1/(200000 − 1)), i.e., ρ = 1/(200000 − 1) × 1/2, which keeps ρ within the consistent and nonsingular range. Note that our network matrix W is an unnormalized N × N matrix, hence ρ is unnormalized and relatively small. If we normalize W, say by dividing all its elements by N − 1, then the normalized ρ̂ = 1/2 (see Corollary 1 in Appendix A.1). In Panel A, all numbers are multiplied by 10^12. In Panels B and C, all numbers are multiplied by 10^6. Panel D reports the relationship between sparsity and network impacts, where N = 200000, p = 1000, and B = 200 are fixed, and ρ is the same as in Panels A to C. The formula for sparsity is 1 − ((2p − 1)s − s²)/(p(N − 1)).

Panel A. Derivatives of direct impact with respect to s (the number of core agents).

| p | s = [p/5] | s = [2p/5] | s = [3p/5] | s = [4p/5] | s = p − 1 |
| --- | --- | --- | --- | --- | --- |
| 10 | 9.3752 | 6.8753 | 4.3752 | 1.8751 | 0.6251 |
| 50 | 9.8758 | 7.3762 | 4.8761 | 2.3757 | 0.1251 |
| 100 | 9.9391 | 7.4398 | 4.9398 | 2.439 | 0.0625 |
| 500 | 9.9951 | 7.4988 | 4.9988 | 2.495 | 0.0125 |
| 1000 | 10.009 | 7.5163 | 5.0163 | 2.5088 | 0.0063 |
| 5000 | 10.075 | 7.6126 | 5.1128 | 2.5749 | 0.0013 |
| 10,000 | 10.152 | 7.7298 | 5.2306 | 2.654 | 0.0007 |
| 50,000 | 10.831 | 8.7732 | 6.2991 | 3.3824 | 0.0002 |
| 100,000 | 11.852 | 10.43 | 8.0775 | 4.6426 | 0.0001 |
| 200,000 | 14.711 | 15.805 | 14.661 | 9.8214 | 0.0002 |

Panel B. Derivatives of indirect impact with respect to s (the number of core agents).

| p | s = [p/5] | s = [2p/5] | s = [3p/5] | s = [4p/5] | s = p − 1 |
| --- | --- | --- | --- | --- | --- |
| 10 | 3.7501 | 2.7501 | 1.7501 | 0.7500 | 0.2500 |
| 50 | 3.9504 | 2.9504 | 1.9503 | 0.9502 | 0.0500 |
| 100 | 3.9758 | 2.9758 | 1.9757 | 0.9754 | 0.0250 |
| 500 | 3.999 | 2.9991 | 1.9985 | 0.9971 | 0.0050 |
| 1000 | 4.0055 | 3.0058 | 2.0045 | 1.0018 | 0.0025 |
| 5000 | 4.0399 | 3.0413 | 2.0351 | 1.0211 | 0.0005 |
| 10,000 | 4.0813 | 3.0845 | 2.0721 | 1.0438 | 0.0003 |
| 50,000 | 4.4408 | 3.4761 | 2.4154 | 1.2569 | 0.0001 |
| 100,000 | 4.9796 | 4.1176 | 3.0119 | 1.6423 | 0.0000 |
| 200,000 | 6.4900 | 6.3002 | 5.3711 | 3.3482 | 0.0001 |

Panel C. Derivatives of total impact with respect to s (the number of core agents).

| p | s = [p/5] | s = [2p/5] | s = [3p/5] | s = [4p/5] | s = p − 1 |
| --- | --- | --- | --- | --- | --- |
| 10 | 3.7501 | 2.7501 | 1.7501 | 0.7500 | 0.2500 |
| 50 | 3.9504 | 2.9504 | 1.9503 | 0.9502 | 0.0500 |
| 100 | 3.9758 | 2.9758 | 1.9757 | 0.9754 | 0.0250 |
| 500 | 3.999 | 2.9991 | 1.9985 | 0.9971 | 0.0050 |
| 1000 | 4.0055 | 3.0058 | 2.0045 | 1.0018 | 0.0025 |
| 5000 | 4.0399 | 3.0413 | 2.0351 | 1.0211 | 0.0005 |
| 10,000 | 4.0813 | 3.0845 | 2.0721 | 1.0438 | 0.0003 |
| 50,000 | 4.4408 | 3.4761 | 2.4154 | 1.2569 | 0.0001 |
| 100,000 | 4.9796 | 4.1176 | 3.0119 | 1.6423 | 0.0000 |
| 200,000 | 6.4900 | 6.3002 | 5.3711 | 3.3482 | 0.0001 |

Panel D. Relationship between sparsity and network impacts.

| s | Sparsity | Direct impact | Indirect impact | Total impact |
| --- | --- | --- | --- | --- |
| 1 | 0.999990010 | 1.000000000 | 0.000005001 | 1.000005001 |
| 100 | 0.999050495 | 1.000000001 | 0.000475432 | 1.000475433 |
| 200 | 0.998200991 | 1.000000002 | 0.000900951 | 1.000900954 |
| 300 | 0.997451487 | 1.000000003 | 0.001276522 | 1.001276525 |
| 400 | 0.996801984 | 1.000000004 | 0.001602105 | 1.001602109 |
| 500 | 0.996252481 | 1.000000005 | 0.001877663 | 1.001877667 |
| 600 | 0.995802979 | 1.000000005 | 0.002103158 | 1.002103163 |
| 700 | 0.995453477 | 1.000000006 | 0.002278553 | 1.002278559 |
| 800 | 0.995203976 | 1.000000006 | 0.002403810 | 1.002403816 |
| 900 | 0.995054475 | 1.000000006 | 0.002478891 | 1.002478898 |
| 1000 | 0.995004975 | 1.000000006 | 0.002503759 | 1.002503766 |

In the second set of simulations, we calculate sparsity and the network impacts directly and present the results in Panel D of Table 1 and in Figure 2, which illustrate the negative relationship between sparsity and network impacts. The results from both sets of simulations further support Proposition 2. We note that in our simulations the sparsity reduction is relatively small and most of the total impact comes from the direct impact (Panel D of Table 1 and Figure 2). This is because our simulated networks are designed to exhibit very high sparsity, so the indirect network impacts are relatively weak. However, the magnitudes of the derivatives of the indirect impacts dominate those of the total impacts: the derivatives of the total impacts in Panel C are almost identical to those of the indirect impacts in Panel B of Table 1. This implies that the indirect impacts are more sensitive to sparsity reduction than the direct impacts. Taken together, these results illustrate the importance of investigating both direct and indirect impacts in order to understand the total network impact.

4. Sparsity Reduction by Merging Components

As an alternative strategy to reduce network sparsity while preserving the network topology, we merge complete core-periphery components.

We first discuss a sparse network composed of B independent complete components, a special case of the complete core-periphery component with s = p − 1 core agents, where p is the size of the complete component. The merging process is illustrated in Figure 3. Initially, the network consists of B complete components, each of size p = 3 with s = 2 core agents. From (a) to (b), every two complete components merge into a larger complete component with s = 5 core agents, reducing the network's sparsity. As a result, the number of components B is halved, while the component size p doubles. Given that the size of the network is N = Bp, the network matrix W is:

$$ W = \begin{pmatrix} W_1 & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & W_B \end{pmatrix}_{N \times N} \quad \text{and} \quad W_1 = \cdots = W_B = \begin{pmatrix} \mathbf{1}^0_{(p-1) \times (p-1)} & \mathbf{1}_{(p-1) \times 1} \\ \mathbf{1}_{1 \times (p-1)} & \mathbf{0}_{1 \times 1} \end{pmatrix} \qquad (11) $$

The overall sparsity of the network W is given by:

$$ \text{Sparsity} = \frac{N(N-1) - p(p-1)B}{N(N-1)} = 1 - \frac{p-1}{N-1} \qquad (12) $$

From this equation, we observe that sparsity is a decreasing function of the component size p, since ∂Sparsity/∂p = −1/(N − 1) < 0. This confirms that increasing the component size p reduces the overall sparsity of the network: the network sparsity is reduced by merging two or more small components into larger homogeneous components.

Figure 2. Sparsity and its impact on networks with complete core-periphery components.
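A quick numerical check of Equation (12) (a numpy sketch; helper names are ours):

```python
import numpy as np

def sparsity_complete(N, p):
    """Equation (12): sparsity of B = N/p complete components of size p."""
    return 1 - (p - 1) / (N - 1)

def complete_blocks(B, p):
    """Block-diagonal network of B complete components."""
    K = np.ones((p, p)) - np.eye(p)    # one complete component
    return np.kron(np.eye(B), K)

N = 12
for B, p in [(4, 3), (2, 6), (1, 12)]:
    W = complete_blocks(B, p)
    off_diag_zeros = np.count_nonzero(W == 0) - N   # exclude the N diagonal zeros
    assert abs(off_diag_zeros / (N * (N - 1)) - sparsity_complete(N, p)) < 1e-12
# halving B and doubling p at fixed N lowers sparsity
assert sparsity_complete(12, 6) < sparsity_complete(12, 3)
```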

Next, we examine the merging process for general complete core-periphery components with s < p − 1 core agents, as illustrated in Figure 4. Initially, the network consists of B complete core-periphery components, each of size 3 with only one core agent (s = 1), as shown in (a). During the transition from (a) to (b), every two complete core-periphery components are merged into a larger complete core-periphery component, increasing the number of core agents to s = 2.

The number of components B is again reduced by half, while the component size p doubles. New links are introduced between core and periphery agents within the new components, thereby reducing network sparsity.

Figure 3. Network view of sparsity reduction by merging complete components.

Figure 4. Sparsity reduction by merging complete core-periphery components.

Suppose the initial adjacency matrix has s_0 core agents per component and a component size of p_0, so that λ = p_0/s_0 > 1 is fixed. After merging small components into a single larger component, the number of core agents in each component becomes s = μs_0, and the component size increases to p = μp_0. This implies p = (p_0/s_0)s = λs; that is, p is linear in s. In general, the network sparsity becomes

$$ \text{Sparsity} = \frac{N(N-1) - [\,s(s-1) + 2s(\lambda s - s)\,]B}{N(N-1)} = 1 - \frac{(2\lambda - 1)s - 1}{\lambda(N-1)} \qquad (13) $$

We can derive ∂Sparsity/∂s = −(2λ − 1)/(λ(N − 1)) < 0, since λ > 1. This confirms that sparsity is a decreasing function of the number of core agents s. Therefore, the number of core agents s in each component may again serve as an inverse measure of network sparsity. The proof of the following Proposition 3 is given in Appendix A.3.
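Equation (13) can likewise be checked for a fixed ratio λ = p/s (a plain-Python sketch; the function name is ours):

```python
def sparsity_lambda(N, lam, s):
    """Equation (13): sparsity when the component size is p = lam * s."""
    return 1 - ((2 * lam - 1) * s - 1) / (lam * (N - 1))

N, lam = 24, 3
before = sparsity_lambda(N, lam, 2)   # B = 4 components, each p = 6 with s = 2 cores
after = sparsity_lambda(N, lam, 4)    # after pairwise mergers: B = 2, p = 12, s = 4

# cross-check against Equation (8) with p = lam * s
p, s = 6, 2
assert abs(sparsity_lambda(N, lam, s)
           - (1 - ((2 * p - 1) * s - s ** 2) / (p * (N - 1)))) < 1e-12
assert after < before                 # merging lowers sparsity
```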

Proposition 3. Let W be a block-diagonal matrix composed of complete core-periphery components, where each submatrix W_i represents a complete core-periphery component of the following form:

$$ W_i = \begin{pmatrix} \mathbf{1}^0_{s \times s} & \mathbf{1}_{s \times (p-s)} \\ \mathbf{1}_{(p-s) \times s} & \mathbf{0}_{(p-s) \times (p-s)} \end{pmatrix} \qquad (14) $$

(i) (Complete Component) If W is a block-diagonal matrix with complete components (s = p − 1) and S := (I − ρW)⁻¹, then S is also a block-diagonal matrix with submatrices S_i of the following form:

$$ S_i = \begin{pmatrix} A_{(p-1) \times (p-1)} & B_{(p-1) \times 1} \\ C_{1 \times (p-1)} & D_{1 \times 1} \end{pmatrix}_{p \times p} = s_{kk} I_{p \times p} + s_{kj} \mathbf{1}^0_{p \times p} \qquad (15) $$

where

s_kk = (ρp − 2ρ − 1) / [(1 + ρ)(ρp − ρ − 1)], and

s_kj = −ρ / [(1 + ρ)(ρp − ρ − 1)].
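The closed forms for s_kk and s_kj in part (i) can be verified against a direct matrix inversion (a numpy sketch):

```python
import numpy as np

p, rho = 5, 0.1                     # rho < 1/(p - 1), so I - rho*W is nonsingular
W = np.ones((p, p)) - np.eye(p)     # one complete component (s = p - 1)
S = np.linalg.inv(np.eye(p) - rho * W)

# closed-form diagonal and off-diagonal entries of S_i (Equation (15))
s_kk = (rho * p - 2 * rho - 1) / ((1 + rho) * (rho * p - rho - 1))
s_kj = -rho / ((1 + rho) * (rho * p - rho - 1))

assert np.allclose(np.diag(S), s_kk)        # all diagonal entries equal s_kk
off = S[~np.eye(p, dtype=bool)]
assert np.allclose(off, s_kj)               # all off-diagonal entries equal s_kj
```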

(ii) (General Complete Core-periphery Component) If W is a block-diagonal matrix with general complete core-periphery components (s < p − 1) and S := (I − ρW)⁻¹, then S is also a block-diagonal matrix with submatrices S_i of the following form:

$$ S_i = \begin{pmatrix} A_{s \times s} & B_{s \times (p-s)} \\ B'_{(p-s) \times s} & D_{(p-s) \times (p-s)} \end{pmatrix}_{p \times p} \qquad (16) $$

where

A_{s×s} = a I_{s×s} + c 1⁰_{s×s}

B_{s×(p−s)} = e 1_{s×(p−s)}

D_{(p−s)×(p−s)} = b I_{(p−s)×(p−s)} + d 1⁰_{(p−s)×(p−s)}

I is an identity matrix and 1⁰ consists of all ones except that the diagonal terms are zeros.

Detailed formulae for a to e can be found in the Appendix.

From Proposition 3, we can calculate the network impacts based on Definition 4 in Section 2 and prove the following proposition in Appendix A.3.

Proposition 4. Given the network size N, if a merger of complete core-periphery components leads to a reduction of network sparsity, then the average direct, indirect, and total impacts of the network all increase.

We also run simulations to verify the signs of all the network impacts in Table 2 and Table 3. They confirm that the merger of complete core-periphery components

Table 2. Simulation results of merging complete components (s = p − 1). Panels A to C report simulation results of the derivatives of the network impacts with respect to s (the number of core agents), where s is an inverse measure of network sparsity. The component size p is assigned the values 10, 50, 100, 500, 1000, 5000, 10,000, 50,000, 100,000, and 200,000, with s = p − 1. ρ is chosen as the three quartiles of (0, 1/(200000 − 1)), i.e., 1/(200000 − 1) × 1/4, × 1/2, and × 3/4, which keeps ρ within the consistent and nonsingular range. Note that our network matrix W is an unnormalized N × N matrix, hence ρ is unnormalized and relatively small. If we normalize W, say by dividing all its elements by N − 1, then the normalized ρ̂ = 1/4, 1/2, 3/4, respectively (see Corollary 1 in Appendix A.1). In Panel A, all numbers are multiplied by 10^12. In Panels B and C, all numbers are multiplied by 10^6. Panel D reports the relationship between sparsity and network impacts, where N = 200000 is fixed and ρ is the median of (0, 1/(N − 1)), i.e., ρ = 1/(200000 − 1) × 1/2. The formula for sparsity is 1 − (p − 1)/(N − 1).

Panel A. Derivatives of direct impact with respect to s (the number of core agents) (N = 200,000).

| p | ρ = 1/[4(N − 1)] | ρ = 1/[2(N − 1)] | ρ = 3/[4(N − 1)] |
| --- | --- | --- | --- |
| 10 | 1.5625 | 6.2503 | 14.0640 |
| 50 | 1.5627 | 6.2516 | 14.0680 |
| 100 | 1.5629 | 6.2531 | 14.0730 |
| 500 | 1.5645 | 6.2657 | 14.1150 |
| 1000 | 1.5664 | 6.2814 | 14.1690 |
| 5000 | 1.5822 | 6.4092 | 14.6050 |
| 10,000 | 1.6023 | 6.5746 | 15.1800 |
| 50,000 | 1.7778 | 8.1633 | 21.3020 |
| 100,000 | 2.0408 | 11.1110 | 36.0000 |
| 200,000 | 2.7778 | 25.0000 | 225.0000 |

Panel B. Derivatives of indirect impact with respect to s (the number of core agents) (N = 200,000).

| p | ρ = 1/[4(N − 1)] | ρ = 1/[2(N − 1)] | ρ = 3/[4(N − 1)] |
| --- | --- | --- | --- |
| 10 | 1.2500 | 2.5001 | 3.7503 |
| 50 | 1.2502 | 2.5006 | 3.7514 |
| 100 | 1.2503 | 2.5012 | 3.7528 |
| 500 | 1.2516 | 2.5063 | 3.7641 |
| 1000 | 1.2531 | 2.5125 | 3.7783 |
| 5000 | 1.2658 | 2.5637 | 3.8947 |
| 10,000 | 1.2818 | 2.6298 | 4.0479 |
| 50,000 | 1.4222 | 3.2653 | 5.6804 |
| 100,000 | 1.6327 | 4.4444 | 9.6000 |
| 200,000 | 2.2222 | 10.0000 | 60.0000 |

Panel C. Derivatives of total impact with respect to s (the number of core agents) (N = 200,000).

| p | ρ = 1/[4(N − 1)] | ρ = 1/[2(N − 1)] | ρ = 3/[4(N − 1)] |
| --- | --- | --- | --- |
| 10 | 1.2500 | 2.5001 | 3.7503 |
| 50 | 1.2502 | 2.5006 | 3.7514 |
| 100 | 1.2503 | 2.5013 | 3.7528 |
| 500 | 1.2516 | 2.5063 | 3.7641 |
| 1000 | 1.2531 | 2.5125 | 3.7783 |
| 5000 | 1.2658 | 2.5637 | 3.8947 |
| 10,000 | 1.2818 | 2.6298 | 4.0479 |
| 50,000 | 1.4222 | 3.2653 | 5.6805 |
| 100,000 | 1.6327 | 4.4445 | 9.6000 |
| 200,000 | 2.2222 | 10.0000 | 60.0000 |

Panel D. Relationship between sparsity and network impacts.

| p | B | Sparsity | Direct impact | Indirect impact | Total impact |
| --- | --- | --- | --- | --- | --- |
| 10 | 20,000 | 0.999955000 | 1.000000000 | 0.000022501 | 1.000022501 |
| 50 | 4,000 | 0.999754999 | 1.000000000 | 0.000122515 | 1.000122516 |
| 100 | 2,000 | 0.999504998 | 1.000000001 | 0.000247562 | 1.000247563 |
| 500 | 400 | 0.997504988 | 1.000000003 | 0.001249061 | 1.001249064 |
| 1000 | 200 | 0.995004975 | 1.000000006 | 0.002503759 | 1.002503766 |
| 5000 | 40 | 0.975004875 | 1.000000032 | 0.012655697 | 1.012655728 |
| 10,000 | 20 | 0.950004750 | 1.000000064 | 0.025638463 | 1.025638527 |
| 50,000 | 4 | 0.750003750 | 1.000000357 | 0.142854337 | 1.142854694 |
| 100,000 | 2 | 0.500002500 | 1.000000833 | 0.333330278 | 1.333331111 |
| 200,000 | 1 | 0.000000000 | 1.000002500 | 0.999997500 | 2.000000000 |

Table 3. Simulation results of merging general complete core-periphery components ( s<p1 ). Panel A to Panel C report simulation results of derivatives of network impact with respect to s (the number of core agents), where s is an inverse measure of the network sparsity. The component sample size p is assigned to 10, 50,100, 500, 10,000, 50,000. 100,000, and 200,000, respectively. λ=2 is fixed, i.e., two components are merged, and s=p/2 . ρ is chosen as three quartiles of ( 0, 1 2000001 ) , i.e., 1 2000001 × 1 4 , 1 2000001 × 1 2 , and 1 2000001 × 3 4 . This will make ρ within the consistent and nonsingular range. Note that our network matrix W is an unnormalized N×N matrix, hence the ρ is unnormalized and is relatively small. If we normalize W , say, by dividing all its elements by N1 , then the normalized ρ ^ = 1 4 , 1 2 , 3 4 , respectively (see Corollary 1 in Appendix A.1). In Panel A, all numbers are multiplied by 10^12. In Panel B and Panel C, all numbers are multiplied by 10^6. Panel D reports simulation results of relationship between sparsity and network impacts, where N=200000 is fixed, ρ is the median of ( 0, 1 N1 ) , i.e., ρ= 1 2000001 × 1 2 . The formula for sparsity is 1 ( 2λ1 )s1 λ( N1 ) .

Panel A. Derivatives of direct impact with respect to s (the number of core agents) (N = pB = 200000).

p | B | s | ρ = 1/[4(N−1)] | ρ = 1/[2(N−1)] | ρ = 3/[4(N−1)]
10 | 20,000 | 5 | 1.4063 | 5.6252 | 12.6570
50 | 4,000 | 25 | 1.5314 | 6.1262 | 13.7852
100 | 2,000 | 50 | 1.5472 | 6.1899 | 13.9298
500 | 400 | 250 | 1.5609 | 6.2492 | 14.0740
1000 | 200 | 500 | 1.5639 | 6.2673 | 14.1278
5000 | 40 | 2500 | 1.5769 | 6.3674 | 14.4627
10,000 | 20 | 5000 | 1.5920 | 6.4898 | 14.8830
50,000 | 4 | 25,000 | 1.7188 | 7.5899 | 18.9354
100,000 | 2 | 50,000 | 1.8975 | 9.3801 | 26.7407
200,000 | 1 | 100,000 | 2.3451 | 15.5992 | 72.4054

Panel B. Derivatives of indirect impact with respect to s (the number of core agents) (N = pB = 200000).

p | B | s | ρ = 1/[4(N−1)] | ρ = 1/[2(N−1)] | ρ = 3/[4(N−1)]
10 | 20,000 | 5 | 1.1250 | 2.2501 | 3.3752
50 | 4,000 | 25 | 1.2251 | 2.4504 | 3.6759
100 | 2,000 | 50 | 1.2377 | 2.4758 | 3.7142
500 | 400 | 250 | 1.2485 | 2.4989 | 3.7513
1000 | 200 | 500 | 1.2507 | 2.5053 | 3.7639
5000 | 40 | 2500 | 1.2596 | 2.5392 | 3.8391
10,000 | 20 | 5000 | 1.2697 | 2.5803 | 3.9336
50,000 | 4 | 25,000 | 1.3555 | 2.9579 | 4.8740
100,000 | 2 | 50,000 | 1.4790 | 3.5967 | 6.7877
200,000 | 1 | 100,000 | 1.7983 | 5.9504 | 18.9803

Panel C. Derivatives of total impact with respect to s (the number of core agents) (N = pB = 200000).

p | B | s | ρ = 1/[4(N−1)] | ρ = 1/[2(N−1)] | ρ = 3/[4(N−1)]
10 | 20,000 | 5 | 1.1250 | 2.2501 | 3.3752
50 | 4,000 | 25 | 1.2251 | 2.4504 | 3.6759
100 | 2,000 | 50 | 1.2377 | 2.4758 | 3.7142
500 | 400 | 250 | 1.2485 | 2.4989 | 3.7513
1000 | 200 | 500 | 1.2507 | 2.5053 | 3.7639
5000 | 40 | 2500 | 1.2596 | 2.5392 | 3.8391
10,000 | 20 | 5000 | 1.2697 | 2.5803 | 3.9336
50,000 | 4 | 25,000 | 1.3555 | 2.9579 | 4.8740
100,000 | 2 | 50,000 | 1.4790 | 3.5967 | 6.7877
200,000 | 1 | 100,000 | 1.7983 | 5.9504 | 18.9803

Panel D. Relationship between sparsity and network impacts.

p | B | s | Sparsity | Direct impact | Indirect impact | Total impact
10 | 20,000 | 5 | 0.999965000 | 1.000000000 | 0.000017500 | 1.000017500
50 | 4,000 | 25 | 0.999814999 | 1.000000000 | 0.000092510 | 1.000092510
100 | 2,000 | 50 | 0.999627498 | 1.000000000 | 0.000186289 | 1.000186289
500 | 400 | 250 | 0.998127491 | 1.000000002 | 0.000937227 | 1.000937229
1000 | 200 | 500 | 0.996252481 | 1.000000005 | 0.001877663 | 1.001877667
5000 | 40 | 2500 | 0.981252406 | 1.000000024 | 0.009472385 | 1.009472409
10,000 | 20 | 5000 | 0.962502313 | 1.000000048 | 0.019147335 | 1.019147383
50,000 | 4 | 25,000 | 0.812501563 | 1.000000256 | 0.104601218 | 1.104601475
100,000 | 2 | 50,000 | 0.625000625 | 1.000000568 | 0.236362414 | 1.236362982
200,000 | 1 | 100,000 | 0.249998750 | 1.000001477 | 0.636363399 | 1.636364876
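The Sparsity column in Panel D above can be reproduced directly from the closed-form expression given in the caption of Table 3. The short sketch below is our own illustration (not code from the paper), using λ = 2 and N = 200,000 as in the simulation:

```python
# Reproduce the Sparsity column of Panel D (Table 3) from the closed form
# 1 - ((2*lam - 1)*s - 1) / (lam*(N - 1)), with lam = 2 and s = p/2.
def sparsity(s, lam, N):
    """Sparsity of a block-diagonal network of merged core-periphery components."""
    return 1 - ((2 * lam - 1) * s - 1) / (lam * (N - 1))

N, lam = 200_000, 2
for p in (10, 1000, 200_000):
    print(p, round(sparsity(p // 2, lam, N), 9))
```

For p = 10, 1000, and 200,000 this matches the tabulated values 0.999965000, 0.996252481, and 0.249998750 up to display rounding.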

reduces network sparsity, increases all network impacts. Based on both Panel D of Table 2 and Table 3, Figure 5 and Figure 6 further depict the negative relationship between sparsity and network impacts, which supports the results of Proposition 4. Similar to the findings in the previous section, we observe that whilst the derivatives of indirect impacts contribute significantly to those of the total impact (see Panels B and C in Table 2 and Table 3, respectively), indirect impacts themselves are considerably weaker in their overall contributions to the level of total impacts (see Panel D in Table 2 and Table 3, respectively). This highlights the importance of analyzing both direct and indirect impacts to gain a comprehensive understanding of the total network impacts.

5. Conclusions

Complete core-periphery components are a commonly investigated structure in the study of financial networks. Their prominence stems from their ability to effectively represent the hierarchical and interconnected nature of many real-world financial systems. Researchers often analyze these structures to better understand systemic risks, network resilience, and the distribution of influence across agents in financial networks (e.g., see Acemoglu, Ozdaglar, and Tahbaz-Salehi [2]).

This paper explores the theoretical relationship between sparsity and its impacts on financial networks with a core-periphery structure. It focuses on networks with complete core-periphery components, investigating two strategies to reduce network sparsity: increasing the number of core agents within the components and merging multiple components into larger structures. Within this analytical framework, this study derives closed-form solutions to quantify the relationship between sparsity and its influence on financial networks. Supported by numerical simulations, the findings reveal that as sparsity decreases, the network’s impacts intensify.

Our research focuses on a static linear network model based on the spatial autoregressive framework. However, this approach has its limitations. For example, it may fail to capture the dynamic, nonlinear, and heterogeneous effects present in real-world networks. A promising direction for future research is to extend our study by incorporating a fully dynamic nonlinear general equilibrium model. This model would integrate heterogeneous sectors within a financial network, similar to the framework proposed by vom Lehn and Winberry [20]. Such

Figure 5. Sparsity and its impact on networks with complete components.

Figure 6. Sparsity and its impact on networks with general complete core-periphery components.

an extension would enable us to analyze the impacts of the complex, dynamic evolution of connections in real financial networks.

Another limitation of our current research is the assumption of network stability. Namely, the matrix I − ρW in Equation (3) remains nonsingular even when the network sparsity of W changes. Whether changes in network sparsity might affect its stability remains an open question, warranting further investigation in future studies.

Acknowledgements

We thank two anonymous referees, Zeyun Bei, John Chu, Andrew Grant, Michael Hanke (discussant), Qianqian Huang, Yunying Huang, Miguel Oliveira, Stephen Teng Sun, Gertjan Verdickt (discussant), Junbo Wang, Yinggang Zhou, and conference and seminar participants at 2024 Sydney Banking and Financial Stability Conference (SBFC), 37th Australasian Finance and Banking Conference (AFBC, 2024), City University of Hong Kong, and Xiamen University for their helpful comments. We are responsible for any remaining errors.

Appendix

A1. Proof for Section 2

Corollary 1. (0, 1/(N − 1)) is the range of the consistency and nonsingularity condition for ρ, regardless of variations in the size of the core-periphery component or changes in the number of core agents, where N is the total number of agents in network W.

Proof:

From Corollary 6.1.5 in Horn and Johnson [21], every eigenvalue ω_i of W is bounded in modulus by the minimum of the maximum row sum and the maximum column sum of W, i.e.,

|ω_i| ≤ min{ max_i Σ_{j=1}^{N} |w_ij|, max_j Σ_{i=1}^{N} |w_ij| }, i = 1, 2, …, N (1')

As W only has entries 0 or 1, the maximum eigenvalue of W is no greater than N − 1. Thus (1/ω_min, 1/(N − 1)) could be the range of the consistency and nonsingularity condition for ρ, regardless of variations in the size of the core-periphery component or changes in the number of core agents.

From Section 1.2 in Horn and Johnson [21], the trace of W is equal to the sum of the eigenvalues of W , i.e.

tr(W) = Σ_{i=1}^{N} w_ii = Σ_{i=1}^{N} ω_i = 0 (2')

From (2’), if the maximum eigenvalue of W is larger than 0, the minimum eigenvalue of W would be less than 0.

From Theorem 1.12 in Magnus and Neudecker [22], there exist an orthogonal N×N matrix T and a diagonal matrix Λ whose diagonal elements are the eigenvalues of W, such that

T′WT = Λ (3')

From T′WT = Λ, we have W = (T′)^(−1) Λ T^(−1). If all the eigenvalues of W were zero, then W would be the zero matrix. Thus at least one eigenvalue is nonzero whenever W is not a zero matrix. Therefore, the lower bound of ρ is negative and the upper bound of ρ is positive.

Since we only consider ρ > 0, we assume ρ ∈ (0, 1/(N − 1)) in all our analyses. Note that since our adjacency matrix W is an unnormalized N×N matrix, ρ is unnormalized and relatively small. If we normalize W, say, by dividing all its elements by N − 1, then the normalized ρ̂ ∈ (0, (N − 1)/(N − 1)) = (0, 1). This result applies to the simulations conducted in Table 1 to Table 3.

QED
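The nonsingularity range can also be checked constructively: for ρ ∈ (0, 1/(N − 1)), each row of I − ρW has diagonal 1 and off-diagonal absolute sum at most ρ(N − 1) < 1, so the matrix is strictly diagonally dominant and hence nonsingular (Levy–Desplanques theorem). The sketch below is our own check; the component builder `core_periphery_W` is a hypothetical helper, not from the paper:

```python
# Our own sketch: tr(W) = 0, row sums of W are at most N-1, and I - rho*W is
# strictly diagonally dominant (hence nonsingular) for rho in (0, 1/(N-1)).
def core_periphery_W(p, s):
    """0/1 adjacency of one component: the first s agents are cores (complete
    among themselves and linked to every periphery agent); periphery agents
    have no links among themselves."""
    W = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i != j and (i < s or j < s):
                W[i][j] = 1
    return W

N = 6
W = core_periphery_W(N, 2)
rho = 0.5 / (N - 1)                                  # inside (0, 1/(N-1))
assert sum(W[i][i] for i in range(N)) == 0           # tr(W) = 0
assert max(sum(row) for row in W) <= N - 1           # row-sum bound on eigenvalues
# strict diagonal dominance of I - rho*W in every row
assert all(1 > rho * sum(W[i][j] for j in range(N) if j != i) for i in range(N))
print("nonsingular range check passed")
```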

A2. Proofs for Section 3

Proof for Proposition 1:

The adjacency matrix W_i of each core-periphery component i is the p×p matrix

W_i = [ CC  CP ; PC  PP ] = [ 1₀  R ; R′  0 ] (4')

whose first s rows and columns correspond to core agents. In this case, CC = 1₀ is an s×s matrix with all ones except the diagonal terms. PP is a (p−s)×(p−s) zero matrix. CP = R is an s×(p−s) matrix with all ones. PC is the transpose matrix of CP. Thus we have

I − ρW_i = [ I − ρ1₀  −ρR ; −ρR′  I ] (p×p) (5')

S_i := (I − ρW_i)^(−1) = [ I − ρ1₀  −ρR ; −ρR′  I ]^(−1) = [ A  B ; B′  D ] (6')

where

A (s×s) = (I − ρ1₀ − ρ²RR′)^(−1) = a I (s×s) + c 1₀ (s×s), a matrix with a on the diagonal and c elsewhere, where

a = (1 + 2ρ − sρ + pρ² − sρ² − psρ² + s²ρ²) / [(1 + ρ)(1 + ρ − sρ − psρ² + s²ρ²)]

c = ρ(1 + pρ − sρ) / [(1 + ρ)(1 + ρ − sρ − psρ² + s²ρ²)]

B (s×(p−s)) = (I − ρ1₀)^(−1) ρR [I − ρ²R′(I − ρ1₀)^(−1)R]^(−1) = e 1 (s×(p−s)), a matrix with all entries equal to e, where

e = ρ / (1 + ρ − sρ − psρ² + s²ρ²)

D ((p−s)×(p−s)) = [I − ρ²R′(I − ρ1₀)^(−1)R]^(−1) = b I ((p−s)×(p−s)) + d 1₀ ((p−s)×(p−s)), where

b = (1 + ρ − sρ + sρ² − psρ² + s²ρ²) / (1 + ρ − sρ − psρ² + s²ρ²)

d = sρ² / (1 + ρ − sρ − psρ² + s²ρ²)

I is an identity matrix.

QED
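As a numerical sanity check of Proposition 1 (our own sketch, not from the paper), one can assemble S_i from the closed-form entries a, c, e, b, d and confirm that S_i(I − ρW_i) = I for a small component, here with illustrative values p = 5, s = 2, ρ = 0.1:

```python
# Our own verification sketch of Proposition 1's closed-form inverse.
def prop1_entries(p, s, rho):
    """Closed-form entries of S_i = (I - rho*W_i)^(-1) from Proposition 1."""
    D0 = 1 + rho - s*rho - p*s*rho**2 + s**2 * rho**2
    a = (1 + 2*rho - s*rho + p*rho**2 - s*rho**2 - p*s*rho**2 + s**2*rho**2) / ((1 + rho) * D0)
    c = rho * (1 + p*rho - s*rho) / ((1 + rho) * D0)
    e = rho / D0
    b = (1 + rho - s*rho + s*rho**2 - p*s*rho**2 + s**2*rho**2) / D0
    d = s * rho**2 / D0
    return a, c, e, b, d

p, s, rho = 5, 2, 0.1
a, c, e, b, d = prop1_entries(p, s, rho)
# S from the block structure: core-core (a, c), periphery-periphery (b, d), cross (e)
S = [[(a if i == j else c) if i < s and j < s else
      (b if i == j else d) if i >= s and j >= s else e
      for j in range(p)] for i in range(p)]
# I - rho*W_i: an edge exists iff i != j and at least one endpoint is a core
IW = [[(1 if i == j else 0) - rho * (1 if i != j and (i < s or j < s) else 0)
       for j in range(p)] for i in range(p)]
prod = [[sum(S[i][k] * IW[k][j] for k in range(p)) for j in range(p)] for i in range(p)]
ok = all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(p) for j in range(p))
print(ok)
```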

Proof for Proposition 2

Proof:

First, we show that the parameters a to e in the component submatrix S_i increase as the network sparsity decreases when s ∈ [1, p − 1]. As ρ ∈ (0, 1/(N − 1)), each element of S_i is non-negative due to the following expansion:

S_i := (I − ρW_i)^(−1) = Σ_{k=0}^{∞} ρ^k W_i^k (7')

As a result, all parameters a to e are non-negative too.

∂a/∂s = ∂c/∂s = ρ³f(s) / [(1 + ρ)(1 + ρ − ρs − pρ²s + ρ²s²)²] (8')

where

f(s) = −1 + 2p + p²ρ − 2s − 2pρs + ρs² = 2(p − s) + ρ(p − s)² − 1 > ρ(p − s)² + 1 > 0,

since s < p − 1 and hence 2(p − s) > 2.

Therefore, ∂a/∂s = ∂c/∂s > 0.

We also have:

∂b/∂s = ∂d/∂s = ρ²g(s) / (1 + ρ − ρs − pρ²s + ρ²s²)² (9')

where

g(s) = 1 + ρ − ρ²s² ≥ 1 + ρ − ρ²(p − 1)²

As ρ < 1/(N − 1), we have 1 + ρ > ρN. We also have ρ < 1/(p − 1) as p ≤ N.

Hence

g(s) ≥ 1 + ρ − ρ²(p − 1)² > ρN − ρ²(p − 1)² > ρ(p − 1)[1 − ρ(p − 1)] > 0

Thus ∂b/∂s = ∂d/∂s > 0.

We have

∂e/∂s = ρ²h(s) / (1 + ρ − ρs − pρ²s + ρ²s²)² (10')

where

h(s) = 1 + pρ − 2ρs > 1 + pρ − 2ρ(p − 1) = 1 + 2ρ − pρ

As ρ < 1/(p − 1), we have 1 − pρ > −ρ, and hence 1 + 2ρ − pρ > ρ.

Thus h(s) > ρ > 0 and ∂e/∂s > 0.

Therefore, if p is fixed and ρ ∈ (0, 1/(N − 1)), then an increase in s leads to a decrease in the network sparsity and an increase in all parameters a to e.

Second, we derive the formulae of the network impacts with normalized β_k = 1 according to Definition 4 in Section 2 and take their derivatives with respect to s.

Average Direct Impact = [p + 2pρ − psρ + pρ² − (1 − p + p²)sρ² + (p − 1)s²ρ² − (p² − p)sρ³ + (2p − 1)s²ρ³ − s³ρ³] / [p(1 + ρ)(1 + ρ − sρ − psρ² + s²ρ²)] (11')

We take the derivative of average direct impact with respect to s :

∂(Average Direct Impact)/∂s = γ₁(s) / [p(1 + ρ)(1 + ρ − ρs − pρ²s + ρ²s²)²] (12')

where

γ₁(s) = −[ρ²(1 − 2p + 2s) + ρ³(1 − 3p + 4s − 2ps + 2s²) + ρ⁴(−p + 2s − 2ps + s² + 2ps² − 2s³) + ρ⁵(p²s² − 2ps³ + s⁴)]

Let z = 2(p − s) − 1; then z ∈ [1, 2(N − 1) − 1].

Substituting p = s + (z + 1)/2 in γ₁(s), we have:

γ₁(s) = (1/4)ρ²[4z + ρ(2 + 6z + 4zs) + ρ²(2 + 2z + 4zs − 4zs²) − ρ³s²(z + 1)²]
= (1/4)ρ²{[4z − ρ³s²(z + 1)²] + [4zsρ − 4zs²ρ²] + ρ(2 + 6z) + ρ²(2 + 2z + 4zs)}

For the first bracket, we have 0 < ρ < 1/(N − 1), and hence ρs < 1; therefore:

4z − ρ³s²(z + 1)² ≥ 4z − ρ(z + 1)² = 4[2(p − s) − 1 − ρ(p − s)²] ≥ 4[2(p − s) − 1 − (p − s)] (since ρ(p − s) ≤ ρ(N − 1) < 1) = 4[(p − s) − 1] ≥ 0, as (p − s) − 1 ≥ 0 (13')

For the second bracket inside γ₁(s) above, we have

4zsρ − 4zs²ρ² = 4zsρ(1 − sρ) > 0, as 0 < ρ < 1/s (14')

Thus we have γ₁(s) > 0 and ∂(Average Direct Impact)/∂s > 0.

Next, we have the average indirect impact with normalized β_k = 1:

Average Indirect Impact = sρ(−1 + 2p − s + p²ρ − psρ − pρ² + p²ρ² + sρ² − 2psρ² + s²ρ²) / [p(1 + ρ)(1 + ρ − sρ − psρ² + s²ρ²)] (15')

We take the derivative of average indirect impact with respect to s :

∂(Average Indirect Impact)/∂s = γ₂(s) / [p(1 + ρ)(1 + ρ − ρs − pρ²s + ρ²s²)²] (16')

where

γ₂(s) = ρ(−1 + 2(p − s)) + ρ²(−1 + 2(p − s) + (p − s)²) + ρ³(−p + 2p² + 2s − 6ps + 4s²) + ρ⁴(−p + p² + 2s − 4ps + 2s²(p + 1 − s)) + ρ⁵(p²s² − 2ps³ + s⁴)

Let z = 2(p − s) − 1 again; then z ∈ [1, 2(N − 1) − 1].

Substituting p = s + (z + 1)/2 in γ₂(s), we have:

γ₂(s) = ρ[(z − ρ²zs) + (1/4)(1 + 6z + z²)ρ + (1/4)ρ²(2z + 2z²) + (1/4)ρ³(−1 + z² − 4zs + 4zs²) + (1/4)ρ⁴(s² + 2zs² + z²s²)]

For the first parenthesis:

z − ρ²zs = z(1 − ρ²s) > z(1 − ρ) > 0, as 0 < ρ < 1/s (17')

For the fourth parenthesis, related to (1/4)ρ³, we have:

−1 + z² − 4zs + 4zs² = 4zs(s − 1) + z² − 1 ≥ 0, (18')

since z ∈ [1, 2(N − 1) − 1] and s ∈ [1, p − 1].

Thus, we have γ₂(s) > 0 and ∂(Average Indirect Impact)/∂s > 0. (19')

Finally, for the average total impact with normalized β_k = 1, we have its derivative with respect to s:

∂(Average Total Impact)/∂s = ρ(1 + ρ)(−1 + 2p − 2s + (p − s)²ρ) / [p(1 + ρ − sρ − psρ² + s²ρ²)²] (20')

Since −1 + 2p − 2s + (p − s)²ρ = (p − s)²ρ + 2(p − s) − 1, and 2(p − s) − 1 ≥ 1 if s ∈ [1, p − 1], we have ∂(Average Total Impact)/∂s > 0.

QED
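Proposition 2 can be illustrated numerically. The sketch below is our own code (parameter values are illustrative): it computes the three average impacts from the closed-form entries of S_i and checks that each one rises as s increases, with p and ρ fixed:

```python
# Our own finite check of Proposition 2: with p and rho fixed, raising the
# number of core agents s (i.e., lowering sparsity) raises all three impacts.
def impacts(p, s, rho):
    D0 = 1 + rho - s*rho - p*s*rho**2 + s**2*rho**2
    a = (1 + 2*rho - s*rho + p*rho**2 - s*rho**2 - p*s*rho**2 + s**2*rho**2) / ((1 + rho) * D0)
    c = rho * (1 + p*rho - s*rho) / ((1 + rho) * D0)
    e = rho / D0
    b = (1 + rho - s*rho + s*rho**2 - p*s*rho**2 + s**2*rho**2) / D0
    d = s * rho**2 / D0
    direct = (s*a + (p - s)*b) / p                     # mean diagonal of S_i
    indirect = (s*((s - 1)*c + (p - s)*e)              # mean off-diagonal row sum
                + (p - s)*(s*e + (p - s - 1)*d)) / p
    return direct, indirect, direct + indirect

p, rho = 10, 0.01                                      # rho < 1/(p - 1)
for s in range(1, p - 1):
    lo, hi = impacts(p, s, rho), impacts(p, s + 1, rho)
    assert all(h > l for l, h in zip(lo, hi))
print("all three average impacts increase in s")
```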

A3. Proofs for Section 4

Proof for Proposition 3:

(i) (Complete Component) If W is a block-diagonal matrix with complete components (s = p − 1), the adjacency matrix W_i of a complete component i is as follows:

W_i = [ 1₀ (s×s)  1 (s×(p−s)) ; 1 ((p−s)×s)  0 ((p−s)×(p−s)) ] = [ 1₀ ((p−1)×(p−1))  1 ((p−1)×1) ; 1 (1×(p−1))  0 (1×1) ] (p×p) = [ CC  CP ; PC  PP ] = [ 1₀  R ; R′  0 ] (21')

In this case, CC = 1₀ is a (p − 1)×(p − 1) matrix with all ones except the diagonal terms. PP is a 1×1 zero matrix. CP is a (p − 1)×1 matrix with all ones. PC is the transpose matrix of CP.

Thus, we have

I − ρW_i = [ I − ρ1₀ ((p−1)×(p−1))  −ρ1 ((p−1)×1) ; −ρ1 (1×(p−1))  I (1×1) ] (p×p) (22')

Since S := (I − ρW)^(−1), S is also a block-diagonal matrix, with a submatrix S_i for each component of the following form:

S_i := (I_i − ρW_i)^(−1) = s_kk I (p×p) + s_kj 1₀ (p×p) (23')

i.e., a matrix with diagonal entries s_kk and off-diagonal entries s_kj,

where

s_kk = (ρp − 2ρ − 1) / [(1 + ρ)(ρp − ρ − 1)] and s_kj = s_jk = −ρ / [(1 + ρ)(ρp − ρ − 1)]. (24')

It is straightforward to verify that S_i(I − ρW_i) = I and S(I − ρW) = I.

QED
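A quick check of (24') (our own sketch, with illustrative values p = 3 and ρ = 0.1): the closed-form entries satisfy, row by row, the two scalar identities that make S_i(I − ρW_i) = I for a complete component:

```python
# Our own verification of the complete-component entries in (24').
p, rho = 3, 0.1
s_kk = (rho*p - 2*rho - 1) / ((1 + rho) * (rho*p - rho - 1))
s_kj = -rho / ((1 + rho) * (rho*p - rho - 1))
# diagonal entry of S_i(I - rho*W_i): s_kk - rho*(p-1)*s_kj = 1
assert abs(s_kk - rho*(p - 1)*s_kj - 1) < 1e-12
# off-diagonal entry: s_kj - rho*s_kk - rho*(p-2)*s_kj = 0
assert abs(s_kj - rho*s_kk - rho*(p - 2)*s_kj) < 1e-12
print("entries of (24') verified")
```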

(ii) (General Complete Core-Periphery Component) Similarly, if W is a block-diagonal matrix with general complete core-periphery components (s < p − 1), suppose that before merging, the number of cores in a component is s₀ and the size of a component is p₀. After merging, the number of cores in each component is s = μs₀, where μ is the number of small components merged into a bigger component of size p = μp₀. That is, p = (p₀/s₀)s = λs, where λ = p₀/s₀ > 1 is fixed. Thus, W_i has the form

W_i = [ 1₀ (s×s)  1 (s×(p−s)) ; 1 ((p−s)×s)  0 ((p−s)×(p−s)) ] (25')

Since S_i := (I − ρW_i)^(−1), S is also a block-diagonal matrix, with submatrices S_i of the following form:

S_i := (I − ρW_i)^(−1) = [ I − ρ1₀ (s×s)  −ρ1 (s×(p−s)) ; −ρ1 ((p−s)×s)  I ]^(−1) = [ A (s×s)  B (s×(p−s)) ; C ((p−s)×s)  D ((p−s)×(p−s)) ] (p×p) (26')

where

A (s×s) = (I − ρ1₀ − ρ²RR′)^(−1) = a I (s×s) + c 1₀ (s×s)

a = [1 + 2ρ − sρ + (λ − 1)sρ² − (λ − 1)s²ρ²] / [(1 + ρ)(1 + ρ − sρ − λs²ρ² + s²ρ²)]

c = ρ(1 + λsρ − sρ) / [(1 + ρ)(1 + ρ − sρ − λs²ρ² + s²ρ²)]

B = (I − ρ1₀)^(−1) ρR [I − ρ²R′(I − ρ1₀)^(−1)R]^(−1) = e 1 (s×(p−s))

e = ρ / (1 + ρ − sρ − λs²ρ² + s²ρ²)

C = ρR′(I − ρ1₀ − ρ²RR′)^(−1) = B′ = e 1 ((p−s)×s)

D = [I − ρ²R′(I − ρ1₀)^(−1)R]^(−1) = b I ((p−s)×(p−s)) + d 1₀ ((p−s)×(p−s))

b = (1 + ρ − sρ + sρ² − λs²ρ² + s²ρ²) / (1 + ρ − sρ − λs²ρ² + s²ρ²)

d = sρ² / (1 + ρ − sρ − λs²ρ² + s²ρ²)

QED

Proof for Proposition 4:

(i) (Complete Component) First, we prove that for the network with complete components (s = p − 1), the diagonal/non-diagonal elements of S_i increase as the network sparsity decreases when the size of component p increases.

For the network with complete components (s = p − 1), recall s_kk = (ρp − 2ρ − 1)/[(1 + ρ)(ρp − ρ − 1)]. We take the first derivative of s_kk with respect to p to obtain:

∂s_kk/∂p = [ρ(1 + ρ)(ρp − ρ − 1) − (ρp − 2ρ − 1)(1 + ρ)ρ] / [(1 + ρ)²(ρp − ρ − 1)²] = ρ² / [(1 + ρ)(ρp − ρ − 1)²] > 0 (27')

Thus if ρ ∈ (0, 1/(N − 1)), the diagonal element of S_i increases as the network sparsity decreases due to the increase of the size of each component p.

Recall s_kj = s_jk = −ρ / [(1 + ρ)(ρp − ρ − 1)]. We take the first derivative of s_kj with respect to p to obtain:

∂s_kj/∂p = ∂s_jk/∂p = [−ρ/(1 + ρ)] · [−ρ/(ρp − ρ − 1)²] = ρ² / [(1 + ρ)(ρp − ρ − 1)²] > 0 (28')

Thus if ρ ∈ (0, 1/(N − 1)), the non-diagonal element of S_i increases as the network sparsity decreases due to the increase of the size of each component p.

For the network with complete components (s = p − 1), the average direct impact with normalized β_k = 1 becomes

Average Direct Impact = (1/N) · B · p · s_kk = s_kk (29')

From Equation (27'), s_kk increases as p enlarges if ρ ∈ (0, 1/(N − 1)). Thus we conclude that the average direct impact increases as the network sparsity decreases due to the increase of the size of each component p.

We have the average indirect impact with normalized β_k = 1 as follows:

Average Indirect Impact = (1/N) · B · p(p − 1) · s_kj = (p − 1)s_kj = ρ(p − 1) / [(1 + ρ)(1 − ρ(p − 1))] (30')

Thus the derivative of average indirect impact with respect to p is

∂(Average Indirect Impact)/∂p = ρ / [(1 + ρ)(ρp − ρ − 1)²] > 0 (31')

We conclude that the average indirect impact increases as the network sparsity decreases due to the increase of the size of each component p if ρ ∈ (0, 1/(N − 1)).

Finally, we have the average total impact with normalized β_k = 1:

Average Total Impact = s_kk + (p − 1)s_kj = 1 / (1 + ρ − ρp) (32')

Thus the derivative of the average total impact with respect to p is

∂(Average Total Impact)/∂p = ρ / (ρp − ρ − 1)² > 0 (33')

We conclude that the average total impact increases as the network sparsity decreases due to the increase of the size of each component p if ρ ∈ (0, 1/(N − 1)).

QED
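The monotonicity results (27')–(33') imply that merging two complete components (doubling p) raises all three impact measures. A small numerical sketch, our own and with illustrative N and ρ:

```python
# Our own sketch of Proposition 4(i): with rho fixed inside (0, 1/(N-1)),
# doubling the complete-component size p (merging two components) raises the
# direct, indirect, and total impacts given by (29'), (30'), and (32').
N, rho = 16, 1 / 32                     # rho < 1/(N - 1)
def complete_impacts(p):
    s_kk = (rho*p - 2*rho - 1) / ((1 + rho) * (rho*p - rho - 1))
    s_kj = -rho / ((1 + rho) * (rho*p - rho - 1))
    return s_kk, (p - 1) * s_kj, 1 / (1 + rho - rho*p)

before, after = complete_impacts(8), complete_impacts(16)
assert all(hi > lo for lo, hi in zip(before, after))
print("merging complete components raises direct, indirect, and total impact")
```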

(ii) (General Complete Core-Periphery Component) Similarly, for the network with general complete core-periphery components (s < p − 1), we first show that the parameters a to e of the component submatrix S_i increase as the network sparsity decreases when s ∈ [1, p − 1).

As ρ ∈ (0, 1/(N − 1)),

S_i := (I − ρW_i)^(−1) = Σ_{k=0}^{∞} ρ^k W_i^k (34')

Thus each element of S_i is non-negative. As λ > 1, we have:

∂a/∂s = ∂c/∂s = ρ²(λ − ρ + λρ − 2ρs + 2λρs + ρ²s² − 2λρ²s² + λ²ρ²s²) / [(1 + ρ)(1 + ρ − ρs − λρ²s² + ρ²s²)²]
= ρ²[λ + (λ − 1)ρ + 2ρs(λ − 1) + ρ²s²(λ − 1)²] / [(1 + ρ)(1 + ρ − ρs − λρ²s² + ρ²s²)²] > 0 (35')

∂b/∂s = ∂d/∂s = ρ²[1 + ρ + (λ − 1)ρ²s²] / (1 + ρ − ρs − λρ²s² + ρ²s²)² > 0 (36')

∂e/∂s = ρ²[1 + 2ρs(λ − 1)] / (1 + ρ − ρs − λρ²s² + ρ²s²)² > 0 (37')

Therefore, if N is fixed and ρ ∈ (0, 1/(N − 1)), then the parameters a to e increase as the network sparsity decreases when s ∈ [1, p − 1).

We then compute the formulae of the network impacts with normalized β_k = 1 and derive their derivatives with respect to s.

Average Direct Impact = [λ + 2λρ − λρs + (λ − 1)ρ² + (λ − 1)sρ² − λ(λ − 1)s²ρ² + (λ − 1)sρ³ − (λ − 1)²s²ρ³] / [λ(1 + ρ)(1 + ρ − sρ − λs²ρ² + s²ρ²)] (38')

We take the derivative of average direct impact with respect to s :

∂(Average Direct Impact)/∂s = ρ²[2λ − 1 + ρ(3(λ − 1) + 2s(λ − 1)) + ρ²(λ − 1 + 2s²(λ − 1)²) + ρ³s²(λ − 1)²] / [λ(1 + ρ)(1 + ρ − ρs − λρ²s² + ρ²s²)²] > 0 (39')

since λ > 1. Thus ∂(Average Direct Impact)/∂s > 0.

Next, we have the average indirect impact with normalized β_k = 1:

Average Indirect Impact = ρ(−1 − s + 2λs + ρ²s − λρ²s − λρs² + λ²ρs² + ρ²s² − 2λρ²s² + λ²ρ²s²) / [λ(1 + ρ)(1 + ρ − sρ − λs²ρ² + s²ρ²)] (40')

We take the derivative of average indirect impact with respect to s :

∂(Average Indirect Impact)/∂s = ξ(s) / [λ(1 + ρ)(1 + ρ − ρs − λρ²s² + ρ²s²)²] (41')

where

ξ(s) = (2λ − 1)ρ + ρ²[−2 + 2λ + 2λs(λ − 1)] + ρ³[1 − λ + 4s(λ − 1)² + s²(λ − 1)²] + ρ⁴[1 − λ + 2s(λ − 1)² − s²(λ − 1)²] + ρ⁵[−s²(λ − 1)²]

Let w=λ1>0 and replace λ by w+1 in ξ( s ) to obtain:

ξ(s) = ρ(1 + 2w) − ρ⁵s²w² + ρ²(2w + 2sw + 2sw²) + ρ³(−w + 4sw² + s²w²) + ρ⁴(2sw² − s²w² − w)
= ρ(1 + 2w) + ρ²(2sw + 2sw²) + 4sw²ρ³ + (2ρ²w − ρ³w − ρ⁴w) + (ρ³s²w² − ρ⁴s²w²) + (2sw²ρ⁴ − ρ⁵s²w²) (42')

The first three terms of ξ( s ) are all positive. For the third parenthesis, we have:

2ρ²w − ρ³w − ρ⁴w = ρ²w(2 − ρ − ρ²) > 0, as 0 < ρ < 1 and 0 < ρ² < 1 (43')

For the fourth parenthesis, we have:

ρ³s²w² − ρ⁴s²w² = ρ³s²w²(1 − ρ) > 0, as 0 < ρ < 1 (44')

For the last parenthesis, we have:

2sw²ρ⁴ − ρ⁵s²w² = sw²ρ⁴(2 − ρs) > 0 (45')

as 0 < ρ < 1/s due to 0 < ρ < 1/(N − 1).

Thus we have ξ(s) > 0 and ∂(Average Indirect Impact)/∂s > 0. (46')

Finally, we have the average total impact with normalized β_k = 1:

Average Total Impact = (λ − ρ + λρ − sρ + λρs) / [λ(1 + ρ − sρ − λs²ρ² + s²ρ²)] (47')

We take the derivative of average total impact with respect to s :

∂(Average Total Impact)/∂s = ρ(1 − ρs + λρs)(−1 + 2λ − 2ρ + 2λρ − ρs + λρs) / [λ(1 + ρ − sρ − λs²ρ² + s²ρ²)²]
= ρ[1 + (λ − 1)ρs][2λ − 1 + 2ρ(λ − 1) + (λ − 1)ρs] / [λ(1 + ρ − sρ − λs²ρ² + s²ρ²)²] > 0 (48')

as λ > 1. Thus ∂(Average Total Impact)/∂s > 0.

QED
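As a closing numerical check (our own sketch, with illustrative parameter values), the closed-form average total impact in (47') agrees with a direct row-sum computation from the entries a, c, e, b, d of Proposition 3(ii):

```python
# Our own check: closed-form average total impact (47') vs. row sums of S_i
# for the merged core-periphery case p = lam*s.
lam, s, rho = 3, 2, 0.05
p = lam * s
D0 = 1 + rho - s*rho - lam*s**2*rho**2 + s**2*rho**2
a = (1 + 2*rho - s*rho + (lam - 1)*s*rho**2 - (lam - 1)*s**2*rho**2) / ((1 + rho) * D0)
c = rho * (1 + lam*s*rho - s*rho) / ((1 + rho) * D0)
e = rho / D0
b = (1 + rho - s*rho + s*rho**2 - lam*s**2*rho**2 + s**2*rho**2) / D0
d = s * rho**2 / D0
core_row = a + (s - 1)*c + (p - s)*e          # total impact of a core agent
peri_row = s*e + b + (p - s - 1)*d            # total impact of a periphery agent
by_rows = (s*core_row + (p - s)*peri_row) / p
closed = (lam - rho + lam*rho - s*rho + lam*rho*s) / (lam * D0)
assert abs(by_rows - closed) < 1e-12
print(round(closed, 6))
```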

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Allen, F. and Gale, D. (2000) Financial Contagion. Journal of Political Economy, 108, 1-33.
https://doi.org/10.1086/262109
[2] Acemoglu, D., Ozdaglar, A. and Tahbaz-Salehi, A. (2015) Systemic Risk and Stability in Financial Networks. American Economic Review, 105, 564-608.
https://doi.org/10.1257/aer.20130456
[3] Glasserman, P. and Young, H.P. (2016) Contagion in Financial Networks. Journal of Economic Literature, 54, 779-831.
https://doi.org/10.1257/jel.20151228
[4] Jie, E.Y. and Ma, Y. (2025) An Analysis of Incomplete and Random Financial Networks. Journal of Mathematical Finance, in Press.
[5] Di Maggio, M., Kermani, A. and Song, Z. (2017) The Value of Trading Relations in Turbulent Times. Journal of Financial Economics, 124, 266-284.
https://doi.org/10.1016/j.jfineco.2017.01.003
[6] Li, D. and Schürhoff, N. (2018) Dealer Networks. The Journal of Finance, 74, 91-144.
https://doi.org/10.1111/jofi.12728
[7] Levine, R., Lin, C. and Wang, Z. (2020) Bank Networks and Acquisitions. Management Science, 66, 5216-5241.
https://doi.org/10.1287/mnsc.2019.3428
[8] Jackson, M. (2008) Social and Economic Networks. Princeton University Press.
https://doi.org/10.1515/9781400833993
[9] LeSage, J.P. and Pace, R.K. (2009) Introduction to Spatial Econometrics. Chapman and Hall/CRC.
https://doi.org/10.1201/9781420064254
[10] Diestel, R. (2005) Graph Theory. 3rd Edition, Springer-Verlag.
[11] Bech, M.L. and Atalay, E. (2010) The Topology of the Federal Funds Market. Physica A: Statistical Mechanics and Its Applications, 389, 5223-5246.
https://doi.org/10.1016/j.physa.2010.05.058
[12] Hollifield, B., Neklyudov, A. and Spatt, C. (2017) Bid-Ask Spreads, Trading Networks, and the Pricing of Securitizations. The Review of Financial Studies, 30, 3048-3085.
https://doi.org/10.1093/rfs/hhx027
[13] Craig, B. and Ma, Y. (2022) Intermediation in the Interbank Lending Market. Journal of Financial Economics, 145, 179-207.
https://doi.org/10.1016/j.jfineco.2021.11.003
[14] Elliott, M., Golub, B. and Jackson, M.O. (2014) Financial Networks and Contagion. American Economic Review, 104, 3115-3153.
https://doi.org/10.1257/aer.104.10.3115
[15] Cerdeiro, D.A., Dziubiński, M. and Goyal, S. (2017) Individual Security, Contagion, and Network Design. Journal of Economic Theory, 170, 182-226.
https://doi.org/10.1016/j.jet.2017.05.006
[16] in ’t Veld, D. and van Lelyveld, I. (2014) Finding the Core: Network Structure in Interbank Markets. Journal of Banking & Finance, 49, 27-40.
https://doi.org/10.1016/j.jbankfin.2014.08.006
[17] in ’t Veld, D., van der Leij, M. and Hommes, C. (2020) The Formation of a Core-Periphery Structure in Heterogeneous Financial Networks. Journal of Economic Dynamics and Control, 119, Article 103972.
https://doi.org/10.1016/j.jedc.2020.103972
[18] Bramoullé, Y., Djebbari, H. and Fortin, B. (2009) Identification of Peer Effects through Social Networks. Journal of Econometrics, 150, 41-55.
https://doi.org/10.1016/j.jeconom.2008.12.021
[19] Grieser, W., Hadlock, C., LeSage, J. and Zekhnini, M. (2022) Network Effects in Corporate Financial Policies. Journal of Financial Economics, 144, 247-272.
https://doi.org/10.1016/j.jfineco.2021.05.060
[20] vom Lehn, C. and Winberry, T. (2021) The Investment Network, Sectoral Comovement, and the Changing U.S. Business Cycle. The Quarterly Journal of Economics, 137, 387-433.
https://doi.org/10.1093/qje/qjab020
[21] Horn, R.A. and Johnson, C.R. (1985) Matrix Analysis. Cambridge University Press.
https://doi.org/10.1017/cbo9780511810817
[22] Magnus, J.R. and Neudecker, H. (2019) Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley.
https://doi.org/10.1002/9781119541219

Copyright © 2025 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.