
This paper investigates a networked evolutionary model based on the snow-drift game with a reward-and-penalty strategy. Firstly, by using the semi-tensor product of matrices approach, the mathematical model of the networked evolutionary game is built. Secondly, combined with the matrix expression of logic, the mathematical model is expressed as a dynamic logical system and then converted into its evolutionary dynamic algebraic form. Thirdly, the dynamic evolution process is analyzed and the final level of cooperation is discussed. Finally, the effects of changes in the reward and penalty factors on the level of cooperation are studied separately, and the conclusions are verified by examples.

Cooperation widely exists in complex systems ranging from biological to economic and social networks, and cooperative behavior is regarded as a key factor in the evolution process, so research on cooperation has great significance for group development. In recent years, with the development of network research and new experimental results, cooperation has been studied on different networks across different disciplines. Emerging from classical game theory, evolutionary game theory provides an important and effective theoretical framework for studying the evolution of cooperation between competing individuals.

To convert the strategy-profile dynamics of an evolutionary game into a logical dynamic system, a useful tool called the semi-tensor product of matrices has emerged. It was proposed by Professor Cheng [

Combining the predecessors’ research on the controllability of networked evolutionary games [

The composition of this paper is as follows: in Section 2, we give some preliminary knowledge, including the semi-tensor product of matrices, networked evolutionary games and replicator dynamics. Section 3 discusses the dynamic model of networked evolution based on the snow-drift game. In Section 4, we use specific examples to discuss the ultimate level of cooperation among the players in the game, which is followed by a brief conclusion in Section 5.

For statement ease, we first introduce some notations:

・ M_{m×n} is the set of m × n real matrices;

・ Col_i(M) refers to the i-th column of matrix M; Col(M) is the set of columns of M;

・ The domain of k-valued logic is denoted D_k := {1, 2, ⋯, k};

・ I_n is the identity matrix; δ_n^i := Col_i(I_n) refers to the i-th column of I_n; Δ_n := Col(I_n) is the set of columns of I_n;

・ M ∈ M_{m×n} is called a logical matrix if Col(M) ⊂ Δ_m; the set of m × n logical matrices is denoted L_{m×n};

・ Assume L ∈ L_{m×n}; then L = [δ_m^{i_1}, δ_m^{i_2}, ⋯, δ_m^{i_n}], which can be abbreviated as L = δ_m[i_1, i_2, ⋯, i_n].

Definition 2.1 [ ]: Let A ∈ M_{m×n} and B ∈ M_{p×q}. The semi-tensor product of A and B is defined as

$$A \ltimes B = (A \otimes I_{\alpha/n})(B \otimes I_{\alpha/p}) \qquad (1)$$

where α is the least common multiple of n and p, and ⊗ is the tensor (Kronecker) product of matrices, given by:

$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix} \qquad (2)$$
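For readers who want to experiment with the definitions, the following minimal NumPy sketch implements the semi-tensor product of Definition 2.1; the function name `stp` is our own choice, not part of the paper or any library.

```python
import numpy as np

def stp(A, B):
    """Semi-tensor product A ⋉ B of Definition 2.1."""
    n = A.shape[1]              # number of columns of A
    p = B.shape[0]              # number of rows of B
    alpha = np.lcm(n, p)        # least common multiple of n and p
    # A ⋉ B = (A ⊗ I_{alpha/n}) (B ⊗ I_{alpha/p})
    return np.kron(A, np.eye(alpha // n)) @ np.kron(B, np.eye(alpha // p))
```

When n = p, the semi-tensor product reduces to the ordinary matrix product.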

Definition 2.2 [ ]: Let A ∈ M_{m×s} and B ∈ M_{p×s} have the same number of columns s. The column-wise semi-tensor (Khatri–Rao) product of A and B is defined as

$$A * B = [\mathrm{Col}_1(A) \ltimes \mathrm{Col}_1(B), \cdots, \mathrm{Col}_s(A) \ltimes \mathrm{Col}_s(B)] \qquad (3)$$
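A corresponding sketch of this column-wise product, reusing the `stp` helper above (again, the name `khatri_rao` is our own choice):

```python
def khatri_rao(A, B):
    """Column-wise semi-tensor product A * B of Definition 2.2.

    A and B must have the same number of columns s; the i-th column of
    the result is Col_i(A) ⋉ Col_i(B).
    """
    s = A.shape[1]
    return np.hstack([stp(A[:, [i]], B[:, [i]]) for i in range(s)])
```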

Consider a multi-valued logical function [

f : Δ k 1 × Δ k 2 × ⋯ × Δ k n → Δ k 0 (4)

Assume x i ∈ Δ k i , i = 1 , ⋯ , n , then we have x : = ⋉ i = 1 n x i , where k = ∏ i = 1 n k i .

Using this notation and writing f(x) := f(x_1, ⋯, x_n), there is a unique logical matrix M_f ∈ L_{k_0×k}, called the structural matrix of f, such that f has the vector form:

f ( x 1 , x 2 , ⋯ , x n ) = M f ⋉ i = 1 n x i (5)

where M_f = [ f(δ_k^1), f(δ_k^2), ⋯, f(δ_k^k) ] and f(x) = M_f x.
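As an illustration of the vector form (5), the sketch below builds the structural matrix of a two-valued "AND" function (with the value 1 identified with δ_2^1 and 2 with δ_2^2, as the paper does later); the helper `delta` and the example itself are ours, not from the paper.

```python
def delta(k, i):
    """The i-th column of the k×k identity matrix, as a k×1 vector (δ_k^i)."""
    v = np.zeros((k, 1))
    v[i - 1, 0] = 1.0
    return v

# Structural matrix of the two-valued AND: f(1,1)=1, f(1,2)=f(2,1)=f(2,2)=2,
# so M_f = δ_2[1, 2, 2, 2] and f(x1, x2) = M_f ⋉ x1 ⋉ x2.
M_f = np.hstack([delta(2, j) for j in (1, 2, 2, 2)])

x1, x2 = delta(2, 1), delta(2, 2)         # x1 = 1, x2 = 2
print(stp(M_f, stp(x1, x2)).ravel())      # -> [0. 1.], i.e. δ_2^2
```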

Definition 2.3 [ ]: A networked evolutionary game consists of the following three components:

・ A network ( N, E ), where N = {1, 2, ⋯, n} represents the set of nodes and E ⊂ N × N denotes the set of edges;

・ Basic networked game G, such that if ( i , j ) ∈ E , then the strategies adopted by i and j are respectively marked as x i ( t ) and x j ( t ) ;

・ Π indicates the strategy updating rules for local information.

In a network ( N, E ): the set of nodes adjacent to i is called the neighborhood of i and denoted by U(i); in this paper, we assume that i ∈ U(i). Regardless of the directionality of the edges, if there exists a path from i to j whose length is less than or equal to r, then j is called an r-neighbor node of i. The set of r-neighbor nodes of i is denoted by U_r(i).

The network used in this paper is an undirected cycle graph S_n, in which every node has the same degree 2.

Definition 2.4 [ ]: In a networked evolutionary game, let c_{ij}(t) denote the payoff of player i playing with its neighbor j at time t. The aggregate payoff of player i at time t is then

c i ( t ) = ∑ j ∈ U ( i ) \ i c i j ( t ) , i ∈ N (6)

The strategy of player i at time t + 1 depends on the information of its neighbors at time t, including their strategies and the corresponding payoffs. Let x_i(t) be the strategy of player i at time t. In the networked evolutionary game, the strategy updating rule Π is expressed as:

$$x_i(t+1) = f_i(\{x_j(t), c_j(t) \mid j \in U(i)\}), \quad t \ge 0, \ i \in N \qquad (7)$$

This paper mainly uses the unconditional-imitation strategy updating rule, as follows: the strategy of player i at time t + 1, x_i(t+1), is selected as the best strategy among the strategies of the players in the neighborhood U(i) at time t. At this time:

x i ( t + 1 ) = x j * ( t ) (8)

where

j * = arg max j ∈ U ( i ) c j ( x ( t ) ) (9)

When the player with the best payoff is not unique,

j * = min { u | u ∈ arg max j ∈ U ( i ) c j ( x ( t ) ) } (10)
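A minimal sketch of this updating rule, assuming the cycle network S_n used later in the paper (so U(i) = {i − 1, i, i + 1} with indices taken modulo n); the function name and interface are our own choices.

```python
def imitate(x, c):
    """One round of unconditional imitation, Equations (8)-(10).

    x : current strategy profile [x_1(t), ..., x_n(t)], values in {1, 2}
    c : current payoffs [c_1(x(t)), ..., c_n(x(t))]
    """
    n = len(x)
    x_next = []
    for i in range(n):
        nbhd = [(i - 1) % n, i, (i + 1) % n]           # U(i) on the cycle
        best = max(c[j] for j in nbhd)
        j_star = min(j for j in nbhd if c[j] == best)  # tie-break, rule (10)
        x_next.append(x[j_star])
    return x_next
```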

Consider a homogeneous (well-mixed) population in which each individual can play with every other individual. Each pair of individuals plays according to the payoff matrix

$$\begin{pmatrix} R & S \\ T & P \end{pmatrix}$$

Assume that the proportion of individuals using the cooperative strategy is x and the proportion choosing defection is y. The payoffs of a cooperator and a defector in the population are:

$$\begin{cases} P_C = Rx + Sy \\ P_D = Tx + Py \end{cases} \qquad (11)$$

According to replicator dynamics, the rate of change of the proportion of individuals using a strategy is proportional to that proportion and to the difference between the strategy's payoff and the average payoff of the population:

$$\begin{cases} \dfrac{dx}{dt} = x(P_C - \varphi) \\[6pt] \dfrac{dy}{dt} = y(P_D - \varphi) \end{cases} \qquad (12)$$

where φ = xP_C + yP_D is the average payoff of the population; in the evolutionary game process, an individual's fitness is closely related to the proportions of individuals adopting the various strategies. According to formulas (11) and (12), and combining x + y = 1, we obtain the cooperator's replicator dynamic equation:

d x d t = x ( 1 − x ) [ ( R − S − T + P ) x + S − P ] (13)
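As a quick numerical illustration of Equation (13), the sketch below integrates the replicator equation with a plain Euler step; the step size, number of steps and function name are our own illustrative choices.

```python
def replicator_trajectory(R, S, T, P, x0=0.5, dt=0.01, steps=5000):
    """Euler integration of dx/dt = x(1-x)[(R-S-T+P)x + S - P]  (Eq. (13))."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1 - x) * ((R - S - T + P) * x + S - P)
    return x

# With a snow-drift ordering T > R > S > P the trajectory approaches the
# interior fixed point x* = (P - S) / (R - S - T + P).
print(replicator_trajectory(R=1.1, S=0.9, T=1.3, P=0.0))   # ≈ 0.818
```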

According to (13), this nonlinear differential equation depends closely on the parameters of the payoff matrix. Considering the different dynamical characteristics, we can discuss the following four situations separately (a small classifier sketch follows the list):

・ Defection dominates (D dominates C): T > R > P > S; the payoff of a defector is better than that of a cooperator, as in the Prisoner’s Dilemma;

・ Coexistence (C and D coexist): T > R > S > P , at this time, cooperation and defection are in a symbiotic state, such as snow-drift game and Hawk-Dove game;

・ Bistable situation (C and D are bistable): R > T > P > S , at this time, the player’s optimal strategy is to be consistent with the opponent: choosing cooperation or defection at the same time, such as: Stag Hunt Game;

・ Cooperation dominates (C dominates D): when T < R and P < S, no matter what the opponent chooses, the cooperative strategy is better than the defective strategy.
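These four orderings can be checked mechanically; the small classifier below is only a convenience of ours, mirroring the list above.

```python
def classify(R, S, T, P):
    """Classify a symmetric 2x2 game by the payoff ordering."""
    if T > R > P > S:
        return "defection dominates (Prisoner's Dilemma type)"
    if T > R > S > P:
        return "coexistence (snow-drift / Hawk-Dove type)"
    if R > T > P > S:
        return "bistable (Stag Hunt type)"
    if T < R and P < S:
        return "cooperation dominates"
    return "other ordering"

print(classify(R=1.1, S=0.9, T=1.3, P=0.0))   # -> coexistence (snow-drift ...)
```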

In the traditional snow-drift game, there are two strategies for the players to choose from: cooperation and defection. Consider two drivers travelling in opposite directions on a snowy night who are blocked by the same snowdrift. Assume that the cost of removing the snowdrift to clear the road is c and that a clear road is worth b to each of them, with b > c. The cost of shoveling is shared evenly among the cooperators who shovel, and those who do not contribute are defectors. In order to encourage players to cooperate, we propose the following setting: if a player cooperates while the other defects, the cooperator gains an additional reward β and the defector has γ deducted from its payoff; when both players choose to cooperate, each gains an additional benefit α. The original snow-drift game thus becomes a mutated snow-drift game with a reward-and-penalty strategy. To make the framework clear, we give the payoff bi-matrix below.

| | C | D |
|---|---|---|
| C | ( b − c/2 + α, b − c/2 + α ) | ( b − c + β, b − γ ) |
| D | ( b − γ, b − c + β ) | ( 0, 0 ) |

Next, we discuss the Nash equilibrium conditions for the mutated snow-drift game. The payoffs of cooperators and defectors in the population are:

$$\begin{cases} P_C = \left(b - \dfrac{c}{2} + \alpha\right)x + (b - c + \beta)(1 - x) = \left(\dfrac{c}{2} + \alpha - \beta\right)x + b - c + \beta \\[8pt] P_D = (b - \gamma)x + 0 \cdot y = (b - \gamma)x \end{cases} \qquad (14)$$

with φ = x P C + y P D = x P C + ( 1 − x ) P D , the cooperator’s replicator dynamical equation is:

d x d t = x ( P C − φ ) = x ( 1 − x ) [ ( R − S − T + P ) x + S − P ] (15)

According to the different dynamical characteristics, for the game to remain a (mutated) snow-drift game, the following inequality relationship must hold:

T > R > S > P (16)

that is:

$$b - \gamma > b - \frac{c}{2} + \alpha > b - c + \beta > 0 \qquad (17)$$

Therefore, the relationship between the reward and punishment factors is:

$$\begin{cases} b > \gamma \\ \alpha + \gamma < \dfrac{c}{2} \\ \beta + \gamma < c \end{cases} \qquad (18)$$

where

$$0 < \alpha < \frac{c}{2}, \quad 0 < \beta < c, \quad 0 < \gamma < \frac{c}{2}$$
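To make these constraints concrete, the sketch below assembles the mutated payoffs and checks the snow-drift ordering T > R > S > P; `mutated_payoffs` is our own helper name, and the numerical values are the ones used in the example of the next section.

```python
def mutated_payoffs(b, c, alpha, beta, gamma):
    """Payoffs of the snow-drift game with reward and penalty (payoff bi-matrix above)."""
    R = b - c / 2 + alpha     # both cooperate
    S = b - c + beta          # cooperate against a defector
    T = b - gamma             # defect against a cooperator
    P = 0.0                   # both defect
    return R, S, T, P

R, S, T, P = mutated_payoffs(b=1.5, c=1.0, alpha=0.1, beta=0.4, gamma=0.2)
print((R, S, T, P), T > R > S > P)   # -> (1.1, 0.9, 1.3, 0.0) True
```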

In (7), since c_j(t) depends only on the strategies of the players in U(j), and U(j) ⊂ U_2(i) for every j ∈ U(i), the dynamic evolution can be rewritten as:

x i ( t + 1 ) = f i ( { x j ( t ) | j ∈ U 2 ( i ) } ) t ≥ 0 , i ∈ N (19)

We calculate the basic evolutionary equation for any node. In the cycle S n , the neighborhood of node i is recorded as:

U ( i ) = { i − 1 , i , i + 1 } , U 2 ( i ) = { i − 2 , i − 1 , i , i + 1 , i + 2 }

Given the situation, i.e. the strategy of each node in U_2(i), we can compute the payoff of every node in U(i). Then, applying the strategy updating rule to these payoffs, we obtain the new strategy:

$$x_i(t+1) = f_i(x_{i-2}(t), x_{i-1}(t), x_i(t), x_{i+1}(t), x_{i+2}(t)) \qquad (20)$$

Let 1 ∼ δ_2^1 and 2 ∼ δ_2^2; then in vector form we obtain:

$$x_i(t+1) = M_{f_i} \ltimes x_{i-2}(t) \ltimes x_{i-1}(t) \ltimes x_i(t) \ltimes x_{i+1}(t) \ltimes x_{i+2}(t) = M'_{f_i} x(t), \quad i \in N \qquad (21)$$

where M'_{f_i} is the structural matrix obtained when the players adopt the evolutionary strategy of imitating their neighbors, and x(t) = x_1(t) ⋉ x_2(t) ⋉ ⋯ ⋉ x_n(t).

From (21) and Definition 2.2, we obtain the algebraic form of the evolutionary dynamics:

x ( t + 1 ) = M G x ( t ) (22)

where M G = M ′ f 1 ∗ M ′ f 2 ∗ ⋯ ∗ M ′ f n , M G ∈ L 2 n × 2 n is called the transition matrix of the game.
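Under the assumption that every M'_{f_i} is already expressed with respect to the whole state x(t), the transition matrix is just the repeated column-wise product of Definition 2.2; a one-line sketch using the `khatri_rao` helper from Section 2:

```python
from functools import reduce

def transition_matrix(M_primes):
    """M_G = M'_{f_1} * M'_{f_2} * ... * M'_{f_n}  (Equation (22))."""
    return reduce(khatri_rao, M_primes)
```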

Based on the calculation methods for M f i and M G , we can draw the following conclusions:

$$\mathrm{Col}_1(M_G) = \ltimes_{i=1}^{n} \mathrm{Col}_1(M'_{f_i}) = \delta_{2^n}^1, \qquad \mathrm{Col}_{2^n}(M_G) = \ltimes_{i=1}^{n} \mathrm{Col}_{2^n}(M'_{f_i}) = \delta_{2^n}^{2^n} \qquad (23)$$

This formula shows that if all players initially choose the cooperative strategy, they maintain cooperation forever; conversely, if all players initially choose defection, they keep defecting, just as at the start.

The matrix M_G is the transition matrix of the game. For any initial state z(0) ∈ Δ_{2^n}, we have

z ( t ) = M G t z ( 0 ) (24)

We have

G = lim t → ∞ M G t (25)

Next, to discuss G, we define the set

$$H = \{\delta_{2^n}^i \mid \mathrm{Col}_i(G) = \delta_{2^n}^1, \ 1 \le i \le 2^n - 1\} \qquad (26)$$

In other words, for any initial state, if z ( 0 ) = δ 2 n i ∈ H , then

z ( ∞ ) = G z ( 0 ) = G δ 2 n i = C o l i ( G ) = δ 2 n 1 (27)

Therefore, we conclude that if the initial state is selected as z ( 0 ) = δ 2 n i ∈ H , the final state of all players will remain cooperation.
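Because every column of M_G is a canonical vector δ_{2^n}^j, the state map can be followed column by column, and the set H can be computed without forming the limit G explicitly. The sketch below represents M_G by the list of its column indices, exactly as in the δ-notation of Section 2; the function name is ours.

```python
def cooperation_basin(MG_indices, n_players):
    """Indices i (1-based) such that δ_{2^n}^i eventually reaches the
    all-cooperation fixed point δ_{2^n}^1 under x(t+1) = M_G x(t)."""
    N = 2 ** n_players
    basin = []
    for i in range(1, N + 1):
        state = i
        for _ in range(N):                 # N steps suffice on N states
            state = MG_indices[state - 1]  # apply M_G: read off column `state`
            if state == 1:
                basin.append(i)
                break
    return basin
```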

According to the conditions mentioned above, in the snow-drift game with the reward-and-penalty strategy, we consider the following example: n = 4, b = 1.5, c = 1, α = 0.1, β = 0.4, γ = 0.2; the resulting payoffs are shown in the payoff bi-matrix below.

The basic evolutionary equation can be worked out as in the table below.

Then, according to the unconditional-imitation updating rule, if the profile at time t is ( x_1(t), x_2(t), x_3(t), x_4(t) ), the strategy of each player i at time t + 1 is given by f_i (i = 1, 2, 3, 4), expressed in vector form as:

$$x_i(t+1) = f_i(x_1(t), x_2(t), x_3(t), x_4(t)) = M_i \ltimes x_1(t) \ltimes x_2(t) \ltimes x_3(t) \ltimes x_4(t), \quad i = 1, 2, 3, 4 \qquad (28)$$

| | C | D |
|---|---|---|
| C | ( 1.1, 1.1 ) | ( 0.9, 1.3 ) |
| D | ( 1.3, 0.9 ) | ( 0, 0 ) |

| Profile | 1111 | 1112 | 1121 | 1122 | 1211 | 1212 | 1221 | 1222 |
|---|---|---|---|---|---|---|---|---|
| c_{1} | 2.2 | 2 | 2.2 | 2 | 2 | 1.8 | 2 | 1.8 |
| c_{2} | 2.2 | 2.2 | 2 | 2 | 2.6 | 2.6 | 1.3 | 1.3 |
| c_{3} | 2.2 | 2 | 2.6 | 1.3 | 2 | 1.8 | 1.3 | 0 |
| c_{4} | 2.2 | 2.6 | 2 | 1.3 | 2.2 | 2.6 | 2 | 1.3 |
| f_{1} | 1 | 2 | 1 | 1 | 2 | 2 | 1 | 1 |
| f_{2} | 1 | 1 | 2 | 1 | 2 | 2 | 1 | 1 |
| f_{3} | 1 | 2 | 2 | 1 | 2 | 2 | 1 | 2 |
| f_{4} | 1 | 2 | 2 | 1 | 1 | 2 | 1 | 1 |

| Profile | 2111 | 2112 | 2121 | 2122 | 2211 | 2212 | 2221 | 2222 |
|---|---|---|---|---|---|---|---|---|
| c_{1} | 2.6 | 1.3 | 2.6 | 1.3 | 1.3 | 0 | 1.3 | 0 |
| c_{2} | 2 | 2 | 1.8 | 1.8 | 1.3 | 1.3 | 0 | 0 |
| c_{3} | 2.2 | 2 | 2.6 | 1.3 | 2 | 1.8 | 1.3 | 0 |
| c_{4} | 2 | 1.3 | 1.8 | 0 | 2 | 1.3 | 1.8 | 0 |
| f_{1} | 2 | 1 | 2 | 1 | 1 | 2 | 1 | 2 |
| f_{2} | 2 | 1 | 2 | 1 | 1 | 1 | 2 | 2 |
| f_{3} | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 2 |
| f_{4} | 2 | 1 | 2 | 2 | 1 | 1 | 1 | 2 |

where

$$\begin{aligned}
M_1 &= \delta_2[1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2] \\
M_2 &= \delta_2[1, 1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 2] \\
M_3 &= \delta_2[1, 2, 2, 1, 2, 2, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2] \\
M_4 &= \delta_2[1, 2, 2, 1, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 1, 2]
\end{aligned} \qquad (29)$$

Now we have the dynamic situation as

x ( t + 1 ) = M G x ( t ) (30)

where M_G = M_1 ∗ M_2 ∗ M_3 ∗ M_4 and x(t) = x_1(t) ⋉ x_2(t) ⋉ x_3(t) ⋉ x_4(t). It is easy to compute that

M G = δ 16 [ 1 , 12 , 8 , 1 , 15 , 16 , 1 , 3 , 14 , 1 , 16 , 2 , 1 , 9 , 5 , 16 ] (31)

and

M G 2 = δ 16 [ 1 , 2 , 3 , 1 , 5 , 16 , 1 , 8 , 9 , 1 , 16 , 12 , 1 , 14 , 15 , 16 ]

M G 3 = δ 16 [ 1 , 12 , 8 , 1 , 15 , 16 , 1 , 3 , 14 , 1 , 16 , 2 , 1 , 9 , 5 , 16 ] (32)

M G 4 = δ 16 [ 1 , 2 , 3 , 1 , 5 , 16 , 1 , 8 , 9 , 1 , 16 , 12 , 1 , 14 , 15 , 16 ]

From the above dynamics we can conclude that if x(0) ∈ { δ_16^1, δ_16^4, δ_16^7, δ_16^10, δ_16^13 }, then x(t) eventually reaches δ_16^1. In other words, if the initial strategy profile of the four players is one of the following, they will all eventually choose the cooperative strategy:

( 1,1,1,1 ) , ( 1,1,2,2 ) , ( 1,2,2,1 ) , ( 2,1,1,2 ) , ( 2,2,1,1 ) (33)

From this, we conclude that the profiles that ultimately maintain cooperation account for 5/16 of all initial profiles.
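These numbers can be reproduced with the `cooperation_basin` sketch given earlier, feeding it the column indices of M_G from (31); `index_to_profile` is a small helper of ours that converts δ_16^i back to a strategy profile.

```python
MG = [1, 12, 8, 1, 15, 16, 1, 3, 14, 1, 16, 2, 1, 9, 5, 16]   # Eq. (31)
basin = cooperation_basin(MG, n_players=4)

def index_to_profile(i, n_players=4):
    """δ_{2^n}^i corresponds to the profile read off from i-1 in binary."""
    bits = format(i - 1, f"0{n_players}b")
    return tuple(int(b) + 1 for b in bits)

print(basin, len(basin) / 16)               # -> [1, 4, 7, 10, 13] 0.3125
print([index_to_profile(i) for i in basin])
# -> [(1,1,1,1), (1,1,2,2), (1,2,2,1), (2,1,1,2), (2,2,1,1)]
```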

Based on the above, we can identify two conditions that promote cooperation:

・ R + S ≥ 2T: when one neighbor of a cooperator cooperates and the other defects, while both neighbors of a defector are cooperators, the player's choice will be biased towards cooperation whenever the cooperator's payoff is greater than or equal to the defector's; in this case the purpose of promoting cooperation is achieved. That is,

( b − c/2 + α ) + ( b − c + β ) ≥ 2( b − γ ), i.e. −(3/2)c + α + β + 2γ ≥ 0;

・ S > T: if, in some initial state, the neighbors of a cooperator all choose the defective strategy while the neighbors of a defector are all cooperators, then the chance of cooperation increases greatly when the cooperator's payoff exceeds the defector's. That is, b − c + β > b − γ, i.e. γ − c + β > 0. (Both conditions are checked numerically in the sketch below.)
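A minimal numerical check of the two conditions, using the baseline parameters of the example above (the function names are ours):

```python
def condition1(c, alpha, beta, gamma):
    """R + S >= 2T, i.e. -3c/2 + alpha + beta + 2*gamma >= 0."""
    return -1.5 * c + alpha + beta + 2 * gamma >= 0

def condition2(c, beta, gamma):
    """S > T, i.e. beta + gamma - c > 0."""
    return beta + gamma - c > 0

# baseline parameters of the example: c = 1, alpha = 0.1, beta = 0.4, gamma = 0.2
print(condition1(1.0, 0.1, 0.4, 0.2), condition2(1.0, 0.4, 0.2))   # -> False False
print(condition1(1.0, 0.7, 0.4, 0.2))   # raising alpha to 0.7 -> True
```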

In order to study the effect of changes in the parameters on the level of cooperation, we change the values of α, β and γ in turn, obtain the corresponding final states of stable cooperation, and then study the proportion of profiles that end in cooperation and observe how the cooperation rate changes. (A simulation sketch that reproduces these proportions is given after the three cases.)

・ Firstly, we change the value of α. Since β and γ are unchanged, the first condition gives α ≥ (3/2)c − β − 2γ; with the initial parameter values this yields α ≥ 0.7, so we take α = 0.7. The payoff bi-matrix in this case is given in the corresponding table below.

According to the basic evolutionary equation under these parameters, the result is:

M G = δ 16 [ 1 , 1 , 1 , 1 , 1 , 16 , 1 , 1 , 9 , 1 , 16 , 1 , 1 , 9 , 1 , 16 ] (34)

and

M G 2 = δ 16 [ 1 , 1 , 1 , 1 , 1 , 16 , 1 , 1 , 9 , 1 , 16 , 1 , 1 , 9 , 1 , 16 ]

M G k = δ 16 [ 1 , 1 , 1 , 1 , 1 , 16 , 1 , 1 , 9 , 1 , 16 , 1 , 1 , 9 , 1 , 16 ] (35)

Therefore, for sufficiently large t, if

x ( 0 ) ∈ { δ 16 1 , δ 16 2 , δ 16 3 , δ 16 4 , δ 16 5 , δ 16 7 , δ 16 8 , δ 16 10 , δ 16 12 , δ 16 13 , δ 16 15 } (36)

then x ( t ) = δ 16 1 . That is, if the initial strategy of the four players is one of the following situations, they will eventually choose to cooperate:

( 1,1,1,1 ), ( 1,1,1,2 ), ( 1,1,2,1 ), ( 1,1,2,2 ), ( 1,2,1,1 ), ( 1,2,2,1 ), ( 1,2,2,2 ), ( 2,1,1,2 ), ( 2,1,2,2 ), ( 2,2,1,1 ), ( 2,2,2,1 ) (37)

The practical significance of this result is that when we increase the reward factor α, the proportion of profiles that ultimately maintain cooperation rises to 11/16. The probability of cooperation is greatly increased by increasing the benefits of cooperation.

・ Secondly, changing the value of β, we get β ≥ 1 from condition 1 and β > 0.8 from condition 2, so we take β = 0.9. The payoff bi-matrix in this case is given in the corresponding table below.

Similarly, we have:

M G = δ 16 [ 1 , 12 , 8 , 1 , 15 , 1 , 1 , 3 , 14 , 1 , 1 , 2 , 1 , 9 , 5 , 16 ]

M G 2 = δ 16 [ 1 , 2 , 3 , 1 , 5 , 1 , 1 , 8 , 9 , 1 , 1 , 12 , 1 , 14 , 15 , 16 ]

M G 3 = δ 16 [ 1 , 12 , 8 , 1 , 15 , 1 , 1 , 3 , 14 , 1 , 1 , 2 , 1 , 9 , 5 , 16 ]

M G 4 = δ 16 [ 1 , 2 , 3 , 1 , 5 , 1 , 1 , 8 , 9 , 1 , 1 , 12 , 1 , 14 , 15 , 16 ] (38)

For sufficiently large t, when

x ( 0 ) ∈ { δ 16 1 , δ 16 4 , δ 16 6 , δ 16 7 , δ 16 10 , δ 16 11 , δ 16 13 } (39)

then x(t) = δ_16^1, and the corresponding initial profiles are:

( 1,1,1,1 ), ( 1,1,2,2 ), ( 1,2,1,2 ), ( 1,2,2,1 ), ( 2,1,1,2 ), ( 2,1,2,1 ), ( 2,2,1,1 ) (40)

We can see that, compared with the original setting, the proportion of profiles that ultimately maintain cooperation has increased to 7/16. That is to say, when one player chooses to cooperate and the other chooses to defect, increasing the reward of the cooperator can raise the proportion of cooperation.

・ Thirdly, changing the value of γ, we get γ ≥ 0.5 from condition 1 and γ > 0.6 from condition 2, so we take γ = 0.5. The payoff bi-matrix in this case is given in the corresponding table below.

Eventually, we have:

M G = δ 16 [ 1 , 1 , 1 , 1 , 1 , 16 , 1 , 1 , 9 , 1 , 16 , 1 , 1 , 9 , 1 , 16 ]

M G 2 = δ 16 [ 1 , 1 , 1 , 1 , 1 , 16 , 1 , 1 , 9 , 1 , 16 , 1 , 1 , 9 , 1 , 16 ]

M G k = δ 16 [ 1 , 1 , 1 , 1 , 1 , 16 , 1 , 1 , 9 , 1 , 16 , 1 , 1 , 9 , 1 , 16 ] (41)

Similarly, for sufficiently large t, if

x ( 0 ) ∈ { δ 16 1 , δ 16 2 , δ 16 3 , δ 16 4 , δ 16 5 , δ 16 7 , δ 16 8 , δ 16 10 , δ 16 12 , δ 16 13 , δ 16 15 } (42)

then x(t) = δ_16^1; the initial profiles whose players ultimately choose to cooperate are as follows:

( 1,1,1,1 ), ( 1,1,1,2 ), ( 1,1,2,1 ), ( 1,1,2,2 ), ( 1,2,1,1 ), ( 1,2,2,1 ), ( 1,2,2,2 ), ( 2,1,1,2 ), ( 2,1,2,2 ), ( 2,2,1,1 ), ( 2,2,2,1 ) (43)

When one of two players chooses to cooperate and the other does not, we can increase the punishment to reduce the gains of the defector. As a result, the proportion of profiles that maintain cooperation has increased to 11/16, which also achieves the purpose of promoting cooperation.
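The three case studies above can be reproduced end to end by simulating the unconditional-imitation dynamics on the cycle S_4 directly from the payoff parameters. The sketch below is our own reconstruction of that computation (it reuses the `mutated_payoffs` helper from Section 3 and the tie-breaking rule (10)); it should return the cooperation fractions 5/16, 11/16, 7/16 and 11/16 reported above.

```python
from itertools import product

def pairwise_payoff(si, sj, R, S, T, P):
    """Payoff of strategy si against a neighbour playing sj (1 = C, 2 = D)."""
    return {(1, 1): R, (1, 2): S, (2, 1): T, (2, 2): P}[(si, sj)]

def cooperation_fraction(b, c, alpha, beta, gamma, n=4):
    """Fraction of initial profiles on the cycle S_n that end in all-cooperation."""
    R, S, T, P = mutated_payoffs(b, c, alpha, beta, gamma)

    def step(x):
        payoffs = [pairwise_payoff(x[i], x[(i - 1) % n], R, S, T, P)
                   + pairwise_payoff(x[i], x[(i + 1) % n], R, S, T, P)
                   for i in range(n)]
        new = []
        for i in range(n):
            nbhd = [(i - 1) % n, i, (i + 1) % n]
            best = max(payoffs[j] for j in nbhd)
            j_star = min(j for j in nbhd if payoffs[j] == best)   # rule (10)
            new.append(x[j_star])
        return tuple(new)

    hits = 0
    for x0 in product((1, 2), repeat=n):
        x = x0
        for _ in range(2 ** n):            # enough steps to settle
            x = step(x)
        hits += (x == (1,) * n)
    return hits / 2 ** n

print(cooperation_fraction(1.5, 1.0, 0.1, 0.4, 0.2))   # baseline      -> 0.3125 (5/16)
print(cooperation_fraction(1.5, 1.0, 0.7, 0.4, 0.2))   # larger alpha  -> 0.6875 (11/16)
print(cooperation_fraction(1.5, 1.0, 0.1, 0.9, 0.2))   # larger beta   -> 0.4375 (7/16)
print(cooperation_fraction(1.5, 1.0, 0.1, 0.4, 0.5))   # larger gamma  -> 0.6875 (11/16)
```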

Payoff bi-matrix with α = 0.7 (β = 0.4, γ = 0.2):

| | C | D |
|---|---|---|
| C | ( 1.7, 1.7 ) | ( 0.9, 1.3 ) |
| D | ( 1.3, 0.9 ) | ( 0, 0 ) |

Payoff bi-matrix with β = 0.9 (α = 0.1, γ = 0.2):

| | C | D |
|---|---|---|
| C | ( 1.1, 1.1 ) | ( 1.4, 1.3 ) |
| D | ( 1.3, 1.4 ) | ( 0, 0 ) |

Payoff bi-matrix with γ = 0.5 (α = 0.1, β = 0.4):

| | C | D |
|---|---|---|
| C | ( 1.1, 1.1 ) | ( 0.9, 1 ) |
| D | ( 1, 0.9 ) | ( 0, 0 ) |

In this paper, we have investigated a networked evolutionary model based on the snow-drift game with a reward-and-penalty strategy. By using the semi-tensor product of matrices approach, the mathematical model of the networked evolutionary game is expressed as a dynamic logical system and then converted into its evolutionary dynamic algebraic form. Based on this form, several properties of the game's evolutionary dynamics have been revealed. We have found the following result: increasing the rewards for cooperators and the punishments for defectors promotes cooperation among the players. There remain many problems in our model and conclusions that are worth further study.

The author declares no conflicts of interest regarding the publication of this paper.
