
Some Properties of the g-Good-Neighbor (g-Extra) Diagnosability of a Multiprocessor System


ABSTRACT

Diagnosability of a multiprocessor system is an important research topic. In 2012, Peng et al. proposed a measure for the fault tolerance of a system, called the g-good-neighbor diagnosability, which requires that every fault-free node has at least g fault-free neighbors. In 2015, Zhang et al. proposed a measure for the fault diagnosis of a system, namely the g-extra diagnosability, which requires that every fault-free component has at least g + 1 fault-free nodes. In this paper, we obtain some properties of the g-good-neighbor (g-extra) diagnosability of a system and give the g-good-neighbor (g-extra) diagnosability of some graphs under the PMC model and the MM* model.

1. Introduction

Many multiprocessor systems take interconnection networks (networks for short) as underlying topologies, and a network is usually represented by a graph in which nodes represent processors and links represent communication links between processors. We use graphs and networks interchangeably. For a multiprocessor system, the study of the topological properties of its network is important. Moreover, some processors in the system may fail, so processor fault identification plays an important role in reliable computing. The first step in dealing with faults is to distinguish the faulty processors from the fault-free ones. This identification process is called the diagnosis of the system. A system is said to be t-diagnosable if all faulty processors can be identified without replacement, provided that the number of faults present does not exceed t. The diagnosability t(G) of a system G is the maximum value of t such that G is t-diagnosable [1] - [3] . For a t-diagnosable system, Dahbura and Masson [1] proposed an algorithm with time complexity O(n^2.5) that can effectively identify the set of faulty processors.

Several diagnosis models have been proposed to identify faulty processors. One major approach is the PMC diagnosis model introduced by Preparata, Metze, and Chien [4] . The diagnosis of the system is achieved through two linked processors testing each other. Another major approach, namely the comparison diagnosis model (MM model), was proposed by Maeng and Malek [5] . In the MM model, to diagnose a system, a node sends the same task to two of its neighbors and then compares their responses. In 2005, Lai et al. [3] introduced a restricted diagnosability of multiprocessor systems called conditional diagnosability. They consider the situation in which no fault set can contain all the neighbors of any vertex of the system. In 2012, Peng et al. [6] proposed a measure for fault diagnosis of the system, namely the g-good-neighbor diagnosability (also called the g-good-neighbor conditional diagnosability), which requires that every fault-free node has at least g fault-free neighbors. In [6] , they studied the g-good-neighbor diagnosability of the n-dimensional hypercube under the PMC model. In [7] , Wang and Han studied the g-good-neighbor diagnosability of the n-dimensional hypercube under the MM* model. Yuan et al. [8] [9] studied the g-good-neighbor diagnosability of the k-ary n-cube (k ≥ 3) under the PMC model and the MM* model. The Cayley graph CΓ_n generated by a transposition tree Γ_n has recently received considerable attention. In [10] [11] , Wang et al. studied the g-good-neighbor diagnosability of CΓ_n under the PMC model and the MM* model for g = 1, 2. In 2015, Zhang et al. [12] proposed a new measure for fault diagnosis of the system, namely the g-extra diagnosability, which requires that every fault-free component has at least g + 1 fault-free nodes. In [12] , they studied the g-extra diagnosability of the n-dimensional hypercube under the PMC model and the MM* model. The n-dimensional bubble-sort star graph BS_n has many good properties. In 2016, Wang et al. [13] studied the 2-extra diagnosability of BS_n under the PMC model and the MM* model. In this paper, we obtain some properties of the g-good-neighbor (g-extra) diagnosability of a system and give the g-good-neighbor (g-extra) diagnosability of some graphs under the PMC model and the MM* model.

2. Preliminaries

In this section, we introduce some definitions and notation needed for our discussion, some known results, and the PMC and MM* models.

2.1. Diagnosability

Under the PMC model, to diagnose a system G, two adjacent nodes in G are capable of performing tests on each other. For two adjacent nodes u and v in V(G), the test performed by u on v is represented by the ordered pair (u, v). The outcome of a test (u, v) is 1 (resp. 0) if u evaluates v as faulty (resp. fault-free). In the PMC model, we usually assume that the test result is reliable (resp. unreliable) if the node u is fault-free (resp. faulty). A test assignment T for a system G is a collection of tests for every adjacent pair of vertices. It can be modeled as a directed testing graph T = (V(G), L), where (u, v) ∈ L implies that u and v are adjacent in G. The collection of all test results for a test assignment T is called a syndrome. Formally, a syndrome is a function σ : L → {0, 1}.

The set of all faulty processors in the system is called a faulty set. This can be any subset of V(G). For a given syndrome σ, a subset of vertices F ⊆ V(G) is said to be consistent with σ if σ can be produced from the situation that, for any (u, v) ∈ L such that u ∈ V \ F, σ(u, v) = 1 if and only if v ∈ F. This means that F is a possible set of faulty processors. Since a test outcome produced by a faulty processor is unreliable, a given set F of faulty vertices may produce many different syndromes. On the other hand, different faulty sets may produce the same syndrome. Let σ(F) denote the set of all syndromes with which F is consistent.
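These definitions can be exercised with a short sketch (our own illustration, not from the paper; the three-vertex path, the test assignment, and all function names are assumptions): a fault set F is consistent with a syndrome exactly when every test issued by a fault-free tester reports the true status of the tested node, while tests issued by faulty testers are unconstrained.

```python
from itertools import product

def pmc_consistent(tests, faults, syndrome):
    """F is consistent with the syndrome iff every test (u, v) with a
    fault-free tester u reports 1 exactly when v is faulty; outcomes of
    tests issued by faulty testers are unconstrained."""
    for (u, v) in tests:
        if u not in faults:                      # reliable tester
            if syndrome[(u, v)] != (1 if v in faults else 0):
                return False
    return True

# Path a - b - c, with each adjacent pair testing in both directions.
tests = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")]

def sigma_of(faults):
    """Enumerate sigma(F): all syndromes with which F is consistent."""
    return [bits for bits in product((0, 1), repeat=len(tests))
            if pmc_consistent(tests, faults, dict(zip(tests, bits)))]

# The two tests issued by the faulty node b are free, so sigma({b})
# contains 2^2 = 4 syndromes:
print(len(sigma_of({"b"})))  # -> 4
```

The enumeration makes the remark above concrete: a faulty set typically lies in many syndromes, while the empty faulty set is consistent with exactly one (the all-zero syndrome).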

Under the PMC model, two distinct sets F1 and F2 in V(G) are said to be indistinguishable if σ(F1) ∩ σ(F2) ≠ ∅; otherwise, F1 and F2 are said to be distinguishable. Accordingly, we say (F1, F2) is an indistinguishable pair if σ(F1) ∩ σ(F2) ≠ ∅; else, (F1, F2) is a distinguishable pair.

Using the MM model [14] , the diagnosis is carried out by sending the same testing task to a pair of processors and comparing their responses. Under the MM model, we always assume that the output of a comparison performed by a faulty processor is unreliable. The comparison scheme of a system G = (V, E) is modeled as a multigraph, denoted by M = (V(G), L), where L is the labeled-edge set. A labeled edge (u, v)_w ∈ L represents a comparison in which two vertices u and v are compared by a vertex w, which implies uw, vw ∈ E(G); w is called a comparator for processors u and v. The collection of all comparison results in M = (V(G), L) is called the syndrome of the diagnosis, denoted by σ*. If the comparison (u, v)_w disagrees, then σ*((u, v)_w) = 1; otherwise, σ*((u, v)_w) = 0. Hence, a syndrome is a function from L to {0, 1}. The MM* model is a special case of the MM model. In the MM* model, all comparisons of G are in the comparison scheme of G, i.e., if uw, vw ∈ E(G), then (u, v)_w ∈ L. Since the comparator processor w itself might be faulty, a comparison result σ*((u, v)_w) = 0 implies that if the comparator w is fault-free, then the compared processors u and v are fault-free; on the other hand, a comparison result σ*((u, v)_w) = 1 implies that at least one of the three processors involved in the comparison must be faulty. The possible comparison results for the different conditions of the three processors involved in a comparison are shown in Table 1.

For a given syndrome σ*, a subset of vertices F ⊆ V(G) is said to be consistent with σ* if σ* can be produced from the situation that, for any (u, v)_w ∈ L such that w ∈ V \ F, σ*((u, v)_w) = 1 if and only if u ∈ F or v ∈ F. Similarly to the PMC model, we can define two distinct sets F1 and F2 in V(G) to be indistinguishable (resp. distinguishable) under the MM* model.
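The MM* comparison scheme and its consistency condition can likewise be sketched in code (the 4-cycle and all names below are our own assumptions, not the paper's):

```python
from itertools import combinations

def mm_star_scheme(adj):
    """All labeled comparisons (u, v)_w of the MM* model: every pair of
    neighbors u, v of each comparator w is compared by w."""
    return [(u, v, w) for w, nbrs in adj.items()
            for u, v in combinations(sorted(nbrs), 2)]

def mm_consistent(comps, faults, sigma):
    """F is consistent with sigma* iff every comparison whose comparator w
    is fault-free disagrees (outputs 1) exactly when u or v is faulty."""
    return all(sigma[(u, v, w)] == (1 if (u in faults or v in faults) else 0)
               for (u, v, w) in comps if w not in faults)

# 4-cycle a-b-c-d as an illustrative system.
adj = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
comps = mm_star_scheme(adj)
# With F = {"a"}, a fault-free comparator flags exactly the comparisons
# that involve a:
sigma = {(u, v, w): 1 if "a" in (u, v) else 0 for (u, v, w) in comps}
print(mm_consistent(comps, {"a"}, sigma))  # -> True
```

Note that comparisons performed by a faulty comparator are skipped in the check, mirroring the assumption that their outputs are unreliable.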

A system G = (V, E) is g-good-neighbor t-diagnosable if F1 and F2 are distinguishable for each distinct pair of g-good-neighbor faulty subsets F1 and F2 of V with |F1| ≤ t and |F2| ≤ t. The g-good-neighbor diagnosability t_g(G) of G is the maximum value of t such that G is g-good-neighbor t-diagnosable.

Proposition 1. ( [5] ) For any given system G, t_g(G) ≤ t_g′(G) if g ≤ g′.

In a system G = (V, E), a faulty set F ⊆ V is called a conditional faulty set if it does not contain all the neighbors of any vertex in G. A system G is conditionally t-diagnosable if every two distinct conditional faulty subsets F1, F2 ⊆ V with |F1| ≤ t and |F2| ≤ t are distinguishable. The conditional diagnosability t_c(G) of G is the maximum value of t such that G is conditionally t-diagnosable. By [15] , t_c(G) ≥ t(G).

Theorem 2. [10] For a system G = (V, E), t(G) = t_0(G) ≤ t_1(G) ≤ t_c(G).

In [10] , Wang et al. proved that the 1-good-neighbor diagnosability of the bubble-sort graph B_n under the PMC model is 2n − 3 for n ≥ 4. In [16] , Zhou et al. proved that the conditional diagnosability of B_n under the PMC model is 4n − 11 for n ≥ 4. Therefore, t_1(B_n) < t_c(B_n) when n ≥ 5 and t_1(B_n) = t_c(B_n) when n = 4.

In a system G = (V, E), a faulty set F ⊆ V is called a g-extra faulty set if every component of G − F has more than g nodes. G is g-extra t-diagnosable if and only if, for each distinct pair of g-extra faulty subsets F1, F2 ⊆ V(G) such that |F1| ≤ t and |F2| ≤ t, F1 and F2 are distinguishable. The g-extra diagnosability of G, denoted by t̃_g(G), is the maximum value of t such that G is g-extra t-diagnosable.
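The two kinds of restricted faulty sets can be compared directly in code (a minimal sketch; the path graph and the function names are our own assumptions). As the proof of Proposition 6 below observes for cuts, a set whose removal leaves minimum degree at least g also leaves every component with at least g + 1 vertices, but the converse fails:

```python
def components(adj, removed):
    """Connected components of G - removed, for an adjacency-dict graph."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                comp.add(v)
                stack.extend(adj[v])
        comps.append(comp)
    return comps

def is_good_neighbor_set(adj, F, g):
    """Every vertex outside F keeps at least g neighbors outside F."""
    return all(sum(n not in F for n in adj[v]) >= g
               for v in adj if v not in F)

def is_extra_set(adj, F, g):
    """Every component of G - F has more than g (i.e. >= g + 1) vertices."""
    return all(len(c) >= g + 1 for c in components(adj, F))

# Path a-b-c: the empty faulty set is 2-extra (one component of 3 vertices)
# but not 2-good-neighbor (the end vertices have only one neighbor).
path3 = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(is_good_neighbor_set(path3, set(), 2), is_extra_set(path3, set(), 2))
# -> False True
```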

Proposition 3. [13] For any given system G, t̃_g(G) ≤ t̃_g′(G) if g ≤ g′.

Theorem 4. [13] For a system G = (V, E), t(G) = t̃_0(G) ≤ t̃_g(G) ≤ t_g(G).

Theorem 5. [13] For a system G = (V, E), t̃_1(G) = t_1(G).

2.2. Connectivity

A multiprocessor system is modeled as an undirected simple graph G = (V, E), whose vertices (nodes) represent processors and whose edges (links) represent communication links. Given a nonempty vertex subset V′ of V, the subgraph induced by V′ in G, denoted by G[V′], is the graph whose vertex set is V′ and whose edge set is the set of all edges of G with both endpoints in V′. The degree d_G(v) of a vertex v is the number of edges incident with v. The minimum degree of a vertex in G is denoted by δ(G). For any vertex v, we define the neighborhood N_G(v) of v in G to be the set of vertices adjacent to v; u is called a neighbor of v if u ∈ N_G(v). Let S ⊆ V. We use N_G(S) to denote the set (∪_{v∈S} N_G(v)) \ S. For neighborhoods and degrees, we will usually omit the subscript for the graph when no confusion arises.

Table 1. The possible comparison results for different conditions of the three processors in a comparison.

Certain types of graphs play prominent roles in graph theory. A graph G is said to be k-regular if d_G(v) = k for every vertex v. A complete graph is a simple graph in which any two vertices are adjacent. The connectivity κ(G) of a graph G is the minimum number of vertices whose removal results in a disconnected graph, or in only one vertex left when G is complete. Let F1 and F2 be two distinct subsets of V, and let the symmetric difference F1 Δ F2 = (F1 \ F2) ∪ (F2 \ F1). For graph-theoretical terminology and notation not defined here, we follow [17] .

Let G = (V, E). A faulty set F ⊆ V is called a g-good-neighbor faulty set if |N(v) ∩ (V \ F)| ≥ g for every vertex v in V \ F. A g-good-neighbor cut of G is a g-good-neighbor faulty set F such that G − F is disconnected. The minimum cardinality of a g-good-neighbor cut is said to be the g-good-neighbor connectivity of G, denoted by κ^(g)(G). A faulty set F ⊆ V is called a g-extra faulty set if every component of G − F has at least g + 1 vertices. A g-extra cut of G is a g-extra faulty set F such that G − F is disconnected. The minimum cardinality of a g-extra cut is said to be the g-extra connectivity of G, denoted by κ̃^(g)(G).
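Both connectivities can be computed straight from these definitions by exhaustive search (an illustrative sketch, exponential in |V|; the 6-cycle and all names are our own choices, not examples from the paper). On the 6-cycle both values are 2, which also illustrates Proposition 7 below.

```python
from itertools import combinations

def components(adj, removed):
    """Connected components of G - removed, for an adjacency-dict graph."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                comp.add(v)
                stack.extend(adj[v])
        comps.append(comp)
    return comps

def restricted_connectivity(adj, g, extra=False):
    """Smallest |F| over all g-good-neighbor cuts (or g-extra cuts)."""
    for size in range(len(adj) + 1):
        for F in map(set, combinations(adj, size)):
            comps = components(adj, F)
            if len(comps) < 2:
                continue                      # F is not a cut at all
            if extra:
                ok = all(len(c) >= g + 1 for c in comps)
            else:
                ok = all(sum(n not in F for n in adj[v]) >= g
                         for v in adj if v not in F)
            if ok:
                return size
    return None                               # no such cut exists

cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(restricted_connectivity(cycle6, 1),
      restricted_connectivity(cycle6, 1, extra=True))  # -> 2 2
```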

Proposition 6. Let G be a connected graph. Then κ̃^(g)(G) ≤ κ^(g)(G).

Proof. Let F be a minimum g-good-neighbor cut of G. Then δ(G − F) ≥ g. Therefore, every component of G − F has at least g + 1 vertices, and hence F is a g-extra cut of G. By the definition of the g-extra connectivity, κ̃^(g)(G) ≤ |F| = κ^(g)(G). □

Proposition 7. Let G be a connected graph. Then κ^(1)(G) = κ̃^(1)(G).

Proof. By Proposition 6, κ̃^(1)(G) ≤ κ^(1)(G). Let F be a minimum 1-extra cut of G. Then every component of G − F has at least two vertices. Note that every component of G − F is connected. Therefore, δ(G − F) ≥ 1, and hence F is a 1-good-neighbor cut of G. By the definition of the g-good-neighbor connectivity, κ^(1)(G) ≤ |F| = κ̃^(1)(G). Therefore, κ^(1)(G) = κ̃^(1)(G). □

3. Basic Results

Theorem 8. Let S1 and S2 be a distinct pair of g-good-neighbor (g-extra) faulty subsets of V in a system G = (V, E). If S1 ∪ S2 = V, then (S1, S2) is indistinguishable under the PMC model and the MM* model.

Proof. Consider two cases as follows.

Case 1. MM* model.

Consider the comparison scheme M = (V(G), L) for the system G = (V, E).

1) If i, j, k ∈ S1 \ S2 and ik, jk ∈ E(G[S1 \ S2]), then (i, j)_k ∈ L.

1') If i, j, k ∈ S2 \ S1 and ik, jk ∈ E(G[S2 \ S1]), then (i, j)_k ∈ L.

1'') If i, j, k ∈ S1 ∩ S2 and ik, jk ∈ E(G[S1 ∩ S2]), then (i, j)_k ∈ L.

2) If i, k ∈ S1 \ S2, j ∈ S2, ik ∈ E(G[S1 \ S2]) and jk ∈ E(G), then (i, j)_k ∈ L.

2') If i, k ∈ S2 \ S1, j ∈ S1, ik ∈ E(G[S2 \ S1]) and jk ∈ E(G), then (i, j)_k ∈ L.

2'') If i, k ∈ S1 ∩ S2, j ∉ S1 ∩ S2, ik ∈ E(G[S1 ∩ S2]) and jk ∈ E(G), then (i, j)_k ∈ L.

3) If k ∈ S1 \ S2, i, j ∈ S2, and ik, jk ∈ E(G), then (i, j)_k ∈ L.

3') If k ∈ S2 \ S1, i, j ∈ S1, and ik, jk ∈ E(G), then (i, j)_k ∈ L.

4) If k ∈ S1 ∩ S2, i, j ∉ S1 ∩ S2, and ik, jk ∈ E(G), then (i, j)_k ∈ L.

Consider the following syndrome σ for each (i, j)_k ∈ L.

1) If i, j, k ∈ S1 \ S2, ik, jk ∈ E(G[S1 \ S2]) and (i, j)_k ∈ L, then σ((i, j)_k) = 0.

1') If i, j, k ∈ S2 \ S1, ik, jk ∈ E(G[S2 \ S1]) and (i, j)_k ∈ L, then σ((i, j)_k) = 0.

1'') If i, j, k ∈ S1 ∩ S2, ik, jk ∈ E(G[S1 ∩ S2]) and (i, j)_k ∈ L, then σ((i, j)_k) = 0.

2) If i, k ∈ S1 \ S2, j ∈ S2, ik ∈ E(G[S1 \ S2]), jk ∈ E(G) and (i, j)_k ∈ L, then σ((i, j)_k) = 1.

2') If i, k ∈ S2 \ S1, j ∈ S1, ik ∈ E(G[S2 \ S1]), jk ∈ E(G) and (i, j)_k ∈ L, then σ((i, j)_k) = 1.

2'') If i, k ∈ S1 ∩ S2, j ∉ S1 ∩ S2, ik ∈ E(G[S1 ∩ S2]), jk ∈ E(G) and (i, j)_k ∈ L, then σ((i, j)_k) = 1.

3) If k ∈ S1 \ S2, i, j ∈ S2, ik, jk ∈ E(G) and (i, j)_k ∈ L, then σ((i, j)_k) = 1.

3') If k ∈ S2 \ S1, i, j ∈ S1, ik, jk ∈ E(G) and (i, j)_k ∈ L, then σ((i, j)_k) = 1.

4) If k ∈ S1 ∩ S2, i, j ∉ S1 ∩ S2, ik, jk ∈ E(G) and (i, j)_k ∈ L, then σ((i, j)_k) = 1.

Suppose that S2 is the g-good-neighbor (g-extra) faulty subset of V, i.e., the vertices of V \ S2 = S1 \ S2 are fault-free and the vertices of S2 are faulty. 1) Let i, j, k ∈ S1 \ S2 and ik, jk ∈ E(G[S1 \ S2]). Since i, j and k are fault-free and (i, j)_k ∈ L, we have σ((i, j)_k) = 0. 1') Let i, j, k ∈ S2 \ S1 and ik, jk ∈ E(G[S2 \ S1]). Since i, j and k are faulty and (i, j)_k ∈ L, we may set σ((i, j)_k) = 0. 1'') Let i, j, k ∈ S1 ∩ S2 and ik, jk ∈ E(G[S1 ∩ S2]). Since i, j and k are faulty and (i, j)_k ∈ L, we may set σ((i, j)_k) = 0. 2) Let i, k ∈ S1 \ S2, j ∈ S2, ik ∈ E(G[S1 \ S2]), jk ∈ E(G) and (i, j)_k ∈ L. Since i and k are fault-free and j is faulty, we have σ((i, j)_k) = 1. 2') Let i, k ∈ S2 \ S1, j ∈ S1, ik ∈ E(G[S2 \ S1]), jk ∈ E(G) and (i, j)_k ∈ L. Since the comparator k is faulty, we may set σ((i, j)_k) = 1. 2'') Let i, k ∈ S1 ∩ S2, j ∉ S1 ∩ S2, ik ∈ E(G[S1 ∩ S2]), jk ∈ E(G) and (i, j)_k ∈ L. Whether j ∈ S2 \ S1 (so that i, j and k are faulty) or j ∈ S1 \ S2 (so that i and k are faulty and j is fault-free), the comparator k is faulty, so we may set σ((i, j)_k) = 1. 3) Let k ∈ S1 \ S2, i, j ∈ S2, ik, jk ∈ E(G) and (i, j)_k ∈ L. Since k is fault-free and i and j are faulty, we have σ((i, j)_k) = 1. 3') Let k ∈ S2 \ S1, i, j ∈ S1, ik, jk ∈ E(G) and (i, j)_k ∈ L. Since the comparator k is faulty, we may set σ((i, j)_k) = 1. 4) Let k ∈ S1 ∩ S2, i, j ∉ S1 ∩ S2, ik, jk ∈ E(G) and (i, j)_k ∈ L. Since the comparator k is faulty, we may set σ((i, j)_k) = 1. Therefore, σ ∈ σ(S2). Suppose now that S1 is the g-good-neighbor (g-extra) faulty subset of V. Similarly to the above, we have σ ∈ σ(S1). Thus, the syndrome above belongs to σ(S1) ∩ σ(S2). Therefore, S1 and S2 are indistinguishable under the MM* model.

Case 2. PMC model.

Consider a directed testing graph T = (V(G), L) for the system G = (V, E).

1) If i, j ∈ S1 \ S2 and ij ∈ E(G[S1 \ S2]), then (i, j) ∈ L.

1') If i, j ∈ S2 \ S1 and ij ∈ E(G[S2 \ S1]), then (i, j) ∈ L.

2) If i ∈ S1 \ S2, j ∈ S2 \ S1 and ij ∈ E(G), then (i, j) ∈ L.

2') If i ∈ S2 \ S1, j ∈ S1 \ S2 and ij ∈ E(G), then (i, j) ∈ L.

3) If i ∈ S1 \ S2, j ∈ S1 ∩ S2 and ij ∈ E(G), then (i, j) ∈ L.

3') If i ∈ S1 ∩ S2, j ∈ S1 \ S2 and ij ∈ E(G), then (i, j) ∈ L.

4) If i ∈ S2 \ S1, j ∈ S1 ∩ S2 and ij ∈ E(G), then (i, j) ∈ L.

4') If i ∈ S1 ∩ S2, j ∈ S2 \ S1 and ij ∈ E(G), then (i, j) ∈ L.

5) If i, j ∈ S1 ∩ S2 and ij ∈ E(G), then (i, j) ∈ L.

Consider the following syndrome σ for each (i, j) ∈ L.

1) If i, j ∈ S1 \ S2 and ij ∈ E(G[S1 \ S2]), then (i, j) ∈ L and σ((i, j)) = 0.

1') If i, j ∈ S2 \ S1 and ij ∈ E(G[S2 \ S1]), then (i, j) ∈ L and σ((i, j)) = 0.

2) If i ∈ S1 \ S2, j ∈ S2 \ S1 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

2') If i ∈ S2 \ S1, j ∈ S1 \ S2 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

3) If i ∈ S1 \ S2, j ∈ S1 ∩ S2 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

3') If i ∈ S1 ∩ S2, j ∈ S1 \ S2 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

4) If i ∈ S2 \ S1, j ∈ S1 ∩ S2 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

4') If i ∈ S1 ∩ S2, j ∈ S2 \ S1 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

5) If i, j ∈ S1 ∩ S2 and ij ∈ E(G), then (i, j) ∈ L and σ((i, j)) = 1.

Suppose that S2 is the g-good-neighbor (g-extra) faulty subset of V. 1) Let i, j ∈ S1 \ S2 = V \ S2 and ij ∈ E(G[S1 \ S2]). Since i and j are fault-free and (i, j) ∈ L, we have σ((i, j)) = 0. 1') Let i, j ∈ S2 \ S1 and ij ∈ E(G[S2 \ S1]). Since i and j are faulty and (i, j) ∈ L, we may set σ((i, j)) = 0. 2) Let i ∈ S1 \ S2, j ∈ S2 \ S1 and ij ∈ E(G). Since i is fault-free and j is faulty, we have σ((i, j)) = 1. 2') Let i ∈ S2 \ S1, j ∈ S1 \ S2 and ij ∈ E(G). Since the tester i is faulty, we may set σ((i, j)) = 1. 3) Let i ∈ S1 \ S2, j ∈ S1 ∩ S2 and ij ∈ E(G). Since i is fault-free and j is faulty, we have σ((i, j)) = 1. 3') Let i ∈ S1 ∩ S2, j ∈ S1 \ S2 and ij ∈ E(G). Since the tester i is faulty, we may set σ((i, j)) = 1. 4) Let i ∈ S2 \ S1, j ∈ S1 ∩ S2 and ij ∈ E(G). Since the tester i is faulty, we may set σ((i, j)) = 1. 4') Let i ∈ S1 ∩ S2, j ∈ S2 \ S1 and ij ∈ E(G). Since the tester i is faulty, we may set σ((i, j)) = 1. 5) Let i, j ∈ S1 ∩ S2 and ij ∈ E(G). Since the tester i is faulty, we may set σ((i, j)) = 1. Therefore, σ ∈ σ(S2). Suppose now that S1 is the g-good-neighbor (g-extra) faulty subset of V. Similarly to the above, we have σ ∈ σ(S1). Thus, the syndrome above belongs to σ(S1) ∩ σ(S2). Therefore, S1 and S2 are indistinguishable under the PMC model. □
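The PMC construction above can be exercised on a small assumed example (a 4-cycle; the helper and all names are our own, not the paper's): with S1 ∪ S2 = V, the syndrome built as in the proof is consistent with both faulty sets.

```python
def pmc_consistent(tests, F, sigma):
    """Consistency under the PMC model: fault-free testers report truly."""
    return all(sigma[(u, v)] == (1 if v in F else 0)
               for (u, v) in tests if u not in F)

# 4-cycle 0-1-2-3 with S1 | S2 = V.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
tests = [(u, v) for u in adj for v in adj[u]]
S1, S2 = {0, 1, 2}, {2, 3, 0}

def same_block(u, v):
    """Both endpoints in S1 \\ S2, or both in S2 \\ S1."""
    return (u in S1 - S2 and v in S1 - S2) or \
           (u in S2 - S1 and v in S2 - S1)

# The proof's syndrome: 0 on tests inside S1\S2 or S2\S1 (cases 1 and 1'),
# 1 on every other test (cases 2-5).
sigma = {(u, v): 0 if same_block(u, v) else 1 for (u, v) in tests}

print(pmc_consistent(tests, S1, sigma), pmc_consistent(tests, S2, sigma))
# -> True True
```

Because the one syndrome lies in both σ(S1) and σ(S2), no diagnosis algorithm can tell the two faulty sets apart, exactly as Theorem 8 asserts.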

Corollary 1. Let F1 and F2 be a distinct pair of g-good-neighbor (g-extra) faulty subsets of V in a system G = (V, E). If F1 ∪ F2 = V, |F1| ≤ m and |F2| ≤ m, then the g-good-neighbor (g-extra) diagnosability of G is less than m, i.e., t_g(G) ≤ m − 1 (t̃_g(G) ≤ m − 1), under the PMC model and the MM* model.

Proof. By Theorem 8, G is not g-good-neighbor (g-extra) m-diagnosable under the PMC model and the MM* model. Hence, by the definition of the g-good-neighbor (g-extra) diagnosability, we conclude that the g-good-neighbor (g-extra) diagnosability of G is less than m, i.e., t_g(G) ≤ m − 1 (t̃_g(G) ≤ m − 1). □

Corollary 2. Let a system G = (V, E) with ν vertices be g-good-neighbor (g-extra) t-diagnosable. If there is a distinct pair of g-good-neighbor (g-extra) faulty subsets F1 and F2 of V such that |F1| ≤ t, |F2| ≤ t and ν = |F1 ∪ F2|, then ν ≥ 2t + 1.

Proof. Suppose, on the contrary, that ν ≤ 2t. Let F1 and F2 be a distinct pair of g-good-neighbor (g-extra) faulty subsets of V such that |F1| ≤ t, |F2| ≤ t and ν = |F1 ∪ F2|. Since ν is the number of vertices of G, F1 ∪ F2 = V. By Corollary 1, G is not g-good-neighbor (g-extra) t-diagnosable, a contradiction. □

4. The g-Good-Neighbor (g-Extra) Diagnosability of Some Graphs under the PMC Model and the MM* Model

In this section, we will give the g-good-neighbor (g-extra) diagnosability of some graphs under the PMC model and the MM* model.

Theorem 9. ( [8] [13] ) A system G = (V, E) is g-good-neighbor (g-extra) t-diagnosable under the PMC model if and only if there is an edge uv ∈ E with u ∈ V \ (F1 ∪ F2) and v ∈ F1 Δ F2 for each distinct pair of g-good-neighbor (g-extra) faulty subsets F1 and F2 of V with |F1| ≤ t and |F2| ≤ t.
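Theorem 9's criterion is easy to check mechanically. The sketch below (the 4-cycle example and the function name are our own assumptions) tests whether a pair of faulty sets is distinguishable under the PMC model:

```python
def pmc_distinguishable(edges, V, F1, F2):
    """Theorem 9: distinguishable iff some edge joins V \\ (F1 | F2)
    to the symmetric difference F1 ^ F2."""
    outside, sym = V - (F1 | F2), F1 ^ F2
    return any((u in outside and v in sym) or (v in outside and u in sym)
               for (u, v) in edges)

V = {0, 1, 2, 3}
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(pmc_distinguishable(cycle4, V, {0}, {2}))              # -> True
print(pmc_distinguishable(cycle4, V, {0, 1, 2}, {2, 3, 0}))  # -> False
```

The second call is the situation of Theorem 8: the two sets cover V, so no fault-free vertex remains to test the symmetric difference.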

Theorem 10. ( [1] [8] [13] ) A system G = (V, E) is g-good-neighbor (g-extra) t-diagnosable under the MM* model if and only if each distinct pair of g-good-neighbor (g-extra) faulty subsets F1 and F2 of V with |F1| ≤ t and |F2| ≤ t satisfies one of the following conditions.

1) There are two vertices u, w ∈ V \ (F1 ∪ F2) and there is a vertex v ∈ F1 Δ F2 such that uw ∈ E and vw ∈ E.

2) There are two vertices u, v ∈ F1 \ F2 and there is a vertex w ∈ V \ (F1 ∪ F2) such that uw ∈ E and vw ∈ E.

3) There are two vertices u, v ∈ F2 \ F1 and there is a vertex w ∈ V \ (F1 ∪ F2) such that uw ∈ E and vw ∈ E.
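The three conditions of Theorem 10 translate directly into a checker (a sketch with an assumed 4-cycle example; the function name is ours):

```python
def mm_star_distinguishable(adj, F1, F2):
    """Theorem 10: distinguishable iff some fault-free comparator w (outside
    F1 | F2) sees either (1) a neighbor outside F1 | F2 and a neighbor in
    F1 ^ F2, or (2)/(3) two neighbors in F1 - F2 or two in F2 - F1."""
    outside = set(adj) - (F1 | F2)
    for w in outside:
        nbrs = set(adj[w])
        if nbrs & outside and nbrs & (F1 ^ F2):
            return True
        if len(nbrs & (F1 - F2)) >= 2 or len(nbrs & (F2 - F1)) >= 2:
            return True
    return False

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # the 4-cycle
print(mm_star_distinguishable(adj, {0}, {1}))  # -> True
print(mm_star_distinguishable(adj, {0}, {2}))  # -> False
```

The second call shows a pair that is indistinguishable under MM*: on the 4-cycle, every fault-free comparator of {0} and {2} compares the two suspects against each other, so none of the three conditions can hold.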

Theorem 11. Let 0 ≤ g ≤ ⌊n/2⌋ − 1. If n ≥ 3, the g-good-neighbor (g-extra) diagnosability of the complete graph K_n under the PMC model is t_g(K_n) = ⌈n/2⌉ − 1 (t̃_g(K_n) = ⌈n/2⌉ − 1). If n ≥ 4, the g-good-neighbor (g-extra) diagnosability of K_n under the MM* model is t_g(K_n) = ⌈n/2⌉ − 1 (t̃_g(K_n) = ⌈n/2⌉ − 1).

Proof. Let n ≥ 3 and 0 ≤ g ≤ ⌊n/2⌋ − 1. We first prove that t_g(K_n) ≤ ⌈n/2⌉ − 1 (t̃_g(K_n) ≤ ⌈n/2⌉ − 1). Choose X, Y ⊆ V(K_n) such that |X| = |Y| = ⌈n/2⌉ and X ∪ Y = V(K_n). Note that K_n − X is connected (it is complete) and δ(K_n − X) = n − |X| − 1 = ⌊n/2⌋ − 1 ≥ g; therefore |V(K_n − X)| ≥ g + 1, so X is a g-good-neighbor (g-extra) faulty set, and similarly for Y. By Corollary 1, t_g(K_n) ≤ ⌈n/2⌉ − 1 (t̃_g(K_n) ≤ ⌈n/2⌉ − 1). Next, we prove that t_g(K_n) ≥ ⌈n/2⌉ − 1 (t̃_g(K_n) ≥ ⌈n/2⌉ − 1). By the definition of the g-good-neighbor (g-extra) diagnosability, it is sufficient to show that K_n is g-good-neighbor (g-extra) (⌈n/2⌉ − 1)-diagnosable. Consider two cases as follows.

Case 1. PMC model.

Let n ≥ 3. By Theorem 9, to prove that K_n is g-good-neighbor (g-extra) (⌈n/2⌉ − 1)-diagnosable, it is equivalent to prove that there is an edge uv ∈ E(K_n) with u ∈ V(K_n) \ (F1 ∪ F2) and v ∈ F1 Δ F2 for each distinct pair of g-good-neighbor (g-extra) faulty subsets F1 and F2 of V(K_n) with |F1| ≤ ⌈n/2⌉ − 1 and |F2| ≤ ⌈n/2⌉ − 1. Since |F1| ≤ ⌈n/2⌉ − 1 and |F2| ≤ ⌈n/2⌉ − 1, we have |F1 ∪ F2| ≤ 2⌈n/2⌉ − 2 ≤ n − 1, so V(K_n) \ (F1 ∪ F2) ≠ ∅. Thus, there is a u ∈ V(K_n) \ (F1 ∪ F2). Since the g-good-neighbor (g-extra) faulty subsets F1 and F2 of V(K_n) are distinct, without loss of generality suppose that F2 \ F1 ≠ ∅. Then there is a v ∈ F2 \ F1 ⊆ F1 Δ F2. Since K_n is complete, uv is an edge between V(K_n) \ (F1 ∪ F2) and F1 Δ F2. By Theorem 9, K_n is g-good-neighbor (g-extra) (⌈n/2⌉ − 1)-diagnosable.

Case 2. MM* model.

Let n ≥ 4. By Theorem 10, to prove that K_n is g-good-neighbor (g-extra) (⌈n/2⌉ − 1)-diagnosable, it is equivalent to prove that each distinct pair of g-good-neighbor (g-extra) faulty subsets F1 and F2 of V(K_n) with |F1| ≤ ⌈n/2⌉ − 1 and |F2| ≤ ⌈n/2⌉ − 1 satisfies one of the following conditions: 1) there are two vertices u, w ∈ V(K_n) \ (F1 ∪ F2) and a vertex v ∈ F1 Δ F2 such that uw ∈ E(K_n) and vw ∈ E(K_n); 2) there are two vertices u, v ∈ F1 \ F2 and a vertex w ∈ V(K_n) \ (F1 ∪ F2) such that uw ∈ E(K_n) and vw ∈ E(K_n); 3) there are two vertices u, v ∈ F2 \ F1 and a vertex w ∈ V(K_n) \ (F1 ∪ F2) such that uw ∈ E(K_n) and vw ∈ E(K_n).

Since |F1| ≤ ⌈n/2⌉ − 1 and |F2| ≤ ⌈n/2⌉ − 1, V(K_n) \ (F1 ∪ F2) ≠ ∅. When n is even, |F1 ∪ F2| ≤ n − 2, so there are two vertices u, w ∈ V(K_n) \ (F1 ∪ F2). Since the g-good-neighbor (g-extra) faulty subsets F1 and F2 of V(K_n) are distinct, without loss of generality suppose that F2 \ F1 ≠ ∅. Thus, there is a v ∈ F2 \ F1 ⊆ F1 Δ F2. Since K_n is complete, uw ∈ E(K_n) and vw is an edge between V(K_n) \ (F1 ∪ F2) and F1 Δ F2, so condition 1) holds. Now let n be odd. There is a u ∈ V(K_n) \ (F1 ∪ F2). If there are two vertices u, w ∈ V(K_n) \ (F1 ∪ F2), then, by the argument above, uw ∈ E(K_n) and vw ∈ E(K_n) with v ∈ F2 \ F1, so condition 1) holds. Suppose that V(K_n) \ (F1 ∪ F2) contains exactly one vertex w. Then |F1 \ F2| = |F1 ∪ F2| − |F2| = n − 1 − |F2| ≥ n − 1 − (⌈n/2⌉ − 1) = n − (n + 1)/2 = (n − 1)/2. Since n ≥ 5, we have |F1 \ F2| ≥ 2. Therefore, let u, v ∈ F1 \ F2 ⊆ F1 Δ F2. Since K_n is complete, uw ∈ E(K_n) and vw ∈ E(K_n), so condition 2) holds. By Theorem 10, K_n is g-good-neighbor (g-extra) (⌈n/2⌉ − 1)-diagnosable. □
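As a sanity check (our own brute-force script, not part of the paper's argument): in K_n, Theorem 9's criterion reduces to the statement that a distinct pair is indistinguishable if and only if F1 ∪ F2 = V(K_n), since the symmetric difference of distinct sets is never empty. The sketch below verifies t_g(K_n) = ⌈n/2⌉ − 1 under the PMC model for small n.

```python
from itertools import combinations
from math import ceil

def t_g_complete(n, g):
    """Largest t such that all distinct pairs of g-good-neighbor faulty
    subsets of size <= t are distinguishable under the PMC model.  In K_n,
    by Theorem 9, a distinct pair is indistinguishable iff F1 | F2 = V."""
    V = set(range(n))
    def good(F):   # each vertex outside F keeps n - |F| - 1 neighbors
        return n - len(F) - 1 >= g
    best = None
    for t in range(n):
        sets = [set(F) for s in range(t + 1)
                for F in combinations(range(n), s) if good(set(F))]
        if any(F1 | F2 == V for F1, F2 in combinations(sets, 2)):
            break
        best = t
    return best

for n in range(3, 8):
    for g in range(n // 2):            # 0 <= g <= floor(n/2) - 1
        assert t_g_complete(n, g) == ceil(n / 2) - 1
print("Theorem 11 (PMC) verified for n = 3, ..., 7")
```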

Acknowledgements

This work is supported by the National Science Foundation of China (61370001).

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

Ren, Y. and Wang, S. (2016) Some Properties of the g-Good-Neighbor (g-Extra) Diagnosability of a Multiprocessor System. American Journal of Computational Mathematics, 6, 259-266. doi: 10.4236/ajcm.2016.63027.

References

[1] Dahbura, A.T. and Masson, G.M. (1984) An O(n^2.5) Fault Identification Algorithm for Diagnosable Systems. IEEE Transactions on Computers, 33, 486-492.
http://dx.doi.org/10.1109/TC.1984.1676472
[2] Fan, J. (2002) Diagnosability of Crossed Cubes under the Comparison Diagnosis Model. IEEE Transactions on Parallel and Distributed Systems, 13, 1099-1104.
http://dx.doi.org/10.1109/TPDS.2002.1041887
[3] Lai, P.-L., Tan, J.J.M., Chang, C.-P. and Hsu, L.-H. (2005) Conditional Diagnosability Measures for Large Multiprocessor Systems. IEEE Transactions on Computers, 54, 165-175.
http://dx.doi.org/10.1109/TC.2005.19
[4] Preparata, F.P., Metze, G. and Chien, R.T. (1967) On the Connection Assignment Problem of Diagnosable Systems. IEEE Transactions on Computers, EC-16, 848-854.
http://dx.doi.org/10.1109/PGEC.1967.264748
[5] Maeng, J. and Malek, M. (1981) A Comparison Connection Assignment for Self-Diagnosis of Multiprocessor Systems. Proceeding of 11th International Symposium on Fault-Tolerant Computing, Portland, 24-26 June 1981, 173-175.
[6] Peng, S.-L., Lin, C.-K., Tan, J.J.M. and Hsu, L.-H. (2012) The g-Good-Neighbor Conditional Diagnosability of Hypercube under PMC Model. Applied Mathematics and Computation, 218, 10406-10412.
http://dx.doi.org/10.1016/j.amc.2012.03.092
[7] Wang, S. and Han, W. (2016) The g-Good-Neighbor Conditional Diagnosability of n-Dimensional Hypercubes under the MM* Model. Information Processing Letters, 116, 574-577.
http://dx.doi.org/10.1016/j.ipl.2016.04.005
[8] Yuan, J., Liu, A., Ma, X., Liu, X., Qin, X. and Zhang, J. (2015) The g-Good-Neighbor Conditional Diagnosability of k-Ary n-Cubes under the PMC Model and MM* Model. IEEE Transactions on Parallel and Distributed Systems, 26, 1165-1177.
http://dx.doi.org/10.1109/TPDS.2014.2318305
[9] Yuan, J., Liu, A., Qin, X., Zhang, J. and Li, J. (2016) g-Good-Neighbor Conditional Diagnosability Measures for 3-Ary n-Cube Networks. Theoretical Computer Science, 622, 144-162.
http://dx.doi.org/10.1016/j.tcs.2016.01.046
[10] Wang, M., Guo, Y. and Wang, S. (2015) The 1-Good-Neighbor Diagnosability of Cayley Graphs Generated by Transposition Trees under the PMC Model and MM* Model. International Journal of Computer Mathematics, 1-12.
http://dx.doi.org/10.1080/00207160.2015.1119817
[11] Wang, M., Lin, Y. and Wang, S. (2016) The 2-Good-Neighbor Diagnosability of Cayley Graphs Generated by Transposition Trees under the PMC Model and MM* Model. Theoretical Computer Science, 628, 92-100.
http://dx.doi.org/10.1016/j.tcs.2016.03.019
[12] Zhang, S. and Yang, W. (2016) The g-Extra Conditional Diagnosability and Sequential t/k-Diagnosability of Hypercubes. International Journal of Computer Mathematics, 93, 482-497.
[13] Wang, S., Wang, Z. and Wang, M. (2016) The 2-Extra Connectivity and 2-Extra Diagnosability of Bubble-Sort Star Graph Networks. The Computer Journal, 1-18.
http://dx.doi.org/10.1093/comjnl/bxw037
[14] Sengupta, A. and Dahbura, A.T. (1992) On Self-Diagnosable Multiprocessor Systems: Diagnosis by the Comparison Approach. IEEE Transactions on Computers, 41, 1386-1396.
http://dx.doi.org/10.1109/12.177309
[15] Hsieh, S.Y. and Kao, C.Y. (2013) The Conditional Diagnosability of k-Ary n-Cubes under the Comparison Diagnosis Model. IEEE Transactions on Computers, 62, 839-843.
http://dx.doi.org/10.1109/TC.2012.18
[16] Zhou, S., Wang, J., Xu, X. and Xu, J.-M. (2013) Conditional Fault Diagnosis of Bubble Sort Graphs under the PMC Model. Intelligence Computation and Evolutionary Computation, 180, 53-59.
http://dx.doi.org/10.1007/978-3-642-31656-2_8
[17] Bondy, A. and Murty, U.S.R. (2007) Graph Theory. Springer, New York.

  

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.