Mean Absolute Deviations about the Mean, the Cut Norm and Taxicab Correspondence Analysis

Abstract

Optimization has two faces, minimization of a loss function or maximization of a gain function. We show that the mean absolute deviation about the mean, d, maximizes a gain function based on the power set of the individuals; and nd, where n is the sample size, equals twice the value of the cut norm of the deviations about the mean. This property is generalized to double-centered and triple-centered data sets. Furthermore, we show that among the three well-known dispersion measures, standard deviation, least absolute deviation and d, d is the most robust based on the relative contribution criterion. More importantly, we show that the computation of each principal dimension of taxicab correspondence analysis (TCA) corresponds to balanced 2-blocks seriation. These ideas are applied to two data sets.

Citation:

Choulakian, V. and Abou-Samra, G. (2020) Mean Absolute Deviations about the Mean, the Cut Norm and Taxicab Correspondence Analysis. Open Journal of Statistics, 10, 97-112. doi: 10.4236/ojs.2020.101008.

1. Introduction

Optimization has two faces: minimization of a loss function or maximization of a gain function. The following two well-known dispersion measures, the variance ($s^2$) and the mean absolute deviations about the median (LAD), are optimal because each minimizes a different loss function

$$s^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n} \le \frac{\sum_{i=1}^{n} (y_i - c)^2}{n} \quad (1)$$

and

$$LAD = \frac{\sum_{i=1}^{n} |y_i - \mathrm{median}|}{n} \le \frac{\sum_{i=1}^{n} |y_i - c|}{n}, \quad (2)$$

where $y_1, y_2, \ldots, y_n$ and $c$ represent a sample of $(n+1)$ values. To our knowledge, no optimality property is known for the mean absolute deviations about the mean defined by

$$d = \frac{\sum_{i=1}^{n} |y_i - \bar{y}|}{n}, \quad (3)$$

even though it has been studied for modeling purposes in several papers; see, among others, [1] [2] [3]. [1] and [2] essentially compare the dispersion measures d and s in the statistical literature, with a preference clearly oriented towards d for its simple interpretability. The authors in [3] compare the statistics d and LAD with the Gini dispersion measure and conclude that "The downside of using (d and LAD) is that robustness is achieved by omitting the information on the intra-group variability".

The inequality $LAD \le d \le s$ is well known and is a corollary of Lyapunov's inequality, see for instance [4]: the first part $LAD \le d$ follows from (2), and the second part $d \le s$ follows from $n(s^2 - d^2) = \sum_{i=1}^{n} w_i^2 - n\bar{w}^2 \ge 0$, where $w_i = |y_i - \bar{y}|$ and $\bar{w} = d$.

d is the measure of dispersion used in taxicab correspondence analysis (TCA), an L1 variant of correspondence analysis (CA), see [5]. An explanation for the robustness of d is the boundedness of the relative contribution of a point, see [6] - [11]. This paper provides further details on d, relating it to the cut norm and to balanced 2-blocks seriation for double-centered data. [12] argued that sparse contingency tables are often better visualized by TCA; here, we present an analysis of a 0-1 affinity matrix, where TCA produces a much more interpretable map than CA, as can be seen by comparing Figure 3 and Figure 4. We see that repetition and experience play an indispensable and illuminating role in data analysis.

This paper is organized as follows: in Section 2, we show the optimality of the d, $s^2$ and LAD statistics based on maximizing gain functions, and that d beats $s^2$ and LAD with respect to the relative contribution of a point (a robustness measure, based on geometry, used in French data analysis circles); this results from Lemma 1, which states that for a centered vector nd equals twice its cut norm. Sections 3 and 4 generalize the optimality result of d to double-centered and triple-centered arrays; balanced 2-blocks seriation of a matrix, with application to TCA, is discussed in Section 3. We conclude in Section 5.

2. Optimality of d

We consider the centered vector $\mathbf{x} = \mathbf{y} - \bar{y}\mathbf{1}_n$, where $\mathbf{1}_n$ is composed of n ones. Let $I = \{1, 2, \ldots, n\}$ and let $I = S \cup \bar{S}$ be a binary partition of I. We have

$$\sum_{i \in I} x_i = 0 = \sum_{i \in S} x_i + \sum_{i \in \bar{S}} x_i;$$

from which we deduce

$$\sum_{i \in S} x_i = -\sum_{i \in \bar{S}} x_i. \quad (4)$$

We define the cut norm of a centered vector $\mathbf{x}$ to be

$$\|\mathbf{x}\|_{\square} = \max_{S} \left| \sum_{i \in S} x_i \right| = \sum_{i \in S_{opt}} x_i = -\sum_{i \in \bar{S}_{opt}} x_i \quad \text{by (4)},$$

where $S_{opt} = \{ i : x_i \ge 0 \text{ for } i = 1, 2, \ldots, n \}$. By casting the computation of d as a combinatorial maximization problem, we have the following main result describing the optimality of the d-statistic over all elements of the power set of I.

Lemma 1: (2-equal-parts property) $nd = 2\|\mathbf{x}\|_{\square} \ge 2\left|\sum_{i \in S} x_i\right|$ for all $S \subseteq I$.

Proof:

$$nd = \sum_{i=1}^{n} |x_i| = \sum_{i \in S_{opt}} x_i - \sum_{i \in \bar{S}_{opt}} x_i = 2\|\mathbf{x}\|_{\square} \text{ by (4)} \ge 2\left|\sum_{i \in S} x_i\right| \text{ for all } S \subseteq I.$$
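To make the 2-equal-parts property concrete, here is a minimal numerical check in Python; the sample values are hypothetical.

```python
import numpy as np

y = np.array([2.0, 5.0, 7.0, 1.0, 9.0, 6.0])  # a hypothetical sample
n = len(y)
x = y - y.mean()                              # centered vector, sums to zero

d = np.abs(x).sum() / n                       # mean absolute deviations about the mean
cut_norm = x[x >= 0].sum()                    # sum over S_opt = {i : x_i >= 0}

# Lemma 1: nd equals twice the cut norm of the centered vector.
assert np.isclose(n * d, 2 * cut_norm)
print(n * d, 2 * cut_norm)                    # both equal 14.0 here
```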

Corollary 1: $d \ge |\mathbf{x}'\mathbf{u}|/n$ for $\mathbf{u} \in \{-1, 1\}^n$.

Proof: By defining $u_{opt}(i) = 1$ if $i \in S_{opt}$ and $u_{opt}(i) = -1$ if $i \in \bar{S}_{opt}$, we get $d = \mathbf{x}'\mathbf{u}_{opt}/n \ge |\mathbf{x}'\mathbf{u}|/n$.

Corollary 2: $LAD \ge |(\mathbf{y} - \mathrm{median} \cdot \mathbf{1}_n)'\mathbf{u}|/n$ for $\mathbf{u} \in \{-1, 1\}^n$.

Corollary 2 shows that LAD has a second optimality property. We emphasize the fact that the optimizing function in (2) is a univariate loss function of $c$, while the optimizing function in Corollary 2 is a multivariate gain function of $\mathbf{u} \in \{-1, 1\}^n$.

A similar result holds for the variance in (1), based on the Cauchy-Schwarz inequality; it is stated in Lemma 2.

Lemma 2: $s = \|(\mathbf{y} - \bar{y}\mathbf{1}_n)/\sqrt{n}\|_2 \ge |(\mathbf{y} - \bar{y}\mathbf{1}_n)'\mathbf{u}|/\sqrt{n}$ for $\mathbf{u}'\mathbf{u} = 1$.

We note that Corollaries 1 and 2 and Lemma 2 represent particular cases of Hölder's inequality, see [11].

Definition 1: We define the relative contribution of an element $y_i$ to d, $s^2$ and LAD, respectively, to be

$$RC_d(y_i) = \frac{|y_i - \bar{y}|}{nd},$$

$$RC_{s^2}(y_i) = \frac{|y_i - \bar{y}|^2}{ns^2},$$

$$RC_{LAD}(y_i) = \frac{|y_i - \mathrm{median}|}{n \, LAD}.$$

Then the following inequalities are true

$$0 \le RC_d(y_i) \le 0.5,$$

$$0 \le RC_{s^2}(y_i) < 1,$$

$$0 \le RC_{LAD}(y_i) \le 1;$$

from which we conclude that, based on the relative contribution criterion, d is the most robust of the three dispersion measures, because its relative contribution is bounded above by 0.5.

We note that the inequality $0 \le RC_{s^2}(y_i) < 1$ is a weaker variant of the Laguerre-Samuelson inequality; see for instance [13], an MS thesis that presents nine different proofs of it.

We have

Definition 2: An element $x_i = y_i - \bar{y}$ is a heavyweight if $RC_d(y_i) = 0.5$; that is, $|x_i| = |y_i - \bar{y}| = nd/2$.

We note that a heavyweight element attains the upper bound of $RC_d(y_i)$, but it never attains the upper bound of $RC_{s^2}(y_i)$ or $RC_{LAD}(y_i)$.
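As an illustration of these bounds, the following sketch computes the three relative contributions of Definition 1 on a small hypothetical sample with one outlier; only $RC_d$ is capped at 0.5.

```python
import numpy as np

y = np.array([0.0, 0.0, 0.0, 0.0, 100.0])      # hypothetical sample with one outlier
n = len(y)

d   = np.abs(y - y.mean()).sum() / n           # mean absolute deviations about the mean
s2  = ((y - y.mean()) ** 2).sum() / n          # variance
lad = np.abs(y - np.median(y)).sum() / n       # mean absolute deviations about the median

rc_d   = np.abs(y - y.mean()) / (n * d)        # bounded above by 0.5
rc_s2  = (y - y.mean()) ** 2 / (n * s2)        # bounded above by 1 (strictly)
rc_lad = np.abs(y - np.median(y)) / (n * lad)  # bounded above by 1

print(rc_d.max(), rc_s2.max(), rc_lad.max())   # 0.5, 0.8, 1.0 for this sample
```

Here the outlier attains the heavyweight bound of Definition 2 for d, while it contributes 80% of $s^2$ and 100% of LAD.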

3. 2-Way Interactions of a Correspondence Matrix

Let $P = (p_{ij})$ be a correspondence matrix; that is, $p_{ij} \ge 0$ for $i \in I$ and $j \in J = \{1, 2, \ldots, m\}$, and $\sum_{j=1}^{m}\sum_{i=1}^{n} p_{ij} = 1$. As usual, we define $p_{i\cdot} = \sum_{j=1}^{m} p_{ij}$ and $p_{\cdot j} = \sum_{i=1}^{n} p_{ij}$. Let $P_1 = (x_{ij} = p_{ij} - p_{i\cdot} p_{\cdot j})$ for $i \in I$ and $j \in J$; then $P_1$ represents the residual matrix of P with respect to the independence model $(p_{i\cdot} p_{\cdot j})$. In the jargon of statistics, the cell $x_{ij}$ represents the multiplicative 2-way interaction of the cell $(i,j) \in I \times J$. $P_1$ is double-centered:

$$P_1 \mathbf{1}_m = \mathbf{0}_n \quad \text{and} \quad P_1' \mathbf{1}_n = \mathbf{0}_m. \quad (5)$$

From (5) we get

$$\sum_{i \in S} x_{ij} = -\sum_{i \in \bar{S}} x_{ij} \quad \text{for } j \in J, \quad (6)$$

$$\sum_{j \in T} x_{ij} = -\sum_{j \in \bar{T}} x_{ij} \quad \text{for } i \in I, \quad (7)$$

for $T \subseteq J$. From (6) and (7), we get

$$\sum_{j \in T} \sum_{i \in S} x_{ij} = -\sum_{j \in T} \sum_{i \in \bar{S}} x_{ij} \quad (8)$$

$$= -\sum_{i \in S} \sum_{j \in \bar{T}} x_{ij} \quad (9)$$

$$= \sum_{i \in \bar{S}} \sum_{j \in \bar{T}} x_{ij}. \quad (10)$$

We define the cut norm of $P_1$ to be

$$\|P_1\|_{\square} = \max_{S,T} \left| \sum_{j \in T} \sum_{i \in S} x_{ij} \right| = \sum_{j \in T_{opt}} \sum_{i \in S_{opt}} x_{ij}.$$

The cut norm $\|P_1\|_{\square}$ is a well-known quantity in theoretical computer science because of its relationship to the famous Grothendieck inequality, which is based on $\|P_1\|_1$; see, among others, [14].
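Since the cut norm is defined by a maximization over all pairs of subsets, it can be computed by brute force for tiny matrices; the following sketch (exponential in n + m, for illustration only) makes the definition concrete.

```python
from itertools import product
import numpy as np

def cut_norm(X):
    """Cut norm of a matrix: the maximum over row subsets S and column
    subsets T of the absolute value of the sum of the block X[S, T]."""
    n, m = X.shape
    best = 0.0
    for s in product((False, True), repeat=n):      # indicator vector of S
        for t in product((False, True), repeat=m):  # indicator vector of T
            val = abs(X[np.ix_(np.array(s), np.array(t))].sum())
            best = max(best, val)
    return best
```

For a double-centered matrix, Lemma 3 below shows that this maximum equals $\delta_1/4$.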

The matrix $P_1$ can be considered the starting point of taxicab correspondence analysis, an L1 variant of correspondence analysis, see [5]. The optimization criterion in TCA of P or $P_1$ is based on the taxicab matrix norm, which is a combinatorial optimization problem

$$\delta_1 = \|P_1\|_1 = \|P_1'\|_1 = \max_{u \in \mathbb{R}^m} \frac{\|P_1 u\|_1}{\|u\|_\infty} = \max_{v \in \mathbb{R}^n} \frac{\|P_1' v\|_1}{\|v\|_\infty} = \max_{u \in \mathbb{R}^m, v \in \mathbb{R}^n} \frac{v' P_1 u}{\|u\|_\infty \|v\|_\infty}$$

$$= \max \|P_1 u\|_1 \quad \text{subject to } u \in \{-1, +1\}^m,$$

$$= \max \|P_1' v\|_1 \quad \text{subject to } v \in \{-1, +1\}^n,$$

$$= \max v' P_1 u \quad \text{subject to } u \in \{-1, +1\}^m, \; v \in \{-1, +1\}^n, \quad (11)$$

$$= v_1' P_1 u_1. \quad (12)$$

In data analysis, the vectors $v_1$ and $u_1$ are interpreted as the first taxicab principal axes and $\delta_1$ as the first taxicab dispersion. We can then compute the projections of the rows (resp. the columns) of $P_1$ on the first taxicab principal axis $u_1$ (resp. $v_1$) as

$$\mathbf{a}_1 = P_1 u_1, \quad (13)$$

$$\mathbf{b}_1 = P_1' v_1. \quad (14)$$

Equation (12) implies

$$v_1 = \mathrm{sign}(\mathbf{a}_1), \quad (15)$$

$$u_1 = \mathrm{sign}(\mathbf{b}_1), \quad (16)$$

named transition formulas, see [5] and [11]. We also note the following identities

$$\mathbf{1}_n' \mathbf{a}_1 = 0 \quad \text{and} \quad \delta_1 = \|\mathbf{a}_1\|_1, \quad (17)$$

$$\mathbf{1}_m' \mathbf{b}_1 = 0 \quad \text{and} \quad \delta_1 = \|\mathbf{b}_1\|_1. \quad (18)$$
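The transition formulas (15)-(16) suggest an alternating ascent for the combinatorial problem (11), in the spirit of an L1 power iteration; [5] and [11] describe the algorithms used in TCA. Below is a minimal sketch, restarted from each column's sign pattern; it is a heuristic, so on large matrices it is not guaranteed to attain the global maximum.

```python
import numpy as np

def taxicab_first_axis(P1):
    """First taxicab principal axis of a double-centered n x m matrix P1,
    by alternating the transition formulas (15)-(16) from several starts."""
    n, m = P1.shape
    best_delta, best_u, best_v = -np.inf, None, None
    for j in range(m):                          # one restart per column
        v = np.where(P1[:, j] >= 0, 1.0, -1.0)  # initial row signs
        delta = -np.inf
        while True:
            b = P1.T @ v                        # b = P1' v, as in (14)
            u = np.where(b >= 0, 1.0, -1.0)     # u = sign(b), as in (16)
            a = P1 @ u                          # a = P1 u, as in (13)
            v = np.where(a >= 0, 1.0, -1.0)     # v = sign(a), as in (15)
            new_delta = np.abs(a).sum()         # delta = ||a||_1, as in (17)
            if new_delta <= delta + 1e-12:      # dispersion stopped increasing
                break
            delta = new_delta
        if delta > best_delta:
            best_delta, best_u, best_v = delta, u, v
    return best_delta, best_u, best_v
```

Each alternation cannot decrease $v' P_1 u$, and the sign vectors form a finite set, so the inner loop terminates.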

Using the above results, we get the following

Lemma 3: (4-equal-parts property) The norm $\delta_1 = \|P_1\|_1 = 4\|P_1\|_{\square} \ge 4\left|\sum_{j \in T}\sum_{i \in S} x_{ij}\right|$ for all $S \subseteq I$ and $T \subseteq J$.

In data analysis, Lemma 3 implies a balanced 2-blocks seriation of $P_1$; see the example of Section 3.1 and the numerical sketch below. The subsets $T_{opt}$ and $S_{opt}$ are positively associated and $\|P_1\|_{\square} = \sum_{j \in T_{opt}} \sum_{i \in S_{opt}} x_{ij}$; similarly, the subsets $\bar{T}_{opt}$ and $\bar{S}_{opt}$ are positively associated and $\|P_1\|_{\square} = \sum_{j \in \bar{T}_{opt}} \sum_{i \in \bar{S}_{opt}} x_{ij}$. By contrast, the subsets $\bar{T}_{opt}$ and $S_{opt}$ are negatively associated and $\|P_1\|_{\square} = -\sum_{j \in \bar{T}_{opt}} \sum_{i \in S_{opt}} x_{ij}$; similarly, the subsets $T_{opt}$ and $\bar{S}_{opt}$ are negatively associated and $\|P_1\|_{\square} = -\sum_{j \in T_{opt}} \sum_{i \in \bar{S}_{opt}} x_{ij}$. [15] presents an interesting overview of seriation and block seriation.
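A small numerical check of the 4-equal-parts property, on hypothetical random data, reusing taxicab_first_axis from the sketch above: by (8)-(10) the four blocks induced by any sign split have sums equal in absolute value, and at the optimum each equals $\delta_1/4$.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((6, 5)); P /= P.sum()             # hypothetical correspondence matrix
P1 = P - np.outer(P.sum(axis=1), P.sum(axis=0))  # residuals from independence

delta1, u1, v1 = taxicab_first_axis(P1)
for vs in (1.0, -1.0):                           # rows with v1 = vs
    for us in (1.0, -1.0):                       # columns with u1 = us
        block_sum = P1[np.ix_(v1 == vs, u1 == us)].sum()
        print(vs, us, abs(block_sum))            # each equals delta1 / 4
```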

Using Definition 2, we get

Definition 3: The relative contribution of row i to $\delta_1$ (respectively of column j to $\delta_1$) is

$$RC_{\delta_1}(\mathrm{row}_i) = \frac{|a_1(i)|}{\delta_1} \quad \text{and} \quad RC_{\delta_1}(\mathrm{col}_j) = \frac{|b_1(j)|}{\delta_1}.$$

We have

$$0 \le RC_{\delta_1}(\mathrm{row}_i) \le 0.5 \quad \text{and} \quad 0 \le RC_{\delta_1}(\mathrm{col}_j) \le 0.5.$$

Definition 4: 1) On the first taxicab principal axis, row i is heavyweight if $RC_{\delta_1}(\mathrm{row}_i) = 0.5$, and column j is heavyweight if $RC_{\delta_1}(\mathrm{col}_j) = 0.5$.

2) On the first taxicab principal axis, the cell $(i,j)$ is heavyweight if and only if both row i and column j are heavyweights; in this case $RC_{\delta_1}(p_{ij} - p_{i\cdot} p_{\cdot j}) = \frac{|p_{ij} - p_{i\cdot} p_{\cdot j}|}{\delta_1} = 0.25$.

For an application of Definitions 3 and 4 see [6].

Using Wedderburn's rank-1 reduction rule, see [11], we construct the second residual matrix $P_2 = (x_{ij} = p_{ij} - p_{i\cdot} p_{\cdot j} - a_1(i) b_1(j)/\delta_1)$, which is also double-centered, and repeat the above procedure. After $k = \mathrm{rank}(P_1)$ iterations, we decompose the correspondence matrix P into $(k+1)$ bilinear parts

$$p_{ij} = p_{i\cdot} p_{\cdot j} + \sum_{\alpha=1}^{k} \frac{a_\alpha(i) b_\alpha(j)}{\delta_\alpha},$$

named the taxicab singular value decomposition; this can be rewritten, similarly to the data reconstruction formula in correspondence analysis (CA), as

$$p_{ij} = p_{i\cdot} p_{\cdot j} \left(1 + \sum_{\alpha=1}^{k} \frac{f_\alpha(i) g_\alpha(j)}{\delta_\alpha}\right),$$

where in TCA

$$f_\alpha(i) = a_\alpha(i)/p_{i\cdot} \quad \text{and} \quad g_\alpha(j) = b_\alpha(j)/p_{\cdot j}. \quad (19)$$

We note that Equations (5) through (18) are valid for the higher residual correspondence matrices $P_\alpha$ for $\alpha = 1, \ldots, k$.
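Putting the pieces together, here is a sketch of the full decomposition: compute the first taxicab axis of the current residual matrix, form the principal coordinates (13)-(14) and factor scores (19), then deflate with Wedderburn's rank-1 rule. The function tca and its interface are our own illustrative choices; it reuses taxicab_first_axis from the earlier sketch.

```python
import numpy as np

def tca(P, n_axes):
    """Taxicab correspondence analysis of a correspondence matrix P (n x m):
    returns the taxicab dispersions and the row and column factor scores."""
    r, c = P.sum(axis=1), P.sum(axis=0)             # row and column margins
    P_alpha = P - np.outer(r, c)                    # first residual matrix P1
    deltas, F, G = [], [], []
    for _ in range(n_axes):
        delta, u, v = taxicab_first_axis(P_alpha)
        a, b = P_alpha @ u, P_alpha.T @ v           # principal coordinates (13)-(14)
        deltas.append(delta)
        F.append(a / r); G.append(b / c)            # factor scores (19)
        P_alpha = P_alpha - np.outer(a, b) / delta  # Wedderburn rank-1 deflation
    return np.array(deltas), np.array(F).T, np.array(G).T
```

Applied with n_axes = 2 to the asbestos table of the next subsection, such a sketch should reproduce the quantities displayed in Table 2 and Table 3, up to the caveat that the restart heuristic may miss the global optimum.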

CA and TCA satisfy an important invariance property: columns (or rows) with identical profiles (conditional probabilities) receive identical factor scores $g_\alpha(j)$ (or $f_\alpha(i)$). The factor scores are used in the graphical displays. Moreover, merging identical profiles does not change the results of the data analysis: this is named the principle of equivalent partitioning by [16]; it includes the famous invariance property named the principle of distributional equivalence, on which [17] developed CA.

In the next subsections we apply taxicab correspondence analysis (TCA) to two data sets. The first is a small contingency table taken from [18], for which we present the details of the computation of the first two principal dimensions; in particular, we highlight the balanced 2-blocks seriation of the residual matrices $P_\alpha$ for $\alpha = 1, 2$ during the computation of each principal dimension. The second is a network affinity matrix from [19], who applied CA to explore it visually; on this data set we compare the CA and TCA maps, highlighting the robustness of the TCA map to rare observations on the second principal dimension.

The theory of CA can be found, among others, in [17] [20] - [25]; the recent book by [18] presents a panoramic review of CA and related methods.

3.1. Selikoff’s Asbestos Data Set

Table 1, taken from [18], is a contingency table Y of size $5 \times 4$ cross-classifying 1117 New York workers with occupational exposure to asbestos; the workers are classified according to the number of years of exposure (five categories) and the asbestos grade diagnosed (four categories). Figure 1 and Figure 2 display the maps obtained by CA and TCA: there is almost no difference between them. Here we present the details of the computation for TCA. Table 2 presents the residual correspondence table $P_1$ with respect to the independence model, where we see the diagonal 2-blocks seriation of $P_1$: $S_{opt} = \{\mathrm{row1}, \mathrm{row2}\}$ is positively associated with $T_{opt} = \{\mathrm{column1}\}$ and the cut norm $\|P_1\|_{\square} = 0.1181 + 0.0151 = 0.1332$; similarly, $\bar{S}_{opt} = \{\mathrm{row3}, \mathrm{row4}, \mathrm{row5}\}$ is positively associated with $\bar{T}_{opt} = \{\mathrm{column2}, \mathrm{column3}, \mathrm{column4}\}$ and $\|P_1\|_{\square} = \sum_{j \in \bar{T}_{opt}} \sum_{i \in \bar{S}_{opt}} x_{ij} = 0.0087 + \cdots + 0.0202 = 0.1332$. Note that the elements in the positively associated diagonal blocks have in majority positive values, while the elements in the negatively associated off-diagonal blocks have in majority negative values. The last three columns and the last three rows of Table 2 display the principal axes ($v_1$ and $u_1$), the coordinates of the projected points ($a_1$ and $b_1$) and the TCA factor scores ($f_1$ and $g_1$).

Table 3 shows the second residual correspondence matrix $P_2$, where we note that its first column is zero because, by part 1) of Definition 4, column 1 is heavyweight in $P_1$: $RC_{\delta_1}(\mathrm{G0}) = 0.5$, see [6]. We see that columns 3 and 4 are positively associated with rows 1 and 5; similarly, column 2 is positively associated with rows 2, 3 and 4. It is difficult to interpret the diagonal balanced 2-blocks seriation in Table 3; however, the map in Figure 2 is interpretable: it shows a Guttman effect, known as a horseshoe or parabola.

Figure 1. CA map of asbestos exposure data.

Table 1. Selikoff's asbestos contingency table of size $5 \times 4$.

Table 2. Balanced 2-blocks seriation of $P_1 = (p_{ij} - p_{i\cdot} p_{\cdot j})$ of size $5 \times 4$.

Table 3. Balanced 2-blocks seriation of $P_2 = (p_{ij} - p_{i\cdot} p_{\cdot j} - a_1(i) b_1(j)/\delta_1)$ of size $5 \times 4$.

Figure 2. TCA map of asbestos exposure data.

3.2. Western Hemisphere Countries and Their Memberships in Trade and Treaty Organizations

Table 4 presents a two-mode affiliation network matrix $Z = (z_{ij})$ of size $22 \times 15$ taken from [19]. The 22 rows represent countries and the 15 columns regional trade and treaty organizations, described in Appendix A. Country i is a member of organization j if $z_{ij} = 1$; $z_{ij} = 0$ means that country i is not a member of organization j. [19] visualized these data by correspondence analysis, see Figure 3, which is quite cluttered. She interpreted the first two principal dimensions by examining the factor scores of the countries and summarized the results in three points:

1) The first dimension contrasts South American countries and organizations on the one hand, and Central American countries and organizations on the other hand.

2) The second dimension clearly distinguishes Canada and the United States (both North American countries), along with NAFTA, from the other countries and organizations. In CA, the relative contribution of Canada (resp. the US) to the second axis is $RC_{\sigma_2^2}(\mathrm{Canada}) = RC_{\sigma_2^2}(\mathrm{US}) = 0.409$, and $RC_{\sigma_2^2}(\mathrm{NAFTA}) = 0.821$, where $\sigma_2^2$ is the variance, also named inertia, of the second principal dimension.

3) The organizations SELA, OAS and IDB are in the center because they have membership profiles similar to the marginal profile: almost all countries belong to SELA, OAS and IDB, see Table 4.

Figure 4 provides the TCA map, which is much more interpretable than the corresponding CA map in Figure 3; we see that, in addition to the three points mentioned by [19], the South American countries are divided into two groups: northern (Venezuela, Bolivia, Peru and Ecuador) and southern (Brazil, Uruguay, Argentina, Paraguay and Chile). Furthermore, the contributions of the points Canada, the United States and NAFTA to the second axis are not substantial compared to CA: $RC_{\delta_2}(\mathrm{Canada}) = RC_{\delta_2}(\mathrm{US}) = 0.088$ and $RC_{\delta_2}(\mathrm{NAFTA}) = 0.10$. This shows the robustness of TCA, due to the robustness of the $\delta$ statistic following Definition 1.

Figure 3. CA map of Western Hemisphere affinity network.

Table 4. Sociomatrix of American countries and their memberships.

Figure 4. TCA map of Western Hemisphere affinity network.

It is well known that CA is very sensitive to certain particularities of a data set; furthermore, how to identify and handle these particularities is an open problem. For contingency tables, [12] enumerated three of them under the umbrella of sparse contingency tables: rare observations, zero-block structure and relatively high-valued cells. It is evident that this data set has three rare observations (NAFTA, CANADA and USA), which determine the second dimension of CA. A row or a column category is considered rare if its marginal probability is quite small.

3.3. Maximal Interaction Two-Mode Clustering of Continuous Data

[26] discussed maximal interaction two-mode clustering of continuous data. By generalizing their objective function, we show that the results of this section can be considered a particular robust L1 variant of their approach. Let $Y = (y_{ij})$ be a 2-way array for $i \in I$, $j \in J$. As usual, we define, for instance, $\bar{y}_{\cdot j} = \sum_{i=1}^{n} y_{ij}/n$ and $\bar{y} = \sum_{j=1}^{m}\sum_{i=1}^{n} y_{ij}/(mn)$. Let $X = (x_{ij})$ be the additive double-centered array, where

$$x_{ij} = y_{ij} - \bar{y}_{i\cdot} - \bar{y}_{\cdot j} + \bar{y}.$$

In the jargon of statistics, the cell $x_{ij}$ represents the additive 2-way interaction of the cell $(i,j) \in I \times J$. The matrix X is double-centered, and it satisfies Equations (6) through (10). Let $I = \cup_{\alpha=1}^{r} S_\alpha$ be an r-partition of I and $J = \cup_{\beta=1}^{c} T_\beta$ be a c-partition of J. We consider the following maximization of the overall interaction problem for $p \ge 1$

$$f_p(S_\alpha, T_\beta : \alpha = 1, \ldots, r \text{ and } \beta = 1, \ldots, c) = \sum_{\alpha=1}^{r} \sum_{\beta=1}^{c} |S_\alpha| |T_\beta| \, g_p(\alpha, \beta),$$

where $|S_\alpha|$ is the cardinality of the set $S_\alpha$ and

$$g_p(\alpha, \beta) = \left( \frac{\left| \sum_{i \in S_\alpha} \sum_{j \in T_\beta} x_{ij} \right|}{|S_\alpha| |T_\beta|} \right)^p.$$

When $p = 2$, maximizing $f_2(S_\alpha, T_\beta : \alpha = 1, \ldots, r \text{ and } \beta = 1, \ldots, c)$, named the maximal overall interaction, is the criterion computed in [26]. When $p = 1$ and $r = c = 2$, maximizing gives $f_1(S_\alpha, T_\beta : \alpha = 1, 2 \text{ and } \beta = 1, 2) = \|X\|_1 = 4\|X\|_{\square}$ by Lemma 3, which is the criterion computed in TCA; a small sketch of this objective follows.
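To make the objective concrete, the following sketch evaluates $f_p$ for partitions encoded as integer label vectors (an illustrative encoding of our own); for $p = 1$ and the two-block row and column split induced by the optimal TCA sign vectors, it returns $4\|X\|_{\square} = \|X\|_1$.

```python
import numpy as np

def f_p(X, row_labels, col_labels, p):
    """Overall interaction f_p; partitions are given as integer label vectors."""
    total = 0.0
    for alpha in np.unique(row_labels):
        for beta in np.unique(col_labels):
            block = X[np.ix_(row_labels == alpha, col_labels == beta)]
            # |S_alpha| |T_beta| * g_p(alpha, beta), with the block mean inside |.|
            total += block.size * np.abs(block.sum() / block.size) ** p
    return total
```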

4. Triple-Centered Arrays

To motivate our subject, we start with an example. Let $Y = (y_{ijk})$ be a 3-way array for $i \in I$, $j \in J$ and $k \in K = \{1, 2, \ldots, t\}$. As usual, we define, for instance, $\bar{y}_{ij\cdot} = \sum_{k=1}^{t} y_{ijk}/t$, $\bar{y}_{\cdot j \cdot} = \sum_{k=1}^{t}\sum_{i=1}^{n} y_{ijk}/(tn)$ and $\bar{y} = \sum_{k=1}^{t}\sum_{j=1}^{m}\sum_{i=1}^{n} y_{ijk}/(tmn)$. Let $X = (x_{ijk})$ be the triple-centered array, where

$$x_{ijk} = y_{ijk} - \bar{y}_{ij\cdot} - \bar{y}_{i \cdot k} - \bar{y}_{\cdot jk} + \bar{y}_{i \cdot \cdot} + \bar{y}_{\cdot j \cdot} + \bar{y}_{\cdot \cdot k} - \bar{y}.$$

In the jargon of statistics, the cell $x_{ijk}$ represents the additive 3-way interaction of the cell $(i,j,k) \in I \times J \times K$. The tensor X is triple-centered; that is,

$$\sum_{k=1}^{t} x_{ijk} = \sum_{j=1}^{m} x_{ijk} = \sum_{i=1}^{n} x_{ijk} = 0.$$
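A minimal sketch of triple-centering a 3-way array with numpy broadcasting, on hypothetical random data; the three marginal sums of the result vanish, as stated above.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.random((4, 3, 5))                    # hypothetical n x m x t array

X = (Y
     - Y.mean(axis=2, keepdims=True)         # - ybar_{ij.}
     - Y.mean(axis=1, keepdims=True)         # - ybar_{i.k}
     - Y.mean(axis=0, keepdims=True)         # - ybar_{.jk}
     + Y.mean(axis=(1, 2), keepdims=True)    # + ybar_{i..}
     + Y.mean(axis=(0, 2), keepdims=True)    # + ybar_{.j.}
     + Y.mean(axis=(0, 1), keepdims=True)    # + ybar_{..k}
     - Y.mean())                             # - ybar

for axis in (0, 1, 2):                       # all three marginal sums vanish
    assert np.allclose(X.sum(axis=axis), 0.0)
```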

A generalization of Lemma 3 is

Lemma 4: (8-equal-parts property) The tensor norm

$$\|X\|_{(\infty,\infty) \to 1} = \max \sum_{k \in K}\sum_{j \in J}\sum_{i \in I} w_k v_j u_i x_{ijk} \quad \text{subject to } u \in \{-1,+1\}^n, \; v \in \{-1,+1\}^m, \; w \in \{-1,+1\}^t$$

$$= 8 \sum_{k \in W_{opt}} \sum_{j \in T_{opt}} \sum_{i \in S_{opt}} x_{ijk} \ge 8 \left| \sum_{k \in W} \sum_{j \in T} \sum_{i \in S} x_{ijk} \right|,$$

where $W \subseteq K$.

The proof is similar to the proof of Lemma 3.

Lemma 4 can easily be generalized to higher-way arrays.

5. Conclusions

This essay is an attempt to emphasize the following two points.

First, we showed the optimality and robustness of the mean absolute deviations about the mean, its interpretation, and its generalization to higher-way arrays. A key notion in describing its robustness is that the relative contribution of a point is bounded by 50%.

Second, within the framework of TCA, we showed that the three identities $\delta_1 = \|P_1\|_1 = 4\|P_1\|_{\square}$ reveal three different but related aspects of TCA: 1) $\delta_1$, computed in (17) and (18), represents by (19) the mean absolute deviations about the mean statistic; 2) the taxicab norm $\|P_1\|_1$, via (15) and (16), shows that uniform weights are assigned to the columns and the rows; 3) the cut norm $4\|P_1\|_{\square}$ shows that the computation of each principal dimension of TCA corresponds to a balanced 2-blocks seriation, with equality of the cut norm in the 4 associated blocks.

A list of the principal variables used is provided in Appendix B.

Acknowledgements

We thank the Editor and the referee for their comments. The research of V. Choulakian is funded by the Natural Sciences and Engineering Research Council of Canada grant RGPIN-2017-05092. This support is greatly appreciated. The authors thank William Alexander Digout for help with the computations.

Appendix A: List of Western Hemisphere Organizations

1) Association of Caribbean States (ACS): Trade group sponsored by the Caribbean Community and Common Market (CARICOM).

2) Latin American Integration Association (ALADI): Free trade organization.

3) Amazon Pact: Promotes development of Amazonian territories.

4) Andean Pact: Promotes development of members through economic and social integration.

5) Caribbean Community and Common Market (CARICOM): Caribbean trade organization; promotes economic development of members.

6) Group of Latin American and Caribbean Sugar Exporting Countries (GEPLACEA): Sugar-producing and exporting countries.

7) Group of Rio: Organization for joint political action.

8) Group of Three (G-3): Trade organization.

9) Inter-American Development Bank (IDB): Promotes development of member nations.

10) South American Common Market (MERCOSUR): Increases economic cooperation in the region.

11) North American Free Trade Agreement (NAFTA): Free trade organization.

12) Organization of American States (OAS): Promotes peace, security, economic, and social development in the Western Hemisphere.

13) Central American Parliament (PARLACÉN): Works for the political integration of Central America.

14) San José Group: Promotes regional economic integration.

15) Latin American Economic System (SELA): Promotes economic and social development of member nations.

Appendix B: A List of the Principal Variables Used

Mean absolute deviations about the mean of a sample: $d = \sum_{i=1}^{n} |y_i - \bar{y}|/n$

Mean absolute deviations of a sample about the median: $LAD = \sum_{i=1}^{n} |y_i - \mathrm{median}|/n$

Variance of a sample: $s^2 = \sum_{i=1}^{n} (y_i - \bar{y})^2/n$

Cut norm of a centered sample: $\|\mathbf{y} - \bar{y}\mathbf{1}_n\|_{\square} = \max_{S} \left|\sum_{i \in S} (y_i - \bar{y})\right|$, where $S \subseteq I = \{1, 2, \ldots, n\}$

Taxicab operator norm of a double-centered matrix: $\delta_\alpha = \|P_\alpha\|_1 = \max_{u \in \mathbb{R}^m} \|P_\alpha u\|_1 / \|u\|_\infty$

Cut norm of a double-centered matrix: $\|P_\alpha\|_{\square} = \max_{S,T} \left|\sum_{j \in T} \sum_{i \in S} P_\alpha(i,j)\right|$, where $T \subseteq J = \{1, 2, \ldots, m\}$

$\delta_\alpha$ is the dispersion value of the $\alpha$th taxicab principal axis

$f_\alpha(i)$ is the taxicab principal factor score of row i on the $\alpha$th principal axis, and $\delta_\alpha = \sum_{i=1}^{n} p_{i\cdot} |f_\alpha(i)|$

$g_\alpha(j)$ is the taxicab principal factor score of column j on the $\alpha$th principal axis, and $\delta_\alpha = \sum_{j=1}^{m} p_{\cdot j} |g_\alpha(j)|$

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Pham-Gia, T. and Hung, T.L. (2001) The Mean and Median Absolute Deviations. Mathematical and Computer Modelling, 34, 921-936.
https://doi.org/10.1016/S0895-7177(01)00109-1
[2] Gorard, S. (2015) Introducing the Mean Absolute Deviation ‘Effect’ Size. International Journal of Research & Method in Education, 38, 105-114.
https://doi.org/10.1080/1743727X.2014.920810
[3] Yitzhaki, S. and Lambert, P.J. (2013) The Relationship between the Absolute Deviation from a Quantile and Gini’s Mean Difference. Metron, 71, 97-104.
https://doi.org/10.1007/s40300-013-0015-y
[4] Rohatgi, V.K. (1976) An Introduction to Probability Theory and Mathematical Statistics. John Wiley and Sons, New York.
[5] Choulakian, V. (2006) Taxicab Correspondence Analysis. Psychometrika, 71, 333-345.
https://doi.org/10.1007/s11336-004-1231-4
[6] Choulakian, V. (2008) Taxicab Correspondence Analysis of Contingency Tables with One Heavyweight Column. Psychometrika, 73, 309-319.
https://doi.org/10.1007/s11336-007-9041-0
[7] Choulakian, V. (2008) Multiple Taxicab Correspondence Analysis. Advances in Data Analysis and Classification, 2, 177-206.
https://doi.org/10.1007/s11634-008-0023-6
[8] Choulakian, V. and de Tibeiro, J. (2013) Graph Partitioning by Correspondence Analysis and Taxicab Correspondence Analysis. Journal of Classification, 30, 397-427.
https://doi.org/10.1007/s00357-013-9145-4
[9] Choulakian, V., Allard, J. and Simonetti, B. (2013) Multiple Taxicab Correspondence Analysis of a Survey Related to Health Services. Journal of Data Science, 11, 205-229.
[10] Choulakian, V., Simonetti, B. and Gia, T.P. (2014) Some Further Aspects of Taxicab Correspondence Analysis. Statistical Methods and Applications, 23, 401-416.
https://doi.org/10.1007/s10260-014-0259-6
[11] Choulakian, V. (2016) Matrix Factorizations Based on Induced Norms. Statistics, Optimization and Information Computing, 4, 1-14.
https://doi.org/10.19139/soic.v4i1.160
[12] Choulakian, V. (2017) Taxicab Correspondence Analysis of Sparse Two-Way Contingency Tables. Statistica Applicata-Italian Journal of Applied Statistics, 29, 153-179.
[13] Jensen, S.T. (1999) The Laguerre-Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory. McGill University, Quebec.
https://doi.org/10.1007/978-94-011-4577-0_10
[14] Khot, S. and Naor, A. (2012) Grothendieck-Type Inequalities in Combinatorial Optimization. Communications on Pure and Applied Mathematics, 65, 992-1035.
https://doi.org/10.1002/cpa.21398
[15] Liiv, I. (2010) Seriation and Matrix Reordering Methods: An Historical Overview. Statistical Analysis and Data Mining, 3, 70-91.
https://doi.org/10.1002/sam.10071
[16] Nishisato, S. (1984) Forced Classification: A Simple Application of a Quantification Method. Psychometrika, 49, 25-36.
https://doi.org/10.1007/BF02294203
[17] Benzécri, J.P. (1973) L’Analyse des Données: Vol. 2: L’Analyse des Correspondances. Dunod, Paris.
[18] Beh, E. and Lombardo, R. (2014) Correspondence Analysis: Theory, Practice and New Strategies. Wiley, New York.
https://doi.org/10.1002/9781118762875
[19] Faust, K. (2005) Using Correspondence Analysis for Joint Displays of Affiliation Networks. In: Carrington, P.J., Scott, J. and Wasserman, S., Eds., Models and Methods in Social Network Analysis, Cambridge University Press, Cambridge, 117-147.
https://doi.org/10.1017/CBO9780511811395.007
[20] Benzécri, J.P. (1992) Correspondence Analysis Handbook. Marcel Dekker, New York.
https://doi.org/10.1201/9780585363035
[21] Greenacre, M. (1984) Theory and Applications of Correspondence Analysis. Academic Press, London.
[22] Gifi, A. (1990) Nonlinear Multivariate Analysis. Wiley, New York.
[23] Le Roux, B. and Rouanet, H. (2004) Geometric Data Analysis. From Correspondence Analysis to Structured Data Analysis. Kluwer-Springer, Dordrecht.
https://doi.org/10.1007/1-4020-2236-0
[24] Murtagh, F. (2005) Correspondence Analysis and Data Coding with Java and R. Chapman & Hall/CRC, Boca Raton, FL.
https://doi.org/10.1201/9781420034943
[25] Nishisato, S. (2007) Multidimensional Nonlinear Descriptive Analysis. Chapman & Hall/CRC, Boca Raton, FL.
https://doi.org/10.1201/9781420011203
[26] Schepers, J., Bock, H.-H. and Van Mechelen, I. (2017) Maximal Interaction Two-Mode Clustering. Journal of Classification, 34, 49-75.
https://doi.org/10.1007/s00357-017-9226-x
