Mean absolute deviations about the mean, the cut norm and taxicab correspondence analysis

Optimization has two faces: minimization of a loss function or maximization of a gain function. We show that the mean absolute deviations about the mean, d, maximizes a gain function defined over the power set of the individuals, and that it equals twice the value of its cut norm. This property is generalized to double-centered and triple-centered data sets. Furthermore, we show that among three well known dispersion measures, the standard deviation, the least absolute deviations about the median (LAD) and d, d is the most robust according to the relative contribution criterion. More importantly, we show that the computation of each principal dimension of taxicab correspondence analysis corresponds to a balanced 2-blocks seriation. Examples are provided.


Introduction
Optimization has two faces: minimization of a loss function or maximization of a gain function. The following two well known dispersion measures, the variance (s^2) and the mean absolute deviations about the median (LAD), are optimal because each minimizes a different loss function:

  s^2 = \min_{c} \frac{1}{n}\sum_{i=1}^{n}(y_i - c)^2, attained at c = \bar{y},   (1)

  LAD = \min_{c} \frac{1}{n}\sum_{i=1}^{n}|y_i - c|, attained at c = median(y),   (2)

where y_1, y_2, ..., y_n is a sample of n values and c ∈ R. To our knowledge, no optimality property is known for the mean absolute deviations about the mean, defined by

  d = \frac{1}{n}\sum_{i=1}^{n}|y_i - \bar{y}|,

even though it has been studied in several papers for modeling purposes; see, among others, Pham-Gia and Hung (2001), Gorard (2015) and Yitzhaki and Lambert (2013). Pham-Gia and Hung (2001) and Gorard (2015) essentially compare the dispersion measures d and s in the statistical literature, with a preference clearly oriented towards d for its simple interpretability; while Yitzhaki and Lambert (2013) compare the statistics d and LAD with the Gini dispersion measure and conclude that "the downside of using (d and LAD) is that robustness is achieved by omitting the information on the intra-group variability". The inequality LAD ≤ d ≤ s is well known: the left part follows from (2), and the right part from the fact that n(s^2 - d^2) = \sum_{i=1}^{n} w_i^2 - n\bar{w}^2 ≥ 0, where w_i = |y_i - \bar{y}| and \bar{w} = d.

d is the measure of dispersion used in taxicab correspondence analysis (TCA), an L_1 variant of correspondence analysis (CA); see Choulakian (2006). An explanation for the robustness of d is the boundedness of the relative contribution of a point; see Choulakian (2008a, 2008b, 2017) and Choulakian et al. (2013a, 2013b, 2014). This paper provides further details on d, relating it to the cut norm and to balanced 2-blocks seriation for double-centered data. Choulakian (2017) argued that sparse contingency tables are often better visualized by TCA; here, we present an analysis of a 0-1 affinity matrix, where TCA produces a much more interpretable map than CA. We see that repetition and experience play an indispensable and illuminating role in data analysis.

This paper is organized as follows. In section 2, we show the optimality of the d, s^2 and LAD statistics based on maximizing gain functions; moreover, d beats s^2 and LAD with respect to the relative contribution of a point, a robustness measure used in French data analysis circles, based on geometry. This results from Lemma 1, which states that for a centered vector d equals twice its cut norm. Sections 3 and 4 generalize the optimality result of d to double-centered and triple-centered arrays; balanced 2-blocks seriation of a matrix, with application to TCA, is discussed in section 3. We conclude in section 5.
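To make the comparison of the three dispersion measures concrete, the following minimal Python sketch computes them for a small sample and checks the chain LAD ≤ d ≤ s; the function name and the sample are ours, for illustration only.

```python
import numpy as np

def dispersion_measures(y):
    """Return (LAD, d, s) for a 1-D sample y.

    LAD = mean absolute deviations about the median,
    d   = mean absolute deviations about the mean,
    s   = standard deviation (denominator n, as in equation (1)).
    """
    y = np.asarray(y, dtype=float)
    lad = np.mean(np.abs(y - np.median(y)))
    d = np.mean(np.abs(y - y.mean()))
    s = np.sqrt(np.mean((y - y.mean()) ** 2))
    return lad, d, s

y = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
lad, d, s = dispersion_measures(y)
assert lad <= d <= s          # LAD <= d <= s
print(lad, d, s)              # 2.0  2.56  3.26...
```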

Optimality of d
We construct the centered vector x = y - \bar{y} 1_n, where 1_n is the vector of n ones. Let I = {1, 2, ..., n} and let I = S ∪ \bar{S} be a binary partition of I. We have

  \sum_{i \in I} x_i = 0, hence \sum_{i \in S} x_i = -\sum_{i \in \bar{S}} x_i.

We define the cut norm of a centered vector x to be

  ||x||_C = \max_{S \subseteq I} \sum_{i \in S} x_i = \sum_{i \in S_{opt}} x_i, where S_{opt} = {i : x_i ≥ 0 for i = 1, 2, ..., n}.

By casting the computation of d as a combinatorial maximization problem, we have the following main result describing the optimality of the d statistic over all elements of the power set of I.

Lemma 1: For the centered vector x,

  d = \max_{u \in \{-1,1\}^n} x'u/n = 2||x||_C / n.
Proof: By defining u_opt(i) = 1 if i ∈ S_opt and u_opt(i) = -1 if i ∈ \bar{S}_opt, we get d = x'u_opt/n = 2||x||_C/n ≥ x'u/n for every u ∈ {-1,1}^n.
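Lemma 1 can be checked by exhaustive search over all sign vectors u ∈ {-1,1}^n, i.e. over the power set of I; a minimal brute-force sketch, with variable names of our own choosing:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=8)
x = y - y.mean()                      # centered vector
d = np.mean(np.abs(x))                # mean absolute deviations about the mean

# gain of every sign vector u in {-1,+1}^n; the maximum is attained at
# u_opt(i) = +1 iff x_i >= 0, i.e. at the subset S_opt of Lemma 1
best = max(np.dot(x, u) / len(x)
           for u in itertools.product([-1.0, 1.0], repeat=len(x)))
u_opt = np.where(x >= 0, 1.0, -1.0)

assert np.isclose(best, d)
assert np.isclose(np.dot(x, u_opt) / len(x), d)

# d equals twice the cut norm of x, divided by n (Lemma 1)
cut_norm = x[x >= 0].sum()
assert np.isclose(d, 2 * cut_norm / len(x))
```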
Corollary 2: Similarly, LAD has a second optimality property:

  LAD = \max_{u \in \{-1,1\}^n} (y - median(y) 1_n)'u / n.

We emphasize the fact that the optimizing function in (2) is a univariate loss function of c ∈ R, while the optimizing function in Corollary 2 is a multivariate gain function of u ∈ {-1, 1}^n.
There is a similar result for the variance in (1), based on the Cauchy-Schwarz inequality: for the centered vector x, s = \max_{||u||_2 = 1} x'u / \sqrt{n}, the maximum being attained at u = x/||x||_2.
Definition 1: We define the relative contributions of an element y_i to d, LAD and s^2, respectively, to be

  RC_d(y_i) = |y_i - \bar{y}| / (nd),  RC_{LAD}(y_i) = |y_i - median(y)| / (n LAD),  RC_{s^2}(y_i) = (y_i - \bar{y})^2 / (n s^2).

Then the following inequalities are true:

  0 ≤ RC_d(y_i) ≤ 1/2,  0 ≤ RC_{LAD}(y_i) ≤ 1,  0 ≤ RC_{s^2}(y_i) < 1,

from which we conclude that the most robust dispersion measure among the three, based on the relative contribution criterion, is d. The bound on RC_d follows from Lemma 1: since x is centered, |x_i| ≤ \sum_{j \in S_{opt}} x_j = nd/2. We note that the inequality 0 ≤ RC_{s^2}(y_i) < 1 is a weaker variant of the Laguerre-Samuelson inequality; see for instance Jensen (1999), whose MS thesis presents nine different proofs.
Definition 2: The element y_i is heavyweight if it attains the upper bound RC_d(y_i) = 1/2; this happens when the absolute deviation |y_i - \bar{y}| equals the sum of all the other absolute deviations. We note that a heavyweight element attains the upper bound of RC_d(y_i), but it never attains the upper bounds of RC_{s^2}(y_i) and RC_{LAD}(y_i).
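The contrast between the three bounds is easiest to see on a sample with one large outlier: its RC_d is exactly 1/2 however large the outlier is, while RC_{LAD} reaches 1 and RC_{s^2} approaches 1 as n grows. A minimal sketch (function and variable names ours):

```python
import numpy as np

def relative_contributions(y):
    """RC of each y_i to d, LAD and s^2 (Definition 1)."""
    y = np.asarray(y, dtype=float)
    dev_mean = np.abs(y - y.mean())
    dev_med = np.abs(y - np.median(y))
    rc_d = dev_mean / dev_mean.sum()
    rc_lad = dev_med / dev_med.sum()
    rc_s2 = (y - y.mean()) ** 2 / ((y - y.mean()) ** 2).sum()
    return rc_d, rc_lad, rc_s2

# a single large outlier M among n-1 zeros is heavyweight for d:
# its RC_d equals exactly 1/2, while its RC_s2 equals (n-1)/n
# and its RC_LAD equals 1
y = np.array([0.0] * 9 + [1e6])
rc_d, rc_lad, rc_s2 = relative_contributions(y)
print(rc_d[-1], rc_lad[-1], rc_s2[-1])   # 0.5, 1.0, 0.9
```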

2-way interactions of a correspondence matrix
Let P = (p_{ij}) be a correspondence matrix; that is, p_{ij} ≥ 0 for i ∈ I and j ∈ J = {1, 2, ..., m}, and \sum_{j=1}^{m}\sum_{i=1}^{n} p_{ij} = 1. As usual, we define p_{i*} = \sum_{j=1}^{m} p_{ij} and p_{*j} = \sum_{i=1}^{n} p_{ij}. Let P_1 = (x_{ij} = p_{ij} - p_{i*} p_{*j}) for i ∈ I and j ∈ J; then P_1 represents the residual matrix of P with respect to the independence model (p_{i*} p_{*j}). In the jargon of statistics, the cell x_{ij} represents the multiplicative 2-way interaction of the cell (i, j) ∈ I × J. P_1 is double-centered; that is,

  \sum_{i \in I} x_{ij} = 0 for j ∈ J,   (6)

  \sum_{j \in J} x_{ij} = 0 for i ∈ I.   (7)

Let I = S ∪ \bar{S} and J = T ∪ \bar{T} be binary partitions of I and J. From (6) and (7), we get

  \sum_{j \in T}\sum_{i \in S} x_{ij} = -\sum_{j \in \bar{T}}\sum_{i \in S} x_{ij} = \sum_{j \in \bar{T}}\sum_{i \in \bar{S}} x_{ij} = -\sum_{j \in T}\sum_{i \in \bar{S}} x_{ij}.

We define the cut norm of P_1 to be

  ||P_1||_C = \max_{S \subseteq I, T \subseteq J} \sum_{j \in T}\sum_{i \in S} x_{ij} = \sum_{j \in T_{opt}}\sum_{i \in S_{opt}} x_{ij}.   (8)

The cut norm ||P_1||_C is a well known quantity in theoretical computer science, because of its relationship to the famous Grothendieck inequality, which is based on ||P_1||_{∞→1}; see, among others, Khot and Naor (2012). The matrix P_1 can be considered as the starting point of taxicab correspondence analysis, an L_1 variant of correspondence analysis; see Choulakian (2006). The optimization criterion in TCA of P or P_1 is based on the taxicab matrix norm, which is a combinatorial optimization problem:

  δ_1 = ||P_1||_{∞→1} = \max \{ ||P_1 v||_1 : v ∈ \{-1,1\}^m \}   (9)
      = \max \{ ||P_1' u||_1 : u ∈ \{-1,1\}^n \}   (10)
      = \max \{ u' P_1 v : u ∈ \{-1,1\}^n, v ∈ \{-1,1\}^m \}.   (11)

In data analysis, the optimal vectors v_1 and u_1 are interpreted as taxicab principal axes and δ_1 as the first taxicab dispersion. So we can compute the projections of the rows (resp. the columns) of P_1 on the taxicab principal axis v_1 (resp. u_1) to be

  a_1 = P_1 v_1,   (12)

  b_1 = P_1' u_1,   (13)

with δ_1 = u_1' a_1 = v_1' b_1.   (14)

Equations (12) and (13) imply

  u_1 = sgn(a_1),   (15)

  v_1 = sgn(b_1),   (16)

named transition formulas; see Choulakian (2006, 2016). We also note the following identities:

  1_n' a_1 = 0 and δ_1 = ||a_1||_1,   (17)

  1_m' b_1 = 0 and δ_1 = ||b_1||_1.   (18)
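The combinatorial problem (9)-(11) can be attacked by alternating the transition formulas (15) and (16) from several random starting axes; the following Python sketch is a heuristic of our own for illustration, not necessarily the algorithm of Choulakian (2006). Each alternation cannot decrease ||P_1 v||_1, so the iteration converges to a (possibly local) maximum.

```python
import numpy as np

def tca_first_axis(P, n_starts=20, seed=0):
    """Heuristic first taxicab axis of a nonnegative table P.

    Alternates the transition formulas u = sgn(P1 v), v = sgn(P1' u);
    several random starts guard against poor local optima.
    Returns (delta_1, u_1, v_1).
    """
    P = np.asarray(P, dtype=float)
    P = P / P.sum()                           # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)
    P1 = P - np.outer(r, c)                   # residuals: 2-way interactions
    rng = np.random.default_rng(seed)
    best = (-np.inf, None, None)
    for _ in range(n_starts):
        v = rng.choice([-1.0, 1.0], size=P.shape[1])
        for _ in range(100):
            u = np.sign(P1 @ v); u[u == 0] = 1.0
            v_new = np.sign(P1.T @ u); v_new[v_new == 0] = 1.0
            if np.array_equal(v_new, v):
                break
            v = v_new
        delta = u @ P1 @ v
        if delta > best[0]:
            best = (delta, u, v)
    return best

# usage: the row and column coordinates follow from (12) and (13),
# a_1 = P1 @ v_1 and b_1 = P1.T @ u_1
```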
Using the above results, we get

Lemma 3 (4-equal-parts property): δ_1 = ||P_1||_{∞→1} = 4||P_1||_C; that is, the four blocks defined by the optimal partitions contribute equally:

  \sum_{j \in T_{opt}}\sum_{i \in S_{opt}} x_{ij} = \sum_{j \in \bar{T}_{opt}}\sum_{i \in \bar{S}_{opt}} x_{ij} = -\sum_{j \in \bar{T}_{opt}}\sum_{i \in S_{opt}} x_{ij} = -\sum_{j \in T_{opt}}\sum_{i \in \bar{S}_{opt}} x_{ij} = δ_1/4.

In data analysis, Lemma 3 implies a balanced 2-blocks seriation of P_1; see Example 1. The subsets T_opt and S_opt are positively associated and ||P_1||_C = \sum_{j \in T_{opt}}\sum_{i \in S_{opt}} x_{ij}; similarly, the subsets \bar{T}_opt and \bar{S}_opt are positively associated and ||P_1||_C = \sum_{j \in \bar{T}_{opt}}\sum_{i \in \bar{S}_{opt}} x_{ij}. The subsets \bar{T}_opt and S_opt are negatively associated and ||P_1||_C = -\sum_{j \in \bar{T}_{opt}}\sum_{i \in S_{opt}} x_{ij}; similarly, the subsets T_opt and \bar{S}_opt are negatively associated and ||P_1||_C = -\sum_{j \in T_{opt}}\sum_{i \in \bar{S}_{opt}} x_{ij}. Liiv (2010) presents an interesting overview of seriation.
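Lemma 3 can be verified numerically on a small random table by computing both sides exactly; a brute-force sketch (names ours), feasible only for small n and m:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
Y = rng.random((4, 3))
P = Y / Y.sum()
P1 = P - np.outer(P.sum(axis=1), P.sum(axis=0))   # double-centered residuals

n, m = P1.shape
# exact taxicab norm ||P1||_{inf->1} by enumerating v in {-1,+1}^m, as in (9)
delta1 = max(np.abs(P1 @ np.array(v)).sum()
             for v in itertools.product([-1, 1], repeat=m))

# exact cut norm (8) by enumerating all pairs of subsets (S, T)
def subsets(size):
    return [list(s) for r in range(size + 1)
            for s in itertools.combinations(range(size), r)]

cut = max(P1[np.ix_(S, T)].sum() for S in subsets(n) for T in subsets(m))

assert np.isclose(delta1, 4 * cut)   # Lemma 3: delta_1 = 4 ||P1||_C
```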
Using Definition 2, we get

Definition 3: The relative contribution of the row i to δ_1 (respectively, of the column j to δ_1) is

  RC_1(row i) = |a_1(i)|/δ_1  (respectively, RC_1(col j) = |b_1(j)|/δ_1).

We have 0 ≤ RC_1(row i) ≤ 0.5 and 0 ≤ RC_1(col j) ≤ 0.5; these bounds follow from (17), (18) and the argument used for RC_d(y_i).
Definition 4: a) On the first taxicab principal axis, the row i is heavyweight if RC_1(row i) = 0.5, and the column j is heavyweight if RC_1(col j) = 0.5. b) On the first taxicab principal axis, the cell (i, j) is heavyweight if and only if both row i and column j are heavyweight. For an application of Definitions 3 and 4, see Choulakian (2008a).

Using Wedderburn's rank-1 reduction rule, see Choulakian (2016), we construct the 2nd residual matrix

  P_2 = P_1 - a_1 b_1' / δ_1,

and repeat the above procedure. After k = rank(P_1) iterations, we decompose the correspondence matrix P into (k + 1) bilinear parts, named the taxicab singular value decomposition:

  p_{ij} = p_{i*} p_{*j} + \sum_{α=1}^{k} a_α(i) b_α(j) / δ_α,

which can be rewritten, similarly to the data reconstruction formula in correspondence analysis (CA), as

  p_{ij} = p_{i*} p_{*j} (1 + \sum_{α=1}^{k} f_α(i) g_α(j) / δ_α),

where f_α(i) = a_α(i)/p_{i*} and g_α(j) = b_α(j)/p_{*j}. CA and TCA satisfy an important invariance property: columns (or rows) with identical profiles (conditional probabilities) receive identical factor scores g_α(j) (or f_α(i)). The factor scores are used in the graphical displays. Moreover, merging identical profiles does not change the results of the data analysis: this is named the principle of equivalent partitioning by Nishisato (1984); it includes the famous invariance property named the principle of distributional equivalence, on which Benzécri (1973) developed CA.
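Putting the pieces together, here is a sketch of the full decomposition and of the reconstruction formula; the helper taxicab_svd is hypothetical (names ours) and uses exact enumeration over v ∈ {-1,1}^m, so it is only feasible for small column counts.

```python
import itertools
import numpy as np

def taxicab_svd(P, rank=None):
    """Taxicab SVD of a nonnegative table P, by repeated exact
    maximization of ||P_a v||_1 and Wedderburn rank-1 reduction."""
    P = np.asarray(P, dtype=float)
    P = P / P.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    Pa = P - np.outer(r, c)                      # P_1: residual matrix
    k = rank if rank is not None else np.linalg.matrix_rank(Pa)
    axes = []
    for _ in range(k):
        v = max((np.array(s, dtype=float)
                 for s in itertools.product([-1, 1], repeat=P.shape[1])),
                key=lambda s: np.abs(Pa @ s).sum())
        a = Pa @ v                               # row coordinates a_alpha (12)
        u = np.sign(a); u[u == 0] = 1.0          # transition formula (15)
        b = Pa.T @ u                             # column coordinates b_alpha (13)
        delta = np.abs(a).sum()                  # taxicab dispersion delta_alpha (17)
        axes.append((delta, a, b))
        Pa = Pa - np.outer(a, b) / delta         # Wedderburn rank-1 reduction
    return r, c, axes

# reconstruction check: p_ij = p_i* p_*j + sum_alpha a_alpha(i) b_alpha(j)/delta_alpha
rng = np.random.default_rng(4)
P = rng.random((5, 4))
r, c, axes = taxicab_svd(P)
recon = np.outer(r, c) + sum(np.outer(a, b) / d for d, a, b in axes)
assert np.allclose(recon, P / P.sum())
```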
In the next two subsections we present two examples where taxicab correspondence analysis (TCA) is applied. The first data set is a small contingency table taken from Beh and Lombardo (2014), for which we present the details of the computation, explaining the contents of section 3. The second data set is a network affinity matrix from Faust (2005). For both data sets we compare the CA and TCA maps.
The theory of CA can be found, among others, in Benzécri (1973, 1992), Greenacre (1984), Gifi (1990), Le Roux and Rouanet (2004), Murtagh (2005), and Nishisato (2007); the recent book by Beh and Lombardo (2014) presents a panoramic review of CA and related methods.

Example 1: The first data set, taken from Beh and Lombardo (2014), is a contingency table Y of size 5×4 cross-classifying 1117 New York workers with occupational exposure to asbestos; the workers are classified according to the length of exposure in years (five categories) and the asbestos grade diagnosed (four categories). Figures 1 and 2 display the maps obtained by CA and TCA: there is almost no difference between them. Here, we present the details of the computation for TCA. Table 2 presents the residual correspondence table P_1 with respect to the independence model, where we see the diagonal 2-blocks seriation of P_1: S_opt = {row 1, row 2} is positively associated with T_opt = {column 1}, and the cut norm ||P_1||_C = 0.1181 + 0.0151 = 0.1332; similarly, \bar{S}_opt = {row 3, row 4, row 5} is positively associated with \bar{T}_opt = {column 2, column 3, column 4}, and ||P_1||_C = \sum_{j \in \bar{T}_{opt}}\sum_{i \in \bar{S}_{opt}} x_{ij} = 0.0087 + ... + 0.0202 = 0.1332. Note that the elements in the positively associated diagonal blocks have in majority positive values, while the elements in the negatively associated off-diagonal blocks have in majority negative values. The last three columns and the last three rows of Table 2 display the principal axes (v_1 and u_1), the coordinates of the projected points (a_1 and b_1) and the TCA factor scores (f_1 and g_1). Table 3 shows the 2nd residual correspondence matrix P_2, where we note that its first column is zero, because column 1 is heavyweight in P_1: RC_1^{TCA}(G0) = 0.5, see Choulakian (2008a). We see that columns 3 and 4 are positively associated with rows 1 and 5; similarly, column 2 is positively associated with rows 2 to 4. It is difficult to interpret the diagonal balanced 2-blocks seriation in Table 3; however, the map in Figure 2 is interpretable: it shows a Guttman effect, known as horseshoe or parabola.

Example 2: Table 4 presents a two-mode affiliation network matrix Z = (z_{ij}) of size 22 × 15, taken from Faust (2005). The 22 rows represent countries and the 15 columns regional trade and treaty organizations, described in Appendix A. The country i is a member of the organization j if z_{ij} = 1, and z_{ij} = 0 means that the country i is not a member of the organization j. Faust (2005) visualized this data set by correspondence analysis, see Figure 3, which is quite cluttered. She interpreted the first two principal dimensions by examining the factor scores of the countries, and summarized the results in three points: a) The first dimension contrasts South American countries and organizations on the one hand, and Central American countries and organizations on the other hand.

(Table 4: Western Hemisphere countries and their memberships in trade and treaty organizations.)
b) The second dimension clearly distinguishes Canada and the United States (both North American countries), along with NAFTA, from the other countries and organizations. In CA, the relative contributions of Canada and the US to the second axis are RC_2^{CA}(Canada) = RC_2^{CA}(US) = 0.409, and RC_2^{CA}(NAFTA) = 0.821. c) The organizations SELA, OAS and IDB are in the center because they have membership profiles that are similar to the marginal profile: almost all countries belong to SELA, OAS and IDB, see Table 4.

Figure 4 provides the TCA map, which is much more interpretable than the corresponding CA map in Figure 3. We see that, in addition to the three points mentioned by Faust (2005), the South American countries are divided into two groups: northern countries (Venezuela, Bolivia, Peru and Ecuador) and southern countries (Brazil, Uruguay, Argentina, Paraguay and Chile). Furthermore, the contributions of the points Canada, the United States and NAFTA to the second axis are not substantial compared with CA. It is well known that CA is very sensitive to some particularities of a data set; how to identify and handle these is an open problem. For contingency tables, Choulakian (2017) enumerated three such particularities under the umbrella of sparse contingency tables: rare observations, zero-block structure and relatively high-valued cells. It is evident that this data set has three rare observations (NAFTA, Canada and USA), which determine the 2nd dimension of CA. A row or a column is considered rare if its marginal probability is quite small.

Maximal interaction two-mode clustering of continuous data

Schepers, Bock and Van Mechelen (2017) discussed maximum interaction two-mode clustering of continuous data. By generalizing their objective function, we want to show that the results of this section can be considered a particular robust L_1 variant of their approach. Let Y = (y_{ij}) be a 2-way array for i ∈ I, j ∈ J. As usual, we define y_{i*} = \sum_{j=1}^{m} y_{ij}/m, y_{*j} = \sum_{i=1}^{n} y_{ij}/n and y_{**} = \sum_{j=1}^{m}\sum_{i=1}^{n} y_{ij}/(mn). Let X = (x_{ij}) be the additive double-centered array, where

  x_{ij} = y_{ij} - y_{i*} - y_{*j} + y_{**};

a minimal numerical check is sketched below.
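A minimal sketch of the additive double-centering and of the vanishing margins (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
Y = rng.random((5, 4))

# additive double-centering: x_ij = y_ij - y_i. - y_.j + y_..
X = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()

# both margins of the interaction array vanish
assert np.allclose(X.sum(axis=0), 0.0)
assert np.allclose(X.sum(axis=1), 0.0)
```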
3-way interactions of a triple-centered array

Let Y = (y_{ijk}) be a 3-way array for i ∈ I, j ∈ J, k ∈ K = {1, 2, ..., p}, and let X = (x_{ijk}) be the additive triple-centered array, where

  x_{ijk} = y_{ijk} - y_{ij*} - y_{i*k} - y_{*jk} + y_{i**} + y_{*j*} + y_{**k} - y_{***},

a star again denoting averaging over the corresponding index. In the jargon of statistics, the cell x_{ijk} represents the additive 3-way interaction of the cell (i, j, k) ∈ I × J × K. The tensor X is triple-centered; that is,

  \sum_{i \in I} x_{ijk} = 0, \sum_{j \in J} x_{ijk} = 0, \sum_{k \in K} x_{ijk} = 0.

Lemma 4 (8-equal-parts property): \max \{ \sum_{i,j,k} x_{ijk} u_i v_j w_k : u ∈ \{-1,1\}^n, v ∈ \{-1,1\}^m, w ∈ \{-1,1\}^p \} = 8||X||_C, where ||X||_C = \max_{S,T,U} \sum_{k \in U}\sum_{j \in T}\sum_{i \in S} x_{ijk}; the eight blocks defined by the optimal partitions contribute equally in absolute value. The proof is similar to the proof of Lemma 3. Lemma 4 can easily be generalized to higher-way arrays.
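A brute-force numerical check of Lemma 4 on a small random tensor, under the 8-equal-parts reading stated above (names ours):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
Y = rng.random((3, 3, 3))

# triple-centering: successively remove the mean along each axis;
# this yields the additive 3-way interaction x_ijk defined above
x = Y.copy()
for ax in range(3):
    x = x - x.mean(axis=ax, keepdims=True)
for ax in range(3):
    assert np.allclose(x.sum(axis=ax), 0.0)   # every 1-way margin vanishes

n1, n2, n3 = x.shape
# exact maximum of the trilinear gain over sign vectors (u, v, w)
gain = max(np.einsum('ijk,i,j,k->', x, u, v, w)
           for u in itertools.product([-1, 1], repeat=n1)
           for v in itertools.product([-1, 1], repeat=n2)
           for w in itertools.product([-1, 1], repeat=n3))

# exact cut norm over all subset triples (S, T, U)
def subsets(size):
    return [list(s) for r in range(size + 1)
            for s in itertools.combinations(range(size), r)]

cut = max(x[np.ix_(S, T, U)].sum()
          for S in subsets(n1) for T in subsets(n2) for U in subsets(n3))

assert np.isclose(gain, 8 * cut)   # Lemma 4: gain = 8 ||X||_C
```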

Conclusion
This essay is an attempt to emphasize the following two points. First, we showed the optimality and robustness of the mean absolute deviations about the mean, its interpretation, and its generalization to higher-way arrays. A key notion in describing its robustness is that the relative contribution of a point is bounded by 50%.
Second, within the framework of TCA, we showed that the following three identities

  δ_1 = ||P_1||_{∞→1} = 4||P_1||_C

reveal three different but related aspects of TCA: a) δ_1, computed in (17) and (18), represents the mean absolute deviations about the mean statistic; b) the taxicab norm ||P_1||_{∞→1}, via (15) and (16), shows that uniform weights are assigned to the columns and the rows; c) the cut norm 4||P_1||_C shows that the computation of each principal dimension of TCA corresponds to a balanced 2-blocks seriation, with equality of the cut norm on the four associated blocks.