Mean Absolute Deviations about the Mean, the Cut Norm and Taxicab Correspondence Analysis
1. Introduction
Optimization has two faces: minimization of a loss function or maximization of a gain function. The following two well-known dispersion measures, the variance ($s^2$) and the mean absolute deviations about the median (LAD), are optimal because each minimizes a different loss function:

$s^2 = \min_c \frac{1}{n}\sum_{i=1}^{n} (x_i - c)^2$, attained at $c = \bar{x}$, (1)

and

$\mathrm{LAD} = \min_c \frac{1}{n}\sum_{i=1}^{n} |x_i - c|$, attained at $c = \mathrm{median}(x)$, (2)

where $(x_1, \dots, x_n)$ represents a sample of $n$ values and $c$ is a scalar. To our knowledge, no optimality property is known for the mean absolute deviations about the mean defined by

$d = \frac{1}{n}\sum_{i=1}^{n} |x_i - \bar{x}|,$ (3)
even though it has been studied in several papers for modeling purposes; see, among others, [1] [2] [3]. [1] and [2] essentially compare the dispersion measures d and s in the statistical literature, with a clear preference for d for its simple interpretability. The authors in [3] compare the statistics d and LAD with the Gini dispersion measure and conclude that “The downside of using (d and LAD) is that robustness is achieved by omitting the information on the intra-group variability”.
The following inequality

$\mathrm{LAD} \le d \le s$

is well known and is a corollary to the Lyapounov inequality, see for instance [4]: the first part, $\mathrm{LAD} \le d$, follows from (2), and the second part, $d \le s$, follows from $\frac{1}{n}\sum_{i=1}^{n}|y_i| \le \left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right)^{1/2}$, where $y_i = x_i - \bar{x}$.
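As a quick numerical illustration of the three dispersion measures in (1)-(3) and the inequality LAD ≤ d ≤ s, here is a short sketch on a made-up sample (toy data, not from the paper):

```python
# Compute s^2, LAD and d on a toy sample and check LAD <= d <= s.
import statistics

x = [2.0, 3.0, 5.0, 7.0, 13.0]
n = len(x)
mean = sum(x) / n
med = statistics.median(x)

s2 = sum((xi - mean) ** 2 for xi in x) / n   # variance, as in (1)
lad = sum(abs(xi - med) for xi in x) / n     # mean abs. deviations about the median, as in (2)
d = sum(abs(xi - mean) for xi in x) / n      # mean abs. deviations about the mean, as in (3)

assert lad <= d <= s2 ** 0.5                 # LAD <= d <= s
```

For this sample, LAD = 3.0, d = 3.2 and s ≈ 3.90, so the two inequalities are strict.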
d is the measure of dispersion used in taxicab correspondence analysis (TCA), an L1 variant of correspondence analysis (CA), see [5]. An explanation for the robustness of d is the boundedness of the relative contribution of a point, see [6] - [11]. However, this paper provides further details on d, relating it to the cut-norm and to balanced 2-blocks seriation for double-centered data. [11] argued that sparse contingency tables are often better visualized by TCA; here, we present an analysis of a 0-1 affinity matrix, where TCA produces a much more interpretable map than CA, by comparing Figure 3 and Figure 4. We see that repetition and experience play an indispensable and illuminating role in data analysis.
This paper is organized as follows. In Section 2, we show the optimality of the d, s2 and LAD statistics based on maximizing gain functions; moreover, d beats s2 and LAD with respect to the relative contribution of a point (a robustness measure, based on geometry, used in French data-analysis circles): this results from Lemma 1, which states that for a centered vector $nd$ equals twice its cut-norm. Sections 3 and 4 generalize the optimality result of d to double-centered and triple-centered arrays; balanced 2-blocks seriation of a matrix, with application to TCA, is discussed in Section 3. We conclude in Section 5.
2. Optimality of d
We consider the centered vector $y = x - \bar{x}\,\mathbf{1}_n$, where $\mathbf{1}_n$ is composed of n ones. Let $I = \{1, \dots, n\}$ and $(S, \bar{S} = I \setminus S)$ a binary partition of I. We have $\sum_{i \in I} y_i = 0$, from which we deduce

$\sum_{i \in S} y_i = -\sum_{i \in \bar{S}} y_i.$ (4)
We define the cut-norm of a centered vector $y$ to be

$\|y\|_{cut} = \max_{S \in 2^I} \left|\sum_{i \in S} y_i\right|,$

where $2^I$ is the power set of I. By casting the computation of d as a combinatorial maximization problem, we have the following main result describing the optimality of the d-statistic over all elements of the power set of I.
Lemma 1: (2-equal parts property) $\frac{nd}{2} = \|y\|_{cut} = \sum_{i: y_i \ge 0} y_i = -\sum_{i: y_i < 0} y_i \ge \left|\sum_{i \in S} y_i\right|$ for all $S \in 2^I$.
Proof: For any $S \in 2^I$, $\sum_{i \in S} y_i \le \sum_{i: y_i \ge 0} y_i$ and $-\sum_{i \in S} y_i \le -\sum_{i: y_i < 0} y_i$; by (4) applied to $S^+ = \{i : y_i \ge 0\}$, these two upper bounds are equal, and their common value is $\frac{1}{2}\sum_{i=1}^{n} |y_i| = \frac{nd}{2}$.
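Lemma 1 is easy to verify by brute force on a small sample; the following sketch (toy data, not from the paper) compares $nd$ with twice the cut-norm computed over the whole power set of I:

```python
# Brute-force check of Lemma 1: for a centered vector y, n*d = 2*||y||_cut.
from itertools import combinations

x = [1.0, 2.0, 6.0, 7.0, 9.0]
n = len(x)
mean = sum(x) / n
y = [xi - mean for xi in x]          # centered vector
d = sum(abs(yi) for yi in y) / n     # mean absolute deviations about the mean

# cut-norm: maximize |sum over a subset| over the whole power set of I
idx = range(n)
cut = max(abs(sum(y[i] for i in S))
          for k in range(n + 1) for S in combinations(idx, k))

assert abs(n * d - 2 * cut) < 1e-9   # Lemma 1
```

Here the maximizing subset is $S^+ = \{i : y_i \ge 0\}$, as the proof indicates.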
Corollary 1: $nd = \max\left\{\sum_{i=1}^{n} u_i y_i : u_i = \pm 1 \text{ for } i \in I\right\}$.
Proof: By defining $u_i = 1$ if $y_i \ge 0$ and $u_i = -1$ if $y_i < 0$, we get $\sum_{i=1}^{n} u_i y_i = \sum_{i=1}^{n} |y_i| = nd$.
Corollary 2: $n\,\mathrm{LAD} = \max\left\{\sum_{i=1}^{n} u_i x_i : u_i \in \{-1, 0, 1\} \text{ and } \sum_{i=1}^{n} u_i = 0\right\}$.
Corollary 2 shows that LAD has a second optimality property. We emphasize the fact that the optimizing function in (2) is a univariate loss function of $c$, while the optimizing function in Corollary 2 is a multivariate gain function of $u = (u_1, \dots, u_n)$.
There is a similar result also for the variance in (1), based on the Cauchy-Schwarz inequality, stated in Lemma 2.
Lemma 2: $\sqrt{n s^2} = \max\left\{\sum_{i=1}^{n} u_i y_i : \sum_{i=1}^{n} u_i^2 = 1\right\}$ for $u \in \mathbb{R}^n$.
We note that Corollaries 1 and 2 and Lemma 2 represent particular cases of Hölder inequality, see [11].
Definition 1: We define the relative contribution of an element $x_i$ to d, LAD and s2, respectively, to be

$RC_d(i) = \frac{|x_i - \bar{x}|}{\sum_{j=1}^{n} |x_j - \bar{x}|}, \quad RC_{LAD}(i) = \frac{|x_i - \mathrm{med}|}{\sum_{j=1}^{n} |x_j - \mathrm{med}|}, \quad RC_{s^2}(i) = \frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n} (x_j - \bar{x})^2}.$

Then the following inequalities are true

$RC_d(i) \le \frac{1}{2}, \quad RC_{LAD}(i) \le 1, \quad RC_{s^2}(i) \le \frac{n-1}{n},$

from which we conclude that the most robust dispersion measure among the three, based on the relative contribution criterion, is d, because its relative contribution is bounded above by 0.5.
We note that the inequality $RC_{s^2}(i) \le \frac{n-1}{n}$ is a weaker variant of the Laguerre-Samuelson inequality; see for instance [13], whose MS thesis presents nine different proofs.
We have
Definition 2: An element $x_{i_0}$ is a heavyweight if $RC_d(i_0) = \frac{1}{2}$; that is, $|x_{i_0} - \bar{x}| = \sum_{j \ne i_0} |x_j - \bar{x}|$.
We note that a heavyweight element attains the upper bound of $RC_d$, but it never attains the upper bounds of $RC_{LAD}$ and $RC_{s^2}$.
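A sample with one extreme point illustrates these bounds (toy data; the concrete values are ours, not the paper's):

```python
# Relative-contribution bounds of Definition 1 on a toy sample with one
# extreme point: RC_d(i) <= 1/2 and RC_{s^2}(i) <= (n-1)/n.
x = [0.0, 0.0, 0.0, 0.0, 10.0]   # one extreme observation
n = len(x)
mean = sum(x) / n
y = [xi - mean for xi in x]

rc_d = [abs(yi) / sum(abs(t) for t in y) for yi in y]
rc_s2 = [yi ** 2 / sum(t ** 2 for t in y) for yi in y]

assert max(rc_d) <= 0.5 + 1e-9           # bound for d
assert max(rc_s2) <= (n - 1) / n + 1e-9  # weak Laguerre-Samuelson bound
```

Here the extreme point is a heavyweight in the sense of Definition 2: its relative contribution to d equals exactly 1/2, while its relative contribution to s2 reaches the larger bound (n − 1)/n = 0.8.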
3. 2-Way Interactions of a Correspondence Matrix
Let $P = (p_{ij})$ be a correspondence matrix; that is, $p_{ij} \ge 0$ for $i \in I$ and $j \in J$, and $\sum_{i \in I}\sum_{j \in J} p_{ij} = 1$. As usual, we define $p_{i\cdot} = \sum_{j \in J} p_{ij}$ and $p_{\cdot j} = \sum_{i \in I} p_{ij}$. Let $q_{ij} = p_{ij} - p_{i\cdot}\, p_{\cdot j}$ for $i \in I$ and $j \in J$; then $Q = (q_{ij})$ represents the residual matrix of $P$ with respect to the independence model $(p_{i\cdot}\, p_{\cdot j})$. In the jargon of statistics, the cell $q_{ij}$ represents the multiplicative 2-way interaction of the cell $(i, j)$. $Q$ is double-centered:

$\sum_{i \in I} q_{ij} = 0 \text{ for } j \in J \quad \text{and} \quad \sum_{j \in J} q_{ij} = 0 \text{ for } i \in I.$ (5)
From (5) we get, for a binary partition $(A, \bar{A})$ of I and a binary partition $(B, \bar{B})$ of J,

$\sum_{i \in A} q_{ij} = -\sum_{i \in \bar{A}} q_{ij} \text{ for } j \in J,$ (6)

$\sum_{j \in B} q_{ij} = -\sum_{j \in \bar{B}} q_{ij} \text{ for } i \in I.$ (7)

From (6) and (7), we get

$\sum_{i \in A}\sum_{j \in B} q_{ij} = -\sum_{i \in \bar{A}}\sum_{j \in B} q_{ij},$ (8)

$\sum_{i \in A}\sum_{j \in B} q_{ij} = -\sum_{i \in A}\sum_{j \in \bar{B}} q_{ij},$ (9)

$\sum_{i \in A}\sum_{j \in B} q_{ij} = \sum_{i \in \bar{A}}\sum_{j \in \bar{B}} q_{ij}.$ (10)
We define the cut-norm of $Q$ to be

$\|Q\|_{cut} = \max_{A \subseteq I,\, B \subseteq J} \left|\sum_{i \in A}\sum_{j \in B} q_{ij}\right|.$

The cut-norm $\|Q\|_{cut}$ is a well-known quantity in theoretical computer science because of its relationship to the famous Grothendieck inequality, which is based on $\max\{u^\top Q v : u_i = \pm 1,\ v_j = \pm 1\}$; see among others [14].
The matrix $Q$ can be considered as the starting point in taxicab correspondence analysis, an L1 variant of correspondence analysis, see [5]. The optimization criterion in TCA of $Q$ or $P$ is based on the taxicab matrix norm, which is a combinatorial optimization problem:

$\lambda_1 = \max\{\|Q v\|_1 : v_j = \pm 1\} = \max\{\|Q^\top u\|_1 : u_i = \pm 1\}$ (11)

$= \max\{u^\top Q v : u_i = \pm 1,\ v_j = \pm 1\}.$ (12)
In data analysis, the maximizing vectors $u_1$ and $v_1$ are interpreted as first taxicab principal axes and $\lambda_1$ as first taxicab dispersion. So we can compute the projections of the rows (resp. the columns) of $Q$ on the first taxicab principal axis $v_1$ (resp. $u_1$) to be

$a_1 = Q v_1,$ (13)

$b_1 = Q^\top u_1.$ (14)
Equation (12) implies

$u_1 = \mathrm{sign}(a_1),$ (15)

$v_1 = \mathrm{sign}(b_1),$ (16)

named transition formulas, see [5] and [11]. We also note the following identities

$\lambda_1 = \|a_1\|_1 = u_1^\top a_1,$ (17)

$\lambda_1 = \|b_1\|_1 = v_1^\top b_1.$ (18)
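For small matrices, the criterion (11)-(12) can be solved by exhaustive search over sign vectors, which also lets one check (13)-(18) directly. The sketch below uses a toy 3×3 correspondence matrix (our own, not the paper's data):

```python
# Brute-force first taxicab axis of a residual matrix Q:
# lambda_1 = max u^T Q v over u, v in {-1,+1}; a = Q v, b = Q^T u.
from itertools import product

P = [[0.20, 0.05, 0.05],
     [0.05, 0.20, 0.05],
     [0.05, 0.05, 0.30]]
r = [sum(row) for row in P]                              # row masses p_i.
c = [sum(P[i][j] for i in range(3)) for j in range(3)]   # column masses p_.j
Q = [[P[i][j] - r[i] * c[j] for j in range(3)] for i in range(3)]

lam, best_u, best_v = max(
    ((sum(u[i] * Q[i][j] * v[j] for i in range(3) for j in range(3)), u, v)
     for u in product([-1, 1], repeat=3)
     for v in product([-1, 1], repeat=3)),
    key=lambda t: t[0])

a = [sum(Q[i][j] * best_v[j] for j in range(3)) for i in range(3)]  # as in (13)
b = [sum(Q[i][j] * best_u[i] for i in range(3)) for j in range(3)]  # as in (14)

# identities (17)-(18): lambda_1 = ||a||_1 = ||b||_1
assert abs(sum(abs(t) for t in a) - lam) < 1e-9
assert abs(sum(abs(t) for t in b) - lam) < 1e-9
```

One can also verify the transition formulas (15)-(16) here: the signs of a and b reproduce the maximizing u and v (up to a global sign flip).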
Using the above results, we get the following
Lemma 3: (4-equal parts property) The norm $\lambda_1 = 4\|Q\|_{cut}$.
In data analysis, Lemma 3 implies a balanced 2-blocks seriation of $Q$; see the example in Section 3.1. Define $A_1 = \{i : u_1(i) = 1\}$, $A_2 = \{i : u_1(i) = -1\}$, $B_1 = \{j : v_1(j) = 1\}$ and $B_2 = \{j : v_1(j) = -1\}$. The subsets $A_1$ and $B_1$ are positively associated and $\sum_{i \in A_1}\sum_{j \in B_1} q_{ij} = \lambda_1/4$; similarly, the subsets $A_2$ and $B_2$ are positively associated and $\sum_{i \in A_2}\sum_{j \in B_2} q_{ij} = \lambda_1/4$. While the subsets $A_1$ and $B_2$ are negatively associated and $\sum_{i \in A_1}\sum_{j \in B_2} q_{ij} = -\lambda_1/4$; similarly, the subsets $A_2$ and $B_1$ are negatively associated and $\sum_{i \in A_2}\sum_{j \in B_1} q_{ij} = -\lambda_1/4$. [15] presents an interesting overview of seriation and block seriation.
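Lemma 3 is also straightforward to check exhaustively on a small double-centered matrix (the toy matrix below is ours, chosen only for illustration):

```python
# Check of Lemma 3: lambda_1 = 4 * ||Q||_cut for a double-centered matrix.
from itertools import product, combinations

Q = [[0.11, -0.04, -0.07],
     [-0.04, 0.11, -0.07],
     [-0.07, -0.07, 0.14]]          # toy double-centered residual matrix
I, J = range(3), range(3)

# taxicab norm: max over sign vectors u, v
lam = max(sum(u[i] * Q[i][j] * v[j] for i in I for j in J)
          for u in product([-1, 1], repeat=3)
          for v in product([-1, 1], repeat=3))

# cut-norm: max |block sum| over all pairs of subsets (A, B)
cut = max(abs(sum(Q[i][j] for i in A for j in B))
          for ka in range(4) for A in combinations(I, ka)
          for kb in range(4) for B in combinations(J, kb))

assert abs(lam - 4 * cut) < 1e-9    # Lemma 3 (4-equal parts property)
```

For this matrix the four blocks induced by the signs of the first axes each carry mass ±λ₁/4 = ±0.14, the balanced 2-blocks structure described above.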
Using Definition 2, we get
Definition 3: The relative contribution of the row i to $\lambda_1$ (respectively of the column j to $\lambda_1$) is $|a_1(i)|/\lambda_1$ (respectively $|b_1(j)|/\lambda_1$).
We have
Definition 4: 1) On the first taxicab principal axis the row i is heavyweight if $|a_1(i)|/\lambda_1 = 1/2$, and the column j is heavyweight if $|b_1(j)|/\lambda_1 = 1/2$.
2) On the first taxicab principal axis the cell $(i, j)$ is heavyweight if and only if both row i and column j are heavyweights; and in this case the cell $(i, j)$ dominates the first taxicab principal dimension.
For an application of Definitions 3 and 4 see [6].
Using Wedderburn’s rank-1 reduction rule, see [11], we construct the 2nd residual matrix

$Q_2 = Q_1 - a_1 b_1^\top / \lambda_1, \quad \text{where } Q_1 = Q,$

which is also double-centered, and repeat the above procedure. After $k$ iterations, where $k$ is the rank of $Q$, we decompose the correspondence matrix $P$ into $k$ bilinear parts

$p_{ij} = p_{i\cdot}\, p_{\cdot j} + \sum_{\alpha=1}^{k} a_\alpha(i)\, b_\alpha(j) / \lambda_\alpha,$

named taxicab singular value decomposition; which can be rewritten, similarly to the data reconstruction formula in correspondence analysis (CA), as

$p_{ij} = p_{i\cdot}\, p_{\cdot j} \left(1 + \sum_{\alpha=1}^{k} f_\alpha(i)\, g_\alpha(j) / \lambda_\alpha\right),$

where in TCA

$f_\alpha(i) = a_\alpha(i)/p_{i\cdot} \quad \text{and} \quad g_\alpha(j) = b_\alpha(j)/p_{\cdot j}.$ (19)

We note that Equations (5) through (18) are valid for the higher residual correspondence matrices $Q_\alpha$ for $\alpha = 2, \dots, k$.
CA and TCA satisfy an important invariance property: columns (or rows) with identical profiles (conditional probabilities) receive identical factor scores $g_\alpha$ (or $f_\alpha$). The factor scores are used in the graphical displays. Moreover, merging identical profiles does not change the results of the data analysis: this is named the principle of equivalent partitioning by [16]; it includes the famous invariance property named the principle of distributional equivalence, on which [17] developed CA.
In the next subsections we present two data sets to which taxicab correspondence analysis (TCA) is applied. The first data set is a small contingency table taken from [18], for which we present the details of the computation of the first two principal dimensions; in particular, we highlight the balanced 2-blocks seriation of the residual data sets $Q_\alpha$ for $\alpha = 1, 2$ during the computation of each principal dimension. The second data set is a network affinity matrix from [19], who applied CA to visually explore it; on this data set we compare the CA and TCA maps, highlighting the robustness of the TCA map to rare observations on the second principal dimension.
The theory of CA can be found, among others, in [17] [20] - [25]; the recent book, authored by [18], presents a panoramic review of CA and related methods.
3.1. Selikoff’s Asbestos Data Set
Table 1, taken from [18], is a contingency table Y of size $5 \times 4$ cross-classifying 1117 New York workers with occupational exposure to asbestos; the workers are classified according to the number of years of exposure (five categories) and the asbestos grade diagnosed (four categories). Figure 1 and Figure 2 display the maps obtained by CA and TCA: there is almost no difference between them. Here, we present the details of the computation for TCA. Table 2 presents the residual correspondence table $Q_1 = Q$ with respect to the independence model, where we see the diagonal 2-blocks seriation of $Q_1$: one subset of rows is positively associated with one subset of columns, and similarly the complementary subsets are positively associated, each of the four blocks having cut-norm $\lambda_1/4$. Note that the elements in the positively associated diagonal blocks have in majority positive values, while the elements in the negatively associated off-diagonal blocks have in majority negative values. The last three columns and the last three rows of Table 2 display the principal axes ($u_1$ and $v_1$), the coordinates of the projected points ($a_1$ and $b_1$) and the TCA factor scores ($f_1$ and $g_1$).
Table 3 shows the 2nd residual correspondence matrix $Q_2$, where we note that its first column is zero because, by Definition 4, column 1 is heavyweight in $Q_1$, see [6]. We see that columns 3 and 4 are positively associated with rows 1 and 5; similarly, column 2 is positively associated with rows 2, 3 and 4. It is difficult to interpret the diagonal balanced 2-blocks seriation in Table 3; however, the map in Figure 2 is interpretable: it shows a Guttman effect, known as horseshoe or parabola.
Figure 1. CA map of asbestos exposure data.
Table 1. Selikoff’s asbestos contingency table of size $5 \times 4$.
Table 2. Balanced 2-blocks seriation of $Q_1$ of size $5 \times 4$.
Table 3. Balanced 2-blocks seriation of $Q_2$ of size $5 \times 4$.
Figure 2. TCA map of asbestos exposure data.
3.2. Western Hemisphere Countries and Their Memberships in Trade and Treaty Organizations
Table 4 presents a two-mode affiliation network matrix $N = (n_{ij})$ of size $22 \times 15$, taken from [19]. The 22 rows represent 22 countries and the 15 columns the regional trade and treaty organizations described in Appendix A. Country i is a member of organization j if $n_{ij} = 1$; $n_{ij} = 0$ means that country i is not a member of organization j. [19] visualized these data by correspondence analysis, see Figure 3, which is quite cluttered. She interpreted the first two principal dimensions by examining the factor scores of the countries and summarized the results in three points:
1) The first dimension contrasts South American countries and organizations on the one hand, and Central American countries and organizations on the other hand.
2) The second dimension clearly distinguishes Canada and the United States (both North American countries), along with NAFTA, from the other countries and organizations. In CA, the relative contributions of Canada and the US to the second axis are substantial fractions of the variance, also named inertia, of the second principal dimension.
3) Organizations (SELA, OAS, and IDB) are in the center because they have membership profiles that are similar to the marginal profile: almost all countries belong to (SELA, OAS, and IDB), see Table 4.
Figure 4 provides the TCA map, which is much more interpretable than the corresponding CA map in Figure 3; there we see that, in addition to the three points mentioned by [19], the South American countries are divided into two
Figure 3. CA map of Western Hemisphere affinity network.
Table 4. Sociomatrix of American countries and their memberships.
Figure 4. TCA map of Western Hemisphere affinity network.
groups: northern countries (Venezuela, Bolivia, Peru and Ecuador) and southern countries (Brazil, Uruguay, Argentina, Paraguay and Chile). Furthermore, the contributions of the points Canada, the United States and NAFTA to the second axis are not substantial compared to CA. This shows the robustness of TCA, due to the robustness of the $d$ statistic following Definition 1.
It is well known that CA is very sensitive to some particularities of a data set; furthermore, how to identify and handle these is an open unresolved problem. However, for contingency tables, [12] enumerated three such particularities under the umbrella of sparse contingency tables: rare observations, zero-block structure and relatively high-valued cells. It is evident that this data set has three rare observations (NAFTA, CANADA and USA), which determine the 2nd dimension of CA. A row or a column category is considered rare if its marginal probability is quite small.
3.3. Maximal Interaction Two-Mode Clustering of Continuous Data
[26] discussed maximal interaction two-mode clustering of continuous data. By generalizing their objective function, we want to show that the results of this section can be considered a particular robust L1 variant of their approach. Let $X = (x_{ij})$ be a 2-way array for $i \in I$ and $j \in J$. As usual, we define, for instance, $\bar{x}_{i\cdot} = \sum_{j \in J} x_{ij}/|J|$ and $\bar{x}_{\cdot\cdot} = \sum_{i \in I}\sum_{j \in J} x_{ij}/(|I||J|)$. Let $E = (e_{ij})$ be the additive double-centered array, where

$e_{ij} = x_{ij} - \bar{x}_{i\cdot} - \bar{x}_{\cdot j} + \bar{x}_{\cdot\cdot}.$

In the jargon of statistics, the cell $e_{ij}$ represents the additive 2-way interaction of the cell $(i, j)$. The matrix $E$ is double-centered, and it satisfies Equations (6) through (10). Let $(I_1, \dots, I_r)$ be an r-partition of I and $(J_1, \dots, J_c)$ be a c-partition of J. We consider the following maximization of the overall interaction problem for $p = 1, 2$:

$\max \ f_p = \sum_{k=1}^{r}\sum_{l=1}^{c} |I_k|\,|J_l|\,|\bar{e}_{kl}|^p,$

where $|I_k|$ is the cardinality of the set $I_k$ and

$\bar{e}_{kl} = \sum_{i \in I_k}\sum_{j \in J_l} e_{ij} / (|I_k|\,|J_l|).$

When $p = 2$, maximizing $f_2$, named maximal overall interaction, is the criterion computed in [26]. When $p = 1$ and $r = c = 2$, maximizing $f_1$ yields $f_1 = 4\|E\|_{cut} = \lambda_1$ by Lemma 3, which is the criterion computed in TCA.
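The connection between the $p = 1$ objective over 2×2 partitions and the taxicab norm can be verified exhaustively on a toy double-centered matrix (our own illustrative data):

```python
# f_p = sum_{k,l} |I_k||J_l| |ebar_{kl}|^p over 2x2 partitions;
# for p = 1 its maximum equals lambda_1 = 4||E||_cut (Lemma 3).
from itertools import product

E = [[0.11, -0.04, -0.07],
     [-0.04, 0.11, -0.07],
     [-0.07, -0.07, 0.14]]          # toy double-centered matrix
I, J = range(3), range(3)

def f(p, rows, cols):
    # rows, cols: tuples of 0/1 labels giving 2-partitions of I and J
    total = 0.0
    for k in (0, 1):
        Ik = [i for i in I if rows[i] == k]
        for l in (0, 1):
            Jl = [j for j in J if cols[j] == l]
            if Ik and Jl:
                ebar = sum(E[i][j] for i in Ik for j in Jl) / (len(Ik) * len(Jl))
                total += len(Ik) * len(Jl) * abs(ebar) ** p
    return total

best_f1 = max(f(1, rows, cols)
              for rows in product([0, 1], repeat=3)
              for cols in product([0, 1], repeat=3))

lam = max(sum(u[i] * E[i][j] * v[j] for i in I for j in J)
          for u in product([-1, 1], repeat=3)
          for v in product([-1, 1], repeat=3))

assert abs(best_f1 - lam) < 1e-9    # max f_1 over 2x2 partitions = lambda_1
```

The maximizing 2×2 partition is precisely the sign partition of the first taxicab axes, as Lemma 3 predicts.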
4. Triple-Centered Arrays
To motivate our subject, we start with an example. Let $X = (x_{ijk})$ be a 3-way array for $i \in I$, $j \in J$ and $k \in K$. As usual, we define, for instance, $\bar{x}_{ij\cdot} = \sum_{k \in K} x_{ijk}/|K|$, $\bar{x}_{i\cdot\cdot} = \sum_{j \in J}\sum_{k \in K} x_{ijk}/(|J||K|)$ and $\bar{x}_{\cdot\cdot\cdot} = \sum_{i \in I}\sum_{j \in J}\sum_{k \in K} x_{ijk}/(|I||J||K|)$. Let $E = (e_{ijk})$ be the triple-centered array, where

$e_{ijk} = x_{ijk} - \bar{x}_{ij\cdot} - \bar{x}_{i\cdot k} - \bar{x}_{\cdot jk} + \bar{x}_{i\cdot\cdot} + \bar{x}_{\cdot j\cdot} + \bar{x}_{\cdot\cdot k} - \bar{x}_{\cdot\cdot\cdot}.$

In the jargon of statistics, the cell $e_{ijk}$ represents the additive 3-way interaction of the cell $(i, j, k)$. The tensor $E$ is triple-centered; that is,

$\sum_{i \in I} e_{ijk} = \sum_{j \in J} e_{ijk} = \sum_{k \in K} e_{ijk} = 0.$

A generalization of Lemma 3 is
Lemma 4: (8-equal parts property) The tensor norm

$\lambda_1 = \max\left\{\sum_{i \in I}\sum_{j \in J}\sum_{k \in K} u_i v_j w_k\, e_{ijk} : u_i = \pm 1,\ v_j = \pm 1,\ w_k = \pm 1\right\} = 8\|E\|_{cut},$

where $\|E\|_{cut} = \max_{A \subseteq I,\, B \subseteq J,\, C \subseteq K} \left|\sum_{i \in A}\sum_{j \in B}\sum_{k \in C} e_{ijk}\right|$.
The proof is similar to the proof of Lemma 3.
Lemma 4 can easily be generalized to higher-way arrays.
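Lemma 4 can also be checked by brute force on a small triple-centered array; the sketch below builds a toy 2×2×3 array (our own deterministic toy data), triple-centers it, and compares the tensor norm with eight times the cut-norm:

```python
# Check of Lemma 4 (8-equal parts): lambda_1 = 8 * ||E||_cut after triple-centering.
from itertools import product, combinations

nI, nJ, nK = 2, 2, 3
x = [[[float((i + 1) * (j + 2) * (k + 1) % 7) for k in range(nK)]
      for j in range(nJ)] for i in range(nI)]

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

m = mean(x[i][j][k] for i in range(nI) for j in range(nJ) for k in range(nK))
mij = [[mean(x[i][j][k] for k in range(nK)) for j in range(nJ)] for i in range(nI)]
mik = [[mean(x[i][j][k] for j in range(nJ)) for k in range(nK)] for i in range(nI)]
mjk = [[mean(x[i][j][k] for i in range(nI)) for k in range(nK)] for j in range(nJ)]
mi = [mean(x[i][j][k] for j in range(nJ) for k in range(nK)) for i in range(nI)]
mj = [mean(x[i][j][k] for i in range(nI) for k in range(nK)) for j in range(nJ)]
mk = [mean(x[i][j][k] for i in range(nI) for j in range(nJ)) for k in range(nK)]

# additive 3-way interaction (triple-centered array)
E = [[[x[i][j][k] - mij[i][j] - mik[i][k] - mjk[j][k] + mi[i] + mj[j] + mk[k] - m
       for k in range(nK)] for j in range(nJ)] for i in range(nI)]

lam = max(sum(u[i] * v[j] * w[k] * E[i][j][k]
              for i in range(nI) for j in range(nJ) for k in range(nK))
          for u in product([-1, 1], repeat=nI)
          for v in product([-1, 1], repeat=nJ)
          for w in product([-1, 1], repeat=nK))

cut = max(abs(sum(E[i][j][k] for i in A for j in B for k in C))
          for a in range(nI + 1) for A in combinations(range(nI), a)
          for b in range(nJ + 1) for B in combinations(range(nJ), b)
          for c in range(nK + 1) for C in combinations(range(nK), c))

assert abs(lam - 8 * cut) < 1e-9    # Lemma 4
```

For this toy array the maximizing subsets are singletons in each mode, and each of the eight sign-octants carries mass ±λ₁/8.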
5. Conclusions
This essay is an attempt to emphasize the following two points.
First, we showed the optimality and robustness of the mean absolute deviations about the mean, its interpretation, and its generalization to higher-way arrays. A key notion in describing its robustness is that the relative contribution of a point is bounded by 50%.
Second, within the framework of TCA, we showed that the following three identities

$\lambda_\alpha = \|a_\alpha\|_1 = \|b_\alpha\|_1, \quad \lambda_\alpha = \max\{u^\top Q_\alpha v : u_i = \pm 1,\ v_j = \pm 1\}, \quad \lambda_\alpha = 4\|Q_\alpha\|_{cut}$

reveal three different but related aspects of TCA: 1) $\lambda_\alpha$, computed in (17) and (18), by (19) represents the mean absolute deviations about the mean statistic; 2) the taxicab norm, via (15) and (16), shows that uniform weights are assigned to the columns and the rows; 3) the cut norm shows that the computation of each principal dimension of TCA corresponds to a balanced 2-blocks seriation, with equality of the cut norm in the 4 associated blocks.
A list of the principal variables used is provided in Appendix B.
Acknowledgements
We thank the Editor and the referee for their comments. Research of V. Choulakian is funded by the National Science and Engineering Research Council of Canada grant RGPIN-2017-05092. This support is greatly appreciated. The authors thank William Alexander Digout for help in computations.
Appendix A: List of Western Hemisphere Organizations
1) Association of Caribbean States (ACS): Trade group sponsored by the Caribbean Community and Common Market (CARICOM).
2) Latin American Integration Association (ALADI): Free trade organization.
3) Amazon Pact: Promotes development of Amazonian territories.
4) Andean Pact: Promotes development of members through economic and social integration.
5) Caribbean Community and Common Market (CARICOM): Caribbean trade organization; promotes economic development of members.
6) Group of Latin American and Caribbean Sugar Exporting Countries (GEPLACEA): Sugar-producing and exporting countries.
7) Group of Rio: Organization for joint political action.
8) Group of Three (G-3): Trade organization.
9) Inter-American Development Bank (IDB): Promotes development of member nations.
10) South American Common Market (MERCOSUR): Increases economic cooperation in the region.
11) North American Free Trade Agreement (NAFTA): Free trade organization.
12) Organization of American States (OAS): Promotes peace, security, economic, and social development in the Western Hemisphere.
13) Central American Parliament (PARLACÉN): Works for the political integration of Central America.
14) San José Group: Promotes regional economic integration.
15) Latin American Economic System (SELA): Promotes economic and social development of member nations.
Appendix B: A List of Principal Variables Used
$d$: mean absolute deviations about the mean of a sample
LAD: mean absolute deviations of a sample about the median
$s^2$: variance of a sample
$\|y\|_{cut}$: cut norm of a centered sample $y$, where $\|y\|_{cut} = \max_{S \subseteq I} \left|\sum_{i \in S} y_i\right|$
$\lambda_1$: taxicab operator norm of a double-centered matrix
$\|Q\|_{cut}$: cut norm of a double-centered matrix $Q$, where $\|Q\|_{cut} = \max_{A \subseteq I,\, B \subseteq J} \left|\sum_{i \in A}\sum_{j \in B} q_{ij}\right|$
$\lambda_\alpha$: dispersion value of the αth taxicab principal axis
$f_\alpha(i) = a_\alpha(i)/p_{i\cdot}$: taxicab principal factor score of row i on the αth principal axis
$g_\alpha(j) = b_\alpha(j)/p_{\cdot j}$: taxicab principal factor score of column j on the αth principal axis