Proposal and Pilot Study: A Generalization of the W or W' Statistic for Multivariate Normality

José Moral-De La Rubia

School of Psychology, Universidad Autónoma de Nuevo León, Nuevo León, Monterrey, México.

**DOI:** 10.4236/ojs.2023.131008


The aim of this paper is to present a generalization of the Shapiro-Wilk
W-test or Shapiro-Francia W'-test for application to two or more variables. It consists of
calculating all the unweighted linear combinations of the variables and their
W- or W'-statistics with Royston’s log-transformation and standardization, *z*_{ln(1−W)} or *z*_{ln(1−W')}. Because the probability of *z*_{ln(1−W)} or *z*_{ln(1−W')} is calculated toward the right tail, negative values are truncated to 0 before computing
their sum of squares. Independence in the sequence of these half-normally
distributed values is required for the test statistic to follow a chi-square
distribution. This assumption is checked using the robust Ljung-Box test. One
degree of freedom is lost for each cancelled value. Having defined the new test with
its two variants (Q-test or Q'-test),
50 random samples with 4 variables and 20 participants were generated, 20%
following a multivariate normal distribution and 80% deviating from this
distribution. The new test was compared with Mardia’s, runs, and Royston’s
tests. Central tendency differences in type II error and statistical power were
tested using Friedman’s test, with pairwise comparisons by Wilcoxon’s
test. Differences in the frequency of successes in statistical decision making
were compared using Cochran’s Q test, with pairwise comparisons by
McNemar’s test. Sensitivity, specificity, and efficiency proportions were
compared using McNemar’s Z test. The 50 generated samples were classified
into five ordered categories of deviation from multivariate normality; the
correlation between this variable and the p-value of each test was calculated using
Spearman’s coefficient, and these correlations were compared. Family-wise
error rate corrections were applied. The new test and Royston’s test were
the best choices, with a very slight advantage of the Q-test over the Q'-test.
Based on these promising results, further study and use of this new sensitive,
specific and effective test are suggested.


Rubia, J. (2023) Proposal and Pilot Study: A Generalization of the W or W' Statistic for Multivariate Normality. *Open Journal of Statistics*, **13**, 119-169. doi: 10.4236/ojs.2023.131008.

1. Introduction

The present article aims to: 1) present a new multivariate normality test as a generalization, through a sum of squares, of the Shapiro-Wilk W statistic [1] or the Shapiro-Francia W' statistic [2] with Royston’s log-transformations and standardizations [3] [4]; 2) compare the central tendency of type II error or statistical power; 3) compare the frequency of successes and errors in deciding on the null hypothesis; 4) compare the sensitivity (proportion of detection of multivariate normality cases), specificity (proportion of rejection of non-cases), and efficiency (proportion of correct classifications); and 5) compare the correlation between the critical level or probability value and the deviation from multivariate normality (a variable with five ordered categories defined when classifying the generated samples). The two versions of the new test are compared with Mardia’s K^{2}-test based on multivariate skewness and kurtosis [5] [6] [7], the Friedman-Rafsky multivariate runs test [8] as applied to multivariate normality testing by Smith and Jain [9], and Royston’s H-test [10], computed either from the W-statistics [3] or the W'-statistics [4]. The proposed test is very simple in both rationale and application, and very easy to learn and teach. It can be run in Excel and could be included in a multivariate normality library of R, making it accessible to undergraduate students of the social sciences, biosciences, and other empirical sciences with a stronger mathematical emphasis.

Why a new statistical test when several already exist for this purpose? Because it constitutes a good option despite the simplicity of its rationale. It is a Q-type test, a sum of variables with a standard half-normal distribution (a standard normal truncated from quantile 0.5 to quantile 1), that assumes independence in the sequence of summed values in order to use the chi-square approximation when computing the critical level or probability value [11] [12] [13]. It checks this independence assumption on a specified sequence, which allows estimating the autocorrelation in case of non-compliance for use in the calculation of bootstrap-based critical values.

2. The Proposed Q Test from the W Statistics or Q' Test from the W' Statistics

The test is based on the lemma or proven proposition that, if a set of *k* variables comes from a multivariate normal distribution, any linear combination among the variables follows a univariate normal distribution [14] . The lemma that the sum of squares of *k* independent variables with standard normal distribution follows a chi-square distribution with *k* degrees of freedom was initially considered [15] . In addition, a random sample of size *k* is defined as a succession of *k* random, independent and identically distributed variables [11] .

2.1. Formulation of Statistical Hypotheses

Let there be a random sample of *n* participants or elements to whom *k* measurements have been made with the same variable *X* or with *k* different variables. It is assumed that the variables are continuous quantitative or admit a continuous generalization so that they can fit a normal distribution model.

The null hypothesis (H_{0}) states that the random vector
$\stackrel{\to}{x}$ composed of *k* random variables follows a multivariate normal distribution of unknown parameters:
$\stackrel{\to}{\mu}$ (vector of population means) and Σ (population covariance matrix).

${\text{H}}_{0}:\stackrel{\to}{x}~N\left(\stackrel{\to}{\mu},\Sigma \right)$

$\stackrel{\to}{x},\stackrel{\to}{\mu}\in {\mathbb{R}}^{k}$ and $\Sigma \in {\mathbb{R}}^{k\times k}$

The alternative hypothesis (H_{1}) posits that the random vector
$\stackrel{\to}{x}$ does not follow a multivariate normal distribution of unknown parameters
$\stackrel{\to}{\mu}$ and Σ.

${\text{H}}_{1}:\stackrel{\to}{x}\nsim N\left(\stackrel{\to}{\mu},\Sigma \right)$

2.2. Assumptions

• *n* independent samples of *k* tuples, that is, a random sample of *n* participants with scores on *k* continuous quantitative variables that may be correlated or independent.

• Large sample size *n*, at least 20 participants.

• Serial independence between the 2^{k} − 1 values of the log-transformed, standardized and truncated W or W' statistics, considered as a succession of identically distributed random variables. The sequence is specified by the subscript of the variable (*l* = 1, 2, …, 2^{k} − 1).

2.3. Test Statistic

To obtain the test statistic Q (from the Shapiro-Wilk W statistics) or Q' (from the Shapiro-Francia W' statistics), the following steps are followed:

1) All unweighted linear combinations (by simple summation) among the *k* variables are obtained and the variables of each combination are summed, which results in *k* combinations of one variable, *k*(*k* − 1)/2 sums of combinations of two variables, _{k}C_{3} sums of combinations of three variables, and so on, up to a single sum of all *k* variables, for a total of 2^{k} − 1 combinations:

$\underset{j=1}{\overset{k}{{\displaystyle \sum}}}\left(\begin{array}{c}k\\ j\end{array}\right)=\left(\begin{array}{c}k\\ 1\end{array}\right)+\left(\begin{array}{c}k\\ 2\end{array}\right)+\cdots +\left(\begin{array}{c}k\\ k\end{array}\right)={2}^{k}-1$
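As an illustration of step 1, the enumeration of the unweighted linear combinations can be sketched in Python (the data matrix `x`, a list of *n* rows with *k* values each, is a hypothetical input):

```python
from itertools import combinations

def linear_combinations(x):
    """Return the 2**k - 1 unweighted linear combinations (simple sums)
    of the k columns of x, where x is a list of n rows of k values."""
    k = len(x[0])
    combos = []
    for j in range(1, k + 1):                  # subsets of 1, 2, ..., k variables
        for cols in combinations(range(k), j):
            combos.append([sum(row[c] for c in cols) for row in x])
    return combos

# k = 4 variables and n = 3 rows give 2**4 - 1 = 15 combinations
data = [[1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0], [3.0, 4.0, 1.0, 2.0]]
sums = linear_combinations(data)
print(len(sums))   # 15
```

Note the generative order: the *k* single variables first, then the sums of two variables, and so on, up to the sum of all *k* variables; this is the order used later when checking serial independence.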

2) A first alternative is to calculate the Q-statistic that starts from the standardized values of the log-transformed W-statistics [3] of each of the 2^{k} − 1 sums of linear combinations of the *k* variables:

${W}_{l}={r}_{{x}_{\left(i\right)l},a}^{2}=\frac{{\left[{{\displaystyle \sum}}_{i=1}^{n}\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right)\left({a}_{i}-\stackrel{\xaf}{a}\right)\right]}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}{\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right)}^{2}{{\displaystyle \sum}}_{i=1}^{n}{\left({a}_{i}-\stackrel{\xaf}{a}\right)}^{2}}=\frac{{\left[{{\displaystyle \sum}}_{i=1}^{n}\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right){a}_{i}\right]}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}{\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right)}^{2}}$

$i=1,2,\cdots ,n$ and $l=1,2,\cdots ,{2}^{k}-1$

*x*_{(i)l} = empirical quantiles or scores of the variable *X*_{l} (sorted in ascending order):

x_{(i)l}: *x*_{(1)l} ≤ *x*_{(2)l} ≤ … ≤ *x*_{(n)l}, with ranks (*i*) = 1, 2, …, *n*

${\stackrel{\xaf}{x}}_{l}={\displaystyle {\sum}_{i=1}^{n}{x}_{li}}/n$ = the arithmetic mean of variable *X _{l}*.

The *n* expected values under a standard normal distribution, denoted by *m*_{i}, are common for the 2^{k} − 1 linear combinations and are obtained with Blom’s approximation:

${m}_{i}={\Phi}^{-1}\left(\frac{\left(i\right)-a}{n+1-a-b}\right)={\Phi}^{-1}\left(\frac{\left(i\right)-0.375}{n+0.25}\right)$

A value of 3/8 is given to the *a* and *b* values. It is based on the fact that the *i* order statistic of a variable with a standard uniform distribution follows a beta distribution of parameters: *α* = *i* and *β* = *n* + 1 − *i*, whose expected value is *α*/(*α* + *β*) = *i*/(*n* + 1). The values *a* and *b* can vary from 0 to 1 and 3/8 achieves the estimates with the best approximation to the quantiles of a normal distribution for many different sample sizes [16] .

$m={\displaystyle {\sum}_{i=1}^{n}{m}_{i}^{2}}$

$u=1/\sqrt{n}$

$\begin{array}{c}{a}_{n}={m}_{n}/\sqrt{m}+0.221157u-0.147981{u}^{2}-2.071190{u}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+4.434685{u}^{4}-2.706056{u}^{5}\end{array}$

$\begin{array}{c}{a}_{n-1}={m}_{n-1}/\sqrt{m}+0.042981u-0.293762{u}^{2}-1.752461{u}^{3}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+5.682633{u}^{4}-3.582663{u}^{5}\end{array}$

${a}_{1}=-{a}_{n}$

${a}_{2}=-{a}_{n-1}$

$\u03f5=\frac{m-2{m}_{n}^{2}-2{m}_{n-1}^{2}}{1-2{a}_{n}^{2}-2{a}_{n-1}^{2}}$

${a}_{i}={m}_{i}/\sqrt{\u03f5};\text{\hspace{0.17em}}i=3,4,\cdots ,n-2$

${\displaystyle \underset{i=1}{\overset{n}{\sum}}{a}_{i}}=0;\text{\hspace{0.17em}}\stackrel{\xaf}{a}={\displaystyle \underset{i=1}{\overset{n}{\sum}}{a}_{i}/n}=0;\text{\hspace{0.17em}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{a}_{i}^{2}}=1;\text{\hspace{0.17em}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\left({a}_{i}-\stackrel{\xaf}{a}\right)}^{2}}=1$
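The coefficient formulas above can be implemented directly; the following sketch (standard-library Python, assuming *n* ≥ 5 so that the "middle" indices 3, …, *n* − 2 exist) uses `statistics.NormalDist` for Φ^{−1}:

```python
import math
from statistics import NormalDist

def royston_coefficients(n):
    """Approximate Shapiro-Wilk coefficients a_i (Royston's 1992 approximation)."""
    ppf = NormalDist().inv_cdf
    # Blom scores m_i = Phi^{-1}((i - 0.375) / (n + 0.25))
    m = [ppf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    msum = sum(mi * mi for mi in m)
    u = 1 / math.sqrt(n)
    a = [0.0] * n
    # Polynomial corrections for the two extreme coefficients
    a[n - 1] = (m[n - 1] / math.sqrt(msum) + 0.221157 * u - 0.147981 * u**2
                - 2.071190 * u**3 + 4.434685 * u**4 - 2.706056 * u**5)
    a[n - 2] = (m[n - 2] / math.sqrt(msum) + 0.042981 * u - 0.293762 * u**2
                - 1.752461 * u**3 + 5.682633 * u**4 - 3.582663 * u**5)
    a[0], a[1] = -a[n - 1], -a[n - 2]          # antisymmetric counterparts
    eps = ((msum - 2 * m[n - 1]**2 - 2 * m[n - 2]**2)
           / (1 - 2 * a[n - 1]**2 - 2 * a[n - 2]**2))
    for i in range(2, n - 2):                  # middle coefficients a_3..a_{n-2}
        a[i] = m[i] / math.sqrt(eps)
    return a

a = royston_coefficients(20)
```

By construction, the coefficients sum to 0 and their squares sum to 1, matching the identities stated above.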

The standardized value of the logarithmically transformed statistic for sample sizes from 12 to 5000 (*n* ≥ 20) is obtained using the following formulas:

${Z}_{\mathrm{ln}\left(1-{W}_{l}\right)}=\frac{\mathrm{ln}\left(1-{W}_{l}\right)-{\mu}_{\mathrm{ln}\left(1-{W}_{l}\right)}}{{\sigma}_{\mathrm{ln}\left(1-{W}_{l}\right)}}~N\left(0,1\right)$

${\mu}_{\mathrm{ln}\left(1-{W}_{l}\right)}=-1.5861-0.31082\mathrm{ln}\left(n\right)-0.083751{\left[\mathrm{ln}\left(n\right)\right]}^{2}+0.0038915{\left[\mathrm{ln}\left(n\right)\right]}^{3}$

${\sigma}_{\mathrm{ln}\left(1-{W}_{l}\right)}={\text{e}}^{-0.4803-0.082676\mathrm{ln}\left(n\right)+0.0030302{\left[\mathrm{ln}\left(n\right)\right]}^{2}}$

If
$P\left(Z\ge {z}_{\mathrm{ln}\left(1-{W}_{l}\right)}\right)=1-P\left(Z<{z}_{\mathrm{ln}\left(1-{W}_{l}\right)}\right)\ge \alpha $ , H_{0} is accepted

If
$P\left(Z\ge {z}_{\mathrm{ln}\left(1-{W}_{l}\right)}\right)=1-P\left(Z<{z}_{\mathrm{ln}\left(1-{W}_{l}\right)}\right)<\alpha $ , H_{0} is rejected.
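This standardization and the right-tail decision can be sketched as follows (standard-library Python; the W value in the usage line is the one obtained later in the worked example of Section 2.8):

```python
import math
from statistics import NormalDist

def royston_z_w(w, n):
    """Royston's (1992) normalization of the Shapiro-Wilk W statistic."""
    ln_n = math.log(n)
    mu = (-1.5861 - 0.31082 * ln_n - 0.083751 * ln_n**2
          + 0.0038915 * ln_n**3)               # mean of ln(1 - W)
    sigma = math.exp(-0.4803 - 0.082676 * ln_n
                     + 0.0030302 * ln_n**2)    # SD of ln(1 - W)
    z = (math.log(1 - w) - mu) / sigma
    p = 1 - NormalDist().cdf(z)                # right-tail probability
    return z, p

z, p = royston_z_w(0.9387, 20)   # z close to 0.75, so H0 holds at alpha = 0.05
```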

The other alternative is to calculate the Q' statistic using the standardized values of the logarithmically transformed Shapiro-Francia W' statistics [4] of the 2^{k} − 1 sums of linear combinations among the *k* variables:

${{W}^{\prime}}_{l}={r}_{{x}_{\left(i\right)l}m}^{2}=\frac{{\left[{{\displaystyle \sum}}_{i=1}^{n}\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right)\left({m}_{i}-\stackrel{\xaf}{m}\right)\right]}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}{\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right)}^{2}{{\displaystyle \sum}}_{i=1}^{n}{\left({m}_{i}-\stackrel{\xaf}{m}\right)}^{2}}=\frac{{\left[{{\displaystyle \sum}}_{i=1}^{n}\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right){m}_{i}\right]}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}{\left({x}_{\left(i\right)l}-{\stackrel{\xaf}{x}}_{l}\right)}^{2}{{\displaystyle \sum}}_{i=1}^{n}{m}_{i}^{2}}$

*m*_{i} = theoretical quantiles, which are the same for the 2^{k} − 1 linear combinations:

${m}_{i}={\Phi}^{-1}\left(\frac{\left(i\right)-0.375}{n+0.25}\right)$

${\displaystyle \underset{i=1}{\overset{n}{\sum}}{m}_{i}}=0;\text{\hspace{0.17em}}\stackrel{\xaf}{m}={\displaystyle \underset{i=1}{\overset{n}{\sum}}{m}_{i}/n}=0$

The standardization of the logarithmically transformed statistic $\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)$ requires calculating its mean and standard deviation, which depend on the *u* and *v* values, respectively. The *u* and *v* values depend only on the sample size *n*, so they are common to the 2^{k} − 1 transformed statistics, as are the means and standard deviations. This standardization was developed for sample sizes from 5 to 5000.

$u=\mathrm{ln}\left[\mathrm{ln}\left(n\right)\right]-\mathrm{ln}\left(n\right)$

${\mu}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}=1.0521u-1.2725$

$v=\mathrm{ln}\left[\mathrm{ln}\left(n\right)\right]+2/\mathrm{ln}\left(n\right)$

${\sigma}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}=-0.26758v+1.0308$

The standardized value of the logarithmic transformation of W' statistic follows a standard normal distribution [4] .

${Z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}=\frac{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)-{\mu}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}}{{\sigma}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}}~N\left(0,1\right)$

If
$P\left(Z\ge {z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}\right)=1-P\left(Z<{z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}\right)\ge \alpha $ , H_{0}:
${X}_{l}~N\left({\mu}_{{X}_{l}},{\sigma}_{{X}_{l}}^{2}\right)$ is accepted.

If
$P\left(Z\ge {z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}\right)=1-P\left(Z<{z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}\right)<\alpha $ , H_{0} is rejected.
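A parallel sketch for the W' route (standard-library Python). Note that the slope of the mean polynomial is written here as 1.0521, the value that reproduces the worked example of Section 2.8 (μ ≈ −3.26996 for *n* = 20); treating that constant as correct is an assumption based on this back-calculation:

```python
import math
from statistics import NormalDist

def royston_z_wprime(wp, n):
    """Royston's (1993) normalization of the Shapiro-Francia W' statistic."""
    ln_n = math.log(n)
    u = math.log(ln_n) - ln_n
    v = math.log(ln_n) + 2 / ln_n
    mu = 1.0521 * u - 1.2725          # mean of ln(1 - W'); 1.0521 assumed (see lead-in)
    sigma = -0.26758 * v + 1.0308     # SD of ln(1 - W')
    z = (math.log(1 - wp) - mu) / sigma
    p = 1 - NormalDist().cdf(z)       # right-tail probability
    return z, p

z, p = royston_z_wprime(0.9393, 20)  # W' value from the worked example below
```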

3) Finally, the sum of the squares of the 2^{k} − 1 z-statistics is calculated, thus obtaining the value of the test statistic Q or Q'. Since the probability is calculated one-sided toward the right tail, a negative value implies a good fit to normality; the more negative it is, the better the fit. Therefore, any negative value must be truncated to 0; otherwise, the value of Q or Q' would be wrongly inflated. The number of cancelled values is denoted by *a*, and one degree of freedom is lost for each of them.

${z}_{l}=\{\begin{array}{ll}0\hfill & {z}_{\mathrm{ln}\left(1-{W}_{l}\right)}<0\hfill \\ {z}_{\mathrm{ln}\left(1-{W}_{l}\right)}\hfill & {z}_{\mathrm{ln}\left(1-{W}_{l}\right)}\ge 0\hfill \end{array}$ ${{z}^{\prime}}_{l}=\{\begin{array}{ll}0\hfill & {z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}<0\hfill \\ {z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}\hfill & {z}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{l}\right)}\ge 0\hfill \end{array}$

$Q={\displaystyle {\sum}_{l=1}^{{2}^{k}-1}{z}_{l}^{2}}$ ${Q}^{\prime}={\displaystyle {\sum}_{l=1}^{{2}^{k}-1}{\left({{z}^{\prime}}_{l}\right)}^{2}}$
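Step 3, the truncation and sum of squares, can be sketched as (illustrative Python; the `z_values` list is a hypothetical set of standardized statistics):

```python
def q_statistic(z_values):
    """Truncate negative standardized values to 0, then sum the squares.
    Returns (Q, a), where a is the number of cancelled (truncated) values;
    one degree of freedom is lost for each of them."""
    a = sum(1 for z in z_values if z < 0)
    q = sum(max(z, 0.0) ** 2 for z in z_values)
    return q, a

q, a = q_statistic([0.75, -0.31, 1.42, -0.05, 0.88])
# degrees of freedom for the chi-square approximation: len(z_values) - a
```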

2.4. Sampling Distribution

Initially, there were 2^{k} − 1 identically distributed (with standard normal distribution) random variables with samples of size 1 (an inferential reconceptualization of the sample values as random variables). After truncating the negative values to 0, the non-cancelled values follow a standard half-normal distribution.

If *X* follows a half-normal distribution, the ratio between the square of the variable and *σ*^{2} follows a chi-square distribution with one degree of freedom:
${X}^{2}/{\sigma}^{2}~{\chi}_{\left(1\right)}^{2}$ . In turn, the sum of 2^{k} − 1 independent variables with chi-square distribution with one degree of freedom follows a chi-square distribution with 2^{k} − 1 degrees of freedom [15].

${}_{p}\chi {}_{1}^{2}\equiv H{Z}_{p}^{2}$ ${}_{p}\chi {}_{1}^{2}\equiv {Z}_{{p}^{\prime}=0.5+p/2}^{2}$

Table 1. Correspondence among the quantiles of a chi-square distribution with one degree of freedom, a squared standard half-normal distribution, and squared standard normal distribution.

*Note*: *p* = quantile order,
${}_{p}\chi {}_{\left(1\right)}^{2}$ = the pth quantile of a chi-square distribution with one degree of freedom, *HZ*_{p} = the pth quantile of a standard half-normal distribution,
$H{Z}_{p}^{2}$ = the squared pth quantile of a standard half-normal distribution, and ${Z}_{{p}^{\prime}=0.5+p/2}^{2}$ = the squared quantile of order *p*' = 0.5 + *p*/2 of a standard normal distribution.

The test statistic Q or Q' is the sum of the squares of 2^{k} − 1 half-normally distributed variables, that is, the sum of 2^{k} − 1 variables each following a chi-square distribution with one degree of freedom. If these 2^{k} − 1 values are serially independent, the statistic follows a chi-square distribution with 2^{k} − 1 − *a* degrees of freedom, where one degree of freedom is subtracted for each of the *a* cancelled values:

$Q={\displaystyle {\sum}_{l=1}^{{2}^{k}-1}{z}_{l}^{2}}~{\chi}_{{2}^{k}-1-a}^{2}$ or ${Q}^{\prime}={\displaystyle {\sum}_{l=1}^{{2}^{k}-1}{\left({{z}^{\prime}}_{l}\right)}^{2}}~{\chi}_{{2}^{k}-1-a}^{2}$

In case the independence assumption is not met, it would be a generalized chi-square distribution [19] .

In these cases, critical values can be obtained using a repetitive sampling method with replacement (Monte Carlo simulation). For the simulation, 2^{k} − 1 − *a* variables with standard half-normal distribution and the estimated correlation structure are generated, for example:

*X*_{1}, *X*_{2}, and *X*_{3} ~ HN (*σ* = 1).

Correlation *X*_{1} *X*_{2} *X*_{3}

*X*_{1} 1 −0.7 0

*X*_{2} −0.7 1 −0.7

*X*_{3} 0 −0.7 1

Table 2 shows the quantiles for the case of three half-normally distributed variables, as well as the quantiles of the sampling distribution of the Q-statistic when the variables are independent or when they are correlated (with a first-order lag and different positive or negative autocorrelation values). Compared to the quantiles of a chi-square distribution with three degrees of freedom, the bootstrap-based quantiles with independent variables are slightly higher. However, they increase considerably as the correlations become more positive and decrease considerably as they become more negative. These could be cases of bivariate normality testing, since three simple linear combinations are generated with two variables (*l* = 2^{2} − 1 = 3).

Table 3 shows the bootstrap-based quantiles for seven independent or dependent (with an autocorrelation of −0.5, −0.25, 0.25 or 0.5 for a first-order lag) half-normally distributed variables and the quantiles corresponding to a chi-square distribution with seven degrees of freedom. It is also observed that the bootstrap-based quantiles with independent variables are slightly higher than those of the chi-square distribution with seven degrees of freedom. However, they increase greatly when the variables are positively correlated and decrease greatly when they are negatively correlated. These could be cases of the multivariate normality test with three variables, since seven simple linear combinations are generated (*l* = 2^{3} − 1 = 7).
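Under serial independence, the null distribution of Q is a sum of squared standard half-normal variables, i.e., a chi-square with as many degrees of freedom as summed values. A minimal Monte Carlo check for the independent case with seven variables (plain standard-library Python, not the Latin-hypercube scheme used for Tables 2 and 3):

```python
import random

random.seed(1)
n_vars, n_boot = 7, 100_000          # seven combinations, as when k = 3
samples = []
for _ in range(n_boot):
    # |N(0, 1)| is standard half-normal; square it and sum over the variables
    samples.append(sum(abs(random.gauss(0.0, 1.0)) ** 2 for _ in range(n_vars)))

samples.sort()
mean_q = sum(samples) / n_boot       # close to 7, the mean of chi-square(7)
q95 = samples[int(0.95 * n_boot)]    # close to 14.07, the 0.95 quantile of chi-square(7)
```

Injecting a first-order autocorrelation into the half-normal draws, as in the tables above, shifts these quantiles up (positive correlation) or down (negative correlation).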

The test of the assumption of serial independence is performed after truncating the standardized values of the log-transformed W or W' statistics, and is therefore performed on the generative sequence of the 2^{k} − 1 values *z*_{l} or *z*'_{l}.

Table 2. Bootstrap-based quantiles (Monte Carlo simulation) and quantiles of chi-square distribution with 3 degrees of freedom.

*Note*: *p* = quantile order. Three truncated normal distributions (between the quantile of 0.50 order and the quantile of 0.9999 order), *i.e.*, three standard half-normal distributions (*σ*^{2} = 1), Ind Q = bootstrap-based quantiles for the test statistic or sum of squares of the three half-normally distributed variables when they are independent, and Rel Q when they are correlated (a first-order lag),
${\chi}_{\left(3\right)}^{2}$ = the pth quantiles of a chi-squared distribution with three degrees of freedom. In each simulation, the number of bootstrap samples was 1000. The sampling method was Latin hypercubes (number of sections = 500), and the correlation type was Spearman. Monte Carlo simulations were performed with XLSTAT version 24 [20] .

Table 3. Bootstrap-based quantiles and quantiles of chi-square distribution with 7 degrees of freedom.

*Note*: *p* = quantile order. Seven truncated normal distributions (between the quantile of 0.50 order and the quantile of 0.9999 order), *i.e.*, seven standard half-normal distributions (*σ*^{2} = 1), Ind Q = bootstrap-based quantiles for the test statistic or sum of squares of the seven half-normally distributed variables when they are independent, and Rel Q when they are correlated (a first-order lag),
${\chi}_{\left(7\right)}^{2}$ = the pth quantiles of a chi-squared distribution with seven degrees of freedom. In each simulation, the number of bootstrap samples was 1000. The sampling method was Latin hypercubes (number of sections = 500), and the correlation type was Spearman. Monte Carlo simulations were performed with XLSTAT version 24 [20].

This is a cumulative sequence (the *k* individual variables *z*_{1}, …, *z*_{k}; then the sums of two variables; and so on, up to the sum of all *k* variables).

Usually, the assumption of random independence within a sequence of data is tested with a first-order lag, as does the Wald-Wolfowitz nonparametric test [20] . Another more comprehensive option to check serial independence is to specify lags from 1 to *h* and test for significance using the Ljung-Box Q-test [22] , as well as the correlogram [23] .

When considering that the autocorrelation values are required for the simulation, it is recommended to use the Ljung-Box Q-test; all the more so since it is the most powerful test of serial dependence in comparative studies [24]. This test assumes bivariate normality for each autocorrelation, so its robust variant can be used, which consists of transforming the variable into ranks, using average ranks in the case of ties [25] [26]. Another option is to use the series without truncation. The maximum number of lags (*h*) for the Ljung-Box test can be estimated using the Hyndman-Athanasopoulos rule for nonstationary series [27]: *h* = min(10, *n*/5), where *n* = 2^{k} − 1. Another option would be Schwert’s rule: *h* = 12 × (*n*/100)^{1/4}, rounded to an integer.
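The Ljung-Box statistic and the rank transform for its robust variant can be sketched in a few lines (illustrative standard-library Python; average ranks are used for ties, as described above):

```python
def acf(x, lag):
    """Lag-`lag` sample autocorrelation."""
    n = len(x)
    xbar = sum(x) / n
    denom = sum((v - xbar) ** 2 for v in x)
    num = sum((x[i] - xbar) * (x[i - lag] - xbar) for i in range(lag, n))
    return num / denom

def ljung_box(x, h):
    """Ljung-Box Q statistic for lags 1..h (compare with chi-square, h df)."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, lag) ** 2 / (n - lag) for lag in range(1, h + 1))

def average_ranks(x):
    """Average ranks with ties sharing their mean rank (ranks are 1-based)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for idx in order[i:j + 1]:
            ranks[idx] = mean_rank
        i = j + 1
    return ranks
```

The robust variant is `ljung_box(average_ranks(z_sequence), h)`, compared against the chi-square quantile with *h* degrees of freedom.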

In case of significance, the analysis is repeated on the simplified series (without zeros) using the ordinary Ljung-Box test (without transforming to ranks). If this second test is significant, the correlogram is used to identify the significantly non-zero autocorrelations. If the significance found in the full sequence is not confirmed in the simplified sequence (the one used in the simulation), there is a situation of ambiguity, which is resolved in favor of non-significance. Because the bootstrap-based quantiles drop considerably with medium or high negative correlations and rise considerably with medium or high positive correlations relative to independent samples, it is not recommended to consider only the reduced sequence (without zeros).

2.5. Statistical Decision with an Alpha Significance Level

To make the statistical decision, once the assumption of independence in the sequence of *z*_{l} values has been tested, the critical level or probability value of the test statistic conditional on the null hypothesis of multivariate normality is calculated from the observed value to infinity (right tail) in a chi-square distribution with 2^{k} − 1 − *a* degrees of freedom. If *p* ≥ *α*, H_{0} is accepted, and if *p* < *α*, H_{0} is rejected.

If the assumption does not hold, a simulation would be used to obtain the quantiles or critical values (percentiles) by generating at least 1000 bootstrap random samples. The sampling distribution of the sum of squares of the 2^{k} − 1 − *a* half-normally distributed values is simulated taking into account the estimated autocorrelation structure.

2.6. Effect Size

The effect size can be calculated using the squared eta coefficient (*η*^{2}) or its square root which is the eta coefficient (*η*). Both vary from 0 (no effect or independence) to 1 (total effect or dependence). Values of *η*^{2} less than 0.01 are usually considered trivial. Cohen [30] suggests interpreting *η*^{2} values between 0.01 and 0.059 as a small effect size, between 0.06 and 0.139 medium, and greater than or equal to 0.14 large for analysis of variance. For multiple linear regression, the cut-off points suggested by Cohen [30] are: 0.02 (small), 0.13 (medium), and 0.26 (large). A larger effect implies a larger deviation from the multivariate normality model in the test being proposed.

${\eta}^{2}=\frac{{Q}^{\prime}}{n\times df}=\frac{{Q}^{\prime}}{n\left({2}^{k}-1-a\right)}$ or ${\eta}^{2}=\frac{Q}{n\left({2}^{k}-1-a\right)}$

2.7. A Posteriori Type II Error and Statistical Power

Type II error (*β*) and statistical power (*ϕ* = 1 − *β*) are calculated using the cumulative distribution function of the noncentral chi-square distribution (*NCχ*^{2}). Its degrees of freedom are *ν* = 2^{k} − 1 − *a*, and its noncentrality parameter is *λ* = *η*^{2} × *n* × (2^{k} − 1 − *a*) = *q*.

$\beta =NC{\chi}_{\left(\upsilon ={2}^{k}-1-a,\lambda ={\eta}^{2}\times n\times \left({2}^{k}-1-a\right)=q\right)}^{2}\left({}_{1-\alpha}\chi {}_{{2}^{k}-1-a}^{2}\right)$

$\varphi =1-\beta $

*β* = *P*(hold H_{0}|H_{1} true) = the type II error or false negative error = the probability of holding the null hypothesis conditional on a true alternative hypothesis. When the null hypothesis is rejected due to a probability value less than the significance level, *p* < *α*, the probability *β* should be less than 0.5 and preferably equal to or less than 0.2, evidencing low probability of a mistaken rejection. When the null hypothesis holds due to a probability value greater than or equal to the significance level *α*, *p* ≥ *α*, the probability *β* should be greater than 0.5, indicating the low probability of the alternative hypothesis.

*ϕ* = *P*(reject H_{0}|H_{1} true) = statistical power or success in rejecting the null hypothesis = probability of rejecting the null hypothesis conditional on a true alternative hypothesis. When the null hypothesis is rejected due to a probability value less than the significance level, *p* < *α*, the statistical power should be greater than 0.5 and preferably equal to or greater than 0.8 (strong power) or 0.9 (very strong power). When the null hypothesis holds due to a probability value greater than or equal to the significance level, *p* ≥ *α*, the statistical power should be less than 0.5, revealing the low probability of the alternative hypothesis.

*α* = *P*(reject H_{0}|H_{1} false) = type I error or false positive error, also called the significance level = the probability of rejecting the null hypothesis conditional on a false alternative hypothesis and hence a true null hypothesis. It is fixed a priori, usually at 0.05. With small samples, it can be raised to 0.1, and with very large samples it can be lowered to 0.01.

1 − *α* = *P*(hold H_{0}|H_{1} false) = success in holding the null hypothesis, or confidence level = the probability of holding the null hypothesis conditional on a false alternative hypothesis and hence a true null hypothesis. If *α* = 0.05, then 1 − *α* = 0.95.
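The effect size, type II error, and power defined above can be computed with SciPy's central and noncentral chi-square distributions (a sketch; the `q`, `n`, `k`, `a` values in the usage line are hypothetical, not taken from the pilot study):

```python
from scipy.stats import chi2, ncx2

def posthoc_power(q, n, k, a, alpha=0.05):
    """A posteriori type II error and power for the Q (or Q') test.
    df = 2**k - 1 - a; noncentrality lambda = eta^2 * n * df, which equals q."""
    df = 2 ** k - 1 - a
    eta2 = q / (n * df)              # effect size (eta squared)
    lam = eta2 * n * df              # noncentrality parameter (= q)
    crit = chi2.ppf(1 - alpha, df)   # critical value at significance level alpha
    beta = ncx2.cdf(crit, df, lam)   # type II error
    return eta2, beta, 1.0 - beta    # eta^2, beta, power

eta2, beta, power = posthoc_power(q=25.0, n=20, k=4, a=3)
```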

2.8. Example of Calculation of the Proposed Test

Let there be a random sample of 20 participants measured on four different variables (Table 4). Check whether the sample was drawn from a population with a multivariate normal distribution of unknown parameters *μ* and Σ.

${\text{H}}_{0}:\stackrel{\to}{x}=\left(\begin{array}{cccc}{X}_{1}& {X}_{2}& {X}_{3}& {X}_{4}\end{array}\right)~N\left(\stackrel{\to}{\mu},\Sigma \right)$

${\text{H}}_{1}:\stackrel{\to}{x}=\left(\begin{array}{cccc}{X}_{1}& {X}_{2}& {X}_{3}& {X}_{4}\end{array}\right)\nsim N\left(\stackrel{\to}{\mu},\text{\Sigma}\right)$

The 15 unweighted linear combinations (by simple summation) of the four variables are calculated (*l* = 2^{4} − 1 = 15). The 20 data of each linear combination of variables are sorted in ascending order (empirical quantiles). These data are

Table 4. Data of the four variables in their random order.

assigned ranks *i* from 1 to 20. The orders of the theoretical quantiles, denoted by *p*_{i}, are obtained as a function of the order *i*: *p*_{i} = (*i* − 0.375)/(*n* + 0.25).

On the one hand, the correlations between the empirical quantiles (of the 15 linear combinations) and the standardized and normalized theoretical quantiles *a*_{i} are calculated. It should be recalled that the theoretical quantiles are the same for all 15 linear combinations. These squared correlation coefficients constitute the W test statistics of the Shapiro-Wilk univariate normality test [1]. A logarithmic transformation is applied to the W statistics, the mean and standard deviation of the transformation are calculated as a function of the sample size (*n* = 20), and the standardized values are obtained (Table 6).

Table 5. Linear combinations (of one and two variables) among the 4 variables with their values sorted in ascending order or empirical quantiles and expected quantiles.

*Note*: *i* = the order of the data when sorted in ascending order, *p*_{i} = (*i* − 0.375)/(*n* + 0.25) = the order of the theoretical quantile.

Table 6. Statistics W and W' with their logarithmic transformation, standardization and truncation.

*Note*: *l* = the order of the simple linear combination in its generative sequence, *w*_{l} = the Shapiro-Wilk W-test statistic of the lth combination, ln(1 − *w*_{l}) = its logarithmic transformation, with the corresponding standardized and truncated values; analogous columns are given for the Shapiro-Francia W'-statistics.

${w}_{1}={r}_{{X}_{1}a}^{2}=\frac{{\left({{\displaystyle \sum}}_{i=1}^{20}\left({x}_{i1}-{\stackrel{\xaf}{x}}_{1}\right){a}_{i}\right)}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}{\left({x}_{i1}-{\stackrel{\xaf}{x}}_{1}\right)}^{2}}=\frac{{4.0262}^{2}}{17.2680}=0.9387$

$\begin{array}{c}{\mu}_{\mathrm{ln}\left(1-W\right)|n=20}=-1.5861-0.31082\mathrm{ln}\left(n\right)-0.083751{\left[\mathrm{ln}\left(n\right)\right]}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+0.0038915{\left[\mathrm{ln}\left(n\right)\right]}^{3}\\ =-1.5861-0.31082\mathrm{ln}\left(20\right)-0.083751{\left[\mathrm{ln}\left(20\right)\right]}^{2}\\ \text{\hspace{0.05em}}\text{\hspace{0.17em}}+0.0038915{\left[\mathrm{ln}\left(20\right)\right]}^{3}\\ =-3.1642\end{array}$

$\begin{array}{c}{\sigma}_{\mathrm{ln}\left(1-W\right)|n=20}={\text{e}}^{-0.4803-0.082676\mathrm{ln}\left(n\right)+0.0030302{\left[\mathrm{ln}\left(n\right)\right]}^{2}}\\ ={\text{e}}^{-0.4803-0.082676\mathrm{ln}\left(20\right)+0.0030302{\left[\mathrm{ln}\left(20\right)\right]}^{2}}\\ =0.4962\end{array}$

$\begin{array}{c}{Z}_{\mathrm{ln}\left(1-{W}_{1}\right)}=\frac{\mathrm{ln}\left(1-{W}_{1}\right)-{\mu}_{\mathrm{ln}\left(1-W\right)}}{{\sigma}_{\mathrm{ln}\left(1-W\right)}}=\frac{\mathrm{ln}\left(1-0.9387\right)-\left(-3.1642\right)}{0.4962}\\ =\frac{-2.7927+3.1642}{0.4962}=0.7488\end{array}$

On the other hand, the correlations between the empirical quantiles of the 15 linear combinations and the theoretical quantiles *m*_{i} that are common to the 15 combinations are calculated. These squared correlation coefficients constitute the W' test statistics of the Shapiro-Francia univariate normality test [2]. A logarithmic transformation is applied to the W' statistics, the mean and standard deviation of the transformation are calculated as a function of the sample size (*n* = 20), and the standardized values are obtained (Table 6).

${{w}^{\prime}}_{1}={r}_{{X}_{1}m}^{2}=\frac{{\left[{{\displaystyle \sum}}_{i=1}^{20}\left({x}_{i1}-{\stackrel{\xaf}{x}}_{1}\right){m}_{i}\right]}^{2}}{{{\displaystyle \sum}}_{i=1}^{20}{\left({x}_{i1}-{\stackrel{\xaf}{x}}_{1}\right)}^{2}{{\displaystyle \sum}}_{i=1}^{20}\text{\hspace{0.17em}}{m}_{i}^{2}}=\frac{{16.9117}^{2}}{17.2680\times 17.6336}=\frac{286.0060}{304.4959}=0.9393$

$u=\mathrm{ln}\left[\mathrm{ln}\left(n\right)\right]-\mathrm{ln}\left(n\right)=\mathrm{ln}\left[\mathrm{ln}\left(20\right)\right]-\mathrm{ln}\left(20\right)=-1.8985$

${\mu}_{\mathrm{ln}\left(1-{W}^{\prime}\right)}=1.0521u-1.2725=1.0521\times \left(-1.8985\right)-1.2725=-3.26996$

$v=\mathrm{ln}\left(\mathrm{ln}\left(n\right)\right)+2/\mathrm{ln}\left(n\right)=\mathrm{ln}\left(\mathrm{ln}\left(20\right)\right)+2/\mathrm{ln}\left(20\right)=1.7648$

${\sigma}_{\mathrm{ln}\left(1-{W}^{\prime}\right)}=-0.26758v+1.0308=-0.26758\times 1.7648+1.0308=0.5586$

$\begin{array}{c}{Z}_{\mathrm{ln}\left(1-{{w}^{\prime}}_{1}\right)}=\frac{\mathrm{ln}\left(1-{{w}^{\prime}}_{1}\right)-{\mu}_{\mathrm{ln}\left(1-{W}^{\prime}\right)}}{{\sigma}_{\mathrm{ln}\left(1-{W}^{\prime}\right)}}=\frac{\mathrm{ln}\left(1-0.9393\right)-\left(-3.26996\right)}{0.5586}\\ =\frac{-2.8014+3.26996}{0.5586}=0.8388\end{array}$

The assumption of independence among the 15 values of *z _{l}* (standardized and truncated values of the log-transformed W statistics) and
${{z}^{\prime}}_{l}$ (standardized and truncated values of the log-transformed W' statistics) is tested using the Wald-Wolfowitz runs test with exact probabilities at a 10% significance level, owing to the small size of the sequences (15 elements). The hypothesis of independence holds at the 10% significance level in a two-tailed test for both sequences (in their generative order), and the exact probabilities are the same for both sequences of values.

In turn, the independence assumption is tested using the Ljung-Box test. The lag order was determined using the Hyndman-Athanasopoulos criterion for non-seasonal series: *h* = min(10, *n*/5) = min(10, 15/5) = 3. The test is not significant at the 10% significance level for either sequence with a third-order lag.

A visual inspection of the two correlograms with a maximum lag of 7 (Schwert’s criterion) reveals no significant autocorrelation, since the bars lie in the space between the upper and lower limits of 90% confidence intervals of null autocorrelations with lags from 1 to 7. See Figure 1.

$h=12\times {\left(n/100\right)}^{1/4}=12\times {\left(15/100\right)}^{1/4}=7.46\approx 7$

Ljung-Box test applied to the *z*_{ln(1−W)} sequence (from the Shapiro-Wilk W statistics).

${Q}_{LB}=n\left(n+2\right)\underset{i=1}{\overset{h}{{\displaystyle \sum}}}\frac{{r}_{i}^{2}}{n-i}=15\times 17\times \left(\frac{{\left(-0.0391\right)}^{2}}{14}+\frac{{0.0457}^{2}}{13}+\frac{{0.0989}^{2}}{12}\right)=0.0279$

${q}_{LB}=0.0279<{}_{1-\alpha}\chi {}_{h}^{2}={}_{0.90}\chi {}_{3}^{2}=6.2514$

and
$P\left({\chi}_{3}^{2}\ge {q}_{LB}=0.0279\right)=0.9988>\alpha =0.10$ , so H_{0}:
${\rho}_{1}={\rho}_{2}={\rho}_{3}=0$ is retained.

Figure 1. Correlograms with interpretation limits with a 90% confidence level.

Ljung-Box test applied to the *z*_{ln(1−W')} sequence (from the Shapiro-Francia W' statistics).

${Q}_{LB}=n\left(n+2\right)\underset{i=1}{\overset{h}{{\displaystyle \sum}}}\frac{{r}_{i}^{2}}{n-i}=15\times 17\times \left(\frac{{0.0357}^{2}}{14}+\frac{{\left(-0.0489\right)}^{2}}{13}+\frac{{0.1135}^{2}}{12}\right)=0.0232$

${q}_{LB}=0.0232<{}_{1-\alpha}\chi {}_{h}^{2}={}_{0.90}\chi {}_{3}^{2}=6.2514$

and
$P\left({\chi}_{3}^{2}\ge {q}_{LB}=0.0232\right)=0.9991>\alpha =0.10$ , so H_{0}:
${\rho}_{1}={\rho}_{2}={\rho}_{3}=0$ is retained.
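The (non-robust) Ljung-Box statistic used above can be computed directly from the sample autocorrelations. The sketch below omits the robustification of [25] [26] and is only meant to make the formula concrete:

```python
def ljung_box(z, h):
    # Ljung-Box Q statistic for lags 1..h; compared against a
    # chi-square distribution with h degrees of freedom.
    n = len(z)
    zbar = sum(z) / n
    c0 = sum((zt - zbar) ** 2 for zt in z)
    q = 0.0
    for i in range(1, h + 1):
        # lag-i sample autocorrelation
        ri = sum((z[t] - zbar) * (z[t + i] - zbar) for t in range(n - i)) / c0
        q += ri * ri / (n - i)
    return n * (n + 2) * q
```

With *h* = min(10, 15/5) = 3, the resulting statistic is compared with the critical value χ²_{0.90,3} = 6.2514, as above.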

With the robust Ljung-Box test [25] [26] , the null hypothesis of independence also holds for the sequences *z _{l}* and
${{z}^{\prime}}_{l}$ .

Next, the sums of squares of *z _{l}* and
${{z}^{\prime}}_{l}$ are computed, yielding the test statistics

$\begin{array}{c}Q=\underset{l=1}{\overset{{2}^{k}-1}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{z}_{l}^{2}=\underset{l=1}{\overset{15}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{z}_{l}^{2}\\ ={0.7488}^{2}+{0.6686}^{2}+0+{0.4778}^{2}+0+0+{0.7162}^{2}+{0.6148}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+{1.5879}^{2}+{0.2825}^{2}+{0.6828}^{2}+{0.7544}^{2}+0+0+0\\ =5.7636\end{array}$

$df={2}^{k}-1-a={2}^{4}-1-6=16-1-6=9$

$q=5.7636<{}_{1-\alpha}\chi {}_{{2}^{k}-1-a}^{2}={}_{0.95}\chi {}_{9}^{2}=16.9190$

$P\left({\chi}_{9}^{2}\ge q=5.7636\right)=0.7633>\alpha =0.05$

$\begin{array}{c}{Q}^{\prime}=\underset{l=1}{\overset{{2}^{k}-1}{{\displaystyle \sum}}}{\left({{z}^{\prime}}_{l}\right)}^{2}=\underset{l=1}{\overset{15}{{\displaystyle \sum}}}{\left({{z}^{\prime}}_{l}\right)}^{2}\\ ={0.8388}^{2}+{0.5081}^{2}+0+{0.1805}^{2}+{0.4418}^{2}+0+{0.9239}^{2}+{0.6576}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}+{1.4918}^{2}+{0.1916}^{2}+{1.2416}^{2}+{0.8499}^{2}+{0.1262}^{2}+0+0\\ =7.0174\end{array}$

$df={2}^{k}-1-a={2}^{4}-1-4=16-1-4=11$

${q}^{\prime}=7.0174<{}_{1-\alpha}\chi {}_{{2}^{k}-1-a}^{2}={}_{0.95}\chi {}_{11}^{2}=19.6751$

$P\left({\chi}_{11}^{2}\ge {q}^{\prime}=7.0174\right)=0.7977>\alpha =0.05$
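Assembling the full procedure, truncation of negative values at zero, sum of squares, and loss of one degree of freedom per cancelled value, gives a compact sketch. The series-based chi-square tail function and the convention that entries ≤ 0 mark truncated values are implementation assumptions:

```python
from math import exp, log, lgamma

def chi2_sf(x, df):
    # Right-tail chi-square probability via the regularized
    # lower-incomplete-gamma series (adequate for moderate x).
    a, y = df / 2.0, x / 2.0
    term = 1.0 / a
    s = term
    n = 0
    while term > 1e-16 * s:
        n += 1
        term *= y / (a + n)
        s += term
    return 1.0 - exp(a * log(y) - y - lgamma(a)) * s

def q_test(z_values, alpha=0.05):
    # z_values: standardized log-transformed W (or W') values for the
    # 2^k - 1 linear combinations; entries <= 0 are taken as truncated.
    a = sum(1 for z in z_values if z <= 0.0)     # cancelled values
    q = sum(z * z for z in z_values if z > 0.0)  # sum of squares
    df = len(z_values) - a                       # 2^k - 1 - a
    p = chi2_sf(q, df)
    return q, df, p, p >= alpha                  # True -> retain H0
```

Applied to the 15 values of *z _{l}* listed above, this reproduces *q* = 5.7636, *df* = 9 and *p* = 0.7633.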

The effect size is calculated, which is small in both versions of the proposed test.

${\eta}^{2}=\frac{Q}{n\times df}=\frac{Q}{n\left({2}^{k}-1-a\right)}=\frac{5.7636}{20\times 9}=0.0320$

${\eta}^{2}=\frac{{Q}^{\prime}}{n\times df}=\frac{{Q}^{\prime}}{n\left({2}^{k}-1-a\right)}=\frac{7.0174}{20\times 11}=0.0319$

The type II error is very high (*β* > 0.5) and the statistical power very low (*ϕ* = 1 − *β* < 0.5), which indicates that the alternative hypothesis is very unlikely and that the null hypothesis of multivariate normality should be retained with both versions of the proposed test. The version based on the Shapiro-Wilk W statistic [3] appears slightly better than the one based on the Shapiro-Francia W' statistic [4] in terms of type II error and statistical power.

Calculation of type II error and statistical power for Q-test (from Shapiro-Wilk W statistics).

$\beta =NC{\chi}_{\left(\upsilon ={2}^{k}-1-a,\lambda ={\eta}^{2}\times n\times \left({2}^{k}-1-a\right)=q\right)}^{2}\left({}_{1-\alpha}\chi {}_{{2}^{k}-1-a}^{2}\right)$

$\beta =NC{\chi}_{\left(\upsilon =9,\lambda =5.7636\right)}^{2}\left({}_{0.95}\chi {}_{9}^{2}=16.9190\right)=0.6750$

$\varphi =1-\beta =1-0.6750=0.3250$

Calculation of type II error and statistical power for Q'-test (from Shapiro-Francia W' statistics).

$\beta =NC{\chi}_{\left(\upsilon ={2}^{k}-1-a,\lambda ={\eta}^{2}\times n\times ({2}^{k}-1-a)={q}^{\prime}\right)}^{2}\left({}_{1-\alpha}\chi {}_{{2}^{k}-1-a}^{2}\right)$

$\beta =NC{\chi}_{\left(\upsilon =11,\lambda =7.0174\right)}^{2}\left({}_{0.95}\chi {}_{11}^{2}=19.6751\right)=0.6360$

$\varphi =1-\beta =1-0.6360=0.3640$
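The *β* values above come from the cumulative distribution function of a noncentral chi-square distribution, which can be evaluated as a Poisson-weighted mixture of central chi-square CDFs. The following stdlib-only sketch should reproduce the reported *β* ≈ 0.675 for the Q-test example:

```python
from math import exp, log, lgamma

def chi2_cdf(x, df):
    # Left-tail chi-square probability (regularized incomplete gamma series).
    a, y = df / 2.0, x / 2.0
    term = 1.0 / a
    s = term
    n = 0
    while term > 1e-16 * s:
        n += 1
        term *= y / (a + n)
        s += term
    return exp(a * log(y) - y - lgamma(a)) * s

def ncx2_cdf(x, df, lam, tol=1e-12):
    # Noncentral chi-square CDF as a Poisson(lam/2)-weighted mixture of
    # central chi-square CDFs with df, df+2, df+4, ... degrees of freedom.
    total, j = 0.0, 0
    while True:
        w = exp(-lam / 2.0 + j * log(lam / 2.0) - lgamma(j + 1))
        total += w * chi2_cdf(x, df + 2 * j)
        j += 1
        if w < tol and j > lam:
            break
    return total

# Worked Q-test example: df = 9, lambda = q = 5.7636, critical value 16.9190
beta = ncx2_cdf(16.9190, 9, 5.7636)   # type II error
power = 1.0 - beta                    # statistical power
```

For the Q'-test, the same function is called with *df* = 11, *λ* = 7.0174 and the critical value 19.6751.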

3. Method

3.1. Sample Generation and Statistical Analysis

In the present simulation study, only samples of 20 elements and four variables were analyzed, so this is a pilot study of the proposal. The size of 20 was chosen because it is the minimum recommended for normality tests and because statistical power and the discriminative capacity of the tests are most compromised at small sizes. Four variables were chosen as an easily manageable small number.

Fifty samples of 20 elements and 4 variables were generated. On the one hand, 40 samples were created. Four random variables with a standard continuous uniform distribution U[0, 1] were used as a starting point to obtain variables with a standard normal distribution through the probit transformation: *x _{i}* = Φ^{−1}(*u _{i}*).

It was considered that the normality hypothesis should hold in the following situations (10 samples, 20%):

• Seven samples with normal variables (3 samples with the four independent variables and 4 samples with the four correlated variables).

• Three samples with a good convergence to multivariate normality: four variables following a Student’s t-distribution with 100 degrees of freedom, four variables following a chi-square distribution with 100 degrees of freedom, and four variables following a binomial distribution B(*n* = 20, *p* = 0.5).

Conversely, it should be rejected in the following situations (40 samples, 80%): N = normal, E = exponential, LogN = log normal, Logist = logistic, Lap = Laplace, C = Cauchy, B(*n*, *p*) = binomial with *n* independent trials and constant probability of success *p*, N^{−1} = 1/N = inverse normal and
${\chi}_{\left(\nu \right)}^{2}$ = chi-square with *ν* degrees of freedom.

• Three variables with normal distribution and one without normal distribution (the preceding number indicates the number of variables, but if there is only one variable, the number is omitted): 3NE, 3NC, 3NLogN, 3NN^{−1},
$\text{3N}{\chi}_{\left(2\right)}^{2}$ ,
$\text{3N}{\chi}_{\left(1\right)}^{2}$ , and 3NCauchy.

• Two independent variables with normal distribution and two without normal distribution (if the subscripts match, they are related, and if they do not match or have no subscripts, they are independent): 2NE_{1}C_{1}, 2NE_{1}LogN_{1}, 2NC_{1}LogN_{1}, 2NE_{2}C_{2}, 2NE_{2}LogN_{2}, 2NC_{2}LogN_{2}, 2NE_{3}C_{3}, 2NE_{3}LogN_{3}, 2NC_{3}LogN_{3}, 2NE_{4}C_{4}, 2NE_{4}LogN_{4}, 2NC_{4}LogN_{4}, 2NE_{1}C_{3}, 2NE_{1}LogN_{3}, 2NC_{1}LogN_{3}, 2NE_{2}C_{1}, 2NE_{2}LogN_{1}, 2NC_{2}LogN_{1},
$\text{2N2}{\chi}_{\left(2\right)}^{2}$ ,
$\text{2N2}{\chi}_{\left(4\right)}^{2}$ ,
$\text{2N2}{\chi}_{\left(1\right)}^{2}$ , and 2N2C.

• One variable with normal distribution and three without normal distribution: N_{1}E_{1}C_{1}LogN_{1}, N_{2}E_{2}C_{2}LogN_{2}, N_{3}E_{3}C_{3}LogN_{3}, N_{4}E_{4}C_{4}LogN_{4}, and N_{1}E_{2}C_{3}LogN_{4}.

• Four independent variables with no normal distribution: 4E, 4C, 4LogN, 4Logist, 4Lap, and 4B (*n* = 10, *p* = 0.1).

On the one hand, the probability of the correct decision conditional on the alternative hypothesis is calculated. When the null hypothesis must be retained, this probability is the type II error or probability *β*, and when it must be rejected, it is the statistical power or complement of the beta probability: *ϕ* = 1 − *β*. The comparison of measures of central tendency in *β* or *ϕ* values among the six tests is performed using the Friedman’s test [31] . Effect size is calculated using the Kendall’s W coefficient of concordance [32] . Pairwise comparisons are performed using the Wilcoxon’s signed-rank test [33] , estimating the effect size by the Rosenthal’s r coefficient [34] . The Sidak’s correction is applied with the Holm’s procedure to control the family-wise error rate [35] [36] . These comparisons are made with the 50 multivariate samples (*β* and *ϕ*), with the 40 multivariate samples that deviate from multivariate normality (*ϕ*), and with the 10 multivariate samples that follow or have a good convergence to multivariate normality (*β*).

On the other hand, the frequency of successes among the six tests was compared using the Cochran’s Q test [37] with the 50 multivariate samples. Effect size was estimated by the eta-squared coefficient [38] . Pairwise comparisons were performed using the McNemar’s test [39] , and the effect size was calculated by Cohen’s q statistic (1988). The Sidak’s correction with the Holm’s procedure was applied to control the family-wise error rate [35] [36] . In addition, 2 × 2 tables were computed for each of the six tests with respect to the decision on the null hypothesis. From these tables, point and interval estimates of sensitivity, specificity and efficiency were calculated for each of the 50 multivariate samples generated, using the Wilson’s score confidence interval with the Newcombe’s continuity correction [40] [41] . Pairwise comparisons between these correlated proportions were performed using the McNemar’s Z-test [39] .

A final analysis involved creating a variable of ordered categories by classifying the 50 multivariate samples generated by their deviation from multivariate normality. Five levels of deviation were defined. The ordinal variable was correlated with the critical level or probability value of each of the six multivariate normality tests, that is, with the probability associated with a critical region bounded by the observed value of the test statistic under the assumption that the null hypothesis is true within a known distribution (normal for the multivariate runs test and chi-square for the other tests). The more negative this correlation is (closer to −1), the better the multivariate normality test. Correlations were calculated by the Spearman’s rank coefficient [42] . The confidence interval was estimated with the Fisher’s transformation [43] and the Bonett-Wright standard error [44] . Comparisons were made using the Steiger’s Z-test [45] , following the suggestion of Myers and Sirois for the Spearman’s correlation [46] .

3.2. Multivariate Normality Tests with Which the New Proposal Is Compared

The calculation of the test statistic, the probability value for the statistical decision at the *α* level of significance, the effect size, the type II error, and the statistical power of the four tests with which the new proposal is compared are shown below. It starts with the omnibus test based on the multivariate skewness and kurtosis [5] [6] [7] . It continues with the runs Z-test for multivariate normality [8] [9] , and ends with Royston’s test [10] based on the H statistic (with the transformation and standardization of the W statistic of Royston [3] ) and the H' statistic (with the transformation and standardization of the W' statistic of Royston [4] ).

3.2.1. Mardia’s Omnibus Test of Multivariate Normality

The Mardia’s test starts from the square matrix of order *n* of standardized distances or moments, denoted by M. This is obtained by the product among the rectangular matrix of order *n* × *k* of differential scores (D), the square matrix of order *k* of the inverse of sample covariance matrix (S^{−1}), and the rectangular matrix of order *k* × *n* of the transpose of the matrix of the differential scores (D^{T}), where *n* is the number of participants and *k* the number of variables.

$D=X-{\stackrel{\to}{1}}^{\text{T}}\stackrel{\to}{\stackrel{\xaf}{x}}=\left(\begin{array}{ccc}{x}_{11}& \cdots & {x}_{1k}\\ \vdots & \ddots & \vdots \\ {x}_{n1}& \cdots & {x}_{nk}\end{array}\right)-\left(\begin{array}{c}1\\ \vdots \\ 1\end{array}\right)\left(\begin{array}{ccc}{\stackrel{\xaf}{x}}_{1}& \cdots & {\stackrel{\xaf}{x}}_{k}\end{array}\right)$

$S=\frac{1}{n-1}{D}^{\text{T}}D=\left(\begin{array}{ccc}{s}_{{x}_{1}}^{2}& \cdots & {s}_{{x}_{1}{x}_{k}}\\ \vdots & \ddots & \vdots \\ {s}_{{x}_{k}{x}_{1}}& \cdots & {s}_{{x}_{k}}^{2}\end{array}\right)$

$M=D\times {S}^{-1}\times {D}^{\text{T}}$

$\begin{array}{c}M=\left(\begin{array}{ccc}{x}_{11}-{\stackrel{\xaf}{x}}_{1}& \cdots & {x}_{1k}-{\stackrel{\xaf}{x}}_{k}\\ \vdots & \ddots & \vdots \\ {x}_{n1}-{\stackrel{\xaf}{x}}_{1}& \cdots & {x}_{nk}-{\stackrel{\xaf}{x}}_{k}\end{array}\right){\left(\begin{array}{ccc}{s}_{{x}_{1}}^{2}& \cdots & {s}_{{x}_{1}{x}_{k}}\\ \vdots & \ddots & \vdots \\ {s}_{{x}_{k}{x}_{1}}& \cdots & {s}_{{x}_{k}}^{2}\end{array}\right)}^{-1}\left(\begin{array}{ccc}{x}_{11}-{\stackrel{\xaf}{x}}_{1}& \cdots & {x}_{n1}-{\stackrel{\xaf}{x}}_{1}\\ \vdots & \ddots & \vdots \\ {x}_{1k}-{\stackrel{\xaf}{x}}_{k}& \cdots & {x}_{nk}-{\stackrel{\xaf}{x}}_{k}\end{array}\right)\\ =\left(\begin{array}{ccc}{m}_{11}& \cdots & {m}_{1n}\\ \vdots & \ddots & \vdots \\ {m}_{n1}& \cdots & {m}_{nn}\end{array}\right)\end{array}$

Mardia’s multivariate skewness (*b _{1M}*) with a correction for sampling bias (
${b}_{1M}^{*}$ ).

${b}_{1M}=\frac{{\displaystyle {\sum}_{i=1}^{n}{\displaystyle {\sum}_{j=1}^{n}{m}_{ij}^{3}}}}{{n}^{2}}$

${b}_{1M}^{*}={\left(\frac{n}{n-1}\right)}^{3}{b}_{1M}={\left(\frac{n}{n-1}\right)}^{3}\frac{{\displaystyle {\sum}_{i=1}^{n}{\displaystyle {\sum}_{j=1}^{n}{m}_{ij}^{3}}}}{{n}^{2}}$

$Q=\frac{n}{6}{b}_{1M}^{*}$

${Q}_{c}~{\chi}_{\frac{k\left(k+1\right)\left(k+2\right)}{6}}^{2}$

Mardia’s multivariate kurtosis (*b _{2M}*) with a correction for sampling bias (
${b}_{2M}^{*}$ ).

${b}_{2M}=\frac{{\displaystyle {\sum}_{i=j=1}^{n}{m}_{ij}^{2}}}{n}$

${b}_{2M}^{*}={\left(\frac{n}{n-1}\right)}^{2}{b}_{2M}={\left(\frac{n}{n-1}\right)}^{2}\frac{{\displaystyle {\sum}_{i=j=1}^{n}{m}_{ij}^{2}}}{n}$

$Z=\frac{{b}_{2M}^{*}-E\left({b}_{2M}\right)}{SD\left({b}_{2M}\right)}=\frac{{b}_{2M}^{*}-k\left(k+2\right)}{\sqrt{\frac{8k\left(k+2\right)}{n}}}~\text{}N\left(0,1\right)$

Test statistics and sampling distribution

${K}_{MVN}^{2}=Q+{Z}^{2}$

${K}_{MVN}^{2}~{\chi}_{\left(\frac{k\left(k+1\right)\left(k+2\right)}{6}+1\right)}^{2}$

Statistical decision under the null hypothesis of multivariate normality with a given level of significance (alpha).

$P\left({\chi}_{\left(\frac{k\left(k+1\right)\left(k+2\right)}{6}+1\right)}^{2}\ge {K}_{MVN}^{2}\right)\ge \alpha $ , H_{0} is accepted; and
$<\alpha $ , H_{0} is rejected.

The effect size estimated using the eta-squared coefficient.

${\eta}^{2}=\frac{{K}_{MVN}^{2}}{n\times df}=\frac{{K}_{MVN}^{2}}{n\left(\frac{k\left(k+1\right)\left(k+2\right)}{6}+1\right)}$

Type II error and statistical power calculated using the cumulative distribution function of a noncentral chi-squared distribution.

$\beta =NC{\chi}_{df=1+\frac{k\left(k+1\right)\left(k+2\right)}{6},NCP={\eta}^{2}\times n\times df={K}_{MVN}^{2}}^{2}\left({}_{1-\alpha}\chi {}_{\frac{k\left(k+1\right)\left(k+2\right)}{6}+1}^{2}\right)$

$\varphi =1-\beta $
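The matrix computations in this subsection can be condensed into a short NumPy sketch. It follows the formulas above for D, S, M, the bias-corrected
${b}_{1M}^{*}$ and
${b}_{2M}^{*}$ , and the omnibus statistic K²; the function and variable names are our own:

```python
import numpy as np

def mardia_k2(X):
    # Mardia's omnibus K^2 = Q + Z^2 with the sampling-bias
    # corrections b1M*, b2M*; X is an n x k data matrix.
    n, k = X.shape
    D = X - X.mean(axis=0)               # differential scores
    S = (D.T @ D) / (n - 1)              # sample covariance matrix
    M = D @ np.linalg.inv(S) @ D.T       # matrix of standardized moments
    b1 = (M ** 3).sum() / n ** 2         # multivariate skewness
    b2 = (np.diag(M) ** 2).sum() / n     # multivariate kurtosis
    b1c = (n / (n - 1)) ** 3 * b1
    b2c = (n / (n - 1)) ** 2 * b2
    Q = n / 6.0 * b1c
    Z = (b2c - k * (k + 2)) / np.sqrt(8.0 * k * (k + 2) / n)
    df = k * (k + 1) * (k + 2) / 6 + 1
    return Q + Z ** 2, df                # K^2 ~ chi-square with df
```

For *k* = 4, the degrees of freedom are 4 × 5 × 6/6 + 1 = 21.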

3.2.2. Royston’s Multivariate Normality Test and Its Two Variants

To obtain the test statistic H (from Shapiro-Wilk W statistics) or H' (from Shapiro-Francia W' statistics), the following formulas are used [10] :

$H=\frac{e{\displaystyle {\sum}_{j=1}^{k}{\psi}_{j}}}{k}$

${\psi}_{j}={\left[{\Phi}^{-1}\left(\frac{\Phi \left(-{z}_{j}\right)}{2}\right)\right]}^{2}={\left[{\Phi}^{-1}\left(0.5\times \Phi \left(-{z}_{j}\right)\right)\right]}^{2},j=1,2,\cdots ,k$

Φ^{−1} = the probit function or quantile function associated with a standard normal distribution, and Φ = the cumulative distribution function of a standard normal variable.

${Z}_{j}=\frac{\mathrm{ln}\left(1-{W}_{j}\right)-{\mu}_{\mathrm{ln}\left(1-{W}_{j}\right)}}{{\sigma}_{\mathrm{ln}\left(1-{W}_{j}\right)}}$

${H}^{\prime}=\frac{e{\displaystyle {\sum}_{j=1}^{k}{{\psi}^{\prime}}_{j}}}{k}$

${{\psi}^{\prime}}_{j}={\left[{\Phi}^{-1}\left(1/2\times \Phi \left(-{{z}^{\prime}}_{j}\right)\right)\right]}^{2}$

${{Z}^{\prime}}_{j}=\frac{\mathrm{ln}\left(1-{{W}^{\prime}}_{j}\right)-{\mu}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{j}\right)}}{{\sigma}_{\mathrm{ln}\left(1-{{W}^{\prime}}_{j}\right)}}$

The formulas for obtaining the mean (*μ*) and standard deviation (*σ*) of ln(1 − *W _{j}*) are given in Royston [3] and for
$\mathrm{ln}\left(1-{{W}^{\prime}}_{j}\right)$ in Royston [4] , and are shown in Section 2.3.

$e=\frac{k}{1+\left(k-1\right)\stackrel{\xaf}{c}}$

$\stackrel{\xaf}{c}={\displaystyle \underset{i=1}{\overset{k}{\sum}}{\displaystyle \underset{j\ne i=2}{\overset{k}{\sum}}\frac{{c}_{ij}}{k\left(k-1\right)/2}}}=\frac{2{\displaystyle {\sum}_{i=1}^{k}{\displaystyle {\sum}_{j\ne i=2}^{k}{c}_{ij}}}}{k\left(k-1\right)}=\frac{2\left({c}_{12}+{c}_{13}+\cdots +{c}_{k-1,k}\right)}{k\left(k-1\right)}$

The *k*-order square matrix of sample correlations is required.

$R=\left(\begin{array}{ccccc}{r}_{11}& \cdots & {r}_{1j}& \cdots & {r}_{1k}\\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {r}_{i1}& \cdots & {r}_{ij}& \cdots & {r}_{ik}\\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {r}_{k1}& \cdots & {r}_{kj}& \cdots & {r}_{kk}\end{array}\right)$

${c}_{ij}={r}_{ij}^{\lambda}\left[1-\frac{\mu}{{\upsilon}_{n}}{\left(1-{r}_{ij}\right)}^{\mu}\right]$

If $10\le n\le 2000$ , then $\lambda =5$ , $\mu =0.715$ , and ${\upsilon}_{n}=0.21364+0.015124{\left(\mathrm{ln}n\right)}^{2}-0.0018034{\left(\mathrm{ln}n\right)}^{3}$ .

The sampling distribution of the test statistic H or H' is a chi-square distribution with *e* degrees of freedom:
$H~{\chi}_{e}^{2}$ and
${H}^{\prime}~{\chi}_{e}^{2}$
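Putting the pieces together, H and the equivalent degrees of freedom *e* can be computed from the *k* standardized values *z _{j}* and the correlation matrix R. The helper below is a sketch under the constants given above for 10 ≤ *n* ≤ 2000; the function signature is our own:

```python
from math import log
from statistics import NormalDist

def royston_h(z, R, n):
    # Royston's H statistic from k standardized values z_j and the
    # k x k correlation matrix R (list of lists), for 10 <= n <= 2000.
    nd = NormalDist()
    k = len(z)
    lam, mu = 5.0, 0.715
    v_n = 0.21364 + 0.015124 * log(n) ** 2 - 0.0018034 * log(n) ** 3
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    # average transformed correlation c-bar over the k(k-1)/2 pairs
    cbar = sum(R[i][j] ** lam * (1 - mu / v_n * (1 - R[i][j]) ** mu)
               for i, j in pairs) / len(pairs)
    e = k / (1 + (k - 1) * cbar)         # equivalent degrees of freedom
    psi = [nd.inv_cdf(0.5 * nd.cdf(-zj)) ** 2 for zj in z]
    return e * sum(psi) / k, e           # H ~ chi-square with e df
```

With uncorrelated variables, c̄ = 0 and *e* = *k*; correlation among the variables shrinks *e* below *k*.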

The statistical decision under the null hypothesis of multivariate normality at a given significance level is taken one-sided to the right tail.

If
$P\left({\chi}_{e}^{2}\ge H\right)\ge \alpha $ , H_{0} is accepted, and
$<\alpha $ , is rejected.

If
$P\left({\chi}_{e}^{2}\ge {H}^{\prime}\right)\ge \alpha $ , H_{0} is accepted, and
$<\alpha $ , is rejected.

The size of the effect is estimated using the eta-squared coefficient.

${\eta}^{2}=\frac{H}{n\times df}=\frac{H}{n\times e}$ ${\eta}^{2}=\frac{{H}^{\prime}}{n\times df}=\frac{{H}^{\prime}}{n\times e}$

Type-II error or probability *β* and statistical power *ϕ* are calculated using the cumulative distribution function of a non-central chi-square distribution (*NC**χ*^{2}): degrees of freedom (*df*) = *e*, and non-centrality parameter (*NCP*) = *η*^{2} × *n* × *e* = *H*. Its argument is the critical value for the statistics *h* or *h*':
${}_{1-\alpha}\chi {}_{e}^{2}$ .

$\beta =NC{\chi}_{df=e=\frac{k}{1+\left(k-1\right)\stackrel{\xaf}{c}},NCP={\eta}^{2}\times n\times e}^{2}\left({}_{1-\alpha}\chi {}_{e}^{2}\right)$

$\varphi =1-\beta $

3.2.3. Runs Test for Multivariate Normality

A *network* *diagram* is a set of points connected by non-directional lines. In this type of diagram, the points are called *nodes* and the lines connecting the points are called *edges*. Each edge has a weight or value corresponding to the distance between the two nodes it connects, and is denoted by *w _{i}*. In the present work, such weight is calculated through the Euclidean distance formula. Consider the random vectors
$\stackrel{\to}{x}$ and
$\stackrel{\to}{y}$ .

$\stackrel{\to}{x}=\left(\begin{array}{cccc}{x}_{1}& {x}_{2}& \cdots & {x}_{k}\end{array}\right)$

$\stackrel{\to}{y}=\left(\begin{array}{cccc}{y}_{1}& {y}_{2}& \cdots & {y}_{k}\end{array}\right)$

${w}_{i}=\sqrt{{\displaystyle {\sum}_{j=1}^{k}{\left({x}_{j}-{y}_{j}\right)}^{2}}}=\sqrt{{\left({x}_{1}-{y}_{1}\right)}^{2}+{\left({x}_{2}-{y}_{2}\right)}^{2}+\cdots +{\left({x}_{k}-{y}_{k}\right)}^{2}}$

A *spanning* *tree* is a subgraph that connects the nodes without the chain of connections returning to its origin. When the chain of connections closes on its origin, it is referred to as a *cycle*. As the nodes are numbered from 1 to *n* (the random sampling order of the *n* score vectors or *n* data tuples), the *root* of the spanning tree is the node that receives no connections. A *minimum* *spanning* *tree* is the spanning tree with the smallest sum of edge weights; it may be unique, or there may be more than one in a network diagram [8] .
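For illustration, a minimum spanning tree over the complete Euclidean graph can be built with Prim's algorithm; the algorithm choice is ours, since the cited papers do not prescribe one:

```python
from math import dist  # Euclidean distance between two coordinate tuples

def minimum_spanning_tree(points):
    # Prim's algorithm on the complete Euclidean graph; returns the
    # MST as a list of edges (i, j) over point indices.
    edges = []
    n = len(points)
    # best[v] = (weight, u): cheapest known edge joining v to the tree
    best = {j: (dist(points[0], points[j]), 0) for j in range(1, n)}
    while best:
        j = min(best, key=lambda v: best[v][0])
        _, i = best.pop(j)
        edges.append((i, j))
        for v in best:
            d = dist(points[j], points[v])
            if d < best[v][0]:
                best[v] = (d, j)
    return edges
```

The node degrees *d _{i}* needed later for the statistic *c* can be read directly from the returned edge list.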

In the multivariate runs test applied to multivariate normality [8] [9] , the random vector
$\stackrel{\to}{x}$ corresponds to sample data and the random vector
$\stackrel{\to}{y}$ is generated from a multivariate normal distribution, whose location parameter
$\stackrel{\to}{\mu}$ is estimated with the vector of sample means
$\stackrel{\to}{x}$ and its scale parameter Σ is estimated with the sample covariance matrix *S* of the vector
$\stackrel{\to}{x}$ .

$Z=\left(R-{\mu}_{R}\right)/{\sigma}_{R}$

${\mu}_{R}=\frac{{n}_{0}{n}_{1}}{n}+1$

${\sigma}_{R}^{2}=\frac{2{n}_{0}{n}_{1}}{n\left(n-1\right)}\left[\frac{2{n}_{0}{n}_{1}-n}{n}+\frac{c-n+2}{\left(n-2\right)\left(n-3\right)}\left(n\left(n-1\right)-4{n}_{0}{n}_{1}+2\right)\right]$

*n _{0}* = the number of empirical tuples (the observed data vectors).

*n _{1}* = the number of theoretical or generated tuples under a multivariate normal distribution model with the sample mean vector
$\stackrel{\to}{x}$ as the estimator of the location parameter
$\stackrel{\to}{\mu}$ and the sample covariance matrix of the vector
$\stackrel{\to}{x}$ as the estimator of the scale parameter Σ.

$n={n}_{0}+{n}_{1}=\text{2}\times {n}_{0}$

*r* = the observed number of separate trees that result when every edge (line) in the minimum spanning tree between nodes of different (empirical vs. theoretical) samples is removed.

*c* = the number of pairs of edges (links or lines) sharing a common node (point) in the network diagram (points joined by lines).

$c=\frac{1}{2}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{d}_{i}}\left({d}_{i}-1\right)$

*d _{i}* = the degree of node *i*, that is, the number of edges incident to it.

The *n _{1}* theoretical tuples are generated from the vector of sample means, the sample covariance matrix and the lower triangular matrix of the Cholesky’s decomposition of the sample covariance matrix.

Vector of sample means: $\stackrel{\to}{m}=\left(\begin{array}{cccc}{\stackrel{\xaf}{x}}_{1}& {\stackrel{\xaf}{x}}_{2}& \cdots & {\stackrel{\xaf}{x}}_{k}\end{array}\right)\in {\mathbb{R}}^{k}$

Sample covariance matrix:

$S=\left(\begin{array}{ccc}{s}_{1}^{2}& \cdots & {s}_{1k}\\ \vdots & \ddots & \vdots \\ {s}_{k1}& \cdots & {s}_{k}^{2}\end{array}\right)\in {\mathbb{R}}^{k\times k}$

Lower triangular matrix of the Cholesky’s decomposition of the sample covariance matrix:

$C=\left(\begin{array}{cccc}{c}_{11}& 0& \cdots & 0\\ {c}_{21}& {c}_{22}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ {c}_{k1}& {c}_{k2}& \cdots & {c}_{kk}\end{array}\right)\in {\mathbb{R}}^{k\times k}$

$S=C{C}^{\text{T}}$

$\stackrel{\to}{z}=\left(\begin{array}{cccc}{z}_{1}& {z}_{2}& \cdots & {z}_{k}\end{array}\right)\in {\mathbb{R}}^{k},\forall {z}_{i}~N\left(0,1\right)$

Vector of scores generated from multivariate normal distribution: $\stackrel{\to}{y}=\stackrel{\to}{m}+C\stackrel{\to}{z}\in {\mathbb{R}}^{k}~N\left(\stackrel{\to}{\mu},\Sigma \right)$
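The generation of the theoretical tuples can be sketched with NumPy, whose Cholesky routine returns the lower triangular factor (S = CCᵀ); the function name is our own:

```python
import numpy as np

def normal_reference_sample(X, seed=0):
    # Generate n tuples from N(mean(X), cov(X)) via the Cholesky factor:
    # y = m + C z, with C lower triangular and S = C C^T.
    rng = np.random.default_rng(seed)
    n, k = X.shape
    m = X.mean(axis=0)                 # vector of sample means
    S = np.cov(X, rowvar=False)        # sample covariance matrix
    C = np.linalg.cholesky(S)          # lower triangular factor
    Z = rng.standard_normal((n, k))    # rows of independent N(0, 1) scores
    return m + Z @ C.T                 # row-wise version of y = m + C z
```

The returned array supplies the *n _{1}* theoretical tuples that are pooled with the *n _{0}* empirical tuples before building the minimum spanning tree.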

The test statistic and its sampling distribution which is a standard normal distribution.

$Z=\left(R-{\mu}_{R}\right)/{\sigma}_{R}~N\left(0,1\right)$

The statistical decision under the null hypothesis of multivariate normality at a given significance level is made for a left-tailed test. If *P*(*Z* ≤ *z*) ≥ *α*, H_{0} is accepted; and <*α*, H_{0} is rejected.

The effect size can be estimated using Rosenthal’s r correlation coefficient.

$r=\left|z\right|/\sqrt{n}=\left|z\right|/\sqrt{{n}_{0}+{n}_{1}}$

The calculation of the type II error or probability *β* and the statistical power *ϕ* is left-tailed in accordance with the calculation of the probability value or critical level.

${R}_{1}={\mu}_{R}+{z}_{\alpha}{\sigma}_{R}$

$\beta =P\left(Z\ge {z}_{1}=\frac{{R}_{1}-R}{{\sigma}_{R}}\right)=1-P\left(Z\le {z}_{1}=\frac{{R}_{1}-R}{{\sigma}_{R}}\right)$

$\varphi =1-\beta =P\left(Z\le {z}_{1}\right)$
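The left-tailed *β* and *ϕ* computations above reduce to two standard-normal evaluations; a minimal sketch:

```python
from statistics import NormalDist

def runs_power(R, mu_R, sigma_R, alpha=0.05):
    # Left-tailed beta and power for the runs Z-test, using the
    # critical number of runs R_1 = mu_R + z_alpha * sigma_R.
    nd = NormalDist()
    r1 = mu_R + nd.inv_cdf(alpha) * sigma_R
    z1 = (r1 - R) / sigma_R
    beta = 1.0 - nd.cdf(z1)     # P(Z >= z1)
    return beta, 1.0 - beta     # (type II error, statistical power)
```

When the observed R equals *μ _{R}*, the construction gives *β* = 1 − *α*, as expected for a statistic sitting exactly at the null mean.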

4. Results

4.1. Comparison of the Probability of the Correct Decision Conditional on the Alternative Hypothesis among the Six Multivariate Normality Tests

Table 7 shows the statistical power values *ϕ* when the generated samples are drawn from a population without multivariate normality, corresponding to 40 random samples. The tests are expected to reject the null hypothesis of multivariate normality. Probability *β* appears when the generated samples have been drawn from a population with multivariate normality (sample #6 = 4N independent, sample #7 = 4N related, sample #32 = 4N related, sample #33 = 4N related, sample #34 = 4N related, sample #41 = 4N independent, and sample #46 = 4N independent) or with a good convergence to multivariate normality (sample #38 = 4t[*ν* = 100] independent, sample #39 = 4*χ*^{2}[*ν* = 100] independent, and

Table 7. Probability of the correct decision conditional on the alternative hypothesis of the six multivariate normality tests.

*Note*: Multivariate samples extracted from: N = standard normal distribution, E = exponential distribution with inverse scale parameter *λ* = ½, C = standard Cauchy distribution, L = standard LogNormal distribution, B(*n*, *π*_{success}) = binomial distribution with parameters *n* (number of trials) and *π*_{success} (probability of success), *χ*^{2}[*ν*] = chi-square distribution with *ν* degrees of freedom, t[*ν*] = Student’s t distribution with *ν* degrees of freedom, Lap = standard Laplace distribution, N^{−1} = inverse normal distribution, and Logist = standard logistic distribution; ind = independent variables, rel = correlated variables. The preceding number is the number of variables with the same type of distribution. When the number of the variable subscripts matches, the corresponding variables are correlated. Pr = the probability of the correct decision conditional on the alternative hypothesis; when the null hypothesis of multivariate normality must be accepted, this probability is the type II error or beta probability (*β*), and when the null hypothesis must be rejected, it is the complement of the beta probability or statistical power (*ϕ*). Tests: the proposed Q-test from the Shapiro-Wilk W-statistics [3] and the proposed Q'-test from the Shapiro-Francia W' statistics [4] , Mardia: K^{2} = Q + Z^{2} = multivariate normality omnibus statistic [7] , SJ = multivariate runs test applied to multivariate normality by Smith and Jain [8] [9] , which is a left-tailed Z-test, and the Royston’s H-test from the Shapiro-Wilk W-statistics [3] [10] and the H'-test from the Shapiro-Francia W' statistics [4] [10] . Probability values were rounded to 4 decimal places, so the 1’s are an artifact of rounding.

sample #40 = 4B[*n* = 20, *π*_{success} = 0.5] independent). With these 10 random samples, the tests are expected to sustain the null hypothesis of multivariate normality.

None of the six distributions of the *β* or *ϕ* probabilities corresponding to the six multivariate normality tests applied to the 50 generated multivariate samples followed a univariate normal distribution, as checked using the Shapiro-Wilk W-test [3] and the D’Agostino-Belanger-D’Agostino K^{2}-test at a 5% significance level. The distributions showed a U-shaped profile with elevations at both ends and a depression in the center. All of them had negative skewness (left tail longer than right tail), three had positive excess kurtosis or heavy tails (Q, Q' and H'), one had negative excess kurtosis or shortened tails (Smith-Jain), and two had zero excess kurtosis (Mardia and H). See Table 8 and Table 9.

Nor did any of the *ϕ* probabilities of the six multivariate normality tests applied to the 40 multivariate samples drawn from populations without multivariate normality follow a univariate normal distribution, either by the Shapiro-Wilk W-test [3] or the D’Agostino-Belanger-D’Agostino K^{2}-test at the 5% significance level [47] . Five of the distributions showed negative skewness with an elongated profile of increasing staircase slope to the right, the exception being the distribution of *ϕ* probabilities from the runs test, which was symmetric. The profile of five of the six distributions was leptokurtic or thick-tailed, except for that of the runs test, which was platykurtic or thin-tailed. See Table 8 and Table 9.

However, the distributions of the *β* probabilities of four of the six multivariate normality tests applied to the 10 multivariate samples (drawn from populations with multivariate normality or with good convergence to it) were fitted to a

Table 8. Check of univariate normality using the Shapiro-Wilk test.

*Note*. *n* = sample size, *W* = the Shapiro-Wilk test statistic, *Z* = the standardized value of the log-transformed W statistic using the Royston’s formulas [3] , *p* = right-tailed probability value under a standard normal distribution, and Normality: yes when *p* ≥ *α* (*α* = 0.05 for *n* = 50 and 40, and 0.10 for *n* = 10), no when *p* < *α*. Multivariate normality tests: the proposed Q-test from the Shapiro-Wilk W-statistics [3] and the *Q*'-test from the Shapiro-Francia *W*' statistics [4] , Mardia: K^{2} = Q + Z^{2} = multivariate normality omnibus statistic [7] , SJ = multivariate runs test applied to multivariate normality by Smith and Jain [8] [9] , which is a left-tailed Z-test, and the Royston’s H-test from the Shapiro-Wilk W-statistics [3] [10] and the *H*'-test from the Shapiro-Francia *W*' statistics [4] [10] .

Table 9. Tests of univariate skewness, kurtosis and normality based on standardized central moments.

*Note*. *n* = sample size, *z*(
$\sqrt{{b}_{1}}$ ) = standardized value of the skewness measure based on the standardized third central moment [50] , *p* = two-tailed probability under a standard normal distribution, *z*(*b*_{2}) = standardized value of the kurtosis measure based on the standardized fourth central moment [51] , *p* = two-tailed probability value under a standard normal distribution, *K*^{2} = *z*(
$\sqrt{{b}_{1}}$ )^{2} + *z*(*b*_{2})^{2} = test statistic of the D’Agostino-Belanger-D’Agostino omnibus test of normality [47] , *p* = right-tailed probability under a chi-square distribution with two degrees of freedom. N(*μ*, *σ*) = fit to normality: yes when *p* ≥ *α* (*α* = 0.05 for *n* = 50 and 40, and 0.10 for *n* = 10); no when *p* < *α*, either due to skewness (+ positive or − negative) or kurtosis (↓ *b*_{2} < 3 platykurtosis or ↑ *b*_{2} > 3 leptokurtosis).

univariate normal distribution model, as checked using the W- and K^{2}-tests at the 10% significance level (Q, Q', H and H'). The distribution of *β* probabilities from the runs test deviated from normality when tested using the W-test and showed negative skewness. The distribution of *β* probabilities from the Mardia’s test did not follow a normal distribution according to the K^{2}-test, as it showed positive skewness and a leptokurtic profile. See Table 8 and Table 9. Except for the two versions of the Royston test, no profile of *β* probabilities was bell-shaped in the histogram. The profiles were ladder-shaped, with steps increasing for the Q-, Q'-, and runs Z-tests and decreasing for the Mardia’s test. Nor did the points line up clearly along the 45° line in the normal quantile-quantile plots for these last four tests.

When comparing the *β* or *ϕ* probabilities, the choice was made to use nonparametric tests with the total of 50 generated multivariate samples and with the 40 samples drawn from populations without multivariate normality. A nonparametric approach was also adopted with the 10 samples drawn from populations with multivariate normality or a good convergence to it, owing to the small sample size [48], consistency with the previous analyses, and the lack of evidence of normality in the plots [49], particularly in four of them.

Using the Friedman’s omnibus test for comparing *ϕ* or *β* values, there was a significant difference in the sample of 50 tuples, in the sample of 40 tuples from populations without multivariate normality, and in the sample of 10 tuples from populations with multivariate normality or a good convergence to it (*Q* > 11.071, *p* < 0.05). The statistical power was very high in the three tests (*ϕ* > 0.9). Following the cut-off points suggested by Kendall and Gibbons [32], the effect size, estimated by the Kendall’s coefficient of agreement, was small in the 50-tuple and 10-tuple samples, with values in the interval (0.1, 0.3), and reached a medium level in the sample of 40 tuples (see Table 10).
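The omnibus comparison above can be sketched as follows; the probability matrix is illustrative random data, not the study’s values, and `scipy` is assumed available:

```python
# Illustrative sketch (synthetic data, not the study's values): Friedman's
# omnibus test across six related samples and Kendall's coefficient of
# agreement W = Q / (n * (k - 1)) as effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_samples, k_tests = 50, 6                        # 50 tuples, 6 normality tests
power_probs = rng.uniform(0.5, 1.0, size=(n_samples, k_tests))

q_stat, p_value = stats.friedmanchisquare(*power_probs.T)
kendall_w = q_stat / (n_samples * (k_tests - 1))  # Kendall's W from Friedman's Q
```

Kendall’s W is bounded in [0, 1], which is what makes it usable as an effect-size companion to the Friedman Q statistic.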

In the random sample of 50 tuples, the highest median probability of a correct decision conditional on the alternative hypothesis (*ϕ* or *β* values) appeared with the version of the proposed test from the Shapiro-Francia W' statistics, *Mdn* (*ϕ* or *β*) = 0.9989, followed by the same test from the Shapiro-Wilk W statistics, *Mdn* (*ϕ* or *β*) = 0.9974. Thirdly came the Royston’s H-test from the Shapiro-Wilk W statistics, *Mdn* (*ϕ* or *β*) = 0.9734, fourthly, the Mardia’s K^{2}-test, *Mdn* (*ϕ* or *β*) = 0.9684, fifthly, the Royston’s H'-test from the Shapiro-Francia W' statistics, *Mdn* (*ϕ* or *β*) = 0.9591, and lastly the runs Z-test, *Mdn* (*ϕ* or *β*) = 0.8337.

When making the pairwise comparisons through the Wilcoxon’s signed-rank test from the asymptotic probability, with the Sidak’s correction applied using the Holm’s procedure to control the family-wise error rate [35] [36], the two versions of the proposed test were equivalent in *ϕ* or *β* values, and both were superior to the runs Z-test, the Mardia’s K^{2}-test, and the Royston’s H'-test from the Shapiro-Francia W' statistics. The version of the Royston’s test from the Shapiro-Wilk W statistics was superior to the version from the Shapiro-Francia W' statistics and was also superior to the runs Z-test and the Mardia’s K^{2}-test. In turn, the Royston’s H'-test from the Shapiro-Francia W' statistics was superior to the runs Z-test. Following the cut-off points suggested by Cohen (1988), the effect sizes estimated using the Rosenthal’s r coefficient varied from small (0.1 to 0.29) to medium (0.3 to 0.49). See Table 11.
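The pairwise procedure can be sketched like this; the data are random stand-ins, the |z| values are recovered from the two-tailed p-values rather than from Royston-style formulas, and the step-down thresholds follow the author’s formula *α*_{c} = 1 − 0.95^{i/15}:

```python
# Illustrative sketch (not the author's code): pairwise Wilcoxon
# signed-rank tests, Rosenthal's r = |z| / sqrt(N), and Holm's step-down
# with the Sidak-style thresholds alpha_c = 1 - 0.95 ** (i / 15).
import math
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
probs = rng.uniform(0.6, 1.0, size=(50, 6))      # phi/beta values of 6 tests
m = 15                                            # number of pairwise comparisons

results = []
for a, b in combinations(range(6), 2):
    stat, p = stats.wilcoxon(probs[:, a], probs[:, b])
    z = stats.norm.isf(p / 2.0)                   # |z| recovered from two-tailed p
    r = z / math.sqrt(2 * probs.shape[0])         # Rosenthal's r with N = 100
    results.append((a, b, p, r))

# Holm's step-down: the i-th smallest p-value is compared with
# alpha_c = 1 - 0.95 ** (i / m), i = 1..m.
results.sort(key=lambda t: t[2])
significant = [p < 1 - 0.95 ** ((i + 1) / m)
               for i, (_, _, p, _) in enumerate(results)]
```

Sorting the p-values before applying the ascending thresholds is what makes this a step-down (Holm-type) rather than a single-step correction.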

Table 10. Friedman test, effect size, and statistical power.

*Note*: Compared variable: *ϕ* or *β* = the probability of a correct decision conditional on the alternative hypothesis for the six multivariate normality tests (*ϕ* with the 40 samples drawn from distributions without multivariate normality and *β* with the 10 samples from distributions with multivariate normality or a good convergence to it), *ϕ* = statistical power or probability of rejecting the null hypothesis when the alternative hypothesis is true, *β* = type II error or probability of maintaining the null hypothesis when the alternative hypothesis is true, *n* = the number of tuples, Q = the value of the Friedman’s test statistic, *df* = the degrees of freedom, *p* = right-tailed probability in a chi-square distribution with five degrees of freedom, Kendall’s *W* = the Kendall’s coefficient of agreement as a measure of effect size, ${}_{0.95}\chi {}_{\left(5\right)}^{2}$ = critical value or quantile of order 0.95 of a chi-square distribution with five degrees of freedom, *β* = type II error, and *ϕ* = statistical power.

Table 11. Pairwise comparisons between the six multivariate normality tests using the Wilcoxon signed-rank test in the sample of 50 tuples.

*Note*: G_{1} = group 1 and G_{2} = group 2. Multivariate normality tests: Q = the proposed Q-test from the Shapiro-Wilk W statistics and the Q'-test from the Shapiro-Francia W' statistics, M = the Mardia’s K^{2}-test, SJ = the Smith-Jain runs Z-test, H = the Royston’s H-test from the Shapiro-Wilk W statistics and the H'-test from the Shapiro-Francia W' statistics, *Mdn* (*ϕ* or *β*) = the sample median of *ϕ* or *β* values, *up* = the number of unequal pairs (non-zero differences), z = the Wilcoxon’s signed-rank test z-statistic from its approximation to the normal distribution with the correction for ties and the continuity correction, *p* = two-tailed probability under a standard normal distribution, *r* = $\left|z\right|/\sqrt{100}$ = the Rosenthal’s r coefficient as a measure of effect size, *i* = rank in ascending order of probability value with average ranks in case of ties, *α*_{c} = 1 − 0.95^{i/15} = significance level with the Sidak’s correction using the Holm’s procedure [35] [36], Sig = significance: no when *p* ≥ *α*_{c} and yes when *p* < *α*_{c}, Diff = the group with the highest median in each pairwise comparison.

In the random sample of 40 tuples, the highest median statistical power (*ϕ*) appeared with the version of the proposed test from the Shapiro-Francia W' statistics, *Mdn* (*ϕ*) = 0.9999, followed by its version from the Shapiro-Wilk W statistics, *Mdn* (*ϕ*) = 0.9993. Thirdly came the median of the Mardia’s K^{2}-test, *Mdn* (*ϕ*) = 0.9868, fourthly, that of the Royston’s H-test from the Shapiro-Wilk W statistics, *Mdn* (*ϕ*) = 0.9847, fifthly, that of the Royston’s H'-test from the Shapiro-Francia W' statistics, *Mdn* (*ϕ*) = 0.9781, and lastly that of the runs Z-test, *Mdn* (*ϕ*) = 0.7419. When making the pairwise comparisons using the Wilcoxon signed-rank test from the asymptotic probability with the Sidak’s correction applied through the Holm’s procedure [35] [36], the two versions of the proposed test were equivalent in *ϕ* values and both were superior to the other tests. Again, the version of the Royston’s test from the Shapiro-Wilk W statistics was superior to the version from the Shapiro-Francia W' statistics, with a large effect size (Rosenthal’s *r* = 0.506), and both versions were superior to the runs Z-test. In turn, the Mardia’s K^{2}-test was superior to the runs Z-test. The effect sizes estimated using the Rosenthal’s r coefficient varied from small (from 0.1 to 0.29) to medium (from 0.3 to 0.49), reaching a large level for the statistical power when choosing between the proposed Q'-test and the runs Z-test. See Table 12.

In the random sample of 10 tuples, the highest median type II error (*β*) appeared with the runs Z-test, *Mdn* (*β*) = 0.9755. Secondly came the Royston’s H-test from the Shapiro-Wilk W statistics, *Mdn* (*β*) = 0.8064, thirdly, this same test from the Shapiro-Francia W' statistics, *Mdn* (*β*) = 0.8013, fourthly, the proposed test from the Shapiro-Francia W' statistics, *Mdn* (*β*) = 0.7355, fifthly, this same test from the Shapiro-Wilk W statistics, *Mdn* (*β*) = 0.6904, and lastly the Mardia’s K^{2}-test, *Mdn* (*β*) = 0.3922. When pairwise comparisons were made using the Wilcoxon signed-rank test from the exact probability, with the Sidak’s correction applied through the Holm’s procedure at a nominal significance level of 0.1, the two versions of the proposed test were equivalent in *β* values, as were the two versions of the Royston’s test. The Mardia’s K^{2}-test had significantly lower *β* probability values than the two versions of the Royston’s test and the runs Z-test. The effect sizes estimated using the rank-biserial correlation were very large. See Table 13.

4.2. Difference in the Number of Successes among the Six Multivariate Normality Tests

The proposed Q-test from the Shapiro-Wilk W statistics and the Royston’s H-test, also based on the Shapiro-Wilk W statistics, had the highest proportion of correct classifications or probability of success (*p _{s}* = 0.94). Secondly came the Royston’s H'-test from the Shapiro-Francia W' statistics (

Using the Cochran’s omnibus Q test, there was a significant difference in the sample of 50 tuples (*Q* = 44.593, *df* = 5, *p* < 0.001). The statistical power of the test was very high (*ϕ* = 0.9999). Following the cut-off points suggested by Cohen

Table 12. Pairwise comparisons between the six multivariate normality tests using the Wilcoxon signed-rank test on the sample of 40 tuples.

*Note*: G_{1} = group 1 and G_{2} = group 2. Multivariate normality tests: Q = the proposed Q-test from the Shapiro-Wilk W statistics and the Q'-test from the Shapiro-Francia W' statistics, M = the Mardia’s K^{2}-test, SJ = the Smith-Jain runs Z-test, H = the Royston’s H-test from the Shapiro-Wilk W statistics and the H'-test from the Shapiro-Francia W' statistics, *Mdn* (*ϕ*) = the sample median of *ϕ* values, *up* = the number of unequal pairs (non-zero differences), z = the Wilcoxon’s signed-rank test z-statistic from its approximation to the normal distribution with the correction for ties and the continuity correction, *p* = two-tailed probability under a standard normal distribution, *r* = $\left|z\right|/\sqrt{80}$ = the Rosenthal’s r coefficient as a measure of effect size, *i* = rank in ascending order of probability value with average ranks in case of ties, *α*_{c} = 1 − 0.95^{i/15} = significance level with the Sidak’s correction using the Holm’s procedure [35] [36], Sig = significance: no when *p* ≥ *α*_{c} and yes when *p* < *α*_{c}, Diff = the group with the highest median in each pairwise comparison.

Table 13. Pairwise comparisons between the six multivariate normality tests using the Wilcoxon signed-rank test on the sample of 10 tuples.

*Note*: G_{1} = group 1 and G_{2} = group 2. Multivariate normality tests: Q = the proposed Q-test from the Shapiro-Wilk W statistics and the Q'-test from the Shapiro-Francia W' statistics, M = the Mardia’s K^{2}-test, SJ = the Smith-Jain runs Z-test, H = the Royston’s H-test from the Shapiro-Wilk W statistics and the H'-test from the Shapiro-Francia W' statistics, *Mdn* (*β*) = the sample median of *β* values. Wilcoxon’s signed-rank test: *up* = the number of unequal pairs (non-zero differences), *SR*− = the sum of negative ranks (G_{1} < G_{2}), *SR*+ = the sum of positive ranks (G_{1} > G_{2}), T = the test statistic or smaller of the *SR*+ and *SR*− statistics, *p* = two-tailed exact probability, *r _{bp}* = (

[30], the effect size estimated using the eta-squared coefficient was small (*η*^{2} = 0.032), with a value in the interval [0.01, 0.06).

When pairwise comparisons were made using the McNemar’s test from the exact probability (binomial distribution), applying the Sidak’s correction with the Holm’s procedure to control the family-wise error rate, there was no significant difference in successes between the two versions of the proposed test and the two versions of the Royston’s test, these four tests being statistically equivalent in number of successes. Both versions of the proposed test and of the Royston’s test had more successes than the Mardia’s K^{2}-test and the runs Z-test, the latter two being equivalent to each other. When the effect size was estimated by the odds ratio (*OR*) or Cohen’s g statistic, following the cut-off points suggested by the author, the effect size was large in 6 of the 8 significant differences (*OR* > 4.25 and |*g*| > 0.25). In the other two comparisons, the *OR* remained undefined and, therefore, so did the *g*-statistic. When the effect size in these two comparisons was calculated using the eta-squared coefficient, it was also large: *η*^{2} = *Q*/(*n*_{01} + *n*_{10}) > 0.25, where *Q* = (|*n*_{01} − *n*_{10}| − 1)^{2}/(*n*_{01} + *n*_{10}). See Table 14.
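The exact McNemar test and the two effect sizes named above can be sketched with made-up discordant counts (these are not the study’s frequencies):

```python
# Sketch with illustrative counts: exact McNemar test on the discordant
# pairs, odds ratio n01/n10, and Cohen's g = n01/(n01 + n10) - 1/2.
from scipy import stats

n01, n10 = 9, 1              # discordant pairs (illustrative values)
n_disc = n01 + n10

# Exact two-tailed p-value: binomial test with p = 1/2 on discordant pairs.
p_value = stats.binomtest(n01, n_disc, 0.5).pvalue

odds_ratio = n01 / n10 if n10 > 0 else float("inf")   # undefined when n10 = 0
cohens_g = n01 / n_disc - 0.5
```

The case `n10 = 0` is exactly the situation described in the text where the *OR* (and hence *g*) is undefined and an eta-squared fallback is needed.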

Although the difference between the two versions of the proposed test was not significant (exact probability: point and one-tailed value of 0.125 and two-tailed value of 0.25 > *α* = 0.05), the effect size was large (*OR* = 0, *g* = −0.5, *η*^{2} = 0.44), the type II error was null and the statistical power was unity, which supports the alternative hypothesis of a difference in which the Q-test would have a higher proportion of successes than the Q'-test (0.94 versus 0.88). Consequently, there is a situation of ambiguity with respect to this difference.

Table 14. Pairwise comparisons of the number of successes using the McNemar’s test.

*Note*: T_{1} = first test, T_{2} = second test, *n _{00}* = the number of concordant pairs of non-normality for both tests,

4.3. Sensitivity, Specificity and Efficiency of the Six Multivariate Normality Tests

The six tests presented a sensitivity of 100%, so the confidence intervals were calculated using the rule of three [52]: [1 + ln(0.05)/50, 1], as can be seen in Table 15. The specificity, or ability to correctly reject the null hypothesis in case of deviation from multivariate normality, varied from a minimum of 0.825 with the runs Z-test to a maximum of 0.925 with the proposed Q-test from the Shapiro-Wilk W statistics. When making interval estimates at the 95% confidence level, the Wilson’s score intervals with the continuity correction [40] [41] for the specificity values of the six tests overlapped (Table 15). When making comparisons using the McNemar’s Z-test for two paired samples, the null hypothesis of

Table 15. Point estimates and 95% confidence intervals for the sensitivity, specificity and efficiency of the six multivariate normality tests.

*Note*: MN Test = multivariate normality tests: Q = the proposed Q-test from the Shapiro-Wilk W statistics and the Q'-test from the Shapiro-Francia W' statistics, M = the Mardia’s K^{2}-test, SJ = the Smith-Jain runs Z-test, H = the Royston’s H-test from the Shapiro-Wilk W statistics and the H'-test from the Shapiro-Francia W' statistics. Joint frequencies: *n _{00}* = the frequency of successes when classifying the sample as coming from a population without multivariate normal distribution,

no difference was maintained at the 5% significance level, even without considering any correction for the family-wise error rate in all 15 comparisons. The mean and median of the specificity values were high, 0.867 and 0.863, respectively.
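The Wilson score interval with continuity correction used for these estimates can be sketched with the generic textbook formula (this is not code from the article, and 37/40 is an illustrative specificity):

```python
# Generic sketch of the Wilson score interval with continuity correction
# (the method cited as [40][41]); illustrative input, not the study's data.
import math

def wilson_cc(successes, n, z=1.959964):
    """95% Wilson score interval with continuity correction."""
    p = successes / n
    denom = 2.0 * (n + z * z)
    low_root = math.sqrt(z * z - 1.0 / n + 4.0 * n * p * (1.0 - p) + (4.0 * p - 2.0))
    high_root = math.sqrt(z * z - 1.0 / n + 4.0 * n * p * (1.0 - p) - (4.0 * p - 2.0))
    lower = max(0.0, (2.0 * n * p + z * z - (z * low_root + 1.0)) / denom)
    upper = min(1.0, (2.0 * n * p + z * z + (z * high_root + 1.0)) / denom)
    return lower, upper

low, high = wilson_cc(37, 40)   # e.g. specificity 0.925 from 40 non-normal samples
```

Unlike the Wald interval, this interval never exceeds [0, 1], which matters for proportions near 1 such as the specificities reported here.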

The efficiency or success ratio varied from a minimum of 0.86 with the runs Z-test to a maximum of 0.94 with the proposed Q-test from the Shapiro-Wilk W statistics. The confidence intervals of the six tests overlapped. When comparisons were made using the McNemar’s Z-test for two paired samples, the null hypothesis of equivalence was maintained at the 5% significance level for the 15 differences, even without considering any correction for the family-wise error rate. The mean and median of the efficiency values were high, 0.893 and 0.890, respectively.

As the lowest values of specificity and efficiency were observed in the Mardia’s K^{2}-test and the runs Z-test, but the differences were not significant in the comparisons of proportions with the other four tests, it was decided to test, for each of the six tests, whether the specificity and efficiency values are equal to or greater than 0.90. At a significance level of 5%, the null hypothesis was rejected for the Mardia’s K^{2}-test and the runs Z-test with respect to the specificity value, with low statistical power (0.5 < *ϕ* = 0.54 < 0.80) and a small effect size (0.10 < *r* = 0.25 < 0.30). Applying the Sidak’s correction with the Holm’s procedure, the null hypothesis would not be rejected: *p* (the probability value for a left-tailed hypothesis) = 0.039 (*i* = 1.5) > *α*_{c} = 1 − 0.95^{(1.5/6)} = 0.013. In all other cases, the null hypothesis was retained (Table 16).
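The left-tailed Z-test for a proportion can be sketched as follows; with an observed specificity of 0.825 and *n* = 50 the sketch gives values of the same order as those reported (*p* ≈ 0.039, *r* = 0.25), though the exact computation behind Table 16 is an assumption here:

```python
# Sketch: left-tailed Z-test of H0: E >= 0.90 for an observed specificity
# of 0.825, with n = 50 and Rosenthal's r = |z| / sqrt(50). An assumption
# about the computation behind Table 16, not the author's code.
import math
from scipy import stats

p_hat, p0, n = 0.825, 0.90, 50
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
p_left = stats.norm.cdf(z)            # small p_left -> reject H0: E >= p0
r = abs(z) / math.sqrt(n)             # Rosenthal's r effect size
```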

Table 16. Test of a value equal to or greater than 0.90 for specificity and efficiency using the McNemar Z-test.

*Note*: MN test = multivariate normality test: Q = the proposed Q-test from the Shapiro-Wilk W statistics and the Q'-test from the Shapiro-Francia W' statistics, M = the Mardia’s K^{2}-test, SJ = the Smith-Jain runs Z-test, H = the Royston’s H-test from the Shapiro-Wilk W statistics and the H'-test from the Shapiro-Francia W' statistics. *E* = specificity value, *Ef* = efficiency value, *z* = Z-test statistic for a population proportion, *p* = left-tailed probability in a standard normal distribution under the null hypothesis *E* ≥ 0.9 in the fourth column and *Ef* ≥ 0.9 in the seventh column, *ϕ* = left-tailed statistical power, *r* = $\left|z\right|/\sqrt{50}$ = effect size measure based on the Rosenthal’s r coefficient.

4.4. Correlation between the Critical Level or Probability Value and the Deviation from Normality

Finally, the deviation from multivariate normality of the 50 generated multivariate samples was classified by ordered categories, as can be seen in Table 17. The higher the level in variable D, the greater the deviation from multivariate normality.
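The correlation between each test’s probability value and this ordinal variable can be sketched with synthetic data (the real analysis uses the probability values of the six tests and the five-category variable D of Table 17):

```python
# Illustrative sketch: Spearman's rho between a test's probability value
# and the ordinal deviation variable D; a negative rho is expected
# (smaller p-value, greater deviation). Data below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
deviation = np.repeat([0, 1, 2, 3, 4], 10)        # 5 ordered categories x 10 samples
p_values = np.clip(0.5 - 0.1 * deviation + rng.normal(0.0, 0.05, 50),
                   0.0001, 1.0)

rho, p = stats.spearmanr(p_values, deviation)      # rho expected to be negative
```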

All six correlations were significant and negative. The negative sign means that the smaller the critical level or probability value of the test, the greater the deviation from normality, in line with expectation. The highest absolute correlation of the ordinal variable D (deviation from multivariate normality) was with the probability value of the proposed Q-test from the Shapiro-Wilk W statistics (*rho*_{QD} = −0.746, 95% *CI* [−0.789, −0.399]), followed by the same test from the Shapiro-Francia W' statistics (*rho*_{Q’D} = −0.740, 95% *CI* [−0.787, −0.395]). In third place was the correlation with the Royston’s H'-test from the Shapiro-Francia W' statistics (*rho*_{H’D} = −0.677, 95% *CI* [−0.759, −0.345]), in fourth place, with the same test from the Shapiro-Wilk W statistics (*rho*_{HD} = −0.664, 95% *CI* [−0.753, −0.335]), and in fifth place, with the Mardia’s K^{2}-test (*rho*_{MD} = −0.645, 95% *CI* [−0.744, −0.319]). These five correlations were statistically equivalent using the Steiger’s Z-test and, following the cut-off points suggested by Cohen (1988), their strength of association was high (*r _{s}* > 0.50). In sixth place was the correlation with the Smith-Jain runs test (

Table 17. Classification of the 50 generated multivariate samples in five ordered categories of deviation from multivariate normality.

*Note*: N = standard normal distribution, E = exponential distribution with inverse scale parameter *λ* = ½, C = standard Cauchy distribution, L = standard LogNormal distribution, B (*n*, *p*) = binomial distribution with parameters *n* (number of trials) and *p* (probability of success), *χ*^{2}(*ν*) = chi-square distribution with *ν* degrees of freedom, Lap = standard Laplace distribution, N^{−1} = inverse normal distribution, Logist = standard logistic distribution, ind = independent variables, rel = correlated variables. The preceding number is the number of variables with the same type of distribution. When the numerical subscripts of the variables match, the corresponding variables are correlated.

Table 18. Correlation between the critical level or probability value of each test and the level of deviation from multivariate normality.* *

*Note*: Q = the proposed Q-test from the Shapiro-Wilk W-statistics and the Q' test from the Shapiro-Francia W' statistics, M = the Mardia’s K^{2}-test, SJ = the Smith-Jain runs Z-test, H = the Royston’s H-test from the Shapiro-Wilk W statistics and the H' test from the Shapiro-Francia W' statistics, and D = the ordinal variable of deviation from multivariate normality, *r _{s}* = the value of Spearman’s rank-order correlation or rho coefficient,

Table 19. Comparison of correlations between the six tests using the Steiger’s Z-test.

*Note*: *rho _{T1D}* = the Spearman’s rank correlation coefficient between the first test (T

4.5. On the Assumption of Independence in the 50 Tuples

Due to the small size of the sequences, a significance level of 10% was used. Among the 50 sequences of the 15 *z*_{ln(1−W’)} values (from the Shapiro-Francia W' statistics), the ordinary Ljung-Box test detected serial dependence in multivariate samples 6, 12, 15, and 46. The robust Ljung-Box test from the sequences of ${{z}^{\prime}}_{l}$ values (transformed into ranks) detected serial dependence in multivariate samples 15, 22, 25, 38, and 46. Both tests agreed in samples 15 and 46. However, the ordinary Ljung-Box test applied to the reduced sequences (without zeros) did not confirm serial dependence in any of the seven cases. From the reduced sequences, there was serial dependence in multivariate sample 39 (from four independent samples with chi-square distribution with 100 degrees of freedom), with a significant first-order lag autocorrelation (*ar*_{1} = −0.684 < *LB _{90%}* = −0.672), and in sample 44 (from two independent samples with normal distribution and two independent samples with chi-square distribution with one degree of freedom), with a significant second-order lag autocorrelation (

Reviewing the 50 sequences of *z*_{ln(1−W)} values (from the Shapiro-Wilk W statistics), the assumption of independence was rejected by the ordinary Ljung-Box Q-test in multivariate samples 15 and 46. The robust Ljung-Box test from the sequences of *z _{l}* values (transformed into ranks) detected serial dependence in multivariate samples 6, 15, 22, 26, and 46. Once again, both tests coincided in the significance of samples 15 and 46. However, the ordinary Ljung-Box test did not confirm serial dependence in any of the five cases in the reduced sequences (without zeros). With the reduced sequences, the Ljung-Box Q-test was significant in multivariate sample 29, with three correlated variables with normal distribution and one independent variable with exponential distribution. Its highest autocorrelation was that of the first-order lag (

Critical values were obtained by bootstrapping (Monte Carlo simulation) for multivariate sample 29 (proposed Q test) and multivariate samples 39 and 44 (proposed Q' test). Three, six, and thirteen standard half-normal distributions (truncated standard normal distribution between the 0.5 and 0.9999 quantiles) were defined, respectively. The outcome variable was the sum of squares of the 3, 6 or 13 variables with standard half-normal distributions. Three correlation matrices were defined with ones in the main diagonal, the value of the autocorrelation in the corresponding variables and zeros in the remaining cells. The correlation was −0.666 (first-order lag autocorrelation) between *z*_{1} - *z*_{2} and *z*_{2} - *z*_{3} in sample 29. The correlation was −0.684 (first-order lag autocorrelation) between *z*_{1} - *z*_{2}, *z*_{2} - *z*_{3}, *z*_{3} - *z*_{4}, *z*_{4} - *z*_{5}, and *z*_{5} - *z*_{6} in sample 39. The correlation was −0.462 (second-order lag autocorrelation) between *z*_{1} - *z*_{3}, *z*_{2} - *z*_{4}, *z*_{3} - *z*_{5}, *z*_{4} - *z*_{6}, *z*_{5} - *z*_{7}, *z*_{6} - *z*_{8}, *z*_{7} - *z*_{9}, *z*_{8} - *z*_{10}, *z*_{9} - *z*_{11}, *z*_{10} - *z*_{12}, and *z*_{11} - *z*_{13} in sample 44. Correlations were estimated using Spearman’s rank-order coefficient and calculations were performed with the XLSTAT software version 24 [20] . Percentiles were obtained from 1000 bootstrap samples for statistical decision making: if *q*' or *q* > (simulated) 95th percentile, H_{0} is rejected. As the autocorrelations were negative, the

Table 20. Testing the assumption of independence using the Ljung-Box Q test.* *

*Note*: Complete sequence (sequence of 15 values): *z*_{ln(1−W’)} = the standardized and log-transformed Shapiro-Francia W' statistics [4], ${{z}^{\prime}}_{l}$ = the truncated, standardized and log-transformed W' statistics, R(${{z}^{\prime}}_{l}$) = the rank of the value ${{z}^{\prime}}_{l}$ with average ranks in case of a tie, *z*_{ln(1−W)} = the standardized and log-transformed Shapiro-Wilk W statistics [3], *z*_{l} = the truncated, standardized and log-transformed W statistics, R(*z _{l}*) = the rank of the value

critical values or simulated percentiles were lower compared to the simulated percentiles with the independent variables or the percentiles of the chi-square distribution. See Table 21.
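A minimal Monte Carlo sketch of such a simulated critical value (an assumption for illustration, not the XLSTAT Latin-hypercube procedure with Spearman correlations used in the article) could look like:

```python
# Sketch: Monte Carlo 95th percentile of a sum of squares of lag-correlated
# half-normal values, here for 3 variables with a first-order lag
# correlation of -0.666 as in sample 29. Illustrative, not the article's
# Latin-hypercube simulation.
import numpy as np

rng = np.random.default_rng(4)
k, rho, n_sim = 3, -0.666, 10_000

# Correlation matrix: ones on the diagonal, rho on the first off-diagonals.
corr = np.eye(k) + rho * (np.eye(k, k=1) + np.eye(k, k=-1))
chol = np.linalg.cholesky(corr)

normals = rng.standard_normal((n_sim, k)) @ chol.T
q_sim = np.sum(np.abs(normals) ** 2, axis=1)    # sum of squared half-normals
p95 = np.quantile(q_sim, 0.95)                  # simulated critical value
```

The decision rule then compares the observed q (or q') with `p95` instead of the chi-square quantile.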

In samples 29 and 44, the null hypothesis of multivariate normality is rejected according to expectation (*q* = 6.922 > simulated P_{95} = 0.223 and *q*' = 45.9 > simulated P_{95} = 11.052, respectively). In sample 39, the null hypothesis is also rejected (*q*' = 5.322 > simulated P_{95} = 3.958), when the expectation is that it holds,

Table 21. Monte Carlo simulation quantiles.* *

*Note*: Sampling method: Latin hypercubes (number of sections = 500), number of intervals: 50, number of simulations: 1000, type of correlation: Spearman. ${}_{p}\chi {}_{df}^{2}$ = the *p*-order quantile of a chi-square distribution with *df* degrees of freedom, Ind = simulation with independent variables, and Rel = simulation with correlated variables.

so the test seems to perform better from the asymptotic approach, forcing the assumption of independence as is done in the Mardia’s K^{2}-test and the multivariate runs Z-test. Hence, it is advisable to use the reduced sequence only in case of significance in the full sequence. If the ordinary Ljung-Box test is used, the sequence must include the negative values. With the truncated sequence, the robust Ljung-Box test is used, which does not require normality and is more sensitive to anomalous tails [25] [26].
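The rank-based ("robust") Ljung-Box check described above can be sketched as follows; the article's robust variant follows [25] [26], so treat this as an assumption-laden approximation on synthetic data:

```python
# Sketch of a rank-based Ljung-Box check on a 15-value sequence; the exact
# robust variant of [25][26] may differ in details.
import numpy as np
from scipy import stats

def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic and chi-square p-value for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = float(np.sum(xc * xc))
    q = 0.0
    for lag in range(1, max_lag + 1):
        r = float(np.sum(xc[:-lag] * xc[lag:])) / denom  # lag-r autocorrelation
        q += r * r / (n - lag)
    q *= n * (n + 2)
    return q, stats.chi2.sf(q, df=max_lag)

rng = np.random.default_rng(3)
z_values = rng.normal(size=15)          # stand-in for the z_{ln(1-W)} sequence
ranks = stats.rankdata(z_values)        # rank transform ("robust" version)
q_stat, p_value = ljung_box_q(ranks, max_lag=3)
```

Replacing the raw values with their ranks is what removes the normality requirement of the ordinary Ljung-Box test.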

5. Discussion

The first objective of this article was to present a new multivariate normality test. It is based on the lemma, or proven proposition, that if a set of correlated or independent variables follows a multivariate normal distribution, any linear combination of them follows a univariate normal distribution [14] [54]. If there are *k* variables, the number of unweighted linear combinations is 2^{k} − 1. Additionally, the lemma that the sum of squares of 2

Another way of solving this problem was sought, and it was considered that the best option was to truncate these values to 0, so that the sampling distribution of the variables changes from a standard normal distribution to a standard half-normal distribution, since truncating a standard normal variable between the quantiles 0.5 and 0.9999… results in a standard half-normal variable: *x* ∈ *X* ~ SN(*σ* = 1), *f*(*x*) = 2 × *φ*(*x*), *F*(*x*) = 2 × Φ(*x*) − 1, and *F*^{−1}(*p*) = Φ^{−1}[(*p* + 1)/2], where *φ* is the density function, Φ the distribution function and Φ^{−1} the quantile function of a standard normal distribution [17]. Two additional lemmas are added here. The first states that if a variable follows a half-normal distribution, the square of the quotient between the variable and its scale parameter *σ* follows a chi-square distribution with one degree of freedom [17]. The second posits that the sum of 2^{k} − 1 independent variables with chi-square distribution with one degree of freedom follows a chi-square distribution with 2^{k} − 1 degrees of freedom [18]. As the scale parameter

For the sum of squares of standard normal variables (of mean 0 and unit variance) or standard half-normal variables to follow a chi-square distribution with as many degrees of freedom as variables summed, independence between the variables is required [18]. However, the test is based on all possible linear combinations among the *k* variables, from which the Shapiro-Wilk W- or Shapiro-Francia W' statistics are calculated. Here one could object that they are dependent variables and that the sampling distribution is a generalized chi-square, which is a very complex distribution to calculate [19]. Indeed, the generated variables are linearly dependent, but the *z _{l}* (from the Shapiro-Wilk W statistics) or ${{z}^{\prime}}_{l}$ (from the Shapiro-Francia W' statistics) values are not necessarily so, and the independence assumption shifts to showing that the 2

To improve the specificity of the test, an operational correction is introduced that consists of eliminating one degree of freedom for each cancelled variable (negative value converted to zero). Consequently, the simulation is run with a simplified sequence corresponding to the non-zero values (random variables with sample size 1). Hence, the serial independence test has to be repeated with the simplified sequence, and the significant autocorrelation values for the simulation are obtained from there. To obtain these last values, the correlogram is a very useful tool [23]. If there is no serial dependency in this second test, an ambiguous situation is generated that is resolved in favor of independence. One could also go directly to the simplified sequence. However, it seems better to force the assumption of independence, so this second path is not recommended. Serial dependency is considered present only if it appears in both the complete and the reduced sequence with some significant autocorrelation.
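The half-normal relations invoked in this section (f(x) = 2φ(x), F(x) = 2Φ(x) − 1, F^{−1}(p) = Φ^{−1}[(p + 1)/2], and (X/σ)² ~ χ²(1)) are standard distribution theory and can be checked numerically:

```python
# Numerical check (standard distribution theory, illustrative values) of
# the half-normal relations used by the proposed test.
from scipy import stats

x, p = 1.3, 0.8
pdf_hn = 2 * stats.norm.pdf(x)                 # density of |Z|, Z ~ N(0, 1)
cdf_hn = 2 * stats.norm.cdf(x) - 1
quantile_hn = stats.norm.ppf((p + 1) / 2)

# scipy's halfnorm agrees with the folded-normal construction above
assert abs(pdf_hn - stats.halfnorm.pdf(x)) < 1e-10
assert abs(cdf_hn - stats.halfnorm.cdf(x)) < 1e-10
assert abs(quantile_hn - stats.halfnorm.ppf(p)) < 1e-10

# P(X^2 <= c) for X standard half-normal equals the chi-square(1) CDF at c
c = 2.0
assert abs(stats.halfnorm.cdf(c ** 0.5) - stats.chi2.cdf(c, df=1)) < 1e-10
```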

The second objective of the study was to compare the central tendency of the type II error or *β*-probability and of the statistical power or complement of the *β*-probability. From these central tendency analyses, the new test yields good results without a clear advantage for either of its two versions, being equivalent to the Royston’s test and superior to the Mardia’s K^{2}-test and the runs Z-test. The third objective involved comparing the frequency of successes among the six multivariate normality tests. In this analysis, the proposed test together with the Royston’s test are the best, with no difference between their two versions, although there is a situation of ambiguity in the statistical decision of equivalence between the two versions of the proposed test. The difference is not significant, but the effect size is large, the type II error is null and the statistical power is unity, which supports the alternative hypothesis of a difference in which the Q-test would have a higher proportion of successes than the Q'-test, 0.94 versus 0.88. The fourth objective focused on calculating and comparing the sensitivity, specificity, and efficiency of the six multivariate normality tests. All six tests have unit sensitivity, so they are equivalent in the property of detecting normality. Comparisons in specificity and efficiency among the six tests do not reveal significant differences, even without controlling for the family-wise error rate, so it would seem that average values would be valid for all of them, and these would be high, namely above 0.85. However, when testing higher-than-average specificity and efficiency values (H_{0}: *E* ≥ 0.90 and H_{0}: *Ef* ≥ 0.90), the runs Z-test and the Mardia’s K^{2}-test show significantly lower specificity values than those hypothesized.
The fifth objective proposed to classify the samples into ordered categories of deviation from normality, calculate the correlation between this ordinal variable and the critical level or probability value of each of the six tests, and compare these correlations. The highest correlations appear in the proposed test and the lowest appears with the runs Z-test. The latter presents a significant difference only in comparison with the two versions of the proposed test. In this analysis, the proposed test stands out with no difference between its two versions.

6. Conclusions

The new proposal presents a performance very similar to the Royston’s test and clearly superior to the Mardia’s K^{2}-test and the runs Z-test with samples of 20 four-dimensional tuples. It should be noted that the Q version from the Shapiro-Wilk W statistics reveals a very slight advantage over the Q' version from the Shapiro-Francia W' statistics. With samples of 20 participants and 4 variables, the highest specificity and efficiency values, success ratio, and correlation between the critical level and the ordinal variable of deviation from normality are achieved with the proposed Q-test.

As limitations of the study, it should be noted that the number of simulations is very small, so this is merely a pilot study. With a larger number of simulations, some of the differences between the two versions of the proposed test may be significant, and the version based on the Shapiro-Wilk W statistics may be more specific and efficient. This pilot study only handles one sample size (*n* = 20) and one number of variables (*k* = 4), when varying *n* and *k* would allow the definition of power curves to compare the tests. There are also other multivariate normality tests not covered in the present work, such as those of Cox and Small [55], Henze and Zirkler [56], and Doornik and Hansen [57], available in the R program [58], the Monte Carlo version of the multivariate runs test available in Excel [9] [59], or, more recently, the tests of Arnastauskaite, Ruzgas and Braženas [60] and of Kesemen, Tiryaki, Tezel and Özkul [61]. There is also another generalization of the Shapiro-Wilk test, developed by Villaseñor-Alva and González-Estrada [62], different from that of Royston [10] and from the present work. Any of these tests would be excellent comparison options for future research, although there is currently no evidence or consensus on which test is best [58] [60].

Further study of this new statistical test is suggested. It rests on two principles: first, that any linear combination of *k* variables drawn from a multivariate normal population follows a univariate normal distribution; second, that the sum of squares of independent standard half-normal variables follows a chi-square distribution with as many degrees of freedom as variables summed, with the correction of subtracting the number of nulled values from the degrees of freedom. The independence assumption is tested on the sequence of standardized and truncated values: first on the complete series and, if dependence appears, again on the reduced series (without zeros). If both series show dependence, the assumption is not fulfilled. If the two series disagree, the decision favors independence, since quantiles simulated under moderate or high negative correlation drop sharply, and under moderate or high positive correlation rise sharply, compared with the situation of independent samples, making the test less accurate. If further studies support this test, its computational implementation in programs such as R or Excel is very simple; indeed, this article details a worked example executed in Excel.
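The procedure described above can be outlined in a short Python sketch. This is not the author's Excel implementation: it assumes that "all the unweighted linear combinations" means the sums over every non-empty subset of the *k* variables, it recovers Royston's z from the right-tail p-value that SciPy's Shapiro-Wilk routine computes via Royston's approximation, it omits the Ljung-Box independence check for brevity, and `q_test` is a hypothetical helper name.

```python
import itertools
import numpy as np
from scipy import stats

def q_test(X):
    """Sketch of the proposed Q-test under the assumptions stated above.

    X: (n, k) data matrix. Each unweighted combination (sum over a
    non-empty subset of columns) is tested with Shapiro-Wilk; Royston's
    right-tail z is recovered from the p-value, negative z's are
    truncated to 0, and the sum of squares is referred to a chi-square
    distribution, losing one degree of freedom per truncated value.
    """
    n, k = X.shape
    z_vals = []
    for r in range(1, k + 1):
        for subset in itertools.combinations(range(k), r):
            y = X[:, subset].sum(axis=1)       # unweighted linear combination
            w, p = stats.shapiro(y)            # p from Royston's approximation
            z_vals.append(stats.norm.isf(p))   # recover z_{ln(1-W)} (right tail)
    z = np.asarray(z_vals)
    z_trunc = np.where(z < 0.0, 0.0, z)        # truncate negatives to 0
    q = float(np.sum(z_trunc ** 2))            # test statistic
    df = len(z) - int(np.sum(z < 0.0))         # one df lost per nulled value
    p_value = float(stats.chi2.sf(q, df)) if df > 0 else 1.0
    return q, df, p_value

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))               # n = 20, k = 4, as in the pilot
q, df, p = q_test(X)
```

In a fuller implementation, the robust Ljung-Box test would be applied to the sequence of truncated z values (e.g. via `statsmodels`) before referring Q to the chi-square distribution.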

Acknowledgements

The author thanks the reviewers and editor for their helpful comments.

Conflicts of Interest

The author declares no conflicts of interest.

[1] Shapiro, S.S. and Wilk, M.B. (1965) An Analysis of Variance Test for Normality (Complete Samples). Biometrika, 52, 591-611. https://doi.org/10.1093/biomet/52.3-4.591

[2] Shapiro, S.S. and Francia, R.S. (1972) An Approximate Analysis of Variance Test for Normality. Journal of the American Statistical Association, 67, 215-216. https://doi.org/10.1080/01621459.1972.10481232

[3] Royston, J.P. (1992) Approximating the Shapiro-Wilk W-Test for Non-Normality. Statistics and Computing, 2, 117-119. https://doi.org/10.1007/BF01891203

[4] Royston, J.P. (1993) A Tool Kit for Testing for Normality in Incomplete and Censored Samples. Journal of the Royal Statistical Society. Series D (The Statistician), 42, 37-43. https://doi.org/10.2307/2348109

[5] Mardia, K.V. (1970) Measures of Multivariate Skewness and Kurtosis with Applications. Biometrika, 57, 519-530. https://doi.org/10.1093/biomet/57.3.519

[6] Mardia, K.V. (1974) Applications of Some Measures of Multivariate Skewness and Kurtosis in Testing Normality and Robustness Studies. Sankhya: The Indian Journal of Statistics, Series B (1960-2002), 36, 115-128. https://www.jstor.org/stable/25051892

[7] Mardia, K.V. (1980) Tests of Univariate and Multivariate Normality. In: Krishnaiah, P.R., Ed., Handbook of Statistics 1: Analysis of Variance, North-Holland, Amsterdam, 279-320. https://doi.org/10.1016/S0169-7161(80)01011-5

[8] Friedman, J.H. and Rafsky, L.C. (1979) Multivariate Generalizations of the Wald-Wolfowitz and Smirnov Two-Sample Tests. The Annals of Statistics, 7, 697-717. https://doi.org/10.1214/aos/1176344722

[9] Smith, S.P. and Jain, A.K. (1988) A Test to Determine the Multivariate Normality of a Data Set. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10, 757-761. https://doi.org/10.1109/34.6789

[10] Royston, J.P. (1983) Some Techniques for Assessing Multivariate Normality Based on the Shapiro-Wilk W. Journal of the Royal Statistical Society. Series C (Applied Statistics), 32, 121-133. https://doi.org/10.2307/2347291

[11] Wald, A. (1939) Contributions to the Theory of Statistical Estimation and Testing Hypotheses. Annals of Mathematical Statistics, 10, 299-326. https://doi.org/10.1214/aoms/1177732144

[12] Steiger, J.H., Shapiro, A. and Browne, M.W. (1985) On the Multivariate Asymptotic Distribution of Sequential Chi-Square Statistics. Psychometrika, 50, 253-263. https://doi.org/10.1007/BF02294104

[13] Hoaglin, D.C. (2016) Misunderstandings about Q and "Cochran's Q Test" in Meta-Analysis. Statistics in Medicine, 35, 485-495. https://doi.org/10.1002/sim.6632

[14] Ghurye, S.G. and Olkin, I. (1962) A Characterization of the Multivariate Normal Distribution. The Annals of Mathematical Statistics, 33, 533-541. https://doi.org/10.1214/aoms/1177704579

[15] Cochran, W.G. (1934) The Distribution of Quadratic Forms in a Normal System, with Applications to the Analysis of Covariance. Mathematical Proceedings of the Cambridge Philosophical Society, 30, 178-191. https://doi.org/10.1017/S0305004100016595

[16] Blom, G. (1958) Statistical Estimates and Transformed Beta-Variables. John Wiley and Sons, New York.

[17] Johnson, N., Kotz, S. and Balakrishnan, N. (1994) Continuous Univariate Distributions. 2nd Edition, John Wiley and Sons, New York.

[18] Coelho, C.A. (2020) On the Distribution of Linear Combinations of Chi-Square Random Variables. In: Bekker, A., Chen, D.G. and Ferreira, J.T., Eds., Computational and Methodological Statistics and Biostatistics. Emerging Topics in Statistics and Biostatistics, Springer, Cham, 211-250. https://doi.org/10.1007/978-3-030-42196-0_9

[19] Rahman, G., Mubeen, S. and Rehman, A. (2015) Generalization of Chi-Square Distribution. Journal of Statistics Applications and Probability, 4, 119-126. https://digitalcommons.aaru.edu.jo/jsap/vol4/iss1/12

[20] Addinsoft (2021) Monte Carlo Simulations. In: XL-STAT: Tutorials & Guides. https://help.xlstat.com/tutorial-guides/monte-carlo-simulations

[21] Wald, A. and Wolfowitz, J. (1943) An Exact Test for Randomness in the Non-Parametric Case Based on Serial Correlation. Annals of Mathematical Statistics, 14, 378-388. https://doi.org/10.1214/aoms/1177731358

[22] Ljung, G.M. and Box, G.E.P. (1978) On a Measure of a Lack of Fit in Time Series Models. Biometrika, 65, 297-303. https://doi.org/10.1093/biomet/65.2.297

[23] Box, G.E.P., Jenkins, G.M., Reinsel, G.C. and Ljung, G.M. (2015) Time Series Analysis: Forecasting and Control. 5th Edition, John Wiley and Sons, New York.

[24] Uyanto, S.S. (2020) Power Comparisons of Five Most Commonly Used Autocorrelation Tests. Pakistan Journal of Statistics and Operation Research, 16, 119-130. https://doi.org/10.18187/pjsor.v16i1.2691

[25] Chan, W.S. (1994) On Portmanteau Goodness-of-Fit Tests in Robust Time Series Modeling. Computational Statistics, 9, 301-310.

[26] Burns, P.J. (2002) Robustness of the Ljung-Box Test and Its Rank Equivalent. SSRN. https://doi.org/10.2139/ssrn.443560

[27] Hyndman, R.J. and Athanasopoulos, G. (2021) Forecasting: Principles and Practice. 3rd Edition, OTexts, Melbourne.

[28] Schwert, G.W. (1989) Why Does Stock Market Volatility Change Over Time? Journal of Finance, 44, 1115-1153. https://doi.org/10.1111/j.1540-6261.1989.tb02647.x

[29] Albuquerque, P. (2020) Optimal Time Interval Selection in Long-Run Correlation Estimation. Journal of Quantitative Economics, The Indian Econometric Society, 18, 53-79. https://doi.org/10.1007/s40953-019-00175-x

[30] Cohen, J. (1988) Statistical Power Analysis for the Behavioral Sciences. 2nd Edition, Lawrence Erlbaum Associates, Hillsdale.

[31] Friedman, M. (1937) The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. Journal of the American Statistical Association, 32, 675-701. https://doi.org/10.1080/01621459.1937.10503522

[32] Kendall, M.G. and Gibbons, J.D. (1990) Rank Correlation Methods. 5th Edition, Charles Griffin, London.

[33] Wilcoxon, F. (1945) Individual Comparisons by Ranking Methods. Biometrics Bulletin, 1, 80-83. https://doi.org/10.2307/3001968

[34] Rosenthal, R. (1991) Meta-Analytic Procedures for Social Research. Revised Edition, Sage Publications, New York.

[35] Sidak, Z. (1967) Rectangular Confidence Regions for the Means of Multivariate Normal Distributions. Journal of the American Statistical Association, 62, 626-633. https://doi.org/10.2307/2283989

[36] Holm, S. (1979) A Simple Sequentially Rejective Multiple Test Procedure. Scandinavian Journal of Statistics, 6, 65-70.

[37] Cochran, W.G. (1950) The Comparison of Percentages in Matched Samples. Biometrika, 37, 256-266. https://doi.org/10.1093/biomet/37.3-4.256

[38] Serlin, R.C., Carr, J. and Marascuilo, L.A. (1982) A Measure of Association for Selected Nonparametric Procedures. Psychological Bulletin, 92, 786-790. https://doi.org/10.1037/0033-2909.92.3.786

[39] McNemar, Q. (1947) Note on the Sampling Error of the Difference between Correlated Proportions or Percentages. Psychometrika, 12, 153-157. https://doi.org/10.1007/BF02295996

[40] Wilson, E.B. (1927) Probable Inference, the Law of Succession, and Statistical Inference. Journal of the American Statistical Association, 22, 209-212. https://doi.org/10.1080/01621459.1927.10502953

[41] Newcombe, R.G. (1998) Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods. Statistics in Medicine, 17, 857-872. https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8<857::AID-SIM777>3.0.CO;2-E

[42] Spearman, C. (1904) The Proof and Measurement of Association between Two Things. The American Journal of Psychology, 15, 72-101. https://doi.org/10.2307/1412159

[43] Fisher, R.A. (1915) Frequency Distribution of the Values of the Correlation Coefficient in Samples from an Indefinitely Large Population. Biometrika, 10, 507-521. https://doi.org/10.1093/biomet/10.4.507

[44] Bonett, D.G. and Wright, T.A. (2000) Sample Size Requirements for Estimating Pearson, Kendall and Spearman Correlations. Psychometrika, 65, 23-28. https://doi.org/10.1007/BF02294183

[45] Steiger, J.H. (1980) Tests for Comparing Elements of a Correlation Matrix. Psychological Bulletin, 87, 245-251. https://doi.org/10.1037/0033-2909.87.2.245

[46] Myers, L. and Sirois, M.J. (2006) Spearman Correlation Coefficients, Differences between. In: Kotz, S., Balakrishnan, N., Read, C.B. and Vidakovic, B., Eds., Encyclopedia of Statistical Sciences, 2nd Edition, John Wiley and Sons, Hoboken, 7901-7903. https://doi.org/10.1002/0471667196.ess5050.pub2

[47] D’Agostino, R.B., Belanger, A. and D’Agostino Jr., R.B. (1990) A Suggestion for Using Powerful and Informative Tests of Normality. The American Statistician, 44, 316-321. https://doi.org/10.1080/00031305.1990.10475751

[48] Verma, J.P. and Abdel-Salam, A.S.G. (2019) Testing Statistical Assumptions in Research. John Wiley and Sons, Newark. https://doi.org/10.1002/9781119528388

[49] Chakraborti, S. and Graham, M.A. (2019) Nonparametric (Distribution-Free) Control Charts: An Updated Overview and Some Results. Quality Engineering, 31, 523-544. https://doi.org/10.1080/08982112.2018.1549330

[50] D’Agostino, R.B. (1970) Transformation to Normality of the Null Distribution of g1. Biometrika, 57, 679-681. https://doi.org/10.1093/biomet/57.3.679

[51] Anscombe, F.J. and Glynn, W.J. (1983) Distribution of the Kurtosis Statistic b2 for Normal Samples. Biometrika, 70, 227-234. https://doi.org/10.1093/biomet/70.1.227

[52] Jovanovic, B.D. and Levy, P.S. (1997) A Look at the Rule of Three. The American Statistician, 51, 137-139. https://doi.org/10.1080/00031305.1997.10473947

[53] Universität Düsseldorf, Psychologie (2021) G*Power 3.1 Manual. https://www.psychologie.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf

[54] Rao, C.R. (1973) Linear Statistical Inference and Its Applications. 2nd Edition, John Wiley and Sons, New York. https://doi.org/10.1002/9780470316436

[55] Cox, D.R. and Small, N.J.H. (1978) Testing Multivariate Normality. Biometrika, 65, 263-272. https://doi.org/10.1093/biomet/65.2.263

[56] Henze, N. and Zirkler, B. (1990) A Class of Invariant Consistent Tests for Multivariate Normality. Communications in Statistics—Theory and Methods, 19, 3595-3617. https://doi.org/10.1080/03610929008830400

[57] Doornik, J.A. and Hansen, H. (2008) An Omnibus Test for Univariate and Multivariate Normality. Oxford Bulletin of Economics and Statistics, 70, 927-939. https://doi.org/10.1111/j.1468-0084.2008.00537.x

[58] Korkmaz, S. (2022) Package “MVN”: Multivariate Normality Tests. https://cran.r-project.org/web/packages/MVN/MVN.pdf

[59] Zaiontz, C. (2022) Multivariate Normality Testing (FRSJ). In: Real Statistics Using Excel. https://www.real-statistics.com/multivariate-statistics/multivariate-normal-distribution/multivariate-normality-testing-frsj

[60] Arnastauskaite, J., Ruzgas, T. and Braženas, M. (2021) A New Goodness of Fit Test for Multivariate Normality and Comparative Simulation Study. Mathematics, 9, Article No. 3003. https://doi.org/10.3390/math9233003

[61] Kesemen, O., Tiryaki, B.K., Tezel, Ö. and Özkul, E. (2021) A New Goodness of Fit Test for Multivariate Normality. Hacettepe Journal of Mathematics and Statistics, 50, 872-894. https://doi.org/10.15672/hujms.644516

[62] Villaseñor-Alva, J.A. and González-Estrada, E. (2009) A Generalization of Shapiro-Wilk’s Test for Multivariate Normality. Communications in Statistics—Theory and Methods, 38, 1870-1883. https://doi.org/10.1080/03610920802474465


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.