A Practical Guide to Statistical Tests in the Biomedical Field: From Parametric to Nonparametric, Where and How?

Abstract

Healthcare decisions are based on scientific evidence obtained from medical studies, by gathering data and analyzing them to obtain the best results. Biostatistics is a powerful tool for analyzing data, but many healthcare professionals lack knowledge in this field. This lack of knowledge can manifest itself in choosing the wrong statistical test for a given situation or applying a statistical test without checking its assumptions, leading to inaccurate results and misleading conclusions. The aim of this “narrative review” is to bring biostatistics closer to healthcare professionals by answering certain questions: How to describe the distribution of data? How to assess the normality of data? How to transform data? And how to choose between nonparametric and parametric tests? Through this work, our hope is that the reader will be able to choose the right test for the right situation, in order to obtain the most accurate results.

Share and Cite:

Gourinda, A. , Bouri, H. , Charif, F. , Mahdi, Z. , Bousgheiri, F. , Sammoud, K. , Lemrabett, S. and Najdi, A. (2024) A Practical Guide to Statistical Tests in the Biomedical Field: From Parametric to Nonparametric, Where and How?. Journal of Biosciences and Medicines, 12, 1-14. doi: 10.4236/jbm.2024.1211001.

1. Introduction

Biostatistics is defined as the application of statistics to the biomedical field. It has three complementary components: descriptive statistics, inferential statistics and predictive statistics. Descriptive statistics summarize data using graphics or numbers, while inferential statistics looks for statistical relations between variables using statistical tests, which are mainly divided into nonparametric and parametric tests.

As is well known, nonparametric tests are not restricted by assumptions, in contrast to parametric tests, which are restricted by the normality and homoscedasticity assumptions.

Homoscedasticity, also known as “equality of variances”, can only be verified if the distribution is normal in each of the groups being compared. It is tested with the F-test (variance-ratio test) for two groups and with Bartlett’s test or Levene’s test for more than two groups.

Unfortunately, many researchers are confused when choosing between these two groups of statistical tests, which raises the following question: “How to choose between nonparametric and parametric tests?”. The researcher should be aware that using the wrong statistical test in a given situation leads to wrong interpretations of the data and therefore to false conclusions, and this can have devastating consequences for the quality of the research.

The purpose of this “narrative review” is to provide the reader with a guideline for clearly analyzing quantitative data and to explain the differences between nonparametric and parametric tests, in order to choose the right statistical test for the right situation.

2. Data Distribution

After collecting quantitative data, the researcher starts by summarizing them, then represents them in various forms, such as tables, graphics or numbers, in order to get an overall picture of the data. It should be mentioned that the histogram (frequency distribution) is the first graphic to use when evaluating the distribution of data.

Data distribution is characterized by: shape (the number of peaks and the presence or absence of symmetry), center (the middle of the values of a distribution) and spread (how far the data points “spread out” from the center) [1].

Two types of data distribution are distinguished: normal distribution and nonnormal distribution [2]. The word “normal” has nothing to do with “clinically normal” or its opposite, “pathological”. It is important to note that most of the variables in the medical field follow the normal distribution [2].

3. Assessment of Normality

Assessment of normality is an essential step in data analysis because it allows us to distinguish between the two types of data distribution (normal/nonnormal distribution).

Before using the different methods to assess the normality of a distribution, the researcher has to perform a data quality check and carefully prepare the data for analysis by filling in missing data, correcting data entry errors, checking for outliers, and knowing the nature of the variable itself [3].

To assess the normality of continuous data, various methods are available [4]-[7] (Table 1):

  • Graphical methods: Histogram (frequency distribution), Stem-and-leaf plot, Boxplot, P-P plot (probability-probability plot) and Q-Q plot (quantile-quantile plot).

  • Numerical methods: Mean with Median, Mean with Standard Deviation (SD), Skewness and Kurtosis.

  • Normality tests: Shapiro-Wilk test and Kolmogorov-Smirnov test.

Each of these methods has its own advantages and disadvantages, which will be explained in detail later in this article [4].

Table 1. Indications of methods to assess normality [4] [8].

| Method | Small sample size (n ≤ 50) | Medium sample size (50 < n < 300) | Large sample size (n ≥ 300) |
| Graphical methods | Not used | Used | Used |
| Numerical methods: skewness and kurtosis | Used | Used | Used |
| Numerical methods: coefficient of variation | Not used | Used | Used |
| Normality tests: Shapiro-Wilk test | Used | Used | Not used |
| Normality tests: Kolmogorov-Smirnov test | Not used | Used | Not used |

3.1. Graphical Methods

The interpretation of graphical methods requires a good deal of experience on the researcher’s part to avoid misinterpretations [4]; otherwise, it is better to rely on numerical and statistical methods [4].

The stem-and-leaf plot is a sort of mixture between a table and a histogram: like a table, it uses the individual values of numerical data, and like a graphic, it shows the frequency of observations and the highest and lowest values of the distribution [5] [9] [10]. For each value, there is a “leaf”, which is the last digit of the value, and each “leaf” is attached to the preceding digits, called the “stem”, of the same value [9] [10]. To construct the stem-and-leaf plot, the researcher should follow these steps (a minimal code sketch follows the list) [9] [10]:

1) Determination of the leaf and the stem for each value.

2) Vertical arrangement of the stems in an ascending or descending order.

3) Separation of the leaves from the stems by drawing a vertical line between them.

4) Grouping the leaves to form bars, as in a histogram, for each common stem.
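To make the construction concrete, here is a minimal Python sketch of these four steps; the function name stem_and_leaf and the sample values are illustrative, not from the source, and integer data with single-digit leaves are assumed.

```python
# A minimal stem-and-leaf sketch, assuming integer data (leaf = last digit).
from collections import defaultdict

def stem_and_leaf(values):
    groups = defaultdict(list)
    for v in sorted(values):                # step 1: split each value into stem and leaf
        stem, leaf = divmod(int(v), 10)
        groups[stem].append(leaf)
    for stem in sorted(groups):             # step 2: stems arranged in ascending order
        leaves = "".join(str(leaf) for leaf in groups[stem])
        print(f"{stem:>3} | {leaves}")      # steps 3-4: vertical line, grouped leaves

stem_and_leaf([12, 15, 21, 24, 24, 27, 31, 33, 38, 45])
# Output:
#   1 | 25
#   2 | 1447
#   3 | 138
#   4 | 5
```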

The histogram (frequency distribution) plots the observed values on the horizontal axis against their frequencies on the vertical axis [5]. In a normal distribution, the graph is a bell-shaped curve, symmetric about the mean, and any departure from the bell-shaped curve is a departure from the normal distribution [4] [6]. The histogram can also give an insight into data holes and outliers [5].

In a boxplot, the median is the horizontal line inside the box, the interquartile range (IQR, the range between the first and third quartiles) is the length of the box, and the whiskers (lines extending from the top and bottom of the box) represent the minimum and maximum values [4] [5]. In a normal distribution, the median line is in the center of the box, and the whiskers are symmetrical and slightly longer than the parts of the box [4] [5].

The P-P plot (probability-probability plot) is used to evaluate how well the sample’s distribution conforms to a theoretical distribution (e.g., the normal distribution) by plotting two sets of cumulative probabilities (observed and theoretical) against each other [4] [5]. In a normal distribution, the plotted points follow a straight line, and any departure from this straight line indicates a departure from a normal distribution [4].

The Q-Q plot (quantile-quantile plot; quantiles are values that split a data set into equal portions) is used to evaluate how well the sample’s distribution conforms to a theoretical distribution (e.g., the normal distribution) by plotting two sets of quantiles (observed and theoretical) against each other [4]-[6]. In a normal distribution, the plotted points follow a straight line, and any departure from this straight line indicates a departure from a normal distribution [4] [6]. More information can be obtained from the angle between the straight line and the horizontal axis: if the angle is 45˚, the two compared distributions are identical; if the angle is less than 45˚, the distribution on the horizontal axis is more dispersed than the distribution on the vertical axis; and if the angle is greater than 45˚, the distribution on the vertical axis is more dispersed than the distribution on the horizontal axis [6].

The detrended Q-Q plot shows the differences in values between the sample’s distribution and the theoretical distribution (e.g., the normal distribution) [10]. In a normal distribution, the plotted points follow the horizontal line at zero on the y-axis, and any departure from this horizontal line indicates a departure from a normal distribution [10].
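As a practical illustration of these graphical methods, the following sketch draws the histogram, boxplot, Q-Q plot and detrended Q-Q plot for one sample; matplotlib and scipy are assumed, and the data are simulated for illustration.

```python
# A minimal sketch of the graphical normality checks; the sample is simulated.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=100, scale=15, size=120)   # hypothetical measurements

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

axes[0, 0].hist(sample, bins=15)                   # histogram: expect a bell shape
axes[0, 0].set_title("Histogram")

axes[0, 1].boxplot(sample)                         # boxplot: centered median, symmetric whiskers
axes[0, 1].set_title("Boxplot")

(osm, osr), (slope, intercept, _) = stats.probplot(sample, dist="norm", plot=axes[1, 0])
axes[1, 0].set_title("Q-Q plot")                   # points should follow the straight line

detrended = osr - (slope * osm + intercept)        # detrended Q-Q: residuals around zero
axes[1, 1].scatter(osm, detrended, s=10)
axes[1, 1].axhline(0)
axes[1, 1].set_title("Detrended Q-Q plot")

plt.tight_layout()
plt.show()
```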

3.2. Numerical Methods

According to the central limit theorem, when the sample size is larger than 30 or 40 individuals, the sampling distribution of the mean tends to be normal, regardless of the shape of the data, and this implies that the researcher can use parametric tests [5] [6].

The equality of values between the Mode, Median and Mean is used to assess normality:

Mode = Median = Mean

In a normal distribution, the mode, median and mean are nearly equal and any departure from the equality between these three values indicates a departure from a normal distribution [4].

The Difference between the Mean and the Median is used to assess normality:

Difference = Mean − Median

In a normal distribution, the difference is close to zero, but in a nonnormal distribution, the difference is far from zero [6]; the larger the difference, the more nonnormal the distribution [6]. If the mean is greater than the mode, the distribution is skewed to the right, but if the mean is less than the mode, the distribution is skewed to the left [6].

The Mean and Standard Deviation (SD) can be summarized in the Coefficient of Variation [4]:

CV = (Standard deviation / Mean) × 100

In a normal distribution, the standard deviation is less than half the mean (CV < 50%) but in a nonnormal distribution, the standard deviation is more than half the mean (CV > 50%) [2] [4] [11].

Skewness is a measure of the asymmetry of a sample’s distribution, while kurtosis is a measure of the peakedness of a sample’s distribution [4] [8]. In calculations, the researcher has to use the “excess kurtosis”, defined as excess kurtosis = kurtosis − 3 [8]. In a nonnormal distribution, a positive skew value (positive skewness) indicates that the tail on the right side of the distribution is longer than that on the left side, and a negative skew value (negative skewness) indicates that the tail on the left side is longer than that on the right side [8]. A positive excess kurtosis characterizes a leptokurtic distribution (high peak), and a negative excess kurtosis characterizes a platykurtic distribution (flat-topped curve) [8] (Table 2).
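The numerical checks above can be computed in a few lines. The sketch below is illustrative: the data are simulated, and the standard errors of skewness and excess kurtosis use the usual large-sample formulas, which is an assumption on our part rather than a prescription of the cited sources.

```python
# A minimal sketch of the numerical normality checks; thresholds follow Table 2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(100, 15, 80)             # hypothetical medium-sized sample

mean, median, sd = sample.mean(), np.median(sample), sample.std(ddof=1)
cv = sd / mean * 100                          # coefficient of variation, in %
print(f"Mean - Median = {mean - median:.3f}, CV = {cv:.1f}%")

skew = stats.skew(sample)                     # skewness
exkurt = stats.kurtosis(sample)               # excess kurtosis (Fisher: kurtosis - 3)

n = len(sample)
se_skew = np.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))  # SE of skewness
se_kurt = 2 * se_skew * np.sqrt((n * n - 1) / ((n - 3) * (n + 5)))  # SE of excess kurtosis

z_skew, z_kurt = skew / se_skew, exkurt / se_kurt
print(f"Z skewness = {z_skew:.2f}, Z excess kurtosis = {z_kurt:.2f}")
# For 50 < n < 300, |Z| <= 3.29 is compatible with normality (Table 2).
```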

Table 2. Interpretation of skewness and kurtosis [4] [8].

| Sample size | Used method | Nonnormal distribution | Normal distribution |
| Small (n ≤ 50) | Z-test for skewness and excess kurtosis; Z (α = 0.05) = ±1.96 | |ZSample| > 1.96 | |ZSample| ≤ 1.96 |
| Medium (50 < n < 300) | Z-test for skewness and excess kurtosis; Z (α = 0.05) = ±3.29 | |ZSample| > 3.29 | |ZSample| ≤ 3.29 |
| Large (n ≥ 300) | Absolute values of skewness and excess kurtosis | |skewness| > 2 OR |excess kurtosis| > 4 | |skewness| ≤ 2 AND |excess kurtosis| ≤ 4 |

Z skewness = skewness value / SE skewness; Z excess kurtosis = excess kurtosis value / SE excess kurtosis.

3.3. Normality Tests

The Kolmogorov-Smirnov test is based on the Empirical Distribution Function (EDF) and evaluates the divergence between the sample distribution and a hypothetical distribution (the normal distribution) [10], while the Shapiro-Wilk test is based on the correlation between the data and the corresponding normal scores [5].

Normality tests have a problem of sensitivity to sample size [12]. With small samples, even when the sample distribution deviates from normality, the test often lacks the power to reject the null hypothesis [5]. Conversely, with large samples, even small deviations from normality increase the probability that the test rejects the null hypothesis [12].

The Kolmogorov-Smirnov test alone is sensitive to outliers; to overcome this problem, Lilliefors added a correction of the critical value for rejecting the null hypothesis, resulting in the Lilliefors test (also known as the “Lilliefors-corrected K-S test”). Even so, the Lilliefors test remains less powerful than the Shapiro-Wilk test in detecting departures from normality [5] [12] [13].

Although the Lilliefors test uses the same formula as the Kolmogorov-Smirnov test, the critical values for rejecting the null hypothesis differ between the two tests because of the Lilliefors correction, so the conclusions regarding the normality of the data may differ [13].

Like the other statistical tests, the researcher should state the hypotheses before calculation. Generally, the hypotheses are: the null hypothesis (H0: the sample distribution is normal), which is equivalent to a nonsignificant test, and the alternative hypothesis (Ha: the sample distribution is nonnormal), which is equivalent to a significant test [5] [6].
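In practice, both tests are available in standard Python libraries; a minimal sketch, assuming scipy for the Shapiro-Wilk test and statsmodels for the Lilliefors-corrected K-S test, with simulated data:

```python
# A minimal sketch of the normality tests; H0: the sample distribution is normal.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(2)
sample = rng.normal(100, 15, 60)              # hypothetical sample

w, p_sw = stats.shapiro(sample)               # Shapiro-Wilk test
d, p_lf = lilliefors(sample, dist="norm")     # Lilliefors-corrected K-S test

alpha = 0.05
for name, p in [("Shapiro-Wilk", p_sw), ("Lilliefors", p_lf)]:
    verdict = "do not reject H0 (compatible with normality)" if p > alpha else "reject H0 (nonnormal)"
    print(f"{name}: p = {p:.3f} -> {verdict}")
```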

As mentioned above, each of these methods has its own advantages and disadvantages, and Table 3 attempts to give an overview of these methods to help the researcher make the right choice.

Table 3. Comparison between normality assessment methods.

| Method | Type | Sample size |
| Graphical methods | Subjective | Cannot be used for small sample sizes |
| Numerical methods | Less subjective | Cannot be used for small sample sizes |
| Normality tests | Objective | Can be used for small sample sizes |

4. Data Transformation

For continuous data with a nonnormal distribution, a transformation of the data is performed to make the distribution as “normal” as possible, which makes the data suitable for parametric tests [14] [15], although this is not always guaranteed. The selection of the right transformation differs between two situations: transforming a single group, or comparing two or more groups.

4.1. One Group

Depending on the direction and degree of skewness of the data distribution, various transformation methods can be used: reciprocal, logarithmic, square root, square power and cubic power [2] [15] (Table 4).

Table 4. Data transformations [2] [15] [16].

| Ladder of power (λ in x^λ) | Transformation function | Name | Skewness of the original data |
| λ = −1 | 1/x | Reciprocal | Positively (right) skewed (λ < 1) |
| λ = 0 | log base(x) | Logarithmic | Positively (right) skewed (λ < 1) |
| λ = 0.5 | √x | Square root | Positively (right) skewed (λ < 1) |
| λ = 1 | x | Original data | Not skewed (λ = 1) |
| λ = 2 | x² | Square power | Negatively (left) skewed (λ > 1) |
| λ = 3 | x³ | Cubic power | Negatively (left) skewed (λ > 1) |

Transformation function: y = x^λ; x: original data; y: transformed data.

In logarithmic transformation, it is possible to use either natural logarithms (ln), with base e (= 2.7182818…), or common logarithms (log), with base 10 [17] [18], but the common logarithm has the advantage of being easier to interpret and to check than the natural logarithm [15] [18].

From Table 4, the more the data are right-skewed, the lower the ladder of power should be, and the more the data are left-skewed, the higher the ladder of power should be [16].
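The ladder can be explored empirically. A minimal sketch follows, with an illustrative helper ladder_transform (not from the source) applied to simulated right-skewed data, showing how skewness changes as λ decreases.

```python
# A minimal ladder-of-power sketch; lambda = 0 is treated as the logarithmic rung.
import numpy as np
from scipy import stats

def ladder_transform(x, lam):
    x = np.asarray(x, dtype=float)
    return np.log10(x) if lam == 0 else x ** lam

rng = np.random.default_rng(3)
x = rng.lognormal(mean=3, sigma=0.8, size=200)   # strongly right-skewed data

for lam in (-1, 0, 0.5, 1, 2):                   # walk along the ladder
    y = ladder_transform(x, lam)
    print(f"lambda = {lam:>4}: skewness = {stats.skew(y):+.2f}")
# Right-skewed data call for lambda < 1; the rung that brings skewness
# closest to zero is the candidate transformation.
```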

Before applying any transformation to the data, some mathematical assumptions should be respected in order to allow the transformation and, in the end, to obtain a good interpretation (Table 5).

Table 5. Mathematical assumptions for transformations [3] [14] [15] [17]-[19].

| Transformation | Mathematical assumption | Solution |
| Reciprocal | All values must be different from zero (y ≠ 0) | — |
| Logarithmic | All values must be strictly greater than zero (y > 0) | Add a constant to move the minimum value of the distribution above zero |
| Square root | All values must be greater than or equal to zero (y ≥ 0) | Add a constant to move the minimum value of the distribution to zero |

Also, mathematically speaking, for these three transformations, as well as for the power transformations, numbers greater than 0 and less than 1 (the interval ]0, 1[) behave differently from 0, 1 and numbers greater than 1 [3] [17] (Table 6, Figure 1).

Table 6. Effect of transformation on raw data [3] [17].

| Interval | Reciprocal | Logarithm | Square root | Power |
| Data order | Reverses RD’s order | Conserves RD’s order | Conserves RD’s order | Conserves RD’s order |
| 0 | Not defined | Not defined | 0 | 0 |
| ]0, 1[ | RD < TrD | Negative values | RD < TrD | RD > TrD |
| 1 | 1 | 0 | 1 | 1 |
| ]1, +∞[ | RD > TrD | Positive values | RD > TrD | RD < TrD |

RD: Raw Data; TrD: Transformed Data.

Figure 1. Original vs transformed data.

As a solution, the researcher adds a positive constant to all raw data values to bring the minimum value to 1 [15] [17].

After a reciprocal transformation, it is possible to multiply the transformed data by minus one (−1) to preserve the original data’s order; before the back-transformation, the researcher should again multiply by minus one (−1), so that the back-transformed data and the original data have the same order [16].
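Both fixes can be seen in a short numerical sketch; the data values below are hypothetical, chosen only to include zero and negative numbers.

```python
# A minimal sketch: shift the minimum to 1, then apply an order-preserving reciprocal.
import numpy as np

x = np.array([-2.0, 0.0, 0.5, 3.0, 8.0])       # hypothetical raw data with values <= 0

shift = 1 - x.min()                             # constant that brings the minimum to 1
x_shifted = x + shift                           # now safe for log, square root and reciprocal

y = -1 / x_shifted                              # reciprocal, negated to preserve the order
print(np.all(np.argsort(y) == np.argsort(x)))   # True: order preserved

x_back = 1 / (-y) - shift                       # negate again, invert, remove the shift
print(np.allclose(x_back, x))                   # True: the original data are recovered
```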

After choosing the right transformation and satisfying its assumptions, the researcher can transform the data, but there is no guarantee that the transformed data will be closer to the normal distribution [14]; in some cases, the transformation even makes the distribution more skewed than the original data [14].

If normality is satisfied after transformation, the calculation of the mean, its standard deviation and the confidence interval of the mean should be performed only on the transformed data [2]. The results obtained on the transformed data do not make sense by themselves, so a back-transformation of the confidence interval of the mean must be done, but only at the end, to express the results on the original scale [2]. The Standard Deviation (SD) is used to calculate the confidence interval of the mean on the transformed data, and a back-transformation of the SD to the original scale has no meaning [15] [19] [20].

Back-transformation is the last step and is very challenging [15]. For back-transformation, the researcher uses the inverse function (not the reciprocal function), and each transformation function has its own inverse function [21]. To get a proper back-transformation, the researcher follows these steps, taking the logarithmic back-transformation (specifically the common logarithm) as an example [15] [18] [21] [22]:

1) The researcher calculates the mean and the standard deviation of the transformed data.

2) The researcher calculates the confidence interval of the mean (it is preferable to use t values instead of z values for more precision):

CI 95% = Mean ± t(n − 1, α/2) × SD/√n

3) At the end, the researcher transforms back the confidence interval using the appropriate inverse function (Table 7).

As an example, for the common logarithm, the researcher uses antilogarithms (powers of base 10):

(10^(Mean − t(n − 1, α/2) × SD/√n); 10^(Mean + t(n − 1, α/2) × SD/√n))
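Put together, the whole procedure fits in a few lines; a minimal sketch for the common-logarithm case, with simulated right-skewed data (the back-transformed interval is a confidence interval for the geometric mean on the original scale):

```python
# A minimal back-transformation sketch for a common-log (base 10) transformation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.lognormal(mean=2.5, sigma=0.6, size=40)   # right-skewed raw data, all > 0

y = np.log10(x)                                    # transform
mean, sd, n = y.mean(), y.std(ddof=1), len(y)      # step 1: mean and SD of transformed data

t_crit = stats.t.ppf(0.975, df=n - 1)              # step 2: 95% CI using t values
lo, hi = mean - t_crit * sd / np.sqrt(n), mean + t_crit * sd / np.sqrt(n)

# Step 3: back-transform the CI with the inverse function (antilog, base 10).
print(f"Geometric mean: {10**mean:.2f}, 95% CI: ({10**lo:.2f}, {10**hi:.2f})")
```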

Table 7. The transformation functions and their inverse functions.

| Ladder of power (λ in x^λ) | Transformation function | Name | Inverse function | Name |
| λ = −1 | 1/x | Reciprocal | 1/y | Reciprocal |
| λ = 0 | log base(x) | Logarithmic | base^y | Exponential |
| λ = 0.5 | √x | Square root | y² | Square power |
| λ = 1 | x | Original data | y | Original data |
| λ = 2 | x² | Square power | √y | Square root |
| λ = 3 | x³ | Cubic power | ∛y | Cube root |

Transformation function: y = x^λ; inverse function: x = y^(1/λ); x: original data; y: transformed data.

4.2. Comparison between Two or More Groups

In this situation, the researcher does not transform the data of each group separately; instead, he uses what is known as a “variance-stabilizing transformation”, whose name indicates that its main objective is to achieve homoscedasticity between groups [19]. To choose the right variance-stabilizing transformation, the researcher follows these steps [2] [19] [23]:

1) For each group, the researcher calculates the mean and the variance (or standard deviation).

2) For each group, the researcher plots the variance (or standard deviation) against its mean.

3) Depending on the proportionality between the means and the variances (or standard deviations), the researcher chooses the appropriate transformation from Table 8 (a minimal code sketch follows the table).

Table 8. Transformations for comparison between two or more groups [2] [19] [23].

| Proportionality | Mean | Mean² | Mean⁴ |
| Standard deviation proportional to… | Logarithmic transformation | Reciprocal transformation | — |
| Variance proportional to… | Square root transformation | Logarithmic transformation | Reciprocal transformation |
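Here is a minimal sketch of these three steps, with simulated groups whose standard deviation grows roughly in proportion to the mean (so Table 8 points to the logarithmic transformation):

```python
# A minimal variance-stabilizing-transformation sketch; the groups are simulated.
import numpy as np

rng = np.random.default_rng(5)
groups = [rng.lognormal(mu, 0.5, 50) for mu in (2.0, 2.5, 3.0)]   # hypothetical groups

for i, g in enumerate(groups, start=1):          # steps 1-2: mean and SD per group
    mean, sd = g.mean(), g.std(ddof=1)
    print(f"Group {i}: mean = {mean:7.2f}, SD = {sd:6.2f}, SD/mean = {sd/mean:.2f}")
# Step 3: a roughly constant SD/mean ratio means the SD is proportional to the mean,
# which points to the logarithmic transformation (Table 8). Plotting the SD against
# the mean for each group makes the proportionality easier to judge.
```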

However, back-transformation of the difference between means is more complicated and is beyond the scope of this article [20].

5. Nonparametric vs Parametric Tests

Nonparametric tests are used for all data that can be ranked: this includes ordinal data, for which parametric tests are usually inappropriate, as well as quantitative data with a nonnormal distribution. Parametric tests, however, are used only for continuous data that are normally distributed [1] [7] [24] [25].

For quantitative data with a nonnormal distribution, data transformations can be applied to approximate a normal distribution; if the transformed data satisfy normality, parametric tests can be applied, but if they do not, the researcher has to use nonparametric tests [7] [25] [26].

Before applying any test, the researcher should check its assumptions. Nonparametric tests do not require any assumptions about the data distribution, whereas parametric tests require assumptions such as the assumption of normality, which specifies that the sample data are normally distributed, and the assumption of equal variances, which specifies that the variances of the samples are equal [7] [25] [27] (Figure 2).

In calculations, nonparametric tests use the ranks of the data instead of their actual values, which may cause a loss of information, while parametric tests use the data’s actual values [3] [5]-[7].

Figure 2. Decision tree for nonparametric and parametric tests use.

Nonparametric tests (for example, the Mann-Whitney U test) do not directly estimate the parameter of interest (the median) between groups; they can be interpreted as comparing medians only under the assumption that the distribution has the same shape in both groups and differs only in the location of the parameter (the median). Parametric tests, in contrast, estimate the parameter of interest (the mean) between groups under their assumptions [25].

As a result, nonparametric tests only calculate a p-value and do not estimate confidence intervals for the parameter, as opposed to parametric tests, which calculate a p-value and also provide confidence intervals for the parameter [24].

For quantitative variables with well-satisfied assumptions (normal distribution, equality of variances), parametric tests have more power to detect differences between groups, while nonparametric tests have less power in this situation. Nonparametric tests, however, have the advantage of reducing the risk of incorrect conclusions, because they make no assumptions about the data [1] [7] [24] [27] (Table 9).
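To make the contrast concrete, here is a minimal sketch comparing two independent groups both ways, with simulated normal data (where the t test is expected to be the more powerful choice):

```python
# A minimal sketch: parametric vs nonparametric comparison of two independent groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
group_a = rng.normal(100, 15, 30)            # hypothetical control group
group_b = rng.normal(110, 15, 30)            # hypothetical treatment group

w_a, p_a = stats.shapiro(group_a)            # check normality in each group first
w_b, p_b = stats.shapiro(group_b)
print(f"Shapiro-Wilk p-values: {p_a:.3f}, {p_b:.3f}")

t, p_t = stats.ttest_ind(group_a, group_b)   # 2-sample t test (normality, equal variances)
print(f"2-sample t test:     p = {p_t:.4f}")

u, p_u = stats.mannwhitneyu(group_a, group_b)   # Mann-Whitney U test (rank-based)
print(f"Mann-Whitney U test: p = {p_u:.4f}")
```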

Table 9. Parametric tests vs nonparametric tests [1] [24] [25] [28] [29].

| | Nonparametric test | Parametric test |
| Distribution | Nonnormal distribution | Normal distribution |
| Shape | Non-bell-shaped | Bell-shaped |
| Center | Median | Mean |
| Spread | Quartiles | Standard deviation |
| Comparison: 2 independent groups | Mann-Whitney U test | 2-sample t test |
| Comparison: >2 independent groups | Kruskal-Wallis test | 1-way analysis of variance (ANOVA) |
| Comparison: 2 paired groups | Wilcoxon test | Paired-sample t test |
| Comparison: >2 paired groups | Friedman test | 1-way repeated-measures ANOVA |
| Results | Calculate a p-value; do not estimate the parameter; do not provide the parameter’s confidence intervals | Calculate a p-value; estimate the parameter; provide the parameter’s confidence intervals |
| Correlation | Spearman correlation | Pearson correlation |

6. Conclusions

In inferential statistics, the difficulty of choosing between nonparametric and parametric tests is limited to continuous data, since nonparametric tests are the only choice for qualitative data.

For continuous variables, parametric tests are preferred because of their power to detect a relation between two variables, but normality is a very important condition that must be satisfied.

Assessing normality is not an easy task; the researcher can conclude that the distribution is normal when all the assessment methods, each applied under its own conditions, point toward normality.

In the case of a nonnormal distribution with a small sample size, it is preferable to use nonparametric tests directly, but with a nonnormal distribution and a large sample size, the researcher can try data transformation, provided that he masters this technique and its back-transformation.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Hopkins, S., Dettori, J.R. and Chapman, J.R. (2018) Parametric and Nonparametric Tests in Spine Research: Why Do They Matter? Global Spine Journal, 8, 652-654.
https://doi.org/10.1177/2192568218782679
[2] Manikandan, S. (2010) Data Transformation. Journal of Pharmacology and Pharmacotherapeutics, 1, 126-127.
https://doi.org/10.4103/0976-500x.72373
[3] Osborne, J.W. (2002) Notes on the Use of Data Transformations. Practical Assessment, Research & Evaluation, 8, Article No. 6.
https://openpublishing.library.umass.edu/pare/article/id/1455/
[4] Gupta, A., Mishra, P., Pandey, C., Singh, U., Sahu, C. and Keshri, A. (2019) Descriptive Statistics and Normality Tests for Statistical Data. Annals of Cardiac Anaesthesia, 22, 67-72.
https://doi.org/10.4103/aca.aca_157_18
[5] Ghasemi, A. and Zahediasl, S. (2012) Normality Tests for Statistical Analysis: A Guide for Non-Statisticians. International Journal of Endocrinology and Metabolism, 10, 486-489.
https://doi.org/10.5812/ijem.3505
[6] Kwak, S.G. and Park, S. (2019) Normality Test in Clinical Research. Journal of Rheumatic Diseases, 26, 5-11.
https://doi.org/10.4078/jrd.2019.26.1.5
[7] Kim, H. (2014) Statistical Notes for Clinical Researchers: Nonparametric Statistical Methods: 1. Nonparametric Methods for Comparing Two Groups. Restorative Dentistry & Endodontics, 39, 235-239.
https://doi.org/10.5395/rde.2014.39.3.235
[8] Kim, H. (2013) Statistical Notes for Clinical Researchers: Assessing Normal Distribution (2) Using Skewness and Kurtosis. Restorative Dentistry & Endodontics, 38, 52-54.
https://doi.org/10.5395/rde.2013.38.1.52
[9] Hazra, A. and Gogtay, N. (2016) Biostatistics Series Module 1: Basics of Biostatistics. Indian Journal of Dermatology, 61, 10-20.
https://doi.org/10.4103/0019-5154.173988
[10] Rani Das, K. (2016) A Brief Review of Tests for Normality. American Journal of Theoretical and Applied Statistics, 5, 5-12.
https://doi.org/10.11648/j.ajtas.20160501.12
[11] Barton, B. and Peat, J. (2014) Medical Statistics: A Guide to SPSS, Data Analysis and Critical Appraisal. 2nd Edition, Wiley.
[12] Kim, H. (2012) Statistical Notes for Clinical Researchers: Assessing Normal Distribution (1). Restorative Dentistry & Endodontics, 37, 245-248.
https://doi.org/10.5395/rde.2012.37.4.245
[13] Razali, N.M. and Wah, Y.B. (2011) Power Comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling Tests. Journal of Statistical Modeling and Analytics, 2, 21-33.
[14] Feng, C., et al. (2014) Log-Transformation and Its Implications for Data Analysis. Shanghai Archives of Psychiatry, 26, 105-109.
[15] Lee, D.K. (2020) Data Transformation: A Focus on the Interpretation. Korean Journal of Anesthesiology, 73, 503-508.
https://doi.org/10.4097/kja.20137
[16] Kirchner, J. (2001) Toolkit 3: Tools for Transforming Data.
https://www.envidat.ch/dataset/4652e79e-1ad4-4045-a6c8-1b621ba18b28/resource/bb3d4263-4c88-4886-bc74-ba48f0398144/download/toolkit-03-transforming-distributions.pdf
[17] Osborne, J. (2010) Improving Your Data Transformations: Applying the Box-Cox Transformation. Practical Assessment, Research & Evaluation, 15, Article No. 12.
https://openpublishing.library.umass.edu/pare/article/id/1546/
[18] West, R.M. (2021) Best Practice in Statistics: The Use of Log Transformation. Annals of Clinical Biochemistry: International Journal of Laboratory Medicine, 59, 162-165.
https://doi.org/10.1177/00045632211050531
[19] Bland, J.M. (n.d.) Week 5: Transformations.
https://www-users.york.ac.uk/~mb55/msc/clinbio/week5/transfm_gif.pdf
[20] Bland, J.M. and Altman, D.G. (1996) Statistics Notes: Transformations, Means, and Confidence Intervals. British Medical Journal, 312, 1079-1079.
https://doi.org/10.1136/bmj.312.7038.1079
[21] BYJUS (2023) Inverse Function (Definition and Examples).
https://byjus.com/maths/inverse-functions/
[22] Olsson, U. (2005) Confidence Intervals for the Mean of a Log-Normal Distribution. Journal of Statistics Education, 13, 1-9.
https://doi.org/10.1080/10691898.2005.11910638
[23] Bland, J.M. and Altman, D.G. (1996) Statistics Notes: Transforming Data. British Medical Journal, 312, 770-770.
https://doi.org/10.1136/bmj.312.7033.770
[24] Teresa Politi, M., Carvalho Ferreira, J. and María Patino, C. (2021) Nonparametric Statistical Tests: Friend or Foe? Jornal Brasileiro de Pneumologia, 47, e20210292.
https://doi.org/10.36416/1806-3756/e20210292
[25] Schober, P. and Vetter, T.R. (2020) Nonparametric Statistical Methods in Medical Research. Anesthesia & Analgesia, 131, 1862-1863.
https://doi.org/10.1213/ane.0000000000005101
[26] Whitley, E. and Ball, J. (2002) Statistics Review 6: Nonparametric Methods. Critical Care, 6, Article No. 509.
https://doi.org/10.1186/cc1820
[27] Nahm, F.S. (2016) Nonparametric Statistical Tests for the Continuous Data: The Basic Concept and the Practical Use. Korean Journal of Anesthesiology, 69, 8-14.
https://doi.org/10.4097/kjae.2016.69.1.8
[28] Kitchen, C.M.R. (2009) Nonparametric vs Parametric Tests of Location in Biomedical Research. American Journal of Ophthalmology, 147, 571-572.
https://doi.org/10.1016/j.ajo.2008.06.031
[29] Bewick, V., Cheek, L. and Ball, J. (2004) Statistics Review 10: Further Nonparametric Methods. Critical Care, 8, Article No. 196.
https://doi.org/10.1186/cc2857

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.