
In this study, a method of solving ANOVA problems based on an unbalanced three-way mixed effects model with interaction, for data in which factors A and B are fixed and factor C is random, was presented, and the required expected mean squares (EMS) were derived. It was observed that, under each of the appropriate null hypotheses, none of the derived EMS was unbiased for the others. Unbiased estimators of the mean squares were therefore determined for testing the hypotheses. From these unbiased estimators, appropriate F-statistics as well as their corresponding pseudo-degrees of freedom were obtained. The theoretical results presented in the paper were illustrated using a numerical example.

The role of multi-factor experiments in agriculture, engineering and other fields cannot be overemphasized. Through a multi-factor experiment, it is possible to test the interaction effect of two or more factors. Sometimes, a multi-factor experiment conducted to compare factor levels and factor level combinations results in unbalanced data. Often, in the analysis of variance (ANOVA) for unbalanced data, an exact F-test does not exist. As a remedy to this problem, several authors have recommended methods of testing effects in various multi-factor ANOVA problems. Consequently, [

From the foregoing, it is clear that a reasonable number of studies have been carried out on the unbalanced two-way fixed effects, random effects and mixed effects models. However, not much attention has been given to unbalanced three-way analysis of variance problems, especially those requiring mixed factor effects. There are basically six cases of the unbalanced three-way mixed effects crossed classification model. This paper deals with hypothesis testing problems arising from one of the six cases, in which two of the three factors (A and B) are fixed and the other factor (C) is random. The remainder of this paper is organized as follows. Section 2 presents the model specification and the necessary notation. In Section 3, theoretical results pertaining to expected mean squares, F-statistics and the corresponding pseudo-degrees of freedom are derived. A numerical example and the conclusion are presented in Sections 4 and 5 respectively.

The three-way unbalanced mixed effects cross-classification model with interaction terms, in which factors A and B are fixed while factor C is random, is given by [

$$X_{ijkl} = \mu + A_i + B_j + C_k + (AB)_{ij} + (AC)_{ik} + (BC)_{jk} + (ABC)_{ijk} + e_{ijkl}, \quad \begin{cases} i = 1, 2, \cdots, a \\ j = 1, 2, \cdots, b \\ k = 1, 2, \cdots, c \\ l = 1, 2, \cdots, n_{ijk} \end{cases} \quad (1)$$

where $X_{ijkl}$ denotes the $l$th observation at the $i$th level of factor A, the $j$th level of factor B and the $k$th level of factor C; $\mu$ denotes the overall mean; $A_i$ denotes the effect of the $i$th level of factor A; $B_j$ denotes the effect of the $j$th level of factor B; $C_k$ denotes the effect of the $k$th level of factor C; $(AB)_{ij}$, $(AC)_{ik}$ and $(BC)_{jk}$ denote the effects of the two-factor interactions $A\times B$, $A\times C$ and $B\times C$, respectively; $(ABC)_{ijk}$ denotes the effect of the three-factor interaction $A\times B\times C$; and $e_{ijkl}$ denotes the customary error term.

Model (1) is called an unbalanced three-way mixed effects cross-classification model with interaction if it satisfies the following assumptions by [

Since factors A and B in Equation (1) are fixed while factor C is random, the model satisfies the following assumptions:

i) The effects $A_i$'s and $B_j$'s are assumed to be fixed, subject to the constraint

$$\sum_i^a A_i = \sum_j^b B_j = \sum_{ij}^{ab}(AB)_{ij} = 0;$$

ii) the $C_k$'s are assumed to be randomly and normally distributed with mean zero and variance $\sigma_C^2$, i.e. $C_k \sim N(0, \sigma_C^2)$; similarly, $(AC)_{ik} \sim N(0, \sigma_{AC}^2)$, $(BC)_{jk} \sim N(0, \sigma_{BC}^2)$ and $(ABC)_{ijk} \sim N(0, \sigma_{ABC}^2)$;

iii) the $C_k$'s are uncorrelated with one another and with the $e_{ijkl}$'s, that is,

$$E(C_kC_{k'}) = 0, \quad k \neq k'$$

and

$$E(C_ke_{ijkl}) = 0 \quad \text{for all } (i, j, k, l)\text{'s};$$

iv) the error terms are normally distributed with mean zero and variance $\sigma_e^2$, and they are mutually independent, i.e. $e_{ijkl} \sim N(0, \sigma_e^2)$.
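As an illustration of how data satisfying assumptions (i)-(iv) could be generated, the sketch below simulates unbalanced observations from Model (1). All effect sizes, variance components and cell-count ranges are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 3, 3, 4  # levels of fixed factors A, B and of the random factor C

# Hypothetical fixed effects satisfying the sum-to-zero constraints of assumption (i)
A = np.array([-1.0, 0.5, 0.5])
B = np.array([0.0, -2.0, 2.0])
AB = rng.normal(size=(a, b))
AB = AB - AB.mean(axis=0)                 # column sums zero
AB = AB - AB.mean(axis=1, keepdims=True)  # row sums zero (column sums stay zero)

# Random effects drawn as in assumption (ii); the variances are illustrative
C = rng.normal(0, 1.0, size=c)
AC = rng.normal(0, 0.8, size=(a, c))
BC = rng.normal(0, 0.8, size=(b, c))
ABC = rng.normal(0, 0.5, size=(a, b, c))

mu, sigma_e = 10.0, 1.0
data = []  # rows: (i, j, k, X_ijkl); unbalanced cell sizes n_ijk
for i in range(a):
    for j in range(b):
        for k in range(c):
            n_ijk = rng.integers(2, 5)  # unbalanced: 2-4 replicates per cell
            for _ in range(n_ijk):
                x = (mu + A[i] + B[j] + C[k] + AB[i, j] + AC[i, k]
                     + BC[j, k] + ABC[i, j, k] + rng.normal(0, sigma_e))
                data.append((i, j, k, x))
```

Because each $n_{ijk}$ is drawn separately, the resulting layout is unbalanced in the sense used throughout the paper.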

Under the assumptions above, we consider the following notations so as to derive the requisite expected mean squares.

Now, let

$$N = \sum_i^a N_i = \sum_j^b N_j = \sum_k^c N_k = \sum_i^a\sum_j^b n_{ij.} = \sum_i^a\sum_k^c n_{i.k} = \sum_j^b\sum_k^c n_{.jk} = \sum_i^a\sum_j^b\sum_k^c n_{ijk}, \quad (2)$$

where $n_{ij.} = \sum_k^c n_{ijk}$, $n_{i.k} = \sum_j^b n_{ijk}$ and $n_{.jk} = \sum_i^a n_{ijk}$.

The mean squares due to the three main effects and four interaction terms for Model (1) are

$$MS_A = \frac{\sum_i^a N_i(\bar{X}_{i...} - \bar{X}_{....})^2}{a-1}, \quad (3)$$

$$MS_B = \frac{\sum_j^b N_j(\bar{X}_{.j..} - \bar{X}_{....})^2}{b-1}, \quad (4)$$

$$MS_C = \frac{\sum_k^c N_k(\bar{X}_{..k.} - \bar{X}_{....})^2}{c-1}, \quad (5)$$

$$MS_{AB} = \frac{\sum_{ij}^{ab} n_{ij.}(\bar{X}_{ij..} - \bar{X}_{i...} - \bar{X}_{.j..} + \bar{X}_{....})^2}{(a-1)(b-1)}, \quad (6)$$

$$MS_{AC} = \frac{\sum_{ik}^{ac} n_{i.k}(\bar{X}_{i.k.} - \bar{X}_{i...} - \bar{X}_{..k.} + \bar{X}_{....})^2}{(a-1)(c-1)}, \quad (7)$$

$$MS_{BC} = \frac{\sum_{jk}^{bc} n_{.jk}(\bar{X}_{.jk.} - \bar{X}_{.j..} - \bar{X}_{..k.} + \bar{X}_{....})^2}{(b-1)(c-1)}, \quad (8)$$

$$MS_{ABC} = \frac{\sum_{ijk}^{abc} n_{ijk}(\bar{X}_{ijk.} - \bar{X}_{ij..} - \bar{X}_{i.k.} - \bar{X}_{.jk.} + \bar{X}_{i...} + \bar{X}_{.j..} + \bar{X}_{..k.} - \bar{X}_{....})^2}{(a-1)(b-1)(c-1)}, \quad (9)$$

and

$$MS_E = \frac{\sum_i^a\sum_j^b\sum_k^c\sum_l^{n_{ijk}}(X_{ijkl} - \bar{X}_{ijk.})^2}{N - abc}, \quad (10)$$

where $MS_A$, $MS_B$ and $MS_C$ are the mean squares for factors A, B and C; $MS_{AB}$, $MS_{AC}$ and $MS_{BC}$ are the mean squares for the interactions of factors A and B, A and C, and B and C, respectively; $MS_{ABC}$ is the mean square for the interaction of factors A, B and C; and $MS_E$ is the mean square for the error term.
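To make Equations (3) and (10) concrete, here is a minimal sketch that evaluates $MS_A$ and $MS_E$ for a small, hypothetical unbalanced data set; the cell values and replicate counts are invented for illustration only.

```python
import numpy as np

# Toy unbalanced data: cell (i, j, k) -> list of replicate observations X_ijkl
cells = {
    (0, 0, 0): [5.1, 4.9], (0, 0, 1): [6.0, 6.2, 5.8], (0, 1, 0): [4.0],
    (0, 1, 1): [5.5, 5.7], (1, 0, 0): [7.1, 6.9, 7.0], (1, 0, 1): [8.2],
    (1, 1, 0): [6.5, 6.6], (1, 1, 1): [7.8, 8.0, 7.9],
}
a, b, c = 2, 2, 2
N = sum(len(v) for v in cells.values())
grand = sum(x for v in cells.values() for x in v) / N  # Xbar_....

def level_mean(axis, lvl):
    """Weighted mean of all observations at one level of a factor (axis 0=A, 1=B, 2=C)."""
    xs = [x for key, v in cells.items() if key[axis] == lvl for x in v]
    return float(np.mean(xs))

# Eq. (3): MS_A = sum_i N_i (Xbar_i... - Xbar_....)^2 / (a - 1)
N_i = [sum(len(v) for key, v in cells.items() if key[0] == lvl) for lvl in range(a)]
MS_A = sum(N_i[lvl] * (level_mean(0, lvl) - grand) ** 2 for lvl in range(a)) / (a - 1)

# Eq. (10): MS_E = pooled within-cell sum of squares / (N - abc)
SS_E = sum(((np.asarray(v) - np.mean(v)) ** 2).sum() for v in cells.values())
MS_E = SS_E / (N - a * b * c)
```

The remaining mean squares (4)-(9) follow the same pattern, using the appropriate marginal means and cell counts.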

Using the brute-force method, the expected mean squares for Model (1), when factors A and B are fixed while factor C is random, are given in Theorem 1.

Theorem 1: Given the model in (1), the expected mean square due to factor A is

$$E[MS_A] = \frac{\sum_i^a N_iA_i^2}{a-1} + K_1\sigma_{AC}^2 + K_2\sigma_{ABC}^2 + \sigma_e^2,$$

where $K_1 = \dfrac{\sum_i^a N_i^{-1}\sum_k^c n_{i.k}^2 - N^{-1}\sum_i^a\sum_k^c n_{i.k}^2}{a-1}$, $K_2 = \dfrac{N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{a-1}$ and $\sigma_e^2$ is the error variance.

Proof:

Using Equation (1), we have

$$\bar{X}_{i...} = \mu + A_i + 0 + \bar{C}_{.} + 0 + (\overline{AC})_{i.} + (\overline{BC})_{..} + 0 + \bar{e}_{i...} \quad (11)$$

and

$$\bar{X}_{....} = \mu + 0 + 0 + \bar{C}_{.} + 0 + (\overline{AC})_{..} + (\overline{BC})_{..} + (\overline{ABC})_{...} + \bar{e}_{....} \quad (12)$$

Substituting (11) and (12) into (3) and taking expectations, we have

$$E[MS_A] = \frac{1}{a-1}E\left[\sum_i^a N_i\left[A_i + \left[(\overline{AC})_{i.} - (\overline{AC})_{..}\right] - (\overline{ABC})_{...} + (\bar{e}_{i...} - \bar{e}_{....})\right]^2\right].$$

Because the fixed effects are constants and the random effects and error terms are mutually independent with zero means, all cross-product terms vanish in expectation, so that

$$E[MS_A] = \frac{1}{a-1}E\left[\sum_i^a N_i\left[A_i^2 + \left[(\overline{AC})_{i.} - (\overline{AC})_{..}\right]^2 + (\overline{ABC})_{...}^2 + (\bar{e}_{i...} - \bar{e}_{....})^2\right]\right],$$

where

$$\sum_i^a N_i(\overline{AC})_{i.} = N(\overline{AC})_{..}, \quad E\left[(\overline{AC})_{..}^2\right] = \frac{\sum_i^a\sum_k^c n_{i.k}^2\,\sigma_{AC}^2}{N^2} \quad \text{and} \quad E\left[(\overline{ABC})_{...}^2\right] = \frac{\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2\,\sigma_{ABC}^2}{N^2}.$$

Expanding the squares and using these identities,

$$\begin{aligned}
E[MS_A] &= \frac{1}{a-1}E\left[\sum_i^a N_iA_i^2 + \sum_i^a N_i(\overline{AC})_{i.}^2 - N(\overline{AC})_{..}^2 + N(\overline{ABC})_{...}^2 + \sum_i^a N_i\left(\bar{e}_{i...}^2 - 2\bar{e}_{i...}\bar{e}_{....} + \bar{e}_{....}^2\right)\right]\\
&= \frac{1}{a-1}\left[\sum_i^a N_iE(A_i^2) + \left(\sum_i^a N_i^{-1}\sum_k^c n_{i.k}^2 - N^{-1}\sum_i^a\sum_k^c n_{i.k}^2\right)\sigma_{AC}^2 + N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2\,\sigma_{ABC}^2 + \sum_i^a N_i\frac{\sigma_e^2}{N_i} - 2N\frac{\sigma_e^2}{N} + N\frac{\sigma_e^2}{N}\right]\\
&= \frac{1}{a-1}\left[\sum_i^a N_iA_i^2 + \left(\sum_i^a N_i^{-1}\sum_k^c n_{i.k}^2 - N^{-1}\sum_i^a\sum_k^c n_{i.k}^2\right)\sigma_{AC}^2 + N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2\,\sigma_{ABC}^2 + (a-1)\sigma_e^2\right].
\end{aligned}$$

Hence

$$E[MS_A] = \frac{\sum_i^a N_iA_i^2}{a-1} + K_1\sigma_{AC}^2 + K_2\sigma_{ABC}^2 + \sigma_e^2,$$

where we have also used $\sum_i^a N_i\bar{e}_{i...} = N\bar{e}_{....}$. This completes the proof.

Similarly, if $MS_B$ and $MS_C$ denote the mean squares due to factor B and factor C respectively, then

$$E[MS_B] = \frac{\sum_j^b N_jB_j^2}{b-1} + K_3\sigma_{BC}^2 + K_4\sigma_{ABC}^2 + \sigma_e^2, \quad (13)$$

$$E[MS_C] = K_\theta\sigma_C^2 + K_5\sigma_{AC}^2 + K_6\sigma_{BC}^2 + K_7\sigma_{ABC}^2 + \sigma_e^2, \quad (14)$$

$$E[MS_{AB}] = \frac{\sum_i^a\sum_j^b n_{ij.}(AB)_{ij}^2}{(a-1)(b-1)} + K_8\sigma_{ABC}^2 + \sigma_e^2, \quad (15)$$

$$E[MS_{AC}] = K_9\sigma_{AC}^2 + K_{10}\sigma_{ABC}^2 + \sigma_e^2, \quad (16)$$

$$E[MS_{BC}] = K_{11}\sigma_{BC}^2 + K_{12}\sigma_{ABC}^2 + \sigma_e^2, \quad (17)$$

$$E[MS_{ABC}] = K_{13}\sigma_{ABC}^2 + \sigma_e^2, \quad (18)$$

and

$$E[MS_E] = \sigma_e^2, \quad (19)$$

where:

$$K_1 = \frac{\sum_i^a N_i^{-1}\sum_k^c n_{i.k}^2 - N^{-1}\sum_i^a\sum_k^c n_{i.k}^2}{a-1}, \quad K_2 = \frac{N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{a-1},$$

$$K_3 = \frac{\sum_j^b N_j^{-1}\sum_k^c n_{.jk}^2 - N^{-1}\sum_j^b\sum_k^c n_{.jk}^2}{b-1}, \quad K_4 = \frac{N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{b-1},$$

$$K_5 = \frac{N^{-1}\sum_i^a\sum_k^c n_{i.k}^2}{c-1}, \quad K_6 = \frac{N^{-1}\sum_j^b\sum_k^c n_{.jk}^2}{c-1}, \quad K_7 = \frac{N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{c-1},$$

$$K_8 = \frac{\sum_i^a\sum_j^b n_{ij.}^{-1}\sum_k^c n_{ijk}^2 - N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{(a-1)(b-1)},$$

$$K_9 = \frac{N - \sum_i^a N_i^{-1}\sum_k^c n_{i.k}^2 + N^{-1}\sum_i^a\sum_k^c n_{i.k}^2}{(a-1)(c-1)}, \quad K_{10} = \frac{N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{(a-1)(c-1)},$$

$$K_{11} = \frac{N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{(b-1)(c-1)}, \quad K_{12} = \frac{N - \sum_j^b N_j^{-1}\sum_k^c n_{.jk}^2 + N^{-1}\sum_j^b\sum_k^c n_{.jk}^2}{(b-1)(c-1)},$$

$$K_{13} = \frac{N - \sum_i^a\sum_j^b n_{ij.}^{-1}\sum_k^c n_{ijk}^2 + N^{-1}\sum_i^a\sum_j^b\sum_k^c n_{ijk}^2}{(a-1)(b-1)(c-1)}. \quad (20)$$
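The constants in (20) depend only on the cell counts $n_{ijk}$. The sketch below computes $K_1$ and $K_2$ (the coefficients appearing in $E[MS_A]$) for a hypothetical array of unbalanced cell counts; the counts are invented for illustration.

```python
import numpy as np

# Hypothetical unbalanced cell counts n_ijk for a = 2, b = 2, c = 3
n = np.array([[[2, 3, 2], [1, 2, 2]],
              [[3, 2, 1], [2, 2, 3]]], dtype=float)
a, b, c = n.shape
N = n.sum()
N_i = n.sum(axis=(1, 2))   # N_i: total count at each level of A
n_i_k = n.sum(axis=1)      # n_{i.k}: A x C marginal counts, shape (a, c)

# K1 and K2 as in Eq. (20): coefficients of sigma_AC^2 and sigma_ABC^2 in E[MS_A]
K1 = ((1 / N_i * (n_i_k ** 2).sum(axis=1)).sum() - (n_i_k ** 2).sum() / N) / (a - 1)
K2 = (n ** 2).sum() / N / (a - 1)
```

The remaining constants follow from the same marginal-count manipulations, replacing $N_i$ and $n_{i.k}$ with the appropriate marginals.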

A major step in the derivation of the F-statistics is to find the unbiased estimates of the mean squares due to the main factors and the interactions. Therefore, the unbiased estimates are presented as follows.

Theorem 2: Given the model in (1), if factors A and B are fixed while factor C is random, then $MS_\phi = \phi_1MS_{AC} + \phi_2MS_{ABC} + (1-\phi_1-\phi_2)MS_E$ is an unbiased estimate of

$$K_1\sigma_{AC}^2 + K_2\sigma_{ABC}^2 + \sigma_e^2,$$

the expected value of $MS_A$ under the null hypothesis of no factor-A effects. Here, $MS_\phi$ is the unbiased estimate of the mean square for factor A.

Proof:

Assuming that $MS_{AC}$, $MS_{ABC}$ and $MS_E$ are independent, we take expectations to obtain

$$E(MS_\phi) = \phi_1E(MS_{AC}) + \phi_2E(MS_{ABC}) + (1-\phi_1-\phi_2)E(MS_E)$$

$$= \phi_1(K_9\sigma_{AC}^2 + K_{10}\sigma_{ABC}^2 + \sigma_e^2) + \phi_2(K_{13}\sigma_{ABC}^2 + \sigma_e^2) + (1-\phi_1-\phi_2)\sigma_e^2$$

$$= \phi_1K_9\sigma_{AC}^2 + (\phi_1K_{10} + \phi_2K_{13})\sigma_{ABC}^2 + \sigma_e^2.$$

Taking

$$\phi_1 = \frac{K_1}{K_9}, \quad \phi_2 = \frac{K_2K_9 - K_1K_{10}}{K_9K_{13}}, \quad (21)$$

we obtain

$$E(MS_\phi) = \frac{K_1}{K_9}K_9\sigma_{AC}^2 + \left(\frac{K_1K_{10}}{K_9} + \frac{K_2K_9 - K_1K_{10}}{K_9}\right)\sigma_{ABC}^2 + \sigma_e^2$$

$$E(MS_\phi) = K_1\sigma_{AC}^2 + K_2\sigma_{ABC}^2 + \sigma_e^2, \quad \text{as required.}$$

Similarly, it can be shown that $MS_\lambda = \lambda_1MS_{BC} + \lambda_2MS_{ABC} + (1-\lambda_1-\lambda_2)MS_E$ is an unbiased estimate of

$$K_3\sigma_{BC}^2 + K_4\sigma_{ABC}^2 + \sigma_e^2,$$

the null expected value of $MS_B$; that $MS_\gamma = \gamma_1MS_{AC} + \gamma_2MS_{BC} + \gamma_3MS_{ABC} + (1-\gamma_1-\gamma_2-\gamma_3)MS_E$ is an unbiased estimate of

$$K_5\sigma_{AC}^2 + K_6\sigma_{BC}^2 + K_7\sigma_{ABC}^2 + \sigma_e^2,$$

the null expected value of $MS_C$; that $MS_\pi = \pi_1MS_{ABC} + (1-\pi_1)MS_E$ is an unbiased estimate of

$$K_8\sigma_{ABC}^2 + \sigma_e^2,$$

the null expected value of $MS_{AB}$; that $MS_\rho = \rho_1MS_{AC} + \rho_2MS_{ABC} + (1-\rho_1-\rho_2)MS_E$ is an unbiased estimate of

$$K_9\sigma_{AC}^2 + K_{10}\sigma_{ABC}^2 + \sigma_e^2,$$

the expected value of $MS_{AC}$;

and that $MS_\tau = \tau_1MS_{BC} + \tau_2MS_{ABC} + (1-\tau_1-\tau_2)MS_E$ is an unbiased estimate of

$$K_{11}\sigma_{BC}^2 + K_{12}\sigma_{ABC}^2 + \sigma_e^2,$$

the expected value of $MS_{BC}$.
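As a numeric sanity check of the unbiasedness argument in Theorem 2 (using arbitrary illustrative values of the constants, not the paper's), note that the coefficients of $\sigma_{AC}^2$, $\sigma_{ABC}^2$ and $\sigma_e^2$ in $E(MS_\phi)$ collapse to $K_1$, $K_2$ and 1 once $\phi_1 = K_1/K_9$ and $\phi_2 = (K_2K_9 - K_1K_{10})/(K_9K_{13})$:

```python
# Illustrative placeholder constants (not the paper's values)
K1, K2, K9, K10, K13 = 7.7, 1.25, 11.2, 0.42, 4.83
phi1 = K1 / K9
phi2 = (K2 * K9 - K1 * K10) / (K9 * K13)

# E(MS_phi) = phi1*E(MS_AC) + phi2*E(MS_ABC) + (1 - phi1 - phi2)*E(MS_E);
# substitute the expected mean squares (16), (18), (19) and collect coefficients.
coef_AC = phi1 * K9                         # coefficient of sigma_AC^2
coef_ABC = phi1 * K10 + phi2 * K13          # coefficient of sigma_ABC^2
coef_e = phi1 + phi2 + (1 - phi1 - phi2)    # coefficient of sigma_e^2
```

Whatever positive values the constants take, `coef_AC` equals $K_1$, `coef_ABC` equals $K_2$ and `coef_e` equals 1, which is exactly the unbiasedness claim.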

The F-statistics for the main effects and the interaction effects are given below:

$$F_A = \frac{MS_A}{MS_\phi}, \quad (22)$$

$$F_B = \frac{MS_B}{MS_\lambda}, \quad (23)$$

$$F_C = \frac{MS_C}{MS_\gamma}, \quad (24)$$

$$F_{AB} = \frac{MS_{AB}}{MS_\pi}, \quad (25)$$

$$F_{AC} = \frac{MS_{AC}}{MS_\rho}, \quad (26)$$

$$F_{BC} = \frac{MS_{BC}}{MS_\tau}, \quad (27)$$

and

$$F_{ABC} = \frac{MS_{ABC}}{MS_e}, \quad (28)$$

where $F_A$, $F_B$ and $F_C$ are the F-statistics for factors A, B and C; $F_{AB}$, $F_{AC}$ and $F_{BC}$ are the F-statistics for the interactions of factors A and B, A and C, and B and C, respectively; and $F_{ABC}$ is the F-statistic for the interaction of factors A, B and C.

Having presented the necessary F-statistics, we also have to determine the pseudo-degrees of freedom corresponding to these statistics. Using the [

Theorem 3: Given the model in (1) and the Welch-Satterthwaite equation, let $f_\phi$ be the pseudo-degree of freedom for factor A. Then

$$f_\phi = (MS_\phi)^2\left[\frac{\phi_1^2(MS_{AC})^2}{f_{AC}} + \frac{\phi_2^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\phi_1-\phi_2)^2(MS_e)^2}{f_e}\right]^{-1}. \quad (29)$$

Proof:

Recall that

$$MS_\phi = \phi_1MS_{AC} + \phi_2MS_{ABC} + (1-\phi_1-\phi_2)MS_e. \quad (30)$$

Assuming that $MS_{AC}$, $MS_{ABC}$ and $MS_e$ are independent, we obtain

$$\operatorname{var}(MS_\phi) = \phi_1^2\operatorname{var}(MS_{AC}) + \phi_2^2\operatorname{var}(MS_{ABC}) + (1-\phi_1-\phi_2)^2\operatorname{var}(MS_e). \quad (31)$$

Recall that $f_e = N - abc$. Since

$$MS_e = \frac{\sum_i^a\sum_j^b\sum_k^c\sum_l^{n_{ijk}}(X_{ijkl} - \bar{X}_{ijk.})^2}{N - abc} = \frac{SS_e}{f_e} \quad \text{and} \quad \frac{SS_e}{\sigma_e^2} \sim \chi^2_{f_e},$$

we have

$$\operatorname{var}(MS_e) = \frac{\operatorname{var}(SS_e)}{f_e^2} = \frac{2(\sigma_e^2)^2f_e}{f_e^2} = \frac{2(\sigma_e^2)^2}{f_e}, \quad (32)$$

and likewise

$$\operatorname{var}(MS_\phi) = \frac{2(\sigma_\phi^2)^2}{f_\phi}, \quad \operatorname{var}(MS_{AC}) = \frac{2(\sigma_{AC}^2)^2}{f_{AC}}, \quad \operatorname{var}(MS_{BC}) = \frac{2(\sigma_{BC}^2)^2}{f_{BC}}, \quad \operatorname{var}(MS_{ABC}) = \frac{2(\sigma_{ABC}^2)^2}{f_{ABC}}. \quad (33)$$

Substituting (33) into (31) leads to

$$\frac{2(\sigma_\phi^2)^2}{f_\phi} = \phi_1^2\frac{2(\sigma_{AC}^2)^2}{f_{AC}} + \phi_2^2\frac{2(\sigma_{ABC}^2)^2}{f_{ABC}} + (1-\phi_1-\phi_2)^2\frac{2(\sigma_e^2)^2}{f_e},$$

where each variance is estimated by the corresponding mean square: $\hat\sigma_e^2 = MS_e$, $\hat\sigma_\phi^2 = MS_\phi$, $\hat\sigma_{AC}^2 = MS_{AC}$, $\hat\sigma_{BC}^2 = MS_{BC}$ and $\hat\sigma_{ABC}^2 = MS_{ABC}$. Consequently,

$$\frac{2(MS_\phi)^2}{f_\phi} = \frac{2\phi_1^2(MS_{AC})^2}{f_{AC}} + \frac{2\phi_2^2(MS_{ABC})^2}{f_{ABC}} + \frac{2(1-\phi_1-\phi_2)^2(MS_e)^2}{f_e},$$

and solving for $f_\phi$ gives

$$f_\phi = (MS_\phi)^2\left[\frac{\phi_1^2(MS_{AC})^2}{f_{AC}} + \frac{\phi_2^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\phi_1-\phi_2)^2(MS_e)^2}{f_e}\right]^{-1}. \quad (34)$$
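The pseudo-degrees-of-freedom formula (34), and its analogues below, can be wrapped in one small helper. The mean squares, weights and degrees of freedom in the usage line are illustrative stand-ins, not values from the paper.

```python
def satterthwaite_df(ms, weights, dfs):
    """Welch-Satterthwaite pseudo-degrees of freedom for a weighted sum of
    independent mean squares, as in Eq. (34):
    f = (sum_r w_r * MS_r)^2 / sum_r [(w_r * MS_r)^2 / f_r]."""
    num = sum(w * m for w, m in zip(weights, ms)) ** 2
    den = sum((w * m) ** 2 / f for w, m, f in zip(weights, ms, dfs))
    return num / den

# Illustrative call: MS_AC, MS_ABC, MS_E with df 6, 12, 45 and phi-type weights
f_phi = satterthwaite_df([86.23, 193.74, 1.95], [0.69, 0.17, 0.14], [6, 12, 45])
```

When the weight on one mean square is 1 and the rest are 0, the formula returns that mean square's own degrees of freedom, which is a quick way to check the implementation.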

The degrees of freedom associated with $F_A$ are $(f_A, f_\phi)$, with critical value $F^{0.05}_{f_A,\,f_\phi}$.

Similarly,

$$f_\lambda = (MS_\lambda)^2\left[\frac{\lambda_1^2(MS_{BC})^2}{f_{BC}} + \frac{\lambda_2^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\lambda_1-\lambda_2)^2(MS_e)^2}{f_e}\right]^{-1} \quad (35)$$

The degrees of freedom associated with $F_B$ are $(f_B, f_\lambda)$, with critical value $F^{0.05}_{f_B,\,f_\lambda}$.

$$f_\gamma = (MS_\gamma)^2\left[\frac{\gamma_1^2(MS_{AC})^2}{f_{AC}} + \frac{\gamma_2^2(MS_{BC})^2}{f_{BC}} + \frac{\gamma_3^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\gamma_1-\gamma_2-\gamma_3)^2(MS_e)^2}{f_e}\right]^{-1} \quad (36)$$

The degrees of freedom associated with $F_C$ are $(f_C, f_\gamma)$, with critical value $F^{0.05}_{f_C,\,f_\gamma}$.

$$f_\pi = (MS_\pi)^2\left[\frac{\pi_1^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\pi_1)^2(MS_e)^2}{f_e}\right]^{-1} \quad (37)$$

The degrees of freedom associated with the interaction of factors A and B are $(f_{AB}, f_\pi)$, with critical value $F^{0.05}_{f_{AB},\,f_\pi}$.

$$f_\rho = (MS_\rho)^2\left[\frac{\rho_1^2(MS_{AC})^2}{f_{AC}} + \frac{\rho_2^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\rho_1-\rho_2)^2(MS_e)^2}{f_e}\right]^{-1} \quad (38)$$

The degrees of freedom associated with the interaction of factors A and C are $(f_{AC}, f_\rho)$, with critical value $F^{0.05}_{f_{AC},\,f_\rho}$.

$$f_\tau = (MS_\tau)^2\left[\frac{\tau_1^2(MS_{BC})^2}{f_{BC}} + \frac{\tau_2^2(MS_{ABC})^2}{f_{ABC}} + \frac{(1-\tau_1-\tau_2)^2(MS_e)^2}{f_e}\right]^{-1} \quad (39)$$

The degrees of freedom associated with the interaction of factors B and C are $(f_{BC}, f_\tau)$, with critical value $F^{0.05}_{f_{BC},\,f_\tau}$.

The degrees of freedom associated with the interaction $A\times B\times C$ are $(f_{ABC}, f_e)$, with critical value $F^{\alpha}_{f_{ABC},\,f_e}$.

No additional expression needs to be derived in this last case. Here $f_\lambda$, $f_\gamma$, $f_\pi$, $f_\rho$ and $f_\tau$ represent the pseudo-degrees of freedom for factors B and C and for the interactions $A\times B$, $A\times C$ and $B\times C$ respectively, while $f_{ABC}$ and $f_e$ are the numerator and denominator degrees of freedom respectively (

| FACTORS | HYPOTHESIS | UNBIASED ESTIMATES OF THE MEAN SQUARES | APPROXIMATE F-STATISTIC |
| --- | --- | --- | --- |
| A | $H_{0A}: A_1 = \cdots = A_a = 0$ | $MS_\phi = \phi_1MS_{AC} + \phi_2MS_{ABC} + (1-\phi_1-\phi_2)MS_E$ | $MS_A/MS_\phi$ |
| B | $H_{0B}: B_1 = \cdots = B_b = 0$ | $MS_\lambda = \lambda_1MS_{BC} + \lambda_2MS_{ABC} + (1-\lambda_1-\lambda_2)MS_E$ | $MS_B/MS_\lambda$ |
| C | $H_{0C}: \sigma_C^2 = 0$ | $MS_\gamma = \gamma_1MS_{AC} + \gamma_2MS_{BC} + \gamma_3MS_{ABC} + (1-\gamma_1-\gamma_2-\gamma_3)MS_E$ | $MS_C/MS_\gamma$ |
| AB | $H_{0AB}: (AB)_{11} = \cdots = (AB)_{ab} = 0$ | $MS_\pi = \pi_1MS_{ABC} + (1-\pi_1)MS_E$ | $MS_{AB}/MS_\pi$ |
| AC | $H_{0AC}: \sigma_{AC}^2 = 0$ | $MS_\rho = \rho_1MS_{AC} + \rho_2MS_{ABC} + (1-\rho_1-\rho_2)MS_E$ | $MS_{AC}/MS_\rho$ |
| BC | $H_{0BC}: \sigma_{BC}^2 = 0$ | $MS_\tau = \tau_1MS_{BC} + \tau_2MS_{ABC} + (1-\tau_1-\tau_2)MS_E$ | $MS_{BC}/MS_\tau$ |
| ABC | $H_{0ABC}: \sigma_{ABC}^2 = 0$ | $MS_E$ | $MS_{ABC}/MS_E$ |

Consider a three-factor experiment involving factor A (solvents: water, ethanol, ether), factor B (volumes of solvent: 25, 50 and 100 ml) and factor C (time: 40, 50, 60 and 70 mins). The solvents are of varying polarities. Arbitrary volumes of 25, 50 and 100 ml were chosen, while the extraction was done at time intervals of 40, 50, 60 and 70 mins. The major aim is to determine the efficiency of different quantities of solvents in the extraction of soluble components of lemon grass per unit time.

In the experiment, the sample (lemon grass) was dried in the oven at 45˚C for 1440 mins. The dried sample was pulverized, and 1 g of the pulverized sample was used for each solvent. In a typical extraction, 1 g of sample was dissolved in 25 ml of water for 40 mins. At the end of that time, the solute was filtered using a suitable filter paper (Whatman). The solution was then evaporated at 105˚C for 720 mins, leaving the remaining extract, which was weighed on an analytical balance. The process was repeated and replicated three times for 50, 60 and 70 mins. A similar procedure was carried out using different volumes of ethanol and ether at durations of 40, 50, 60 and 70 mins. Results of the extraction are shown in

Using the information in the data table to compute $MS_A$, $MS_B$, $MS_C$, $MS_{AB}$, $MS_{AC}$, $MS_{BC}$, $MS_{ABC}$ and $MS_E$, we have

$$MS_A = 72.4819, \quad MS_B = 50.5353, \quad MS_C = 67.0578, \quad MS_{AB} = 53.4128,$$
$$MS_{BC} = 25.4262, \quad MS_{AC} = 86.2273, \quad MS_{ABC} = 193.7391 \quad \text{and} \quad MS_E = 1.9473.$$
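Only the three-way interaction has an exact test (Equation (28)), so its f-ratio can be checked directly from the reported mean squares:

```python
# Exact F-test for H_0ABC, cf. Eq. (28), using the mean squares reported above
MS_ABC, MS_E = 193.7391, 1.9473
F_ABC = MS_ABC / MS_E  # matches the f-ratio 99.4911 reported in the ANOVA table
```

The approximate F-statistics (22)-(27) additionally require the unbiased denominator estimates $MS_\phi, \ldots, MS_\tau$ built from the constants below.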

Again, the constants are calculated as follows:

$$k_1 = 7.7153, \quad k_2 = 1.2531, \quad k_3 = 6.8274, \quad k_4 = 1.2531, \quad k_5 = 1.7325, \quad k_6 = 2.2840, \quad k_7 = 0.8354,$$
$$k_8 = 8.6482, \quad k_9 = 11.2040, \quad k_{10} = 0.4177, \quad k_{11} = 0.4177, \quad k_{12} = 67.3453, \quad k_{13} = 4.8323.$$

The ANOVA table for the data is shown in

Our hypothesis for factor A is $H_{0A}: A_1 = \cdots = A_a = 0$;

similarly, our hypothesis for factor B is $H_{0B}: B_1 = \cdots = B_b = 0$;

our hypothesis for factor C is $H_{0C}: \sigma_C^2 = 0$;

our hypothesis for the interaction of factors A and B is $H_{0AB}: (AB)_{11} = \cdots = (AB)_{ab} = 0$;

our hypothesis for the interaction of factors A and C is $H_{0AC}: \sigma_{AC}^2 = 0$;

our hypothesis for the interaction of factors B and C is $H_{0BC}: \sigma_{BC}^2 = 0$;

and our hypothesis for the interaction of factors A, B and C is $H_{0ABC}: \sigma_{ABC}^2 = 0$.

[Data table: replicated extract weights for each solvent × volume × time combination; the individual replicates are not recoverable from this copy. The marginal totals are:]

| Time (C), mins | $X_{..k.}$ | $N_k$ | $\bar{X}_{..k.}$ |
| --- | --- | --- | --- |
| 40 | 145 | 20 | 7.25 |
| 50 | 222 | 21 | 10.5714 |
| 60 | 123 | 19 | 6.4737 |
| 70 | 156 | 21 | 7.4286 |

| | Level 1 | Level 2 | Level 3 |
| --- | --- | --- | --- |
| $X_{i...}$ | 180 | 213 | 253 |
| $N_i$ | 29 | 23 | 29 |
| $\bar{X}_{i...}$ | 6.2069 | 9.2069 | 8.7241 |
| $X_{.j..}$ | 178 | 248 | 220 |
| $N_j$ | 26 | 26 | 29 |
| $\bar{X}_{.j..}$ | 6.8462 | 9.8462 | 7.5862 |

| S.V | d.f | SS | MS | Expected mean square | f-ratio |
| --- | --- | --- | --- | --- | --- |
| A | 2 | 144.9638 | 72.4819 | $\frac{\sum_i^a N_iA_i^2}{a-1} + 7.7153\sigma_{AC}^2 + 1.2531\sigma_{ABC}^2 + \sigma_e^2$ | 0.7373 |
| B | 2 | 101.0706 | 50.5353 | $\frac{\sum_j^b N_jB_j^2}{b-1} + 6.8274\sigma_{BC}^2 + 1.2531\sigma_{ABC}^2 + \sigma_e^2$ | 0.9649 |
| C | 3 | 201.1734 | 67.0578 | $26.963\sigma_C^2 + 1.7325\sigma_{AC}^2 + 2.2840\sigma_{BC}^2 + 0.8354\sigma_{ABC}^2 + \sigma_e^2$ | 1.4645 |
| AB | 4 | 213.6512 | 53.4128 | $\frac{\sum_{ij}^{ab} n_{ij.}(AB)_{ij}^2}{(a-1)(b-1)} + 8.6482\sigma_{ABC}^2 + \sigma_e^2$ | 0.2757 |
| AC | 6 | 517.3638 | 86.2273 | $11.2040\sigma_{AC}^2 + 0.4177\sigma_{ABC}^2 + \sigma_e^2$ | 1 |
| BC | 6 | 152.5572 | 25.4262 | $0.4177\sigma_{BC}^2 + 67.3453\sigma_{ABC}^2 + \sigma_e^2$ | 1 |
| ABC | 12 | 2324.8692 | 193.7391 | $4.8323\sigma_{ABC}^2 + \sigma_e^2$ | 99.4911 |
| Error | 45 | 87.6285 | 1.9473 | $\sigma_e^2$ | |
| Total | 80 | | | | |

NOTE: UEMS = unbiased estimates of the mean squares.

In this study, we have presented a hypothesis testing procedure based on an unbalanced three-way cross-classification mixed effects model with interaction, when factors A and B are fixed while factor C is random. From the theoretical results obtained in this study, it was observed that exact F-tests do not exist for most of the hypotheses to be tested. As a consequence, approximate F-tests were considered. A numerical example was given to illustrate our theoretical results.

The authors declare no conflicts of interest regarding the publication of this paper.

Okereke, E.W., Nwabueze, J.C. and Obinyelu, S.O. (2020) Analysis of Variance for Three-Way Unbalanced Mixed Effects Interactive Model. Open Journal of Statistics, 10, 261-273. https://doi.org/10.4236/ojs.2020.102019