
Yin [1] developed a new Bayesian measure of evidence for testing a point null hypothesis which agrees with the frequentist p-value, thereby solving Lindley's paradox. Yin and Li [2] extended the methodology of Yin [1] to the Behrens-Fisher problem by assigning Jeffreys' independent prior to the nuisance parameters. In this paper, we show, both analytically and through simulation studies, that the methodology of Yin [1] simultaneously solves the Behrens-Fisher problem and Lindley's paradox when a Gamma prior is assigned to the nuisance parameters.

Consider a hypothesis testing problem about the difference of two means, as follows. Let $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$, with no assumption made about $\sigma_1^2$ and $\sigma_2^2$. Then testing the hypothesis

$$H_0: \theta = \mu_1 - \mu_2 = 0 \quad \text{versus} \quad H_1: \theta \ne 0 \qquad (1)$$

based on random samples of sizes $n_1$ and $n_2$ respectively, under the assumption that $x_1 = (x_{11}, x_{12}, \cdots, x_{1n_1})$ and $x_2 = (x_{21}, x_{22}, \cdots, x_{2n_2})$ are independent, is known as the Behrens-Fisher problem.

Lindley (1957) showed that, in testing a point null hypothesis, a frequentist test can reject $H_0$ at a fixed significance level while the posterior probability of $H_0$ tends to 1 as the sample size grows. This result holds irrespective of the prior probability assigned to $H_0$. For discussions and arguments concerning Lindley's paradox, see Spanos.

Scheffé considered practical frequentist solutions to the Behrens-Fisher problem. The frequentist p-value for testing (1) can be expressed as

$$p(x) = \Pr\left[ F_{1,\,n_1+n_2-2} \ge (\bar{x}_1 - \bar{x}_2)^2 (n_1 + n_2 - 2) \left( \frac{\tilde{s}_1^2}{B} + \frac{\tilde{s}_2^2}{1-B} \right)^{-1} \right] \qquad (2)$$

where $\tilde{s}_i^2 = (1 - n_i^{-1}) s_i^2$, $i = 1, 2$, $s_i^2$ is the variance of sample $i$ calculated from a sample of size $n_i$, and $B \sim \mathrm{Beta}\!\left(\frac{n_1-1}{2}, \frac{n_2-1}{2}\right)$. Zheng et al. proposed a procedure in which the Type-II error and the length of the confidence interval are controlled, conditional on a specified Type-I error, by using Stein's two-stage sampling scheme. Ozkip et al. have also studied the problem.
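The p-value in (2) is a Beta mixture of F tail probabilities and is easy to approximate by Monte Carlo: draw $B$ from its Beta distribution and average the conditional $F_{1,\,n_1+n_2-2}$ tail probability. A minimal sketch assuming NumPy and SciPy; the function name and the summary statistics in the example call are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def p_value_bf(xbar1, xbar2, s1_sq, s2_sq, n1, n2, draws=100_000, seed=0):
    """Monte Carlo evaluation of the p-value in (2):
    average the F_{1, n1+n2-2} tail probability over B ~ Beta((n1-1)/2, (n2-1)/2)."""
    rng = np.random.default_rng(seed)
    s1t = (1 - 1 / n1) * s1_sq          # s~_i^2 = (1 - n_i^{-1}) s_i^2
    s2t = (1 - 1 / n2) * s2_sq
    B = rng.beta((n1 - 1) / 2, (n2 - 1) / 2, size=draws)
    stat = (xbar1 - xbar2) ** 2 * (n1 + n2 - 2) / (s1t / B + s2t / (1 - B))
    return float(stats.f.sf(stat, 1, n1 + n2 - 2).mean())

# illustrative inputs: mean difference 1.0, sample variances 1.2 and 0.8
print(p_value_bf(1.0, 0.0, 1.2, 0.8, 10, 10))
```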

DeGroot examined the relationship between the posterior probability that $H_0$ is true and the p-value in a one-sided hypothesis testing problem, under the same class of priors as in Berger and Sellke.

Meng proposed the posterior predictive p-value, which for this problem is given by

$$ppp(x) = \Pr\left\{ F_{1,\,n_1+n_2} \ge \frac{(\bar{x}_1 - \bar{x}_2)^2 (n_1 + n_2)}{\left[\tilde{s}_1^2 + (\bar{x}_1 - \mu)^2\right] B_{n_1,n_2}^{-1} + \left[\tilde{s}_2^2 + (\bar{x}_2 - \mu)^2\right] (1 - B_{n_1,n_2})^{-1}} \right\} \qquad (3)$$

where $\tilde{s}_i^2 = (1 - n_i^{-1}) s_i^2$, $i = 1, 2$, and $B_{n_1,n_2} \sim \mathrm{Beta}\!\left(\frac{n_1}{2}, \frac{n_2}{2}\right)$.
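Equation (3) can be approximated the same way, averaging the F tail probability over $B_{n_1,n_2}$. A sketch assuming NumPy/SciPy; here the common mean $\mu$ under $H_0$ is passed in as a fixed argument purely for illustration (in practice it would be handled as part of the posterior predictive computation).

```python
import numpy as np
from scipy import stats

def ppp_value(xbar1, xbar2, s1_sq, s2_sq, n1, n2, mu, draws=100_000, seed=0):
    """Monte Carlo sketch of the posterior predictive p-value in (3),
    with the common mean mu under H0 held fixed."""
    rng = np.random.default_rng(seed)
    s1t = (1 - 1 / n1) * s1_sq
    s2t = (1 - 1 / n2) * s2_sq
    B = rng.beta(n1 / 2, n2 / 2, size=draws)
    denom = (s1t + (xbar1 - mu) ** 2) / B + (s2t + (xbar2 - mu) ** 2) / (1 - B)
    stat = (xbar1 - xbar2) ** 2 * (n1 + n2) / denom
    return float(stats.f.sf(stat, 1, n1 + n2).mean())
```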

Ghosh and Kim also studied the Behrens-Fisher problem from a Bayesian perspective. Yin [1] proposed the following Bayesian measure of evidence for testing a point null hypothesis $H_0: \theta = \theta_0$:

$$P_B(x) = P\left( |\theta - E(\theta \mid x)| \ge |\theta_0 - E(\theta \mid x)| \,\middle|\, x \right) \qquad (4)$$

where $E(\theta \mid x)$ is the posterior expectation of $\theta$ under the prior $\pi(\theta)$. A smaller value of $P_B(x)$ means a bigger distance between $\theta_0$ and the true $\theta$, and therefore suggests stronger evidence against the null hypothesis $H_0$.
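Given draws from any posterior of $\theta$, (4) can be estimated by a simple sample average. A minimal sketch assuming NumPy, with a normal distribution standing in for the posterior purely for illustration:

```python
import numpy as np

def evidence_from_samples(theta_samples, theta0):
    """Empirical version of (4): the posterior probability that theta is at
    least as far from its posterior mean as theta0 is."""
    m = theta_samples.mean()
    return float(np.mean(np.abs(theta_samples - m) >= abs(theta0 - m)))

rng = np.random.default_rng(1)
draws = rng.normal(loc=1.0, scale=0.5, size=100_000)  # stand-in posterior
# theta0 near the posterior mean gives a value near 1 (weak evidence against H0);
# theta0 far in the tail gives a value near 0 (strong evidence against H0)
```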

Yin and Li [2] extended this approach to the Behrens-Fisher problem by assigning Jeffreys' independent prior to the nuisance parameters:

$$\pi(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2) \propto \frac{1}{\sigma_1^2 \sigma_2^2} \qquad (5)$$

They showed that the posterior distribution of $\theta$, the difference of the two means $\mu_1$ and $\mu_2$, is given by

$$\theta \mid x \sim \bar{x}_1 - \bar{x}_2 - \left( \frac{S_1 T_{n_1-1}}{\sqrt{n_1}} - \frac{S_2 T_{n_2-1}}{\sqrt{n_2}} \right) \qquad (6)$$

and the Bayesian measure of evidence under Jeffreys’ independent prior was given by

$$P_{JBF}(x) = P\left( \left| \frac{S_1 T_{n_1-1}}{\sqrt{n_1}} - \frac{S_2 T_{n_2-1}}{\sqrt{n_2}} \right| \ge |\bar{x}_1 - \bar{x}_2| \right) \qquad (7)$$

where $T_{a-1}$ is a t variable with $a - 1$ degrees of freedom. This approach was shown to solve both problems and to yield credible intervals that actually possess the nominal $1 - \alpha$ coverage probability. However, this was demonstrated only for a non-informative prior, and the use of an informative prior was recommended in order to see how the methodology performs under such conditions.
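$P_{JBF}(x)$ in (7) involves the difference of two scaled, independent t variables and has no simple closed form, but it is straightforward to estimate by simulation. A sketch assuming NumPy, where `s1` and `s2` denote the sample standard deviations $S_1$ and $S_2$:

```python
import numpy as np

def p_jbf(xbar1, xbar2, s1, s2, n1, n2, draws=200_000, seed=0):
    """Monte Carlo estimate of (7): the posterior probability that
    |S1*T_{n1-1}/sqrt(n1) - S2*T_{n2-1}/sqrt(n2)| exceeds |xbar1 - xbar2|."""
    rng = np.random.default_rng(seed)
    t1 = rng.standard_t(n1 - 1, size=draws)
    t2 = rng.standard_t(n2 - 1, size=draws)
    diff = s1 * t1 / np.sqrt(n1) - s2 * t2 / np.sqrt(n2)
    return float(np.mean(np.abs(diff) >= abs(xbar1 - xbar2)))
```

When $\bar{x}_1 = \bar{x}_2$ the event is certain, so the measure equals 1, and it decreases as the observed mean difference grows.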

Suppose we have samples of sizes $n_1$ and $n_2$ from $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$ respectively. Under the assumption of independence, and writing $x = (x_1, x_2)$, the likelihood function is given by

$$f(x \mid \mu_1, \mu_2, \sigma_1^2, \sigma_2^2) \propto \sigma_1^{-n_1} \sigma_2^{-n_2} \exp\left[ -\frac{1}{2} \sum_{i=1}^{2} \frac{S_i(\mu_i)}{\sigma_i^2} \right] \qquad (8)$$

where

$$S_i(\mu_i) = \sum_{j=1}^{n_i} (x_{ij} - \mu_i)^2$$

Now, let $\pi(\mu_1, \mu_2) \propto 1$ and, writing $\tau_i = \sigma_i^{-2}$, assign $\tau_i \sim \mathrm{Gamma}(\alpha_i, \beta_i)$, $i = 1, 2$. The marginal posterior distribution of $(\mu_1, \mu_2 \mid x)$ is given by

$$\pi_{Ga}(\mu_1, \mu_2 \mid x) \propto \int_0^\infty \!\! \int_0^\infty \pi_{Ga}(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2 \mid x) \, d\sigma_1^2 \, d\sigma_2^2 \propto \int_0^\infty \!\! \int_0^\infty \sigma_1^{-n_1} \sigma_2^{-n_2} \exp\left[ -\frac{1}{2} \sum_{i=1}^{2} \frac{S_i(\mu_i)}{\sigma_i^2} \right] \tau_1^{\alpha_1 - 1} \tau_2^{\alpha_2 - 1} e^{-(\tau_1 \beta_1 + \tau_2 \beta_2)} \, d\sigma_1^2 \, d\sigma_2^2 \qquad (9)$$

So, we have that

$$\pi_{Ga}(\mu_1, \mu_2 \mid x) \propto \left[ \tfrac{1}{2} S_1(\mu_1) + \beta_1 \right]^{-(n_1 + 2\alpha_1 - 4)/2} \left[ \tfrac{1}{2} S_2(\mu_2) + \beta_2 \right]^{-(n_2 + 2\alpha_2 - 4)/2} \propto \left[ 1 + \frac{n_1 (\bar{x}_1 - \mu_1)^2 + 2\beta_1}{(n_1 - 1) S_1^2} \right]^{-(n_1 + 2\alpha_1 - 4)/2} \left[ 1 + \frac{n_2 (\bar{x}_2 - \mu_2)^2 + 2\beta_2}{(n_2 - 1) S_2^2} \right]^{-(n_2 + 2\alpha_2 - 4)/2} \qquad (10)$$

Let

$$a_i = 1 + \frac{2\beta_i}{(n_i - 1) S_i^2}, \quad i = 1, 2.$$

Then we have that

$$\pi_{Ga}(\mu_1, \mu_2 \mid x) \propto \left[ 1 + \frac{n_1 (\bar{x}_1 - \mu_1)^2}{a_1 (n_1 - 1) S_1^2} \right]^{-(n_1 + 2\alpha_1 - 4)/2} \left[ 1 + \frac{n_2 (\bar{x}_2 - \mu_2)^2}{a_2 (n_2 - 1) S_2^2} \right]^{-(n_2 + 2\alpha_2 - 4)/2} \qquad (11)$$

Now, let

$$t_i = \sqrt{\frac{n_i (n_i + 2\alpha_i - 5)}{a_i (n_i - 1)}} \cdot \frac{\bar{x}_i - \mu_i}{S_i}$$

Then (11) simplifies to

$$\pi_{Ga}(\mu_1, \mu_2 \mid x) \propto \left[ 1 + \frac{t_1^2}{n_1 + 2\alpha_1 - 5} \right]^{-(n_1 + 2\alpha_1 - 4)/2} \left[ 1 + \frac{t_2^2}{n_2 + 2\alpha_2 - 5} \right]^{-(n_2 + 2\alpha_2 - 4)/2} \qquad (12)$$

Clearly, (12) is the kernel of the joint distribution of two independent t random variables $t_1$ and $t_2$ with $n_1 + 2\alpha_1 - 5$ and $n_2 + 2\alpha_2 - 5$ degrees of freedom, respectively.

Also, we have that

$$\mu_i = \bar{x}_i - t_i S_i \sqrt{\frac{a_i (n_i - 1)}{n_i (n_i + 2\alpha_i - 5)}} = \bar{x}_i - t_i \sqrt{\frac{(n_i - 1) S_i^2 \left(1 + \frac{2\beta_i}{(n_i - 1) S_i^2}\right)}{n_i (n_i + 2\alpha_i - 5)}} = \bar{x}_i - t_i \sqrt{\frac{(n_i - 1) S_i^2 + 2\beta_i}{n_i (n_i + 2\alpha_i - 5)}}$$

This implies that the posterior distribution of $\theta$, the difference of the two means $\mu_1$ and $\mu_2$, is given by

$$\theta \mid x \sim \bar{x}_1 - \bar{x}_2 - \left( T_1 \sqrt{\frac{(n_1 - 1) S_1^2 + 2\beta_1}{n_1 (n_1 + 2\alpha_1 - 5)}} - T_2 \sqrt{\frac{(n_2 - 1) S_2^2 + 2\beta_2}{n_2 (n_2 + 2\alpha_2 - 5)}} \right) \qquad (13)$$

and

$$E(\theta \mid x) = \bar{x}_1 - \bar{x}_2$$

where $T_i$, $i = 1, 2$, is a random variable that follows a t distribution with $n_i + 2\alpha_i - 5$ degrees of freedom. The Bayesian measure of evidence under a Gamma prior is then given by

$$P_{GaBF}(x) = P\left( \left| T_1 \sqrt{\frac{(n_1 - 1) S_1^2 + 2\beta_1}{n_1 (n_1 + 2\alpha_1 - 5)}} - T_2 \sqrt{\frac{(n_2 - 1) S_2^2 + 2\beta_2}{n_2 (n_2 + 2\alpha_2 - 5)}} \right| \ge |\bar{x}_1 - \bar{x}_2| \right) \qquad (14)$$
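Like (7), the measure in (14) can be estimated by simulating the two independent t variables. A sketch assuming NumPy; the default hyperparameters mirror the $\alpha_i = 2$, $\beta_i = 0.0005$ setting used in the simulation tables, and the formula requires $n_i + 2\alpha_i - 5 > 0$. Here `s1_sq` and `s2_sq` are the sample variances $S_i^2$.

```python
import numpy as np

def p_gabf(xbar1, xbar2, s1_sq, s2_sq, n1, n2,
           alpha=(2.0, 2.0), beta=(0.0005, 0.0005),
           draws=200_000, seed=0):
    """Monte Carlo estimate of (14) under Gamma(alpha_i, beta_i) priors
    on the precisions; requires n_i + 2*alpha_i - 5 > 0."""
    rng = np.random.default_rng(seed)
    terms = []
    for n, s_sq, a, b in zip((n1, n2), (s1_sq, s2_sq), alpha, beta):
        df = n + 2 * a - 5
        scale = np.sqrt(((n - 1) * s_sq + 2 * b) / (n * df))  # cf. (13)
        terms.append(scale * rng.standard_t(df, size=draws))
    diff = terms[0] - terms[1]
    return float(np.mean(np.abs(diff) >= abs(xbar1 - xbar2)))
```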

To establish that the Bayesian measure of evidence of Yin [1] solves Lindley's paradox when a Gamma prior is assigned to the nuisance parameters, we need to show that

$$\lim_{(n_1, n_2) \to (\infty, \infty)} P_{GaBF}(x) = 0 \qquad (15)$$

Recall that

$$P_B(x) = P\left( |\theta - E(\theta \mid x)| \ge |\theta_0 - E(\theta \mid x)| \,\middle|\, x \right) = 2\left[ 1 - P(Z < Z_0) \right] = 2\left[ 1 - P(\chi_1^2 < Z_0^2) \right]$$

where Z is a standard normal random variable. Now, under the Gamma prior, we have that

$$Z_0^2 = \frac{\left[\theta_0 - E(\theta \mid x)\right]^2}{\mathrm{Var}(\theta \mid x)} = \frac{n_1 (n_1 + 2\alpha_1 - 7)\, n_2 (n_2 + 2\alpha_2 - 7) \left[ \theta_0 - (\bar{x}_1 - \bar{x}_2) \right]^2}{n_2 (n_2 + 2\alpha_2 - 7) \left[ (n_1 - 1) S_1^2 + 2\beta_1 \right] + n_1 (n_1 + 2\alpha_1 - 7) \left[ (n_2 - 1) S_2^2 + 2\beta_2 \right]} \qquad (16)$$

Then, it can easily be shown that

$$\lim_{(n_1, n_2) \to (\infty, \infty)} Z_0^2 = \infty \qquad (17)$$

which implies that (15) holds. To show that (17) holds, we now need to show that

$$\lim_{n_1 \to \infty} \left[ \lim_{n_2 \to \infty} Z_0^2 \right] = \lim_{n_2 \to \infty} \left[ \lim_{n_1 \to \infty} Z_0^2 \right] = \infty \qquad (18)$$

Let $f_1 = \lim_{n_1 \to \infty}\left[\lim_{n_2 \to \infty} Z_0^2\right]$, $a_1 = 2\alpha_1 - 7$, $a_2 = 2\alpha_2 - 7$ and $A = \left[\theta_0 - (\bar{x}_1 - \bar{x}_2)\right]^2$. Then we have

$$\begin{aligned} \lim_{n_2 \to \infty} Z_0^2 &= \lim_{n_2 \to \infty} \frac{(n_1^2 + n_1 a_1)(n_2^2 + n_2 a_2) A}{(n_2^2 + n_2 a_2)\left[(n_1 - 1) S_1^2 + 2\beta_1\right] + (n_1^2 + n_1 a_1)\left[(n_2 - 1) S_2^2 + 2\beta_2\right]} \\ &= \lim_{n_2 \to \infty} \frac{(n_1^2 + n_1 a_1)\left(1 + \frac{a_2}{n_2}\right) A}{\left(1 + \frac{a_2}{n_2}\right)\left[(n_1 - 1) S_1^2 + 2\beta_1\right] + n_2^{-1} (n_1^2 + n_1 a_1)\left[\left(1 - \frac{1}{n_2}\right) S_2^2 + \frac{2\beta_2}{n_2}\right]} \\ &= \frac{(n_1^2 + n_1 a_1) A}{(n_1 - 1) S_1^2 + 2\beta_1} \end{aligned}$$

$$f_1 = \lim_{n_1 \to \infty} \frac{(n_1^2 + n_1 a_1) A}{(n_1 - 1) S_1^2 + 2\beta_1} = \lim_{n_1 \to \infty} \frac{(n_1 + a_1) A}{\left(1 - \frac{1}{n_1}\right) S_1^2 + \frac{2\beta_1}{n_1}} = \infty \qquad (19)$$

Secondly, let $f_2 = \lim_{n_2 \to \infty}\left[\lim_{n_1 \to \infty} Z_0^2\right]$; then we have

$$\begin{aligned} \lim_{n_1 \to \infty} Z_0^2 &= \lim_{n_1 \to \infty} \frac{(n_1^2 + n_1 a_1)(n_2^2 + n_2 a_2) A}{(n_2^2 + n_2 a_2)\left[(n_1 - 1) S_1^2 + 2\beta_1\right] + (n_1^2 + n_1 a_1)\left[(n_2 - 1) S_2^2 + 2\beta_2\right]} \\ &= \lim_{n_1 \to \infty} \frac{\left(1 + \frac{a_1}{n_1}\right)(n_2^2 + n_2 a_2) A}{n_1^{-1} (n_2^2 + n_2 a_2)\left[\left(1 - \frac{1}{n_1}\right) S_1^2 + \frac{2\beta_1}{n_1}\right] + \left(1 + \frac{a_1}{n_1}\right)\left[(n_2 - 1) S_2^2 + 2\beta_2\right]} \\ &= \frac{(n_2^2 + n_2 a_2) A}{(n_2 - 1) S_2^2 + 2\beta_2} \end{aligned}$$

$$f_2 = \lim_{n_2 \to \infty} \frac{(n_2^2 + n_2 a_2) A}{(n_2 - 1) S_2^2 + 2\beta_2} = \lim_{n_2 \to \infty} \frac{(n_2 + a_2) A}{\left(1 - \frac{1}{n_2}\right) S_2^2 + \frac{2\beta_2}{n_2}} = \infty \qquad (20)$$
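The limit argument can also be illustrated numerically: for fixed data summaries, $Z_0^2$ in (16) grows without bound as the common sample size increases, so the tail probability in (15) tends to 0. A small deterministic sketch with illustrative hyperparameter values ($\alpha_i = 2$, $\beta_i = 1$):

```python
def z0_sq(theta0, xbar_diff, s1_sq, s2_sq, n1, n2,
          a1=2.0, a2=2.0, b1=1.0, b2=1.0):
    """Z0^2 from (16), with alpha_i = a1, a2 and beta_i = b1, b2."""
    A = (theta0 - xbar_diff) ** 2
    c1 = n1 * (n1 + 2 * a1 - 7)
    c2 = n2 * (n2 + 2 * a2 - 7)
    num = c1 * c2 * A
    den = c2 * ((n1 - 1) * s1_sq + 2 * b1) + c1 * ((n2 - 1) * s2_sq + 2 * b2)
    return num / den

# fixed data summaries, growing common sample size n = n1 = n2
vals = [z0_sq(0.0, 0.5, 1.0, 1.0, n, n) for n in (10, 100, 1000, 10_000)]
# vals is strictly increasing (roughly linear in n), consistent with (17)
```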

Since (19) and (20) give $f_1 = f_2 = \infty$, it has been shown that the Bayesian measure of evidence of Yin [1] under the Gamma prior solves Lindley's paradox.

Consequently, since it is easily seen from (13) that the posterior distribution of $\theta$ is symmetric about its expected value $E(\theta \mid x) = \bar{x}_1 - \bar{x}_2$, Theorem 2 of Yin and Li [2] also holds under the Gamma prior.

Lemma 1. Let Gamma priors be assigned to the precisions $\tau_1$ and $\tau_2$. Then, for values of $\beta_1 \ll 1$, $\beta_2 \ll 1$ and $\alpha_1 = \alpha_2 = 2$, the posterior distribution of $\theta$, denoted by $\theta \mid x$, is the same as the posterior distribution under Jeffreys' independent prior, given by $\pi(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2) \propto \sigma_1^{-2} \sigma_2^{-2}$.

Proof. By considering values of $\beta_1$ and $\beta_2$ that satisfy $\beta_1 \ll 1$ and $\beta_2 \ll 1$, we can safely assume that

$$\frac{\beta_1}{(n_1 - 1) S_1^2} \approx 0 \quad \text{and} \quad \frac{\beta_2}{(n_2 - 1) S_2^2} \approx 0,$$

especially where $S_i^2 \gg 1$ and $n_i$ is sufficiently large, $i = 1, 2$. Then, by setting $\alpha_1 = \alpha_2 = 2$, we have from (10) that

$$\pi_{Ga}(\mu_1, \mu_2 \mid x) \propto \left[ 1 + \frac{n_1 (\bar{x}_1 - \mu_1)^2}{(n_1 - 1) S_1^2} \right]^{-n_1/2} \left[ 1 + \frac{n_2 (\bar{x}_2 - \mu_2)^2}{(n_2 - 1) S_2^2} \right]^{-n_2/2} \propto \left[ 1 + \frac{t_1^2}{n_1 - 1} \right]^{-n_1/2} \left[ 1 + \frac{t_2^2}{n_2 - 1} \right]^{-n_2/2} \qquad (21)$$

which is the kernel of the joint distribution of two independent t random variables, with $t_1$ having $n_1 - 1$ degrees of freedom and $t_2$ having $n_2 - 1$ degrees of freedom. Then it can easily be seen that

$$\mu_i = \bar{x}_i - \frac{t_i S_i}{\sqrt{n_i}}, \quad i = 1, 2.$$

and consequently,

$$\theta \mid x \sim \bar{x}_1 - \bar{x}_2 - \left( \frac{S_1 T_1}{\sqrt{n_1}} - \frac{S_2 T_2}{\sqrt{n_2}} \right) \qquad (22)$$

where $T_i$ is a t random variable with $n_i - 1$ degrees of freedom. ∎

Lemma 1 shows that the posterior distribution of the difference in means under Jeffreys’ independent prior is a special case of the posterior distribution of the difference in means under the Gamma prior.
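Lemma 1 can be checked numerically: with $\alpha_i = 2$ and $\beta_i \approx 0$, the posterior t scale and degrees of freedom in (13) reduce to those in (6). A small sketch with illustrative inputs:

```python
import math

def gamma_scale_df(n, s_sq, alpha, beta):
    """Posterior t scale and degrees of freedom for mu_i under the Gamma prior (cf. (13))."""
    df = n + 2 * alpha - 5
    scale = math.sqrt(((n - 1) * s_sq + 2 * beta) / (n * df))
    return scale, df

def jeffreys_scale_df(n, s_sq):
    """Posterior t scale and degrees of freedom under Jeffreys' independent prior (cf. (6))."""
    return math.sqrt(s_sq / n), n - 1

# illustrative sample size and variance, not from the paper
n, s_sq = 12, 1.7
g = gamma_scale_df(n, s_sq, alpha=2.0, beta=1e-9)
j = jeffreys_scale_df(n, s_sq)
# with alpha = 2 and beta ~ 0, the two posteriors coincide
```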

For the purpose of this discussion, we shall refer to the Bayesian measure of evidence of Yin [1] under Jeffreys' independent prior as P_JBF and under the Gamma prior as P_GaBF. The tables below compare p(x), ppp(x), P_JBF and P_GaBF, first for several choices of the sample sizes and then for several choices of the hyperparameters $\alpha_i$ and $\beta_i$.

n1 = 3, n2 = 3

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.4353 | 0.4885 | 0.5464 | 0.6089 | 0.6758 | 0.7465 | 0.8203 | 0.8965 | 0.9740 |
| ppp(x) | 0.3685 | 0.4170 | 0.4732 | 0.5375 | 0.6100 | 0.6905 | 0.7781 | 0.8711 | 0.9675 |
| P_JBF | 0.5412 | 0.5877 | 0.6363 | 0.6880 | 0.7425 | 0.7993 | 0.8584 | 0.9186 | 0.9795 |
| P_GaBF | 0.6829 | 0.7165 | 0.7515 | 0.7878 | 0.8259 | 0.8653 | 0.9050 | 0.9454 | 0.9864 |

n1 = 10, n2 = 10

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.1313 | 0.1811 | 0.2451 | 0.3250 | 0.4217 | 0.5350 | 0.6632 | 0.8031 | 0.9503 |
| ppp(x) | 0.1344 | 0.1790 | 0.2374 | 0.3120 | 0.4051 | 0.5172 | 0.6476 | 0.7928 | 0.9475 |
| P_JBF | 0.1546 | 0.2069 | 0.2723 | 0.3524 | 0.4474 | 0.5574 | 0.6801 | 0.8131 | 0.9531 |
| P_GaBF | 0.1692 | 0.2229 | 0.2896 | 0.3694 | 0.4634 | 0.5718 | 0.6911 | 0.8203 | 0.9548 |

n1 = 30, n2 = 30

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0082 | 0.0191 | 0.0418 | 0.0849 | 0.1598 | 0.2778 | 0.4463 | 0.6629 | 0.9131 |
| ppp(x) | 0.0112 | 0.0229 | 0.0457 | 0.0876 | 0.1598 | 0.2746 | 0.4408 | 0.6580 | 0.9117 |
| P_JBF | 0.0094 | 0.0214 | 0.0456 | 0.0909 | 0.1675 | 0.2856 | 0.4531 | 0.6675 | 0.9144 |
| P_GaBF | 0.0102 | 0.0229 | 0.0480 | 0.0939 | 0.1714 | 0.2909 | 0.4580 | 0.6709 | 0.9154 |

n1 = 3, n2 = 5

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.3635 | 0.4202 | 0.4832 | 0.5524 | 0.6275 | 0.7077 | 0.7923 | 0.8802 | 0.9699 |
| ppp(x) | 0.3212 | 0.3708 | 0.4292 | 0.4969 | 0.5743 | 0.6612 | 0.7564 | 0.8583 | 0.9643 |
| P_JBF | 0.4579 | 0.5094 | 0.5654 | 0.6255 | 0.6898 | 0.7576 | 0.8284 | 0.9010 | 0.9754 |
| P_GaBF | 0.6340 | 0.6712 | 0.7108 | 0.7525 | 0.7960 | 0.8414 | 0.8883 | 0.9359 | 0.9838 |

n1 = 12, n2 = 15

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0780 | 0.1183 | 0.1747 | 0.2505 | 0.3483 | 0.4691 | 0.6113 | 0.7711 | 0.9420 |
| ppp(x) | 0.0843 | 0.1214 | 0.1734 | 0.2443 | 0.3378 | 0.4561 | 0.5989 | 0.7625 | 0.9396 |
| P_JBF | 0.0917 | 0.1348 | 0.1936 | 0.2712 | 0.3686 | 0.4869 | 0.6258 | 0.7798 | 0.9441 |
| P_GaBF | 0.1020 | 0.1468 | 0.2073 | 0.2852 | 0.3824 | 0.4997 | 0.6359 | 0.7862 | 0.9461 |

n1 = 40, n2 = 35

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0033 | 0.0091 | 0.0236 | 0.0553 | 0.1179 | 0.2275 | 0.3970 | 0.6278 | 0.9035 |
| ppp(x) | 0.0050 | 0.0117 | 0.0268 | 0.0584 | 0.1193 | 0.2260 | 0.3929 | 0.6236 | 0.9021 |
| P_JBF | 0.0038 | 0.0105 | 0.0260 | 0.0592 | 0.1233 | 0.2343 | 0.4039 | 0.6327 | 0.9044 |
| P_GaBF | 0.0040 | 0.0110 | 0.0269 | 0.0607 | 0.1257 | 0.2371 | 0.4065 | 0.6348 | 0.9052 |

n1 = 100, n2 = 110

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0000 | 0.0000 | 0.0001 | 0.0012 | 0.0082 | 0.0413 | 0.1522 | 0.4125 | 0.8375 |
| ppp(x) | 0.0000 | 0.0000 | 0.0002 | 0.0015 | 0.0090 | 0.0424 | 0.1523 | 0.4110 | 0.8368 |
| P_JBF | 0.0000 | 0.0000 | 0.0001 | 0.0013 | 0.0087 | 0.0426 | 0.1548 | 0.4157 | 0.8386 |
| P_GaBF | 0.0000 | 0.0000 | 0.0001 | 0.0013 | 0.0088 | 0.0432 | 0.1546 | 0.4149 | 0.8388 |

α1 = 0.5, α2 = 0.5, β1 = 2.5, β2 = 2.5

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0022 | 0.0068 | 0.0186 | 0.0464 | 0.1041 | 0.2097 | 0.3786 | 0.6143 | 0.8997 |
| ppp(x) | 0.0035 | 0.0088 | 0.0214 | 0.0492 | 0.1056 | 0.2085 | 0.3750 | 0.6104 | 0.8984 |
| P_JBF | 0.0025 | 0.0073 | 0.0202 | 0.0493 | 0.1086 | 0.2151 | 0.3840 | 0.6182 | 0.9006 |
| P_GaBF | 0.0040 | 0.0107 | 0.0267 | 0.0612 | 0.1263 | 0.2373 | 0.4077 | 0.6347 | 0.9053 |

α1 = 0.5, α2 = 1.5, β1 = 2.5, β2 = 2.0

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0022 | 0.0068 | 0.0186 | 0.0464 | 0.1041 | 0.2097 | 0.3786 | 0.6143 | 0.8997 |
| ppp(x) | 0.0035 | 0.0088 | 0.0214 | 0.0492 | 0.1056 | 0.2085 | 0.3750 | 0.6104 | 0.8984 |
| P_JBF | 0.0025 | 0.0073 | 0.0202 | 0.0493 | 0.1086 | 0.2151 | 0.3840 | 0.6182 | 0.9006 |
| P_GaBF | 0.0034 | 0.0094 | 0.0242 | 0.0568 | 0.1202 | 0.2301 | 0.3998 | 0.6297 | 0.9040 |

α1 = 2, α2 = 2, β1 = 0.0005, β2 = 0.0005

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0022 | 0.0068 | 0.0186 | 0.0464 | 0.1041 | 0.2097 | 0.3786 | 0.6143 | 0.8997 |
| ppp(x) | 0.0035 | 0.0088 | 0.0214 | 0.0492 | 0.1056 | 0.2085 | 0.3750 | 0.6104 | 0.8984 |
| P_JBF | 0.0025 | 0.0073 | 0.0202 | 0.0493 | 0.1086 | 0.2151 | 0.3840 | 0.6182 | 0.9006 |
| P_GaBF | 0.0026 | 0.0076 | 0.0202 | 0.0494 | 0.1088 | 0.2154 | 0.3852 | 0.6191 | 0.9007 |

α1 = 4, α2 = 5, β1 = 2.5, β2 = 2.0

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0022 | 0.0068 | 0.0186 | 0.0464 | 0.1041 | 0.2097 | 0.3786 | 0.6143 | 0.8997 |
| ppp(x) | 0.0035 | 0.0088 | 0.0214 | 0.0492 | 0.1056 | 0.2085 | 0.3750 | 0.6104 | 0.8984 |
| P_JBF | 0.0025 | 0.0073 | 0.0202 | 0.0493 | 0.1086 | 0.2151 | 0.3840 | 0.6182 | 0.9006 |
| P_GaBF | 0.0014 | 0.0044 | 0.0134 | 0.0361 | 0.0882 | 0.1877 | 0.3551 | 0.5960 | 0.8945 |

α1 = 2, α2 = 2, β1 = 0.0005, β2 = 0.0005

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0026 | 0.0329 | 0.2189 | 0.7577 |
| ppp(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0003 | 0.0037 | 0.0353 | 0.2178 | 0.7553 |
| P_JBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0028 | 0.0349 | 0.2244 | 0.7611 |
| P_GaBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0028 | 0.0349 | 0.2234 | 0.7592 |

α1 = 2, α2 = 1.2, β1 = 3.5, β2 = 3.0

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0026 | 0.0329 | 0.2189 | 0.7577 |
| ppp(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0003 | 0.0037 | 0.0353 | 0.2178 | 0.7553 |
| P_JBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0028 | 0.0349 | 0.2244 | 0.7611 |
| P_GaBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0041 | 0.0418 | 0.2402 | 0.7678 |

α1 = 0.3, α2 = 0.7, β1 = 15, β2 = 10

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0026 | 0.0329 | 0.2189 | 0.7577 |
| ppp(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0003 | 0.0037 | 0.0353 | 0.2178 | 0.7553 |
| P_JBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0028 | 0.0349 | 0.2244 | 0.7611 |
| P_GaBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0008 | 0.0090 | 0.0644 | 0.2871 | 0.7891 |

α1 = 9, α2 = 4, β1 = 15, β2 = 10

| (x̄1 − x̄2) | 2.500 | 2.200 | 1.900 | 1.600 | 1.300 | 1.000 | 0.700 | 0.400 | 0.100 |
|---|---|---|---|---|---|---|---|---|---|
| p(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0026 | 0.0329 | 0.2189 | 0.7577 |
| ppp(x) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0003 | 0.0037 | 0.0353 | 0.2178 | 0.7553 |
| P_JBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0028 | 0.0349 | 0.2244 | 0.7611 |
| P_GaBF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0032 | 0.0369 | 0.2299 | 0.7631 |

Finally, Lehmann’s data on measures of driving times from following two different routes and Sahu’s data on scores of surgical and non-surgical treatments both displayed as

In this paper, we looked at the Bayesian analysis of the Behrens-Fisher problem using the methodology of Yin [1], with a Gamma prior assigned to the nuisance parameters.

Simulation results further confirm that extending the methodology of Yin [1] by assigning a Gamma prior to the nuisance parameters simultaneously solves the Behrens-Fisher problem and Lindley's paradox.

The authors declare no conflicts of interest regarding the publication of this paper.

Goltong, N.E. and Doguwa, S.I. (2018) Bayesian Analysis of the Behrens-Fisher Problem under a Gamma Prior. Open Journal of Statistics, 8, 902-914. https://doi.org/10.4236/ojs.2018.86060