Computing Confidence Intervals for the Postal Service’s Cost-Elasticity Estimates

Abstract

This paper provides methods for assessing the precision of cost-elasticity estimates when the underlying regression function is assumed to be polynomial. Specifically, the paper adapts two well-known methods for computing confidence intervals for ratios: the delta method and the Fieller method. We show that performing the estimation with mean-centered explanatory variables provides a straightforward way to estimate the elasticity and compute a confidence interval for it. A theoretical discussion of the proposed methods is provided, as well as an empirical example based on publicly available postal data. Possible areas of application include postal service providers worldwide, as well as the transportation and electricity industries.


1. Introduction

Around the world, posts are confronted with declining volumes and revenues. In such an environment, it is vitally important that price signals are properly constructed using accurate cost data at the product level. Because posts are multi-product firms, they exhibit complex cost behaviors. The presence of common costs and the thin line between fixed and variable costs make it challenging for posts to accurately determine cost elasticity by product. The United States Postal Service (Postal Service) uses quadratic and translog regression equations, among others, for estimating volume variabilities of cost-related variables, such as street time or vehicle capacity, with respect to diverse cost drivers, such as mail volume [1] [2] [3]. The term “volume variability” (or variability) is used by the Postal Service as a substitute for “cost elasticity” [2]. The Postal Service uses both terms as synonyms to mean “the percentage change in cost caused by a percentage change in volume (or other relevant cost driver)” [4].

The present paper is motivated by the fact that confidence intervals are not reported with the Postal Service’s variability estimates. If available, confidence intervals allow assessment of the precision of the estimates and help gauge their stability across alternative independent data samples. In addition, the computation of the relevant confidence intervals provides direct information on the potential effects of covariate multicollinearity on the variability estimates.

Considering that cost elasticities are estimated not only by the United States Postal Service, but also by posts worldwide, the methods described in the paper could be of wide interest [5]. In addition, the described analysis of confidence intervals for cost elasticity estimates could be applicable in other industries, such as transportation or electricity.

The Postal Service computes variability in two steps. In the first step, the ordinary least squares (OLS) or the generalized least squares (GLS) method is used to estimate a chosen econometric model linking the cost-related (dependent) variable to a set of explanatory variables. The estimated relation is treated as the cost-related function. In the second step, the variability is computed as the elasticity of the cost-related function with respect to the variable of interest. Since the elasticity varies across observations, a fixed variability estimate is computed as the elasticity at the sample average of the explanatory vector. The obtained elasticity is a ratio, the numerator and denominator of which are both linear combinations of the estimated parameter vector, with coefficients depending on the sample averages of the explanatory variables. These coefficients converge in probability to constants, thanks to the weak law of large numbers (WLLN), provided that the involved variables have finite expectations. Hence, the asymptotic normality of the estimator of the parameter vector of the model provides the basic information needed to test the significance of the variability estimate and to construct a confidence interval for the implicit variability parameter. Of course, the ratio nature of the estimator raises issues about the coverage probability when the implicit variability parameter is undefined, for example because its denominator is statistically null. Indeed, as shown later in the paper, the denominator can be obtained as the estimate of the constant term in the regression of the dependent variable on the mean-centered versions of the same explanatory variables. There is, however, a possibility that this estimate is insignificant.

This paper presents a method to compute the confidence interval and conduct the relevant tests using the estimates of the regression model with mean-centered explanatory variables, i.e., the main variables are all centered and the interaction variables are computed from the centered variables. As Echambadi and Hess [6] showed, for linear (in parameters) models containing interaction variables, there is a linear, one-to-one correspondence between the OLS (or GLS) estimates obtained with non-centered variables and those obtained with mean-centered variables. The statistical inference for the two models is also shown in the cited paper to be exactly the same, provided that the linear transformation between the parameter vectors in the two models is accounted for in the inference. This is an example of equivariant estimators [7].

A practical advantage of using the centered regression, compared to the one with non-centered explanatory variables, is that any statistical software will directly provide the numerator (up to a factor equal to the mean of the cost driver) and the denominator of the variability estimate and compute the confidence intervals for the corresponding parameters. Hence, a byproduct of the proposed method is that it also provides a simpler, numerically equivalent way to compute the variability parameter. The analysis conducted in this paper is of potential interest to any application involving elasticity estimates and falls within the literature on elasticity estimation, as illustrated in [8] [9] [10] and the references therein.

The purpose of the paper is to present a simpler way to estimate quadratic-form regression functions and two algorithms for computing confidence intervals for the Postal Service’s cost-elasticity estimates. More generally, the paper presents, within the context of elasticity estimation, tools for computing confidence intervals for a parameter that is estimated as a ratio of a pair of asymptotically normal estimators.

In Section 2, the quadratic model is briefly discussed with the goal of providing the precise formal setting in which the analysis is conducted. In Section 3, the confidence interval for the variability parameter is derived from the delta method. Section 4 discusses the inferential problems related to the delta method in this context and shows how to derive an alternative confidence interval based on Fieller’s method. The empirical computation of the two types of confidence interval is discussed in Section 5. The last section concludes the paper.

2. The Formal Setting

A class of regression equations often considered in applications has the form:

$$y = \beta_0 + x'\beta + x'Ax + u. \quad (1)$$

Regression functions that are not polynomial in the explanatory vector will require a less straightforward analysis than the one proposed in the present paper. In the relation (1), vectors are understood as column vectors and the prime sign denotes transposition.

For example, in the Postal Service’s model, y is a cost-related variable, x is a vector of explanatory variables including mail volume, and u is the usual error term. In this paper, the explanatory variables are the components of the vector x. These components, together with their interactions, are referred to as the covariates.

Assuming that u is mean-independent of x, the right-hand side of (1), excluding the error term, is the conditional expectation of y, given X = x , where x is an observation of X. To avoid complicating the notation, expectation operators will be applied indifferently to both a random variable and its observation and will mean the same thing even though the latter is not random. Hence, for example, E(X) and E(x) will both denote the mathematical expectation of X and this use will be contextually unambiguous. For two explanatory variables, the model is:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + a_{11}x_1^2 + a_{22}x_2^2 + a_{12}x_1x_2 + u \quad (2)$$

$$\beta = (\beta_1, \beta_2)', \qquad A = \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix} \quad (3)$$

A point worth noting is that the equality $x'Ax = \frac{x'(A + A')x}{2}$, which uses the symmetric part of A, i.e., $B = A + A'$, allows us to write model (1) equivalently as

$$y = \beta_0 + x'\beta + \frac{x'Bx}{2} + u = g_\theta(x) + u. \quad (4)$$
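
For concreteness, this rewriting can be verified directly on the two-variable model (2)-(3), using only the definitions above:

$$B = A + A' = \begin{pmatrix} 2a_{11} & a_{12} \\ a_{12} & 2a_{22} \end{pmatrix}, \qquad \frac{x'Bx}{2} = a_{11}x_1^2 + a_{22}x_2^2 + a_{12}x_1x_2,$$

so the quadratic terms of (2) are reproduced exactly by (4).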

The index θ is a vector, the components of which are the parameters of the model. Model (4) is referred to in this paper as the quadratic regression model. The version of it in which y and all components of x are in logarithmic form is referred to as the translog regression model, after the function introduced by Christensen et al. [11]. It is easily seen that, although quadratic in x, the model is linear in the parameters.

3. The Delta-Method Confidence Interval

The developments in this section start from the observation that the quadratic function $g_\theta(x)$ is analytic and, hence, is its own Taylor expansion centered at any point of its domain. From this observation, it follows that one can equivalently write

$$g_\theta(x) = \alpha_0 + (x - e)'\alpha + \frac{(x - e)'B(x - e)}{2}, \quad (5)$$

for any interior point e of the domain of $g_\theta$, where

$$\alpha = \left(\frac{\partial g_\theta(x)}{\partial x_1}, \dots, \frac{\partial g_\theta(x)}{\partial x_d}\right)'\Bigg|_{x=e} = \nabla g_\theta(e)$$

is the gradient of $g_\theta$ at e and d is the dimension of the vector x. The motivation for representing the argument of the function $g_\theta$ as the deviation of x from e is the direct meaning of α as the gradient $\nabla g_\theta(e) = \beta + Be$ (obtained by taking the derivative with respect to x in (4) and substituting e for x in the result), which, computationally, has the advantage of being read directly from an estimation output, using any of the common statistical software packages. Let z denote the component of x which represents the explanatory variable of interest, and let $x_{-z}$ be the remaining sub-vector of x, after discarding z. With these notations, $x = (z, x_{-z})$, assuming that z is the first component of x. The formal expression of the elasticity to be computed at some chosen value of x, say $x_0 = (z_0, x_{-z}^0)$, is

$$\gamma(x_0) = \frac{\partial g_\theta(x)}{\partial z}\,\frac{z}{g_\theta(x)}\Bigg|_{z = z_0,\; x_{-z} = x_{-z}^0}. \quad (6)$$

The value $x_0$ can be taken to be the expected value of x, i.e., $x_0 = E(x)$, which is the case considered in this paper. In other words, the parameter of interest in the analysis is

$$\gamma(E(x)) = \frac{\partial g_\theta(x)}{\partial z}\,\frac{z}{g_\theta(x)}\Bigg|_{x = E(x)} := \frac{\partial g_\theta(E(x))}{\partial z}\,\frac{E(z)}{g_\theta(E(x))} \quad (7)$$
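
A short intermediate step, implied directly by (5) and (7), makes the role of the centered parameterization explicit: taking $e = E(x)$ in (5) gives $\alpha_0 = g_\theta(E(x))$ and $\alpha_z = \partial g_\theta(E(x))/\partial z$ (the z-component of the gradient), so that

$$\gamma(E(x)) = \frac{\alpha_z\, E(z)}{\alpha_0},$$

i.e., the elasticity is a ratio of two parameters of the centered model, scaled by the mean of the cost driver.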

The least-squares estimation of the model

$$y = \alpha_0 + (x - E(x))'\alpha + \frac{(x - E(x))'B(x - E(x))}{2} + u = g_\delta(x - E(x)) + u, \quad (8)$$

where $g_\delta(x - E(x)) \equiv g_\theta(x)$ and δ denotes the vector of the parameters appearing on the right-hand side of the first equality in (8), is performed by mean-centering each component of x in (4).

Echambadi and Hess [6] showed that there is a linear one-to-one correspondence between $\hat\theta$ and $\hat\delta$, the ordinary least squares (OLS) estimators in the regressions with non-centered and centered variables, respectively. These two regressions will henceforth be referred to as the non-centered and the centered regression, respectively. So, assuming again that z is the first component of x and that its sample mean is $\bar z$, directly computing the OLS estimate of $\alpha_z$, the coefficient of $z - \bar z$ in (8), is numerically equivalent to first computing the OLS estimate of θ in (4) and then computing $\hat\beta_z + \hat B_z \bar x$, where $\hat\beta_z$ and $\hat B_z$ are, respectively, the estimate of the coefficient of z and the first row of $\hat B$, the estimate of B.
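
A minimal numerical sketch of this equivalence, using simulated data and the statsmodels OLS routine (variable names and the data-generating process are illustrative, not taken from the Postal Service’s data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.gamma(5.0, 2.0, n)            # cost driver of interest (illustrative)
x2 = rng.gamma(3.0, 1.5, n)           # second explanatory variable (illustrative)
u = rng.normal(0.0, 1.0, n)
y = 2.0 + 0.8 * z + 0.5 * x2 + 0.03 * z**2 + 0.02 * x2**2 + 0.04 * z * x2 + u

# Non-centered quadratic regression, model (4)
X = sm.add_constant(np.column_stack([z, x2, z**2, x2**2, z * x2]))
theta = sm.OLS(y, X).fit().params      # [beta0, beta_z, beta_2, a11, a22, a12]

# Centered quadratic regression, model (8): interactions built from centered variables
zc, x2c = z - z.mean(), x2 - x2.mean()
Xc = sm.add_constant(np.column_stack([zc, x2c, zc**2, x2c**2, zc * x2c]))
alpha = sm.OLS(y, Xc).fit().params     # [alpha0, alpha_z, alpha_2, ...]

# Equivalence: alpha_z = beta_z + B_z @ xbar, with B_z the first row of B = A + A'
beta_z = theta[1]
B_row_z = np.array([2 * theta[3], theta[5]])
xbar = np.array([z.mean(), x2.mean()])
print(alpha[1], beta_z + B_row_z @ xbar)   # the two numbers agree up to rounding
```

Up to floating-point error, the two printed numbers coincide, which is the Echambadi-Hess equivalence used throughout the paper.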

The least-squares estimator of $\gamma(E(x))$, which is denoted by $\hat\gamma(E(x))$, only involves a ratio, $\hat\alpha_z / \hat\alpha_0$, where the numerator is the estimated coefficient of $z - \bar z$ and the denominator is the estimated constant term, equal to $g_{\hat\theta}(\bar x)$. Because $\bar x$ converges in probability to $E(x)$ (by the WLLN), the continuous mapping theorem (see Theorem 1 in Borovkov [12]) implies that the estimator

$$\hat\gamma(E(x)) = \frac{\hat\alpha_z}{\hat\alpha_0}\,\bar z$$

is consistent for estimating $\gamma(E(x))$, provided that the latter is defined, i.e., that its denominator is non-null [10]. Further, the pair $(\hat\alpha_z, \hat\alpha_0)$ is asymptotically normal. Assuming that its asymptotic distribution is

$$\begin{pmatrix}\hat\alpha_z \\ \hat\alpha_0\end{pmatrix} \sim N\!\left[\begin{pmatrix}\alpha_z \\ \alpha_0\end{pmatrix},\ \begin{pmatrix}\hat\sigma_z^2 & \hat\sigma_{zm} \\ \hat\sigma_{zm} & \hat\sigma_0^2\end{pmatrix}\right], \quad (9)$$

the corresponding asymptotic distribution of $\hat\gamma(E(x))$ is

$$\hat\gamma(E(x)) \sim N\!\left[\gamma(E(x)),\ \bar z^2 \left(\frac{1}{\hat\alpha_0},\, -\frac{\hat\alpha_z}{\hat\alpha_0^2}\right) \begin{pmatrix}\hat\sigma_z^2 & \hat\sigma_{zm} \\ \hat\sigma_{zm} & \hat\sigma_0^2\end{pmatrix} \begin{pmatrix}\dfrac{1}{\hat\alpha_0} \\[1ex] -\dfrac{\hat\alpha_z}{\hat\alpha_0^2}\end{pmatrix}\right] \quad (10)$$

$$= N\!\left[\gamma(E(x)),\ \left(\frac{\bar z}{\hat\alpha_0}\right)^2 \left[\hat\sigma_z^2 + \hat\sigma_0^2\left(\frac{\hat\alpha_z}{\hat\alpha_0}\right)^2 - 2\hat\sigma_{zm}\left(\frac{\hat\alpha_z}{\hat\alpha_0}\right)\right]\right]. \quad (11)$$

Using the notation $\hat w = \hat\sigma_z^2 + \hat\sigma_0^2\left(\frac{\hat\alpha_z}{\hat\alpha_0}\right)^2 - 2\hat\sigma_{zm}\left(\frac{\hat\alpha_z}{\hat\alpha_0}\right)$, the delta-method-based 95%-level confidence interval for $\gamma(E(x))$ is written as

$$\left[\hat\gamma(E(x)) - 1.96\sqrt{\hat w\left(\frac{\bar z}{\hat\alpha_0}\right)^2};\ \hat\gamma(E(x)) + 1.96\sqrt{\hat w\left(\frac{\bar z}{\hat\alpha_0}\right)^2}\right] \quad (12)$$
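
A minimal computational sketch of formula (12), assuming the estimates $\hat\alpha_z$, $\hat\alpha_0$, their covariance matrix, and $\bar z$ have already been collected from the centered regression (the numerical values below are placeholders, not the paper’s estimates):

```python
import numpy as np

def delta_ci(alpha_z, alpha_0, cov, z_bar, crit=1.96):
    """Delta-method confidence interval (12) for the elasticity gamma(E(x)).

    alpha_z : estimated coefficient of the centered cost driver (z - z_bar)
    alpha_0 : estimated constant term of the centered regression
    cov     : 2x2 covariance matrix of (alpha_z, alpha_0)
    z_bar   : sample mean of the cost driver
    """
    ratio = alpha_z / alpha_0
    gamma_hat = ratio * z_bar
    # w_hat as defined just before (12)
    w_hat = cov[0, 0] + cov[1, 1] * ratio**2 - 2.0 * cov[0, 1] * ratio
    half_width = crit * np.sqrt(w_hat * (z_bar / alpha_0) ** 2)
    return gamma_hat - half_width, gamma_hat + half_width

# Placeholder inputs, for illustration only
print(delta_ci(alpha_z=0.012, alpha_0=7.5,
               cov=np.array([[1e-8, 2e-9], [2e-9, 4e-7]]), z_bar=388.0))
```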

A robust version of (12) can be obtained by using, for example, White’s heteroskedasticity-consistent estimator for the covariance matrix in (9) [13].

4. The Fieller-Method Confidence Interval

Several researchers have expressed doubts about a class of confidence intervals for ratios, a class that includes the one obtained by the delta method. Doubt about the delta-method confidence interval arises from the fact that the delta method treats a ratio of two normally distributed random variables as if it were normally distributed, which is not true. The delta method also relies crucially on the assumption that the denominator of the “true” value of the ratio parameter of interest is not null.

To get more insight into a potential problem posed by the delta-method confidence interval, a few definitions are necessary. The presentation here is based on Gleser and Hwang [14]. Suppose some variable of interest X has the parametric probability distribution $A \mapsto P_\theta(X \in A)$, where the finite-dimensional parameter vector θ lies in the set Θ. A confidence interval for a function $\gamma(\theta)$ of the parameter vector has the form $[L(X), U(X)]$. It is a random interval, because its bounds depend on the random variable X. Of course, for a particular sample x of X, $[L(x), U(x)]$ is a fixed (non-random) interval. However, the statistical properties of the confidence interval are studied based on the random interval. One important property is the coverage probability of the interval, which, for a given $\theta \in \Theta$, is defined as the probability $p_\theta$ that the random interval contains the parameter of interest, $p_\theta = P_\theta[L(X) \le \gamma(\theta) \le U(X)]$. The lower bound, $(1 - \alpha) = \inf_{\theta \in \Theta}(p_\theta)$, of the coverage probability, where α here is a positive real number in the interval $(0, 1)$, is called the confidence level of the confidence interval and represents the theoretical lowest probability that the random interval contains the parameter of interest. To assess the confidence interval, one additional piece of information is needed, namely, its expected length, which, for a given θ, is the expectation $l(\theta) = E_\theta[U(X) - L(X)]$. Clearly, a larger $(1 - \alpha)$ and a smaller $l(\theta)$ across the set Θ make the interval more appealing to the researcher than it would be otherwise.
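
To make the coverage-probability definition concrete, the following sketch estimates, by simulation, the coverage of a delta-method interval for a ratio of two jointly normal estimators. The distributional parameters are illustrative and the exercise is not part of the paper’s empirical analysis; it simply shows how the estimated coverage can be compared with the nominal 95% level for denominators that are far from, or close to, zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_of_delta_ci(alpha_z=0.012, alpha_0=0.05, sd_z=0.002, sd_0=0.02,
                         n_rep=100_000, crit=1.96):
    """Monte Carlo estimate of the coverage probability of the delta-method CI
    for the ratio alpha_z / alpha_0, when the two estimators are independent
    normals centered at the true values (illustrative setting only)."""
    true_ratio = alpha_z / alpha_0
    az = rng.normal(alpha_z, sd_z, n_rep)
    a0 = rng.normal(alpha_0, sd_0, n_rep)
    ratio_hat = az / a0
    # delta-method standard error of the ratio, with a zero covariance term
    se = np.abs(1.0 / a0) * np.sqrt(sd_z**2 + sd_0**2 * ratio_hat**2)
    covered = np.abs(ratio_hat - true_ratio) <= crit * se
    return covered.mean()

# Compare the estimated coverage with the nominal 0.95 for two denominators
print(coverage_of_delta_ci(alpha_0=0.50, sd_0=0.02))  # denominator far from zero
print(coverage_of_delta_ci(alpha_0=0.05, sd_0=0.02))  # denominator close to zero
```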

In some important cases, such as building a confidence interval for a ratio of two OLS regression parameters, these desirable properties of a confidence interval may be out of reach when the intervals are obtained by a class of methods that includes the delta method. Confidence intervals of the form (12) fall in a class that Dufour [15] (page 1366) refers to as Wald-type confidence sets.

Specifically, Gleser and Hwang [14], Koschat [16], and Dufour [15] have proved what are known in the relevant literature as impossibility theorems. Recall that, in the context of the present paper, $\alpha_z$ and $\alpha_0$ are the expectations of the limiting jointly normal distribution of $(\hat\alpha_z, \hat\alpha_0)$, and the variability estimator is based on the ratio between the least-squares estimators of $\alpha_z$ and $\alpha_0$. Koschat [16] shows that, in general, “[T]here is no procedure that with probability 1 gives bounded α-level confidence intervals for the ratio”. Dufour [15] also shows that any confidence interval of the form

$$\left[\frac{\hat\alpha_z}{\hat\alpha_0}\,\bar z \pm \tau^*\sqrt{\hat w\left(\frac{\bar z}{\hat\alpha_0}\right)^2}\right] \quad (\text{say, } \tau^* = 1.96),$$

derived, for example, from the estimation of a linear regression equation, such as (12), has zero coverage probability in the worst case, i.e., a confidence level of zero.

Fortunately, there is a way out of these impossibility theorems. Both Koschat and Dufour indicate that the Fieller confidence interval is a better alternative for estimating a ratio [17]. The general principle that underlies the procedure for obtaining a Fieller confidence interval is clearly explained in Rao [18]. This procedure is used here to provide an alternative confidence interval for the variability estimator. Setting aside the expectation $E(z)$, the ratio $\lambda = \alpha_z / \alpha_0$ is the parameter of interest, and we assume again that the asymptotic covariance matrix of the vector $(\hat\alpha_z, \hat\alpha_0)'$ is $\begin{pmatrix}\hat\sigma_z^2 & \hat\sigma_{zm} \\ \hat\sigma_{zm} & \hat\sigma_0^2\end{pmatrix}$. The equality $\lambda = \alpha_z / \alpha_0$ implies $\alpha_z - \lambda\alpha_0 = 0$ and, if λ is the “true” ratio, the least-squares estimator of $\alpha_z - \lambda\alpha_0$ is $\hat\alpha_z - \lambda\hat\alpha_0$, with $E(\hat\alpha_z - \lambda\hat\alpha_0) = 0$ and asymptotic variance $\hat\sigma_z^2 + \lambda^2\hat\sigma_0^2 - 2\lambda\hat\sigma_{zm}$. Letting t denote the Student’s t critical value for the $(1 - \alpha)100\%$ confidence level, the procedure consists of determining the set of values of λ for which the inequality

$$\frac{(\hat\alpha_z - \lambda\hat\alpha_0)^2}{\hat\sigma_z^2 + \lambda^2\hat\sigma_0^2 - 2\lambda\hat\sigma_{zm}} \le t^2$$

holds with a probability equal to $(1 - \alpha)$:

$$P\!\left(\frac{(\hat\alpha_z - \lambda\hat\alpha_0)^2}{\hat\sigma_z^2 + \lambda^2\hat\sigma_0^2 - 2\lambda\hat\sigma_{zm}} \le t^2\right) = P\!\left[(\hat\alpha_z - \lambda\hat\alpha_0)^2 - t^2(\hat\sigma_z^2 + \lambda^2\hat\sigma_0^2 - 2\lambda\hat\sigma_{zm}) \le 0\right] = 1 - \alpha \quad (13)$$

In other words, the confidence set is the set of values of λ over which the null hypothesis $H_0: \alpha_z - \lambda\alpha_0 = 0$ is not rejected against the alternative $H_1: \alpha_z - \lambda\alpha_0 \ne 0$ in an F-test. It is worth recalling here that the square of the Student’s t is equal (in distribution) to Fisher’s F. The expression on the left-hand side of the last inequality (inside the probability) can be written as a quadratic form in λ:

$$Q(\lambda) = (\hat\alpha_z - \lambda\hat\alpha_0)^2 - t^2(\hat\sigma_z^2 + \lambda^2\hat\sigma_0^2 - 2\lambda\hat\sigma_{zm}) = \mu\lambda^2 + \rho\lambda + \delta, \quad (14)$$

with $\mu = \hat\alpha_0^2 - t^2\hat\sigma_0^2$, $\rho = 2(t^2\hat\sigma_{zm} - \hat\alpha_z\hat\alpha_0)$, and $\delta = \hat\alpha_z^2 - t^2\hat\sigma_z^2$.

Provided that the quadratic form $\mu\lambda^2 + \rho\lambda + \delta$ in λ has two distinct real-valued roots, these roots represent the two bounds of the ratio parameter $\alpha_z/\alpha_0$. It is worth noting here that, if the non-centered regression is used instead, a similar equation in matrix form determines the bounds of the Fieller confidence interval [19]. The conditions determining the nature and the signs of the roots of $Q(\lambda)$ are discussed in Scheffé [20]. In fact, from

$$\Delta = \left(\frac{\rho}{2}\right)^2 - \mu\delta = (t^2\hat\sigma_{zm} - \hat\alpha_z\hat\alpha_0)^2 - (\hat\alpha_0^2 - t^2\hat\sigma_0^2)(\hat\alpha_z^2 - t^2\hat\sigma_z^2),$$

it is clear that if $\Delta > 0$, then $Q(\lambda)$ has two distinct real-valued roots. Any value of λ falling between these two roots is such that $Q(\lambda)$ has the opposite sign of μ. Hence, if $\Delta > 0$ and $\mu > 0$, then $Q(\lambda) < 0$ and one obtains a bounded confidence interval for $\gamma(E(x))$. However, if $\Delta > 0$ and $\mu < 0$, the confidence interval will be the (unbounded) complement set of a bounded interval. If $\Delta < 0$, then $Q(\lambda)$ has the sign of μ. So, if, in addition to $\Delta < 0$, one has $\mu < 0$, then the confidence interval is the entire real line. The case ($\Delta < 0$, $\mu > 0$) is mathematically impossible, as shown by Scheffé [20], who also notes that the cases $\Delta = 0$ and $\mu = 0$ are zero-probability (negligible) events.
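
A minimal computational sketch of the Fieller procedure just described, covering the three possible cases for $(\Delta, \mu)$. The inputs are the same centered-regression quantities as in the delta-method sketch above, and the numerical values are again placeholders:

```python
import numpy as np

def fieller_ci(alpha_z, alpha_0, cov, z_bar, crit=1.96):
    """Fieller confidence set for gamma(E(x)) = (alpha_z / alpha_0) * z_bar,
    based on the quadratic Q(lambda) = mu*lambda^2 + rho*lambda + delta in (14)."""
    s_z2, s_02, s_zm = cov[0, 0], cov[1, 1], cov[0, 1]
    mu = alpha_0**2 - crit**2 * s_02
    rho = 2.0 * (crit**2 * s_zm - alpha_z * alpha_0)
    delta = alpha_z**2 - crit**2 * s_z2
    disc = (rho / 2.0) ** 2 - mu * delta           # the discriminant Delta
    if disc > 0 and mu > 0:                        # bounded interval
        lo = (-rho / 2.0 - np.sqrt(disc)) / mu
        hi = (-rho / 2.0 + np.sqrt(disc)) / mu
        return ("bounded interval", lo * z_bar, hi * z_bar)
    if disc > 0 and mu < 0:                        # complement of a bounded interval
        lo = (-rho / 2.0 + np.sqrt(disc)) / mu
        hi = (-rho / 2.0 - np.sqrt(disc)) / mu
        return ("complement of the interval", lo * z_bar, hi * z_bar)
    return ("entire real line", -np.inf, np.inf)   # Delta < 0 and mu < 0

# Placeholder inputs, for illustration only
print(fieller_ci(alpha_z=0.012, alpha_0=7.5,
                 cov=np.array([[1e-8, 2e-9], [2e-9, 4e-7]]), z_bar=388.0))
```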

The event $A = \{\mu = \hat\alpha_0^2 - t^2\hat\sigma_0^2 < 0\}$ therefore determines whether the Fieller interval is bounded. This event means that the Student test for $\alpha_0$ leads to no rejection of the null $H_0: \alpha_0 = 0$. Under this null assumption, the event A has probability $P(A) = 1 - \alpha$. For example, if the null $H_0: \alpha_0 = 0$ is not rejected at the 5% significance level in a t-test, then the 95%-level confidence interval will be unbounded. This is in sharp contrast to the always-bounded confidence interval resulting from the delta method, which assumes away $H_0: \alpha_0 = 0$. By doing so, it imposes a sure rejection ($P(A) = 0$) of $H_0$ in the described Student test, whence $\alpha = 1$. In other words, it imposes a zero coverage probability for the confidence interval.

In the context of the Postal Service, if, for example, street time is the dependent variable and volume the cost driver of interest, the possibility that the best prediction of street time given average mail volume (the meaning of the parameter $\alpha_0$) is equal to zero is hard to imagine. However, unless $\alpha_0 \ne 0$ is imposed through some additional modeling as an estimation constraint, there is no guarantee that the corresponding estimate will be statistically significant. It seems reasonable, therefore, to always report both confidence intervals, delta and Fieller, regardless of the researcher’s prior belief about the possibility that $\alpha_0$ is null.

5. Results and Discussion

5.1. Empirical Examples of Delta and Fieller Intervals

The above theory is applied to a data set that is publicly available at

https://www.prc.gov/dockets/document/109489. In the example, the dependent variable is “street time” and is labelled “thours”. Table 1 displays the estimation results, along with all the information needed to compute the confidence intervals for the elasticity estimate. The variable with respect to which the elasticity is computed is “volume”. Its mean-centered version is labelled “CVol” (centered volume) in the top, larger table containing the regression results. Its coefficient and the constant term are labelled in the bottom, smaller table as “Alpha_Vol” and “Alpha_Zero”, respectively. The smaller table also contains the sample mean of the volume variable and the covariance matrix of the pair (Alpha_Vol, Alpha_Zero). Recall here that the constant term is equal to $g_{\hat\theta}(\bar x)$ and is the denominator of the elasticity. The numerator of the elasticity is the product of Alpha_Vol and the sample average of volume. The volume elasticity is estimated as 0.62064233. Using the described information, the delta-method 95% confidence interval for the volume elasticity, denoted below by DCI, is

$$DCI = \left[\hat\gamma(E(x)) - 1.96\sqrt{\hat w\left(\frac{\bar z}{\hat\alpha_0}\right)^2};\ \hat\gamma(E(x)) + 1.96\sqrt{\hat w\left(\frac{\bar z}{\hat\alpha_0}\right)^2}\right] = [0.592578355;\ 0.648706305].$$

The Fieller 95%-confidence interval, denoted below by FCI, is

$$FCI = [0.592581517;\ 0.648733034].$$

Both confidence intervals are computed after estimating the centered regression and collecting the estimates $\hat\alpha_z$ (Alpha_Vol) and $\hat\alpha_0$ (Alpha_Zero), along with the corresponding estimate of their variance-covariance matrix. The volume elasticity is computed as $\hat\gamma(E(x)) = \frac{\hat\alpha_z}{\hat\alpha_0}\bar z$, where $\bar z$ is the sample mean of the volume variable. The delta-method interval, DCI, is obtained from (12). To compute the Fieller confidence interval for the ratio $\alpha_z/\alpha_0$, the expressions μ, ρ, and δ are first computed as they appear in (14). The discriminant is computed as $\Delta = (\rho/2)^2 - \mu\delta$ and, if both Δ and μ are positive, the left bound of the interval (respectively, the right bound) is computed as $L = \frac{1}{\mu}\left(-\frac{\rho}{2} - \sqrt{\Delta}\right)$ (respectively, $U = \frac{1}{\mu}\left(-\frac{\rho}{2} + \sqrt{\Delta}\right)$). The final interval, FCI, for $\gamma(E(x))$ is obtained by simply multiplying the above bounds by $\bar z$.
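
A compact sketch of the full workflow described above, reusing the two functions defined earlier (delta_ci and fieller_ci) on the output of a centered regression. The data-loading step and the file name are placeholders, since the exact file layout of the docketed data set is not reproduced here, and only a single explanatory variable is shown for compactness:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder loading step; the actual file layout in docket 109489 may differ.
df = pd.read_csv("street_time_data.csv")            # hypothetical file name
y = df["thours"].to_numpy()                          # dependent variable: street time
z = df["volume"].to_numpy()                          # cost driver of interest

# Centered regression (8) with one explanatory variable, for illustration only.
cvol = z - z.mean()
X = sm.add_constant(np.column_stack([cvol, cvol**2]))
fit = sm.OLS(y, X).fit()

alpha_0, alpha_z = fit.params[0], fit.params[1]      # "Alpha_Zero", "Alpha_Vol"
cov = fit.cov_params()[np.ix_([1, 0], [1, 0])]       # covariance of (alpha_z, alpha_0)
z_bar = z.mean()

print("elasticity:", alpha_z / alpha_0 * z_bar)
print("DCI:", delta_ci(alpha_z, alpha_0, cov, z_bar))
print("FCI:", fieller_ci(alpha_z, alpha_0, cov, z_bar))
```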

Table 1. Regression results with centered explanatory variables.

As discussed above, the main difference between the two confidence intervals is that FCI accounts for the possibility that $\alpha_0$ is null, while DCI does not. In this example, the two intervals are seen to be very close to one another. The difference between the radii of the Fieller and the delta-method intervals is on the order of $10^{-5}$, the delta-method confidence interval being very slightly narrower. In particular, the Fieller confidence interval is bounded, because $\Delta > 0$ and $\mu > 0$.

5.2. The Translog Regression Function

Because the translog functional form is quadratic in the logs of the components of x, the above analysis carries over to this model, but in a rather trivial way. The model considered becomes

$$\ln(T) = g_\theta(\ln x_1, \dots, \ln x_d) + u \quad (15)$$

with $g_\theta$ defined, as above, to be quadratic. To add to the notation used so far, the elasticity of a function f of the vector $(z, x_2, \dots, x_d)$ with respect to the argument z, computed at $(z, x_2, \dots, x_d) = (a_1, a_2, \dots, a_d)$, will be denoted by $E_f(z)|_{(z, x_2, \dots, x_d) = (a_1, a_2, \dots, a_d)}$ or, simply, $E_f(z)|_{(a_1, a_2, \dots, a_d)}$. The sample geometric mean of z will be denoted by $\bar z_g$. Assuming that the model (15) is estimated with centered explanatory variables and writing $\widehat{\ln(T)} = g_{\hat\theta}(\ln x_1, \dots, \ln x_d)$, one has

$$\overline{\ln(z)} = \ln(\bar z_g) \quad (16)$$

$$E_{e^{\widehat{\ln(T)}}}(z)\Big|_{(\overline{\ln(z)},\, \overline{\ln(x_2)},\, \dots,\, \overline{\ln(x_d)})} = \frac{\partial g_{\hat\theta}(\ln x_1, \dots, \ln x_d)}{\partial \ln(z)}\Bigg|_{(\overline{\ln(z)},\, \overline{\ln(x_2)},\, \dots,\, \overline{\ln(x_d)})} \quad (17)$$

$$= \frac{\partial g_{\hat\theta}(\ln x_1, \dots, \ln x_d)}{\partial \ln(z)}\Bigg|_{(\ln(\bar z_g),\, \ln(\bar x_{2g}),\, \dots,\, \ln(\bar x_{dg}))} \quad (18)$$

The right-hand side of (18) is the coefficient of $\ln(z) - \overline{\ln(z)}$ in the centered regression result:

$$\hat\alpha_{\ln z} = \frac{\partial g_{\hat\theta}(\ln x_1, \dots, \ln x_d)}{\partial \ln(z)}\Bigg|_{(\ln(\bar z_g),\, \ln(\bar x_{2g}),\, \dots,\, \ln(\bar x_{dg}))}.$$

It is the elasticity of the predicted street time, $e^{\widehat{\ln(T)}}$, with respect to z, computed at the vector of log geometric means. So, the inference on this elasticity is provided directly by the statistical software package.
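
A brief sketch of this shortcut for the translog case, under the same caveats as before (illustrative data, and a single regressor shown for compactness):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
volume = rng.lognormal(mean=3.0, sigma=0.4, size=2000)        # illustrative cost driver
thours = np.exp(0.5 + 0.6 * np.log(volume)
                + 0.02 * np.log(volume) ** 2
                + rng.normal(0.0, 0.1, 2000))                  # illustrative street time

ln_y = np.log(thours)
ln_z = np.log(volume)
ln_z_c = ln_z - ln_z.mean()                                    # centering in logs

fit = sm.OLS(ln_y, sm.add_constant(np.column_stack([ln_z_c, ln_z_c**2]))).fit()

# The coefficient of the centered log regressor is itself the elasticity at the
# geometric means, so its confidence interval is read directly from the output.
elasticity = fit.params[1]
ci_low, ci_high = fit.conf_int()[1]
print(elasticity, ci_low, ci_high)
```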

6. Conclusion

The present paper has provided two alternative ways to compute confidence intervals for cost-elasticity (variability) when the underlying functional form in the regression equation is quadratic. The polynomial aspect of the assumed regression function has considerably simplified the computations. A similar analysis can be conducted on any functional form that is linear in the parameters, but will be a bit more complex if the function is not polynomial in its arguments. Both the delta-method and the Fieller confidence intervals have been derived and illustrated with an empirical example based on publicly available Postal Service data. For the sake of completeness and transparency, both confidence intervals should be reported in applications.

Acknowledgements

The authors would like to thank Irina Murtazashvili for her valuable comments. All errors are the authors’.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] United States Postal Service (2014) Report on the City Carrier Street Time Study. Postal Regulatory Commission, Washington DC, 25-26.
https://www.prc.gov/docs/90/90869/prop.13.city%20carrier.report.pdf
[2] Bradley, D.M. (2016) Research on Estimating the Variability of Purchased Highway Transportation Capacity with Respect to Volume. Postal Regulatory Commission, Washington DC, 2.
https://www.prc.gov/docs/96/96940/Research.Report.Proposal.Four.pdf
[3] Bradley, D.M. (2019) A New Study of Special Purpose Route Carrier Costs. Postal Regulatory Commission, Washington DC.
https://www.prc.gov/docs/109/109484/spr.public.study.report.pdf
[4] United States Postal Service (2012) Postal Service Report Regarding Cost Studies: Response to PRC Order No. 1626 (April 18, 2012). Postal Regulatory Commission, Washington DC, 6.
https://www.prc.gov/docs/86/86858/Report_Response_Order_1626.pdf
[5] Cazals, C., Duchemin, P., Florens, J-P., Roy, B. and Vialaneix, O. (2002) An Econometric Study of Cost Elasticity in the Activities of Post Office Counters. In: Crew, M.A. and Kleindorfer, P.R., Eds., Postal and Delivery Services, Kluwer Academic Publishers, Boston, 161-170.
https://doi.org/10.1007/978-1-4613-0253-7_9
[6] Echambadi, R. and Hess, J.D. (2007) Mean-Centering Does Not Alleviate Collinearity Problems in Moderated Multiple Regression Models. Marketing Science, 26, 438-445.
https://doi.org/10.1287/mksc.1060.0263
[7] Lehmann, E.L. and Casella, G. (1998) Theory of Point Estimation. 2nd Edition, Springer, New York, 150.
https://doi.org/10.1007/b98854
[8] Miller, S.E., Capps Jr., O. and Wells, G.J. (1984) Confidence Intervals for Elasticities and Flexibilities from Linear Equations. American Journal of Agricultural Economics, 66, 392-396.
https://doi.org/10.2307/1240807
[9] Anderson, R.G. and Thursby, J.G. (1986) Confidence Interval for Elasticity Estimators in Translog Models. The Review of Economics and Statistics, 68, 647-656.
[10] Hirschberg, J.G., Lye, N. and Slottje, D.J. (2008) Inferential Methods for Elasticity Estimates. Journal of Econometrics, 147, 299-315.
[11] Christensen, L.R., Jorgenson, D.W. and Lau, L.J. (1973) Transcendental Logarithmic Production Frontiers. The Review of Economics and Statistics, 55, 28-45.
https://doi.org/10.2307/1927992
[12] Borovkov, A.A. (1998) Mathematical Statistics. First Edition, Theorem 1, Gordon and Breach Science Publishers, New York, 13.
[13] White, H. (1980) A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity. Econometrica, 48, 817-838.
https://doi.org/10.2307/1912934
[14] Gleser, L.J. and Hwang, J.T. (1987) The Nonexistence of 100(1-α)% Confidence Sets of Finite Expected Diameter in Errors-in-Variables and Related Models. The Annals of Statistics, 15, 1351-1362.
https://doi.org/10.1214/aos/1176350597
[15] Dufour, J.-M. (1997) Some Impossibility Theorems in Econometrics with Applications to Structural and Dynamic Models. Econometrica, 65, 1365-1387.
https://doi.org/10.2307/2171740
[16] Koschat, M.A. (1987) A Characterization of the Fieller Solution. The Annals of Statistics, 15, 462-468.
https://doi.org/10.1214/aos/1176350282
[17] Fieller, E.C. (1940-1941) The Biological Standardization of Insulin. Supplement to the Journal of the Royal Statistical Society, 7, 1-64.
https://doi.org/10.2307/2983630
[18] Rao, C.R. (1973) Linear Statistical Inference and Its Applications. 2nd Edition, Wiley, Hoboken, 241.
[19] Hirschberg, J.G., Lye, N. and Slottje, D.J. (2008) Inferential Methods for Elasticity Estimates. Journal of Econometrics, 147, 299-315.
https://doi.org/10.1016/j.jeconom.2008.09.037
[20] Scheffé, H. (1970) Multiple Testing versus Multiple Estimation. Improper Confidence Sets. Estimation of Directions and Ratios. The Annals of Mathematical Statistics, 41, 1-29.
https://doi.org/10.1214/aoms/1177697184
