A Sequential Shrinkage Estimating Method for Tobit Regression Model

Haibo Lu, Cuiling Dong, Juling Zhou

School of Mathematical Sciences, Xinjiang Normal University, Urumqi, China.

**DOI:** 10.4236/ojmsi.2021.93018


In applications of Tobit regression models, we often encounter data sets that contain many variables, of which only a few contribute to the model. Estimating the "non-effective" variables therefore wastes samples in the inference. In this paper, we use a sequential procedure to construct a fixed-size confidence set for the "effective" parameters of the model, based on an adaptive shrinkage estimate, so that the effective coefficients can be efficiently identified with the minimum sample size under the Tobit regression model. A fixed design is considered in the numerical simulation.

Share and Cite:

Lu, H., Dong, C. and Zhou, J. (2021) A Sequential Shrinkage Estimating Method for Tobit Regression Model. *Open Journal of Modelling and Simulation*, **9**, 275-280. doi: 10.4236/ojmsi.2021.93018.

1. Introduction

The Tobit regression model, also called the sample selection model or limited dependent variable model, see [1] [2], is a class of models whose dependent variables satisfy certain constraints; in some cases it is also called the truncated or censored regression model. The Tobit regression model is widely used in econometrics and other research fields and plays an increasingly important role in the analysis of cross-sectional and time series data, as illustrated in [3] [4]. However, in applications the data sets we encounter usually have many explanatory variables, of which only a few contribute to the model. Methods such as LASSO and LARS, see [5] [6], have been proposed to pick out the effective variables; however, it remains intractable to know how many samples are needed to identify the effective variables while simultaneously making the parameter estimates achieve a pre-specified accuracy. For the linear regression model, Wang and Chang propose a sequential shrinkage estimation method in [7] that identifies the effective variables and attains the required estimation accuracy. For Tobit regression models, no similar method has been proposed and much work remains. To handle this problem, we propose a sequential procedure for constructing a fixed-size confidence set for the effective parameters, based on an adaptive shrinkage estimate (ASE), such that the effective coefficients can be efficiently identified with the minimum sample size under a fixed design.

The rest of this paper is organized as follows. In Section 2, we give the adaptive shrinkage estimate (ASE) based on the least absolute deviations (LAD) estimate of the Tobit regression model and its asymptotic properties, and present the sequential sampling strategy based on the ASE, the stopping rule, and the random-size confidence set. In Section 3, an example with numerical simulation on synthesized data sets illustrates the performance of the proposed sequential fixed-size confidence estimation. Section 4 concludes.

2. Sequential Adaptive Shrinkage Estimate Based on LAD

2.1. Asymptotic Properties of LAD

Suppose ${a}^{+}=\mathrm{max}\left\{a,c\right\}$, where c is a known constant. Then the Tobit regression model can be defined as:

${y}_{i}^{+}=\mathrm{max}\left\{{x}_{i}^{\text{T}}{\beta}_{0}+{\epsilon}_{i},c\right\},i=1,2,\cdots ,n$ (1)

where ${y}_{i}$ is the dependent variable, ${\beta}_{0}$ is a p-dimensional vector of unknown regression coefficients, ${x}_{i}$ is a p-dimensional vector of covariates, and ${\epsilon}_{i}$ is a random error. Without loss of generality, suppose $c=0$. Let ${\epsilon}_{i},i=1,2,\cdots ,n$ be independent and identically distributed normal random variables with mean 0 and variance ${\sigma}^{2}$; then the likelihood function is

$L={\displaystyle \underset{0}{\prod}\left(1-\Phi \left({x}_{i}^{\text{T}}\beta /\sigma \right)\right)}{\displaystyle \underset{1}{\prod}{\sigma}^{-1}\varphi \left(\left({y}_{i}-{x}_{i}^{\text{T}}\beta \right)/\sigma \right)}$ (2)

where $\Phi $ and $\varphi $ are the standard normal distribution function and density function, and $\underset{0}{\Pi}$ and $\underset{1}{\Pi}$ denote the products over $\left\{i:{y}_{i}\le 0\right\}$ and $\left\{i:{y}_{i}>0\right\}$, respectively.
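As a concrete illustration, the likelihood in Equation (2) can be evaluated numerically. The sketch below is a minimal Python implementation assuming $c=0$ and parameterizing $\sigma$ on the log scale; the function name and parameterization are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def tobit_neg_loglik(params, X, y):
    """Negative log of the likelihood in Equation (2), censoring point c = 0.

    params = (beta_1, ..., beta_p, log_sigma); X is n x p; y is the
    observed censored response y_i^+ = max(x_i' beta + eps_i, 0).
    """
    beta, sigma = params[:-1], np.exp(params[-1])
    xb = X @ beta
    cens = y <= 0                                    # the set {i : y_i <= 0}
    # censored part: log(1 - Phi(x_i' beta / sigma)) = log Phi(-x_i' beta / sigma)
    ll = norm.logcdf(-xb[cens] / sigma).sum()
    # uncensored part: log( sigma^{-1} * phi((y_i - x_i' beta) / sigma) )
    ll += (norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)).sum()
    return -ll
```

On simulated data, the negative log-likelihood evaluated at the true parameters should be smaller than at an arbitrary wrong value, which is a quick sanity check for the two product terms.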

Powell proposed the least absolute deviations (LAD) estimate of ${\beta}_{0}$ in [8], written as $\tilde{\beta}_{n}$, which minimizes the function

${Q}_{n}\left(\beta \right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}\left|{y}_{i}^{+}-\mathrm{max}\left\{{x}_{i}^{\text{T}}\beta ,0\right\}\right|}$ (3)

Assume (A1) ${\mathrm{sup}}_{i}\Vert {x}_{i}\Vert <\infty $ and (A2) the density function $f$ of the random error ${\epsilon}_{i}$ satisfies $f\left(0\right)>0$ and $\mathrm{med}\left({\epsilon}_{i}\right)=0$, and suppose there exists some $\delta >0$ such that

$\underset{n\to \infty}{\mathrm{lim}}\frac{1}{\mathrm{log}n}{\lambda}_{\mathrm{min}}\left({\displaystyle \underset{i=1}{\overset{n}{\sum}}I\left({x}_{i}^{\text{T}}{\beta}_{0}>\delta \right){x}_{i}{x}_{i}^{\text{T}}}\right)=\infty $ (4)

where ${\lambda}_{\mathrm{min}}\left(\cdot \right)$ denotes the smallest eigenvalue.

Under these assumptions, Chen and Wu proved that $\underset{n\to \infty}{\mathrm{lim}}\tilde{\beta}_{n}={\beta}_{0},a.s.$ and

$\left(2f\left(0\right){M}_{n}^{1/2}\right)\cdot \sqrt{n}\left(\tilde{\beta}_{n}-{\beta}_{0}\right)\stackrel{d}{\to}N\left(0,{I}_{p}\right)$ (5)

in [9], where ${I}_{p}$ is the $p\times p$ identity matrix and ${M}_{n}=E\left(\frac{1}{n}{\displaystyle \underset{i}{\sum}I\left({x}_{i}^{\text{T}}{\beta}_{0}>0\right){x}_{i}{x}_{i}^{\text{T}}}\right)$.
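Powell's estimator minimizes the non-smooth, piecewise-linear objective ${Q}_{n}\left(\beta \right)$ in Equation (3), so a derivative-free optimizer is a natural choice. The sketch below is one illustrative way to compute $\tilde{\beta}_{n}$; the least-squares starting value and the Nelder-Mead tolerances are assumptions for illustration, not prescriptions from [8].

```python
import numpy as np
from scipy.optimize import minimize

def lad_tobit(X, y):
    """Censored LAD: minimize Q_n(beta) = sum_i |y_i^+ - max(x_i' beta, 0)|."""
    def Qn(beta):
        return np.abs(y - np.maximum(X @ beta, 0.0)).sum()
    # Q_n is piecewise linear (non-differentiable), so use Nelder-Mead,
    # started from the crude (biased but cheap) least-squares fit.
    beta_init = np.linalg.lstsq(X, y, rcond=None)[0]
    res = minimize(Qn, beta_init, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 50000})
    return res.x
```

With a moderate sample size the minimizer lands close to the true coefficients, consistent with the strong consistency result of Chen and Wu quoted above.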

2.2. Adaptive Shrinkage Estimate

Let $\kappa =\kappa \left(n\right)$ be a non-random function of *n* such that for some $0<\delta <1/2$ and $\gamma >0$, ${n}^{1/2}\kappa \to 0$ and ${n}^{1/2+\gamma \delta}\kappa \to \infty $ as $n\to \infty $. Then, under assumptions (A1) and (A2), by Equation (4) we can see that ${n}^{1/2-\eta}\left(\tilde{\beta}_{n}-{\beta}_{0}\right)=O\left(1\right)$ almost surely as $n\to \infty $ for some $\eta >0$. Similar to Wang and Chang in [7], we define $\hat{\beta}_{n}={I}_{n}\left(\epsilon \right)\tilde{\beta}_{n}$ as the adaptive shrinkage estimate (ASE) of ${\beta}_{0}$, where ${I}_{n}\left(\epsilon \right)=\mathrm{diag}\left\{{I}_{n1}\left(\epsilon \right),{I}_{n2}\left(\epsilon \right),\cdots ,{I}_{np}\left(\epsilon \right)\right\}$ is a $p\times p$ diagonal matrix of indicators that flag the coefficients identified as effective.
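The indicator $I_{nj}\left(\epsilon \right)$ is not spelled out here; a common construction in this shrinkage literature is hard thresholding of the pilot LAD estimate, and the sketch below adopts that reading purely as an assumption.

```python
import numpy as np

def ase(beta_tilde, eps):
    """Adaptive shrinkage estimate beta_hat = I_n(eps) * beta_tilde.

    Assumption: I_nj(eps) = 1 if |beta_tilde_j| > eps, else 0 (hard
    thresholding); coefficients flagged as non-effective are set to zero.
    Returns the shrunken estimate and the diagonal of I_n(eps).
    """
    keep = np.abs(beta_tilde) > eps
    return np.where(keep, beta_tilde, 0.0), keep
```

Because the LAD estimate converges at rate close to $\sqrt{n}$ while the threshold shrinks more slowly, the zero coordinates are zeroed out (and the nonzero ones retained) with probability tending to one.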

So far, we have good statistical properties of the proposed ASE under a non-random sample size, but our goal is to determine a sample size under which the ASE attains the required accuracy, so we introduce a sequential sampling scheme based on the ASE below. The construction of the confidence set for ${\beta}_{0}$ depends on the asymptotic distribution of $\hat{\beta}_{n}$, and under sequential analysis the sample size is a random variable, so we need to study the asymptotic properties of the ASE under a random sample size. Fortunately, uniform continuity in probability, see [10] and [11], is a sufficient condition for the randomly stopped sequence to have the same asymptotic distribution as the fixed-sample-size estimate. The sequence $\sqrt{n}\left(\hat{\beta}_{n}-{\beta}_{0}\right),n=1,2,\cdots $, has this property, which implies the following theorem.

Theorem 1. Suppose that assumptions (A1) and (A2) are satisfied, and let $N\left(t\right)$ be a positive integer-valued random variable such that $N\left(t\right)/t$ converges to 1 in probability as $t\to \infty $. Then

$\sqrt{N\left(t\right)}\left(\hat{\beta}_{N\left(t\right)}-{\beta}_{0}\right)\to N\left(0,{I}_{0}{\Sigma}^{-1}{I}_{0}\right)$

in distribution as $t\to \infty $.

From Theorem 1, we can construct a confidence set for ${\beta}_{0}$ and a stopping rule for the sequential sampling procedure that determines the final sample size. Let $\left\{\left({y}_{i},{x}_{i}\right):i=1,2,\cdots ,k\right\}$, denoted ${C}_{k}$, be the first *k* observations. Define the stopping rule ${N}_{d}$ as

$N={N}_{d}\equiv \mathrm{inf}\left\{k\ge {n}_{0}:k\ge \frac{{a}_{k}^{2}{\nu}_{k}}{{d}^{2}}\right\}$ (6)

where ${n}_{0}$ is the initial sample size, ${a}_{k}^{2}$ is the critical value of the confidence ellipsoid at the prescribed coverage level, and ${\nu}_{k}$ is the estimate, from the first $k$ observations, of the maximum eigenvalue $\nu $ appearing in Theorem 2.

For the sequential estimation procedure, one new observation is collected at a time until the stopping criterion is satisfied. When the stopping rule holds, a confidence set for ${\beta}_{0}$ is constructed from the *N* samples as follows:

${R}_{N}=\left\{Z\in {R}^{p}:\frac{{S}_{N}}{N}\le \frac{{d}^{2}}{{\nu}_{N}};{I}_{{N}_{j}}\left(\epsilon \right)=0\Rightarrow {z}_{j}=0,1\le j\le p\right\}$ (7)

where ${S}_{N}={\left({Z}_{{N}_{1}}-\hat{\beta}_{{N}_{1}}\right)}^{\text{T}}\tilde{\Sigma}_{11}\left({Z}_{{N}_{1}}-\hat{\beta}_{{N}_{1}}\right)$ and the subscript 1 refers to the coordinates identified as effective. Properties of the sequential procedure and the confidence set ${R}_{N}$ are summarized below.
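The membership test defining ${R}_{N}$ above can be sketched directly in code. The names below are illustrative, and the sketch assumes $\tilde{\Sigma}_{11}$ is the estimated covariance block for the coordinates kept by the ASE.

```python
import numpy as np

def in_RN(z, beta_hat, keep, Sigma11, N, d, nu_N):
    """Membership test for the confidence set R_N.

    z, beta_hat: length-p vectors; keep: boolean diagonal of I_N(eps);
    Sigma11: estimated covariance block for the kept coordinates (assumed).
    """
    z = np.asarray(z, dtype=float)
    if np.any(z[~keep] != 0.0):          # I_Nj(eps) = 0 forces z_j = 0
        return False
    diff = z[keep] - beta_hat[keep]
    S_N = diff @ Sigma11 @ diff          # quadratic form S_N
    return S_N / N <= d**2 / nu_N
```

The two clauses mirror the set definition: candidate points must vanish on the coordinates flagged as non-effective, and the remaining coordinates must lie inside the ellipsoid of radius governed by $d$.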

Theorem 2. Assume that (A1) and (A2) are satisfied, and let *N* be the stopping time defined by the stopping rule ${N}_{d}$. Then: 1) $\underset{d\to 0}{\mathrm{lim}}{d}^{2}N/{a}^{2}\nu =1$ almost surely; 2) $\underset{d\to 0}{\mathrm{lim}}{d}^{2}E\left(N\right)/{a}^{2}\nu =1$; 3) $\underset{d\to 0}{\mathrm{lim}}{\hat{p}}_{0}\left(N\right)={p}_{0}$ almost surely; 4) $\underset{d\to 0}{\mathrm{lim}}E\left({\hat{p}}_{0}\left(N\right)\right)={p}_{0}$, where $\nu $ is the maximum eigenvalue of the matrix ${I}_{0}{\Sigma}^{-1}{I}_{0}$ and ${\hat{p}}_{0}\left(N\right)$ is the number of effective variables identified at the stopping time.
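To make the sampling scheme concrete, the loop below sketches a stopping rule of the form suggested by Theorem 2, which forces $N\approx {a}^{2}\nu /{d}^{2}$. The chi-square form of the critical value and the `nu_estimator` callback are assumptions for illustration; in the paper, $\nu_k$ would be re-estimated from the first $k$ observations of the actual model fit.

```python
import numpy as np
from scipy.stats import chi2

def stopping_time(nu_estimator, d, alpha=0.05, p0_hat=2, n0=20, n_max=10**6):
    """First k >= n0 with k >= a_k^2 * nu_k / d^2.

    a_k^2: (1 - alpha) chi-square quantile with p0_hat degrees of freedom
    (assumed form of the critical value for the confidence ellipsoid);
    nu_k = nu_estimator(k) estimates the largest eigenvalue nu.
    """
    a2 = chi2.ppf(1.0 - alpha, df=p0_hat)
    for k in range(n0, n_max + 1):           # one observation at a time
        if k * d**2 >= a2 * nu_estimator(k):
            return k
    return n_max
```

With a constant estimate $\nu_k \equiv 1$ and $d=0.3$, the rule stops essentially at ${a}^{2}\nu /{d}^{2}$, in line with part 1) of Theorem 2.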

3. Example and Simulation

We evaluate the performance of the proposed method via sequential fixed-size confidence estimation using synthesized data sets. As mentioned previously, by the definition of the stopping rule, when sampling stops the final confidence ellipsoid has the prescribed precision and coverage probability, so we can compare the average stopping times of the procedures based on LAD and on ASE. Since the proposed method ignores the non-effective variables, we expect its average stopping time to be significantly smaller than that of the LAD-based procedure, which has no variable identification mechanism. If the *p*_{0} effective variables were known in advance, the most efficient procedure would of course use only these *p*_{0} variables. Therefore, we also construct a sequential procedure for this situation, and the results with known *p*_{0} serve as a baseline at which, asymptotically, the smallest sample size is achieved.

The synthesized data sets for the model with a fixed design are generated as follows: the regressors ${x}_{i}$ are generated independently beforehand from a standard multivariate normal distribution with mean 0 and identity covariance matrix, and the error term ${e}_{i}$ is drawn independently from the standard normal distribution for each $i\ge 1$. The responses are generated by Equation (1) with the true parameter ${\beta}_{0}=\left(-1.2,2.0,0,0,0,0,0,0,0,0\right)$, which has 8 non-effective variables. Different precisions of the confidence ellipsoid, $d\in \left\{0.3,0.4,0.5,0.6\right\}$, are chosen with coverage probability equal to 95%, i.e. $\alpha =0.05$, in the simulation. We choose $\gamma =1$, $\delta =0.55$ and $\theta =0.70$ in analyzing the simulated data. When applying the ASE method, the regularization parameter $\epsilon $ needs to be determined by a model selection criterion, such as AIC, BIC or GCV; for convenience, we use only BIC to illustrate our method.
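The data-generating design of this section can be reproduced as follows. This sketch assumes $c=0$; in the fixed-design setting the regressor rows would be drawn once beforehand, while here a fresh batch is drawn per call for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2021)
beta0 = np.array([-1.2, 2.0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)

def draw_tobit(n, p=10):
    """n observations from the Section 3 design: x_i ~ N(0, I_p),
    e_i ~ N(0, 1), and y_i^+ = max(x_i' beta0 + e_i, 0) as in Equation (1)."""
    X = rng.standard_normal((n, p))
    y = np.maximum(X @ beta0 + rng.standard_normal(n), 0.0)
    return X, y
```

Since ${x}_{i}^{\text{T}}{\beta}_{0}+{e}_{i}$ is symmetric about zero under this design, roughly half of the generated responses are censored at 0.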

Table 1 states the results of the sequential sampling method for the Tobit regression model. In the table, we list the final sample size *N* (stopping time), $\kappa ={d}^{2}N/{a}^{2}\nu $ and the empirical coverage probability CP of the 95% confidence set ${R}_{N}$. For all three cases (LAD, ${\text{LAD}}_{{p}_{0}}$ and ASE), the value of $\kappa $ is very close to 1, and the empirical coverage probability CP approaches the nominal 95% as *d* decreases, as stated in Theorem 2. However, the sample sizes *N* of LAD are much larger than those of the other two cases, and the sample sizes of ASE are very close to those of ${\text{LAD}}_{{p}_{0}}$. In conclusion, the proposed ASE is more efficient than LAD.

Table 2 reports the power of identifying the effective variables and the estimates of the regression coefficients for the Tobit regression model. We can see that the number of incorrectly identified zero variables ( ${N}_{ic}^{\ast}$ ) using ASE is very close

Table 1. Results of sequential sampling method based on ASE, LAD with all variables and ${\text{LAD}}_{{p}_{0}}$ with only ${p}_{0}$ non-zero variables for Tobit regression model.

${\kappa}^{*}={d}^{2}N/\left({a}^{2}\nu \right)$ ; $C{P}^{+}$ is the empirical coverage probability of the 95% confidence ellipsoid ${R}_{N}$ ; empirical standard deviations are in parentheses.

Table 2. Power of variable identification and estimation of nonzero components under sequential sampling method based on ASE and LAD with Tobit regression model.

${N}_{c}^{\ast}$ and ${N}_{ic}^{\ast}$ are the average numbers of zero components in $\beta $ correctly identified and of nonzero components incorrectly estimated as zero, respectively; standard deviations are in parentheses.

to 0, and the numbers of correctly identified zero variables ( ${N}_{c}^{\ast}$ ) are all very close to the true number of non-effective variables (8 of the 10). These results suggest that ${\hat{p}}_{0}$ is a good estimator of ${p}_{0}$ under the sequential sampling method based on ASE. The LAD procedure does not identify the effective variables, so ${N}_{c}^{\ast}$ and ${N}_{ic}^{\ast}$ are not available for it. In addition, all parameter estimates of the effective variables are very close to the true values.

4. Conclusion

Based on an ASE of the parameters in the Tobit regression model, a sequential sampling procedure is constructed to determine the minimum sample size that identifies the effective variables and simultaneously estimates the parameters with the required accuracy. We prove that the proposed sequential procedure is asymptotically optimal in the sense of Chow and Robbins, see [12]. Simulation studies show that the proposed method can save a large number of samples compared with the traditional sequential sampling method. However, this paper assumes that the dimension of the variables is fixed rather than growing with the sample size. Our future work is to investigate the properties of sequential sampling methods in which the number of variables varies with the sample size.

Support

This research was supported by Research projects of universities in Xinjiang Uygur Autonomous Region under Grant No. XJEDU2016I033 and Xinjiang Normal University postdoctoral research foundation under Grant No. XJNUBS1539.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Tobin, J. (1958) Estimation of Relationships for Limited Dependent Variables. Econometrica, 26, 24-36. https://doi.org/10.2307/1907382

[2] Adams, J.D. (1980) Personal Wealth Transfers. Quarterly Journal of Economics, 95, 159-179.

[3] Ashenfelter, O. and Ham, J. (1979) Education, Unemployment, and Earnings. Journal of Political Economy, 87, S99-S116. https://doi.org/10.1086/260824

[4] Fair, R.C. (1978) A Theory of Extramarital Affairs. Journal of Political Economy, 86, 45-61. https://doi.org/10.1086/260646

[5] Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58, 267-288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x

[6] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004) Least Angle Regression. The Annals of Statistics, 32, 407-499. https://doi.org/10.1214/009053604000000067

[7] Wang, Z.F. and Chang, Y.I. (2013) Sequential Estimate for Linear Regression Models with Uncertain Number of Effective Variables. Metrika, 76, 949-978. https://doi.org/10.1007/s00184-012-0426-4

[8] Powell, J.L. (1984) Least Absolute Deviations Estimation for the Censored Regression Model. Journal of Econometrics, 25, 303-325. https://doi.org/10.1016/0304-4076(84)90004-6

[9] Chen, X.R. and Wu, Y.H. (1994) Consistency of Estimates in Censored Linear Regression Models. Communications in Statistics, 23, 1847-1858. https://doi.org/10.1080/03610929408831360

[10] Anscombe, F.J. (1952) Large Sample Theory of Sequential Estimation. Mathematical Proceedings of the Cambridge Philosophical Society, 48, 600-607. https://doi.org/10.1017/S0305004100076386

[11] Woodroofe, M. (1982) Nonlinear Renewal Theory in Sequential Analysis. Society for Industrial and Applied Mathematics, Philadelphia. https://doi.org/10.1137/1.9781611970302

[12] Chow, Y.S. and Robbins, H. (1965) On the Asymptotic Theory of Fixed-Width Sequential Confidence Intervals for the Mean. The Annals of Mathematical Statistics, 36, 457-462. https://doi.org/10.1214/aoms/1177700156


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.