Estimation of CARA Preferences and Positive Mathematical Programming

*Open Journal of Statistics*, **8**, 1-13. doi: 10.4236/ojs.2018.81001.

1. Introduction

The treatment of risk in a mathematical programming setting has interested researchers for several decades. It began with Markowitz [1], who presented the problem of portfolio selection in a mean-variance framework. Freund [2] discussed a quadratic programming approach to deal with output price risk in a mean-variance specification of revenue. Hazell [3] followed with a linear programming minimization of total absolute deviation (MOTAD) of income, justifying his proposal by citing the difficult access, at that time, to the quadratic programming software necessary to solve the mean-variance model.

When dealing with risk, the major issue involves deciding how to characterize the risk preferences of an economic agent. Pratt [4] proposed a general way to characterize absolute risk aversion, known as the Arrow-Pratt measure of risk, which is defined as the negative ratio of the second derivative to the first derivative of a utility function of wealth. The utility function of an economic agent exhibits decreasing, constant or increasing risk aversion if the Arrow-Pratt measure is decreasing, constant or increasing as a function of wealth. Very often, economists have chosen a negative exponential utility function of wealth $U\left(w\right)=1-\mathrm{exp}\left[-\varphi w\right]$ that exhibits a constant absolute risk aversion (CARA) coefficient $\varphi >0$. This is also the utility function selected by Freund to represent North Carolina farmers’ preferences. It remains to decide how to estimate the CARA parameter. Freund wrote ([2], p. 258): “The estimation of the risk aversion constant $\varphi $ is a purely subjective task, and any chosen value is exceedingly difficult to defend.” Fortunately, the task of estimating $\varphi $ can be made defensible by adopting a chance-constrained approach, as presented in Section 3.

The objective of this paper, therefore, is twofold: 1) to estimate the CARA parameter in an empirical way that is consistent with the available sample information; 2) to combine the CARA risk analysis with a positive mathematical programming (PMP) approach that uses all the available information.

2. Freund Risk Programming

Freund [2] assumed a $\left(J\times 1\right)$ random vector of market output prices, $\tilde{p}$, distributed as a normal random vector $\tilde{p}\sim N\left[E\left(\tilde{p}\right),{\Sigma}_{p}\right]$. He assumed that farmers’ preferences toward risk were characterized by a negative exponential utility function $U\left(\tilde{r}\right)=1-\mathrm{exp}\left[-\varphi \tilde{r}\right]$, with CARA coefficient $\varphi >0$ and random revenue $\tilde{r}$. Finally, Freund assumed that farmers made decisions by maximizing their expected utility subject to a non-random linear technology $A$ and a known $\left(I\times 1\right)$ vector of limiting input quantities $b$. Given these assumptions, expected utility evaluates to the following expression

$\begin{array}{c}EU\left(\tilde{r}\right)=1-\mathrm{exp}\left[-\varphi \left\{E\left(\tilde{r}\right)-\frac{\varphi}{2}\mathrm{var}\left(\tilde{r}\right)\right\}\right]\\ =1-\mathrm{exp}\left[-\varphi \left\{E{\left(\tilde{p}\right)}^{\prime}x-\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x\right\}\right]\end{array}$ (1)

where $x\ge 0$ is a $\left(J\times 1\right)$ vector of decision variables, $E{\left(\tilde{p}\right)}^{\prime}x$ is expected revenue, $\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x$ is the risk premium and $\left(E{\left(\tilde{p}\right)}^{\prime}x-\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x\right)$ is the certainty equivalent (CE) of the risky prospect. Maximization of the certainty equivalent corresponds to the maximization of the expected utility in Equation (1). Therefore, the primal and dual specifications of the farmer’s risk programming problem under this CARA model are stated as

Primal $\mathrm{max}CE=E{\left(\tilde{p}\right)}^{\prime}x-\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x$ (2)

subject to $\text{Demand}\le \text{Supply}$

$Ax\le b$ (3)

Dual $\mathrm{min}TC={b}^{\prime}y+\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x$ (4)

subject to $MC\ge MR$

${A}^{\prime}y+\varphi {\Sigma}_{p}x\ge E\left(\tilde{p}\right)$ (5)

where TC is total cost, MC is marginal cost, MR is marginal revenue, $y\ge 0$ is the $\left(I\times 1\right)$ vector of input shadow prices, the primal constraints represent the technological relations between limiting input and output levels, while the dual constraints express the equilibrium relations between marginal cost and marginal revenue of producing and selling outputs. Marginal cost has two components: ${A}^{\prime}y$ is the marginal cost associated with the production technology and fixed limiting inputs; $\varphi {\Sigma}_{p}x$ is the marginal risk premium, that is, the marginal cost associated with farmer’s awareness of operating in the face of risky output prices.

The solution of the risk problem stated in relations (2)-(5) requires either a priori knowledge of the CARA parameter $\varphi $ or a procedure to estimate it simultaneously with the optimal output levels $x$ and input shadow prices $y$ .
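As a concrete illustration of the primal problem (2)-(3), the certainty-equivalent maximization can be solved numerically for a given CARA coefficient. The sketch below uses a hypothetical two-crop, one-input farm; all numbers (prices, covariance, land endowment, $\varphi $) are invented for the example, and SciPy's SLSQP solver is only one of several options for this quadratic program.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-crop instance of Freund's model (2)-(3):
# maximize CE = E(p)'x - (phi/2) x' Sigma_p x  subject to  A x <= b, x >= 0.
Ep = np.array([10.0, 8.0])            # expected output prices (assumed)
Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])        # output price covariance (assumed)
A = np.array([[1.0, 1.0]])            # one limiting input: land
b = np.array([100.0])                 # land endowment (assumed)
phi = 0.01                            # CARA coefficient, here fixed a priori

# Negate the certainty equivalent because scipy minimizes.
ce = lambda x: -(Ep @ x - 0.5 * phi * x @ Sigma @ x)
cons = [{"type": "ineq", "fun": lambda x: b - A @ x}]   # A x <= b
res = minimize(ce, x0=np.array([50.0, 50.0]), method="SLSQP",
               constraints=cons, bounds=[(0, None)] * 2)
x_star = res.x                        # optimal crop allocation
```

With these numbers the risk-adjusted unconstrained optimum exceeds the land endowment, so the land constraint binds at the solution, as the dual relation (5) anticipates.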

3. Chance Constrained Risky Revenue

With some probability, a farmer may survive unfavorable events such as total revenue being less than total cost. Charnes and Cooper [5] proposed a useful approach to deal with this case. Consider the following probabilistic proposition:

$Prob\left({\tilde{p}}^{\prime}x\le {y}^{\prime}Ax\right)\le 1-\beta $ (6)

where the probability that uncertain (risky) total revenue ${\tilde{p}}^{\prime}x$ is less than or equal to certain total cost ${y}^{\prime}Ax$ should be smaller than or equal to $1-\beta $. Intuitively, once in how many years could a farmer survive operating in the red? If the answer is, say, once every twenty years, we would set the probability $1-\beta =1/20=0.05$.

To derive a deterministic equivalent of relation (6) it is convenient to standardize the random variable ${\tilde{p}}^{\prime}x$ by subtracting its expected value $E{\left(\tilde{p}\right)}^{\prime}x$ and dividing it by the corresponding standard deviation ${\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}$:

$\begin{array}{l}Prob\left({\tilde{p}}^{\prime}x\le {y}^{\prime}Ax\right)\le 1-\beta \\ Prob\left(\frac{{\tilde{p}}^{\prime}x-E{\left(\tilde{p}\right)}^{\prime}x}{{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}}\le \frac{{y}^{\prime}Ax-E{\left(\tilde{p}\right)}^{\prime}x}{{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}}\right)\le 1-\beta \\ Prob\left(\tau \le \frac{{y}^{\prime}Ax-E{\left(\tilde{p}\right)}^{\prime}x}{{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}}\right)\le 1-\beta \\ Prob\left(E{\left(\tilde{p}\right)}^{\prime}x+\tau {\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}\le {y}^{\prime}Ax\right)\le 1-\beta \end{array}$ (7)

By choosing a value of the standard normal random variable $\tau $, say $\tau =\bar{\tau}$, that corresponds to probability $1-\beta $, the deterministic equivalent of relation (6) assumes the specification

$E{\left(\tilde{p}\right)}^{\prime}x+\bar{\tau}{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}\le {y}^{\prime}Ax.$ (8)

It remains to establish a relation between the $\bar{\tau}$ parameter and the CARA coefficient $\varphi $. This relation is obtained by subtracting the complementary slackness condition of the dual constraints (5) from relation (8):

$\begin{array}{l}E{\left(\tilde{p}\right)}^{\prime}x+\bar{\tau}{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}\le {y}^{\prime}Ax\\ -\left[E{\left(\tilde{p}\right)}^{\prime}x-\varphi {x}^{\prime}{\Sigma}_{p}x={y}^{\prime}Ax\right].\end{array}$ (9)

With simplification, relation (9) corresponds to

$\bar{\tau}/{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}+\varphi \le 0.$ (10)

Relation (10) defines the CARA parameter $\varphi $ simultaneously with the decision variables $x$, once the value of $\bar{\tau}$ is selected by the researcher. As an example, if the probability of operating in the red is set at $1-\beta =0.05$, the corresponding one-tail value of the standard normal random variable is $\bar{\tau}=-1.645$.
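Numerically, the quantile and the implied CARA coefficient follow directly from relation (10) once $x$ is known. In the sketch below the covariance matrix and output vector are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

# Sketch of relation (10)/(13): a survival-type probability 1 - beta = 0.05
# pins down the one-tail normal quantile, which in turn pins down phi
# for any given output vector x. All data below are hypothetical.
tau_bar = norm.ppf(0.05)                     # one-tail quantile, about -1.645
Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])               # assumed price covariance
x = np.array([60.0, 40.0])                   # assumed output levels

sd_revenue = np.sqrt(x @ Sigma @ x)          # (x' Sigma_p x)^{1/2}
phi = -tau_bar / sd_revenue                  # from tau_bar/sd + phi = 0
```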

The solution of the risky output price problem, à la Freund, is finally achieved by solving the following set of relations (using, for example, a linear complementarity problem (LCP) approach)

dual constraints $\varphi {\Sigma}_{p}x+{A}^{\prime}y\ge E\left(\tilde{p}\right)$ (11)

primal constraints $Ax\le b,\text{\hspace{0.17em}}x\ge 0,\text{}y\ge 0$ (12)

chance constraint $\bar{\tau}/{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}+\varphi =0$ (13)

and the associated complementary slackness conditions. This programming framework resolves the dilemma posed by Freund as to the difficulty of “defending any chosen value of the risk aversion constant $\varphi $ .”
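A minimal sketch of one way to solve the system (11)-(13): alternate between solving Freund's quadratic program for a trial $\varphi $ and updating $\varphi $ from the chance constraint (13) until the two are mutually consistent. The data are hypothetical, and this fixed-point scheme is only one possible strategy (a dedicated LCP solver is another).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data, as in the earlier two-crop sketch.
Ep = np.array([10.0, 8.0])
Sigma = np.array([[4.0, 1.0], [1.0, 2.0]])
A = np.array([[1.0, 1.0]])
b = np.array([100.0])
tau_bar = norm.ppf(0.05)                    # one-tail quantile for 1-beta = 0.05

def solve_qp(phi):
    """Maximize the certainty equivalent for a given CARA coefficient."""
    obj = lambda x: -(Ep @ x - 0.5 * phi * x @ Sigma @ x)
    res = minimize(obj, np.full(2, 50.0), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
                   bounds=[(0, None)] * 2)
    return res.x

phi = 0.01                                  # arbitrary starting value
for _ in range(50):
    x = solve_qp(phi)                       # x consistent with current phi
    phi_new = -tau_bar / np.sqrt(x @ Sigma @ x)   # phi from constraint (13)
    if abs(phi_new - phi) < 1e-10:
        break
    phi = phi_new
```

At convergence, $x$ maximizes the certainty equivalent for the estimated $\varphi $, and $\varphi $ satisfies the chance constraint for that $x$, which is exactly the simultaneity that relations (11)-(13) require.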

4. CARA and Positive Mathematical Programming

Good empirical research requires the use of all the available information. When dealing with a sample of farms, for example, the most accessible piece of information consists of the output levels of crop activities realized in the previous production cycle. Such information is the end result of a decision-making process by an entrepreneur facing given technological and market environments. Under the assumption that this economic agent attempted to maximize profit (or minimize cost), the realized (observed) output levels incorporate information about marginal cost and marginal revenue as the fundamental components of his opportunity costs. The research challenge is to unpack the marginal costs hidden in those observed output levels. Another readily available piece of information concerns the price of limiting inputs. For example, a farmer has a fairly good idea of the price of his land. Even if this measure is imprecise, the price of land known to him can be used to anchor the model. We assume, therefore, that the output levels of a previous production cycle are observed (measured), ${x}^{obs}$, as are the prices of limiting inputs, ${y}^{obs}$. These pieces of information define calibration constraints that take on the following structure

$x={x}^{obs}+h$ (14)

$y={y}^{obs}+u$ (15)

where $h$ and $u$ are unrestricted deviations. This specification of the calibration constraints admits that the observed quantities and prices may be measured with error, either overstating or understating them. The chosen approach to deal with $h$ and $u$, therefore, is to minimize the sum of squared deviations weighted by appropriate weight matrices, say diagonal $W$ and $V$, respectively. The necessity of introducing matrices $W$ and $V$ is justified by the different nature of the measurement units involving $h$ and $u$ in constraints (14) and (15) and the corresponding dual variables, indicated by the vectors $\lambda $ and $\phi $, respectively. Constraint (14) is defined in terms of quantity units and, therefore, the dual variable $\lambda $ is defined in price units, say dollars. The self-duality of the least-squares approach [6] dictates that the matrix $W$ mediates between the deviation vector $h$ and the dual vector $\lambda $ to establish the equation $\lambda =Wh$, as demonstrated in the following discussion:

$\mathrm{min}LS={h}^{\prime}Wh/2$

subject to $x={x}^{obs}+h$ dual variable $\lambda $

with Lagrange function corresponding to

$L={h}^{\prime}Wh/2+{\lambda}^{\prime}\left(x-{x}^{obs}-h\right)$

and the first order condition

$\frac{\partial L}{\partial h}=Wh-\lambda =0$ . (16)

Therefore, $\lambda =Wh$ , as asserted. Since $\lambda $ is measured in price units and $h$ is measured in quantity units, an appropriate choice for the diagonal terms of the W matrix corresponds to the expected output prices. Analogous discussion involves the deviations $u$ and the corresponding dual variable $\phi $ . In this case, the least-squares relation turns out to be $\phi =Vu$ . Since $u$ is measured in input price units and $\phi $ is measured in quantity units, an appropriate choice of the diagonal terms of the V matrix is $\left({b}_{i}/{y}_{i}^{obs}\right)$ . Notice that the self-duality of the least-squares method allows for the elimination of vector variables $\lambda $ and $\phi $ from the model to be solved, as shown in the following intermediate step whose goal is the derivation of the dual constraint:

$\mathrm{max}CE=E{\left(\tilde{p}\right)}^{\prime}x-\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x$

subject to $Ax\le b$

$x={x}^{obs}+h$

with Lagrange function

$L=E{\left(\tilde{p}\right)}^{\prime}x-\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x+{y}^{\prime}\left(b-Ax\right)+{\lambda}^{\prime}\left({x}^{obs}+h-x\right)$

and Karush-Kuhn-Tucker (KKT) condition

$\frac{\partial L}{\partial x}=E\left(\tilde{p}\right)-\varphi {\Sigma}_{p}x-{A}^{\prime}y-\lambda \le 0$

but since $\lambda =Wh$ under a least-squares approach, the final specification of the dual constraint takes on the following structure

$\varphi {\Sigma}_{p}x+{A}^{\prime}y+Wh\ge E\left(\tilde{p}\right)$ . (17)

The left-hand-side of relation (17) represents the total marginal cost of producing output $x$ under technological and risky output price conditions. Analogous discussion involves the primal constraint of the following dual problem

$\mathrm{min}TC={b}^{\prime}y+\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x$

subject to $\varphi {\Sigma}_{p}x+{A}^{\prime}y+Wh\ge E\left(\tilde{p}\right)$

$y={y}^{obs}+u$

with Lagrange function

$L={b}^{\prime}y+\frac{\varphi}{2}{x}^{\prime}{\Sigma}_{p}x+{x}^{\prime}\left(E\left(\tilde{p}\right)-\varphi {\Sigma}_{p}x-{A}^{\prime}y-Wh\right)+{\phi}^{\prime}\left(y-{y}^{obs}-u\right)$

and KKT condition

$\frac{\partial L}{\partial y}=b-Ax+\phi \ge 0$

but since $\phi =Vu$ , the primal constraint assumes the following structure

$Ax\le b+Vu$ . (18)
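The self-duality relations derived above, $\lambda =Wh$ and, analogously, $\phi =Vu$, can be verified with a few lines of arithmetic. The numbers below are hypothetical and serve only to make the unit conversion concrete.

```python
import numpy as np

# Numerical illustration of the self-duality relations lambda = W h (price
# units) and phi = V u (quantity units). All numbers are hypothetical.
W = np.diag([10.0, 8.0])             # weights: expected output prices
x_obs = np.array([60.0, 40.0])       # observed output levels
x = np.array([61.5, 39.2])           # candidate calibrated output levels
h = x - x_obs                        # quantity deviations
lam = W @ h                          # dual of (14), measured in dollars

V = np.diag([50.0])                  # weight b_i / y_i^obs for one input
y_obs = np.array([2.0])              # observed input price
y = np.array([2.1])                  # candidate calibrated input price
u = y - y_obs                        # price deviation
phi_dual = V @ u                     # dual of (15), measured in quantities
```

Multiplying a quantity deviation by a price-denominated weight yields a dollar-denominated multiplier, and vice versa, which is precisely why $W$ and $V$ are needed to put $h$ and $u$ on comparable footing in the least-squares objective.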

Finally, the phase I model of the PMP approach under a CARA specification of output price uncertainty can be stated as a weighted least-squares problem of finding nonnegative vectors $x$ and $y$ such that

$\mathrm{min}LS={h}^{\prime}Wh/2+{u}^{\prime}Vu/2$ (19)

subject to

$Ax\le b+Vu$ primal constraints (20)

$\varphi {\Sigma}_{p}x+{A}^{\prime}y+Wh\ge E\left(\tilde{p}\right)$ dual constraints (21)

$x={x}^{obs}+h$ calibration constraints (22)

$y={y}^{obs}+u$ calibration constraints (23)

${y}^{\prime}\left(b+Vu-Ax\right)=0$ primal CSC (24)

${x}^{\prime}\left(\varphi {\Sigma}_{p}x+{A}^{\prime}y+Wh-E\left(\tilde{p}\right)\right)=0$ dual CSC (25)

$\bar{\tau}/{\left({x}^{\prime}{\Sigma}_{p}x\right)}^{1/2}+\varphi =0$ chance constraint (26)

where CSC stands for complementary slackness conditions.

The solution of model (19)-(26) produces unique least-squares estimates of output quantities ${x}^{*}$ and input shadow prices ${y}^{*}$ that are as close as possible to the observed information ${x}^{obs}$ and ${y}^{obs}$ . This is the meaning of calibration in the novel PMP approach. Furthermore, the estimates of output quantities and shadow prices maximize the certainty equivalent corresponding to expected utility under a CARA specification of risky output prices.

5. Estimation of a Cost Function―Phase II of PMP

Phase II of the PMP approach estimates a cost function. The specification of such a function follows the familiar theoretical properties: it is non-decreasing in output quantities and input prices; it is concave and homogeneous of degree one in input prices. The following specification meets all these properties:

$C\left(x,y\right)=\left({f}^{\prime}x\right)\left({g}^{\prime}y\right)+\left({g}^{\prime}y\right){x}^{\prime}Qx/2+\left({f}^{\prime}x\right)\left[{\left({y}^{1/2}\right)}^{\prime}G{y}^{1/2}\right]$ (27)

where $Q$ is a symmetric positive definite matrix of dimensions $\left(J\times J\right)$. The term $\left[{\left({y}^{1/2}\right)}^{\prime}G{y}^{1/2}\right]$ follows a generalized Leontief specification. The $\left(I\times I\right)$ matrix $G$ has off-diagonal elements ${G}_{i,ii}={G}_{ii,i}\ge 0$, $i\ne ii$, $i,ii=1,\cdots ,I$. The diagonal elements ${G}_{i,i}$ can take on either positive or negative values. The components of vectors $f$ and $g$ are free to take on any value as long as ${f}^{\prime}x>0$ and ${g}^{\prime}y>0$. The reason for introducing a term like $\left({f}^{\prime}x\right)\left({g}^{\prime}y\right)$ is to add flexibility to the cost function.

The marginal cost function assumes the following specification

$\frac{\partial C}{\partial x}=\left({g}^{\prime}y\right)f+\left({g}^{\prime}y\right)Qx+f\left[{\left({y}^{1/2}\right)}^{\prime}G{y}^{1/2}\right]$ (28)

while Shephard’s lemma is stated as

$\frac{\partial C}{\partial y}=\left({f}^{\prime}x\right)g+g\left({x}^{\prime}Qx\right)/2+\left({f}^{\prime}x\right)\left[\Delta \left({y}^{-1/2}\right)G{y}^{1/2}\right]$ (29)

where the $\Delta $ matrix is diagonal with elements $\left({y}_{i}^{-1/2}\right)$ .
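The properties claimed for the cost function (27) can be spot-checked numerically. The sketch below, with invented parameter values for a two-output, one-input case, evaluates $C\left(x,y\right)$, compares the Shephard-lemma gradient (29) against a central finite difference, and verifies homogeneity of degree one in the input price.

```python
import numpy as np

# Hypothetical parameters for the cost function (27), with J = 2, I = 1.
f = np.array([1.0, 0.5])
g = np.array([2.0])
Q = np.array([[0.3, 0.1],
              [0.1, 0.4]])                 # symmetric positive definite
G = np.array([[-0.2]])                     # generalized Leontief term (scalar here)

def cost(x, y):
    """Cost function (27): (f'x)(g'y) + (g'y) x'Qx/2 + (f'x) y^{1/2}' G y^{1/2}."""
    sy = np.sqrt(y)
    return (f @ x) * (g @ y) + (g @ y) * (x @ Q @ x) / 2 + (f @ x) * (sy @ G @ sy)

def shephard(x, y):
    """Analytic input demand dC/dy, Equation (29)."""
    sy = np.sqrt(y)
    return (f @ x) * g + g * (x @ Q @ x) / 2 + (f @ x) * (np.diag(1 / sy) @ G @ sy)

x = np.array([3.0, 2.0])
y = np.array([4.0])

# With a single input price, a scalar central difference suffices.
eps = 1e-6
num = (cost(x, y + eps) - cost(x, y - eps)) / (2 * eps)
```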

The estimation of the cost function is performed by combining the elements of phase I and phase II and using all the information for N farms in a weighted least-squares problem:

$\mathrm{min}LS={\sum}_{n=1}^{N}{h}_{n}^{\prime}{W}_{n}{h}_{n}/2+{\sum}_{n=1}^{N}{u}_{n}^{\prime}{V}_{n}{u}_{n}/2$ (30)

subject to

${A}_{n}{x}_{n}\le {b}_{n}+{V}_{n}{u}_{n}$ primal constraints (31)

${\varphi}_{n}{\Sigma}_{p}{x}_{n}+{A}_{n}^{\prime}{y}_{n}+{W}_{n}{h}_{n}\ge E\left({\tilde{p}}_{n}\right)$ dual constraints (32)

${x}_{n}={x}_{n}^{obs}+{h}_{n}$ calibration constraints (33)

${y}_{n}={y}_{n}^{obs}+{u}_{n}$ calibration constraints (34)

${{y}^{\prime}}_{n}\left({b}_{n}+{V}_{n}{u}_{n}-{A}_{n}{x}_{n}\right)=0$ primal CSC (35)

${x}_{n}^{\prime}\left({\varphi}_{n}{\Sigma}_{p}{x}_{n}+{A}_{n}^{\prime}{y}_{n}+{W}_{n}{h}_{n}-E\left({\tilde{p}}_{n}\right)\right)=0$ dual CSC (36)

$-1.645/{\left({{x}^{\prime}}_{n}{\Sigma}_{p}{x}_{n}\right)}^{1/2}+{\varphi}_{n}=0$ chance constraint (37)

$\left({{g}^{\prime}}_{n}{y}_{n}\right){f}_{n}+\left({{g}^{\prime}}_{n}{y}_{n}\right)Q{x}_{n}+{f}_{n}\left[{\left({y}_{n}^{1/2}\right)}^{\prime}G{y}_{n}^{1/2}\right]={\varphi}_{n}{\Sigma}_{p}{x}_{n}+{{A}^{\prime}}_{n}{y}_{n}+{W}_{n}{h}_{n}$

marginal cost function (38)

$\left({{f}^{\prime}}_{n}{x}_{n}\right){g}_{n}+{g}_{n}\left({{x}^{\prime}}_{n}Q{x}_{n}\right)/2+\left({{f}^{\prime}}_{n}{x}_{n}\right)\left[\Delta \left\{{y}_{n}^{-1/2}\right\}G{y}_{n}^{1/2}\right]={A}_{n}{x}_{n}$

Shephard’s lemma (39)

$Q=LD{L}^{\prime}$ Cholesky factorization (40)

$Q{Q}^{-1}=I$ positive definiteness (41)

where $L$ is a unit lower triangular matrix and $D$ is a diagonal matrix with elements ${D}_{j,j}\ge 0$. The Cholesky factorization (40) guarantees symmetry and positive semidefiniteness of the $Q$ matrix, while constraint (41) rules out a singular $Q$ and thus ensures positive definiteness.
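The Cholesky device of Equations (40)-(41) can be illustrated in a few lines: building $Q=LD{L}^{\prime}$ from a unit lower-triangular $L$ and a nonnegative diagonal $D$ yields a symmetric positive semidefinite $Q$ by construction, and strictly positive $D$ yields positive definiteness. The numbers are hypothetical.

```python
import numpy as np

# Sketch of the parameterization Q = L D L' used in (40)-(41).
L = np.array([[1.0, 0.0],
              [0.7, 1.0]])           # unit lower triangular (assumed values)
D = np.diag([0.5, 0.2])              # nonnegative diagonal -> Q is PSD;
                                     # strictly positive here -> Q is PD
Q = L @ D @ L.T                      # symmetric by construction

eigs = np.linalg.eigvalsh(Q)         # all eigenvalues should be positive
```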

The solution of problem (30)-(41) produces least-squares estimates of all unknown variables and parameters, namely ${\stackrel{^}{x}}_{n},{\stackrel{^}{y}}_{n},{\stackrel{^}{h}}_{n},{\stackrel{^}{u}}_{n},{\stackrel{^}{\varphi}}_{n},{\stackrel{^}{f}}_{n},{\stackrel{^}{g}}_{n},\stackrel{^}{Q},\stackrel{^}{G}$ . In particular, the optimal quantity levels ${\stackrel{^}{x}}_{n}$ , input shadow prices ${\stackrel{^}{y}}_{n}$ and CARA coefficient ${\stackrel{^}{\varphi}}_{n}$ are identical to ${x}^{*}$ , ${y}^{*}$ and ${\varphi}^{*}$ of phase I.

6. Calibrating Model―Phase III of PMP

Using estimates of the cost function parameters, ${\stackrel{^}{f}}_{n},{\stackrel{^}{g}}_{n},\stackrel{^}{Q},\stackrel{^}{G}$ , it is possible to set up a calibrating model, without calibration constraints, that reproduces output levels and shadow input prices identical to those obtained with model (30)-(41). This equivalence is achieved because Shephard’s lemma equals the demand for inputs, $Ax$ , as stated in the primal constraint (3) of the CARA risk model, while the marginal cost function equals the dual constraints (5) of the same problem. In other words, the equivalence between the solution of the calibrating model and the solution of model (30)-(41) reveals how the information contained in the observed quantities ${x}^{obs}$ and prices ${y}^{obs}$ has been unpacked into effective marginal cost and input demand, respectively.

A calibrating linear complementarity problem for the n-th farm, therefore, can be stated as

$\mathrm{min}CS{C}_{n}={{y}^{\prime}}_{n}z{p}_{n}+{{x}^{\prime}}_{n}z{d}_{n}=0$ (42)

subject to

$\left({\stackrel{^}{{f}^{\prime}}}_{n}{x}_{n}\right){\stackrel{^}{g}}_{n}+{\stackrel{^}{g}}_{n}\left({{x}^{\prime}}_{n}\stackrel{^}{Q}{x}_{n}\right)/2+\left({\stackrel{^}{{f}^{\prime}}}_{n}{x}_{n}\right)\left[\Delta \left\{{y}_{n}^{-1/2}\right\}\stackrel{^}{G}{y}_{n}^{1/2}\right]+z{p}_{n}={b}_{n}+{V}_{n}{\stackrel{^}{u}}_{n}$ (43)

$\left({\stackrel{^}{{g}^{\prime}}}_{n}{y}_{n}\right){\stackrel{^}{f}}_{n}+\left({\stackrel{^}{{g}^{\prime}}}_{n}{y}_{n}\right)\stackrel{^}{Q}{x}_{n}+{\stackrel{^}{f}}_{n}\left[{\left({y}_{n}^{1/2}\right)}^{\prime}\stackrel{^}{G}{y}_{n}^{1/2}\right]=E\left({\tilde{p}}_{n}\right)+z{d}_{n}$ (44)

where $z{p}_{n}$ and $z{d}_{n}$ are primal and dual slack variables, respectively.

The solution of model (42)-(44) produces estimates of the output quantities ${x}_{n}$ and shadow input prices ${y}_{n}$ that are identical to the corresponding solutions obtained in solving the phase II model, ${\stackrel{^}{x}}_{n}$ , ${\stackrel{^}{y}}_{n}$ . These estimates are as close as possible to the observed counterparts ${x}_{n}^{obs}$ and ${y}_{n}^{obs}$ . This is no surprise: the PMP process has transferred the same amount of information from the calibration constraints to the cost function while revealing the marginal cost levels and the input shadow prices that presumably influenced the economic agent in making the output and price decisions observed in ${x}_{n}^{obs}$ and ${y}_{n}^{obs}$ . Model (42)-(44) can now be used to evaluate a series of policy scenarios that may consider changes in expected output prices, changes in the quantity of limiting inputs, the introduction of crop subsidies and other analyses.

7. Empirical Example

The PMP procedure discussed in previous sections was applied to a sample of fourteen farms producing four crops (sugar beets, soft wheat, corn and barley). Land is the only limiting input. Given the large amount of information involved in this example, only the quantities of observed output levels and land prices are reported in Table 1.

In any computation involving nonlinear models, scaling of the original data series is of crucial importance for obtaining a feasible solution. Here, the observed outputs are measured in hundred-pound units and the land prices in thousands of dollars per acre.

Table 2 presents the optimal quantities of the crop activities and the optimal land prices obtained from the solution of model (30)-(41).

The discrepancy between the observed quantities and prices (Table 1) and the optimal ones (Table 2) is rather small, as reported in Table 3. The specification of the calibration constraints proposed in this paper is similar to a statistical specification of a regression function with non-zero residual terms. It avoids the tautological specification of the original PMP procedure [7], which concerned only output quantity levels and assumed that $x\le {x}^{obs}\left(1+\epsilon \right)$ , where $\epsilon $ is a user-determined small positive number. In the context of this paper, an analogous specification of the calibration constraints involving both output quantities and input prices would result in an infeasible solution.

The CARA coefficients of the fourteen farms are presented in Table 4.

The CARA coefficient $\varphi $ is measured in 1/$ units, as can be determined by examining the certainty equivalent in Equation (2). Its reciprocal is measured in dollar units and is called the risk tolerance coefficient. From Table 4 and the fact that the certainty equivalent in this sample of farms is measured in thousands of dollars, the risk tolerance varies from $50,000 to $190,000.

Table 5 presents the estimate of the cost function Q matrix. The estimate of $G=-30.136728$ .

The estimates of parameters $f$ and $g$ of the cost function are presented in Table 6.

Table 1. Observed output quantities and land input prices.

Table 2. Optimal output quantities and land shadow prices from model (30)-(41).

The parameters $f$ and $g$ can be interpreted as the individual farm deviations from the sample marginal cost function and the sample Shephard lemma, respectively. The conditions ${f}^{\prime}x>0$ and ${g}^{\prime}y>0$ are satisfied for all farms.

Table 3. Percentage difference between observed and estimated quantities and land prices.

Table 4. Estimated CARA coefficients.

Table 5. Estimate of the cost function Q matrix.

Table 6. Estimates of the f and g parameters of the cost function.

8. Conclusions

The extension of a PMP approach to include also the calibration of dual variables around observed limiting input prices has required a modification of the notion of calibration itself, as proposed in the original PMP procedure by Howitt [7]. In that seminal paper, calibration means that optimal output levels, say ${x}^{*}$ , are identically equal to the observed output levels ${x}^{obs}$ (up to a user-determined but very small $\epsilon $ number). The research reported in this paper found that the simultaneous calibration of output levels and limiting input prices, as specified in Equations (14) and (15), can be achieved only in a statistical manner, analogous to a regression analysis where the error terms are minimized by least squares. In other words, if the traditional specification of the calibration constraints were formulated also for the limiting input prices, say $x\le {x}^{obs}\left(1+\epsilon \right)$ and $y\le {y}^{obs}\left(1+\epsilon \right)$ , an infeasible solution of the programming problem would occur.

A useful consequence of specifying the calibration constraints as in equations (14) and (15), coupled with the adoption of a least-squares procedure to minimize the deviations $h$ and $u$ , is that the calibrating solution $\stackrel{^}{x}$ and $\stackrel{^}{y}$ is unique. This extension of a PMP procedure was associated with the treatment of risky output prices according to a famous paper by Freund [2] . In that paper, Freund did not know how to estimate the CARA parameter of the selected utility function. In this paper, a chance-constrained relation involving random revenue is introduced to allow the derivation of a functional relation that ties the CARA parameter to the decision variables of the entrepreneur operating under a risky price environment.

Another methodological advantage of extending the calibration to the limiting input prices concerns the specification of a complete cost function. In the traditional PMP approach, a cost function involved only the output levels and ignored any input price. In this paper, a complete cost function is specified that satisfies all the theoretical properties.

An empirical example involving fourteen farms, four crops and one limiting input confirms that the proposed PMP procedure is feasible without excessive computational burden. In general, however, not every farm produces all the sample crops. This means that, in reality, the matrix of observed output levels contains some zero observations. When this probable event occurs, the proposed PMP procedure can easily accommodate the zero observations with minimal adjustments. It is sufficient to restate the calibration constraints in two parts: one part dealing with the positive output levels and the second part dealing with the zero levels. The rest of the estimation procedure applies without modification.

While the calibrating solution is unique, the same cannot be said, in this numerical example, for the estimated parameters of the cost function. To obtain a unique solution for the parameters $Q,G,f$ and $g$ , it is necessary to have access to at least two observations per farm. In that case, the marginal cost function and Shephard’s lemma will admit corresponding residuals that must be minimized according to a second-level least-squares criterion. Future research will attempt to extend the PMP approach to the estimation of general risk preferences where economic agents can become decreasingly risk averse as wealth increases.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] | Markowitz, H. (1952) Portfolio Selection. The Journal of Finance, 7, 77-91. |

[2] | Freund, R.J. (1956) The Introduction of Risk into a Programming Model. Econometrica, 24, 253-263. https://doi.org/10.2307/1911630 |

[3] | Hazell, P.B.R. (1971) A Linear Alternative to Quadratic and Semivariance Programming for Farm Planning under Uncertainty. American Journal of Agricultural Economics, 53, 53-62. https://doi.org/10.2307/3180297 |

[4] | Pratt, J.W. (1964) Risk Aversion in the Small and in the Large. Econometrica, 32, 122-136. https://doi.org/10.2307/1913738 |

[5] | Charnes, A. and Cooper, W.W. (1959) Chance Constrained Programming. Management Science, 6, 73-79. https://doi.org/10.1287/mnsc.6.1.73 |

[6] | Paris, Q. (2015) The Dual of the Least-Squares Method. Open Journal of Statistics, 5, 658-664. https://doi.org/10.4236/ojs.2015.57067 |

[7] | Howitt, R.E. (1995) Positive Mathematical Programming. American Journal of Agricultural Economics, 77, 329-342. https://doi.org/10.2307/1243543 |

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.