This study analyzes the minimum dimension of statistically sufficient aggregation when performance measures are joint-normally distributed. Statistically sufficient aggregates incur no loss of information during the aggregation process and provide robustness to performance evaluation systems. This study shows that there is no one-dimensional statistically sufficient aggregate in a multi-task setting.

A management accountant can considerably reduce the complexity of a performance evaluation system by aggregating performance measures. In designing such a system, an important question is whether an aggregate measure can carry all of the information in the basic measures. Statistically sufficient aggregates satisfy this requirement: they incur no loss of information in the aggregation process, regardless of the agency particulars.

Statistically sufficient aggregates provide the highest degree of robustness to a performance evaluation system because no agency particulars are involved in the aggregation process. Statistically sufficient aggregates can be commonly used for all managers without being individually “tailored” for each manager.

This study shows that in a setting with two tasks and joint-normally distributed performance measures, the minimal statistical sufficiency requires at least two aggregate measures. That is, when a one-dimensional aggregate is used for two tasks, information loss is necessarily incurred in the aggregation process. This result implies that a uniform performance evaluation system should use multiple aggregate measures.

Motivating employees for multiple tasks is qualitatively different from motivating them for a single task, as the allocation of effort among tasks must also be considered.

This study contributes to the literature by explicitly analyzing, for the first time, the minimal statistically sufficient aggregation of performance measures under a normal distribution.

A principal owns a production technology that requires a two-dimensional action $a = (a_1, a_2)$ from an agent. The economic outcome from the agency is not contractible, and the agent's compensation is based on two contractible performance measures y and z.

The agent exerts a two-dimensional action $a = (a_1, a_2)$, which affects the two basic measures y and z with respective sensitivity vectors $m = (m_1, m_2)$ and $k = (k_1, k_2)$. The basic measures are also affected by random error terms $\varepsilon$ and $\delta$ and are given as follows:

$$y = m_1 a_1 + m_2 a_2 + \varepsilon, \quad (1)$$

$$z = k_1 a_1 + k_2 a_2 + \delta, \quad (2)$$

where $\varepsilon \sim N(0, \sigma_1^2)$, $\delta \sim N(0, \sigma_2^2)$, and $\mathrm{Cov}(y, z) = \rho_{\varepsilon\delta}\sigma_1\sigma_2$. The basic measures y and z, conditional on the agent's action $a = (a_1, a_2)$, are distributed by a joint normal density function:

$$f(y, z; a_1, a_2) = \frac{1}{2\pi\sqrt{1-\rho_{\varepsilon\delta}^2}\,\sigma_1\sigma_2}\exp\!\left[-\frac{1}{2(1-\rho_{\varepsilon\delta}^2)}\,\Xi\right], \quad (3)$$

where $\Xi = (y - m\cdot a)^2/\sigma_1^2 - 2\rho_{\varepsilon\delta}(y - m\cdot a)(z - k\cdot a)/(\sigma_1\sigma_2) + (z - k\cdot a)^2/\sigma_2^2$.
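As a numerical sanity check, the density (3) can be compared with the generic matrix form of the bivariate normal density, in which the exponent is the quadratic form of the residual vector with the inverse covariance matrix. The following sketch uses illustrative parameter values that are not from the paper:

```python
import math

# Illustrative parameter values (not from the paper).
m1, m2, k1, k2 = 1.0, 0.5, 0.4, 1.0
s1, s2, rho = 2.0, 1.5, 0.3

def density_eq3(y, z, a1, a2):
    """Joint normal density of (y, z) given the action, as in Equation (3)."""
    ry = y - (m1 * a1 + m2 * a2)   # y - m.a
    rz = z - (k1 * a1 + k2 * a2)   # z - k.a
    Xi = ry**2 / s1**2 - 2 * rho * ry * rz / (s1 * s2) + rz**2 / s2**2
    norm = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2) * s1 * s2)
    return norm * math.exp(-Xi / (2 * (1 - rho**2)))

def density_matrix_form(y, z, a1, a2):
    """Same density via exp(-r' Sigma^{-1} r / 2) / (2 pi sqrt(det Sigma)),
    with Sigma = [[s1^2, rho*s1*s2], [rho*s1*s2, s2^2]] (2x2 inverse by hand)."""
    ry = y - (m1 * a1 + m2 * a2)
    rz = z - (k1 * a1 + k2 * a2)
    det = s1**2 * s2**2 * (1 - rho**2)
    quad = (s2**2 * ry**2 - 2 * rho * s1 * s2 * ry * rz + s1**2 * rz**2) / det
    return math.exp(-quad / 2) / (2 * math.pi * math.sqrt(det))

f_a = density_eq3(0.7, -0.4, 0.8, -0.3)
f_b = density_matrix_form(0.7, -0.4, 0.8, -0.3)
print(abs(f_a - f_b) < 1e-12)  # True: the two forms agree
```

The agreement follows because $\Xi/(1-\rho_{\varepsilon\delta}^2)$ is exactly the quadratic form of the residual vector with $\Sigma^{-1}$.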

Statistical sufficiency is defined in terms of aggregate measures being sufficient statistics for the basic measures y and z with respect to the agent’s unobservable action a = ( a 1 , a 2 ) . Sufficient statistics for the basic measures y and z can be found by analyzing the joint normal density function (3).


Definition 1. (Sufficient statistics; factorization criterion)

Statistics T 1 ( y , z ) and T 2 ( y , z ) are jointly sufficient for the basic measures y and z if, and only if, the joint density function f ( y , z ; a 1 , a 2 ) (3) is factorized for all actions a = ( a 1 , a 2 ) ∈ ℝ 2 :

$$f(y, z; a_1, a_2) = u(y, z)\, v(T_1(y, z), T_2(y, z); a_1, a_2), \quad (4)$$

where u ( ⋅ ) and v ( ⋅ ) are positive functions. The function v ( ⋅ ) depends on the basic measures y and z only through T 1 ( y , z ) and T 2 ( y , z ) .

Aggregate measures T 1 ( y , z ) and T 2 ( y , z ) are jointly sufficient statistics for the basic measures y and z with respect to the agent’s unobservable action ( a 1 , a 2 ) if observing the aggregate measures without observing the basic measures is as useful as observing the basic measures in estimating all levels of the agent’s action. All the information relevant to estimating the agent’s unobservable action ( a 1 , a 2 ) is preserved and passed onto the sufficient statistics T 1 ( y , z ) and T 2 ( y , z ) from the basic measures y and z.

When an aggregation is statistically sufficient, the aggregates are determined only by the statistical properties of the basic measures. That is, the same aggregates T 1 ( y , z ) and T 2 ( y , z ) are good enough to substitute for the basic measures y and z regardless of the agency particulars. Statistically sufficient aggregates are robust and can be uniformly used for all agents without being individually tailored.

While preserving all relevant information, sufficient statistics enable a performance evaluation system to work with a simpler set of measures than the basic measures. As the value of sufficient statistics comes from simplicity, the simplest sufficient statistics are of interest. Minimal sufficient statistics are the simplest set of jointly sufficient statistics.

Definition 2. (Minimal statistical sufficiency)

A set of sufficient statistics is minimal if it can be represented as a function of any other sufficient statistics.

By the factorization criterion (Definition 1), the following two-dimensional jointly sufficient statistics are obtained from the joint normal density function f ( y , z ; a 1 , a 2 ) (3):

$$T_1(y, z) = \frac{m_1 - \phi_1 k_1}{\sigma_1^2}\, y + \frac{k_1 - \phi_2 m_1}{\sigma_2^2}\, z, \quad (5)$$

$$T_2(y, z) = \frac{m_2 - \phi_1 k_2}{\sigma_1^2}\, y + \frac{k_2 - \phi_2 m_2}{\sigma_2^2}\, z, \quad (6)$$

where $\phi_1 = \rho_{\varepsilon\delta}\sigma_1\sigma_2/\sigma_2^2$ and $\phi_2 = \rho_{\varepsilon\delta}\sigma_1\sigma_2/\sigma_1^2$. The question, then, is whether the jointly sufficient statistics $T_1(y, z)$ (5) and $T_2(y, z)$ (6) are minimal, so that no one-dimensional sufficient statistic exists. The following lemma shows that the jointly sufficient statistics (5) and (6) are minimal if the two performance measures y and z are not perfectly aligned.
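The factorization in Definition 1 can be checked numerically: subtracting the $(a_1 T_1 + a_2 T_2)/(1-\rho_{\varepsilon\delta}^2)$ term from the action-dependent part of the log-density must leave a quantity that no longer varies with $(y, z)$. A sketch with illustrative parameter values (not from the paper):

```python
import math

# Illustrative parameter values (not from the paper).
m1, m2, k1, k2 = 1.0, 0.5, 0.4, 1.0
s1, s2, rho = 2.0, 1.5, 0.3
phi1 = rho * s1 * s2 / s2**2
phi2 = rho * s1 * s2 / s1**2

def log_f(y, z, a1, a2):
    """Log of the joint normal density (3)."""
    ry = y - (m1 * a1 + m2 * a2)
    rz = z - (k1 * a1 + k2 * a2)
    Xi = ry**2 / s1**2 - 2 * rho * ry * rz / (s1 * s2) + rz**2 / s2**2
    return -math.log(2 * math.pi * math.sqrt(1 - rho**2) * s1 * s2) \
           - Xi / (2 * (1 - rho**2))

def T1(y, z):  # Equation (5)
    return (m1 - phi1 * k1) / s1**2 * y + (k1 - phi2 * m1) / s2**2 * z

def T2(y, z):  # Equation (6)
    return (m2 - phi1 * k2) / s1**2 * y + (k2 - phi2 * m2) / s2**2 * z

# Factorization check: the action-dependent part of the log-density, net of
# the (a1*T1 + a2*T2)/(1 - rho^2) term, should not vary with (y, z).
a1, a2 = 0.8, -0.3
vals = [log_f(y, z, a1, a2) - log_f(y, z, 0.0, 0.0)
        - (a1 * T1(y, z) + a2 * T2(y, z)) / (1 - rho**2)
        for (y, z) in [(0.7, -0.4), (2.0, 1.1), (-1.3, 0.5)]]
print(max(vals) - min(vals) < 1e-10)  # True: constant across (y, z)
```

The residual quantity depends only on the action, which is exactly the split between $u(y, z)$ and $v(T_1, T_2; a_1, a_2)$ in (4).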

Lemma 1. Assume that the basic measures y and z follow the joint normal distribution f ( y , z ; a 1 , a 2 ) (3) and the basic measures y and z are not perfectly aligned. Then, the jointly sufficient statistics T 1 ( y , z ) (5) and T 2 ( y , z ) (6) are minimal.

Given Lemma 1, the following proposition shows that a one-dimensional sufficient statistic is infeasible in a two-task setting, unless the two performance measures y and z are perfectly aligned.

Proposition 1. Under the same assumptions as in Lemma 1, a one-dimensional sufficient statistic of the basic measures y and z is infeasible.

Proposition 1 shows that, in general, it is infeasible to achieve statistical sufficiency with a one-dimensional aggregate if the agent's action is on two tasks.

This study shows that there is no one-dimensional sufficient statistic in a multi-task setting with joint-normally distributed performance measures. For statistical sufficiency, there should be at least two aggregates when a manager is on two tasks. Any one-dimensional aggregate will incur loss of information on the manager’s unobservable action and depend on the agency particulars such as the manager’s risk attitude and the firm’s economic environment.

For performance evaluation purposes, performance measures are often aggregated into a one-dimensional aggregate even though the assigned jobs involve multiple tasks. The result of this study implies that such a one-dimensional aggregate cannot be implemented uniformly; it would have to be customized for each manager. A uniform performance evaluation system requires multiple aggregates.

The author declares no conflicts of interest regarding the publication of this paper.

Yoo, J. (2019) Statistically Sufficient Aggregation of Performance Measures under a Normal Distribution. Theoretical Economics Letters, 9, 2229-2235. https://doi.org/10.4236/tel.2019.97141

A1. Proof of Lemma 1

In the proof, the following two points are exploited:

Point 1. If a set of maximum likelihood estimators are jointly sufficient statistics, then the maximum likelihood estimators are minimal jointly sufficient statistics.

Point 2. A one-to-one function of a set of jointly sufficient statistics is also sufficient.

The maximum likelihood estimators ( a 1 * , a 2 * ) for the agent’s action a = ( a 1 , a 2 ) are obtained by solving the following maximization problem:

$$\max_{a_1, a_2} f(y, z; a_1, a_2), \quad (7)$$

where f ( y , z ; a 1 , a 2 ) is the joint normal density function (3). For notational convenience, covariance-adjusted sensitivities, which are assumed to be non-negative, are used:

$$M_1 = m_1 - \phi_1 k_1, \quad K_1 = k_1 - \phi_2 m_1,$$

$$M_2 = m_2 - \phi_1 k_2, \quad K_2 = k_2 - \phi_2 m_2,$$

where $\phi_1 = \rho_{\varepsilon\delta}\sigma_1\sigma_2/\sigma_2^2$ and $\phi_2 = \rho_{\varepsilon\delta}\sigma_1\sigma_2/\sigma_1^2$.

The second-order conditions are satisfied because the covariance-adjusted sensitivities are non-negative: $M_1 \ge 0$, $K_1 \ge 0$, $M_2 \ge 0$, $K_2 \ge 0$. From the first-order conditions, the maximum likelihood estimators $(a_1^*, a_2^*)$ for the agent's action $a = (a_1, a_2)$ are given as follows:

$$\begin{bmatrix} a_1^* \\ a_2^* \end{bmatrix} = A^{-1}\begin{bmatrix} T_1(y, z) \\ T_2(y, z) \end{bmatrix}, \quad (8)$$

where $T_1(y, z)$ and $T_2(y, z)$ are as in (5) and (6), and A is a $2 \times 2$ matrix:

$$A_{2\times 2} = \begin{bmatrix} \dfrac{M_1}{\sigma_1^2}m_1 + \dfrac{K_1}{\sigma_2^2}k_1 & \dfrac{M_1}{\sigma_1^2}m_2 + \dfrac{K_1}{\sigma_2^2}k_2 \\[8pt] \dfrac{M_2}{\sigma_1^2}m_1 + \dfrac{K_2}{\sigma_2^2}k_1 & \dfrac{M_2}{\sigma_1^2}m_2 + \dfrac{K_2}{\sigma_2^2}k_2 \end{bmatrix}, \quad (9)$$

whose determinant is $(m_1 k_2 - m_2 k_1)^2 (1 - \rho_{\varepsilon\delta}^2)/(\sigma_1^2 \sigma_2^2)$.
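Both the determinant formula and the estimator in (8) admit a direct numerical check. The sketch below uses illustrative parameters (not from the paper), chosen so that the measures are not perfectly aligned and the covariance-adjusted sensitivities are non-negative; it verifies the closed-form determinant and confirms that, for noiseless measures $y = m\cdot a$ and $z = k\cdot a$, the estimator recovers the action exactly:

```python
# Illustrative parameters (not from the paper), chosen so the measures are
# not perfectly aligned (m1*k2 - m2*k1 != 0) and the covariance-adjusted
# sensitivities M1, K1, M2, K2 are all non-negative.
m1, m2, k1, k2 = 1.0, 0.5, 0.4, 1.0
s1, s2, rho = 2.0, 1.5, 0.3
phi1 = rho * s1 * s2 / s2**2
phi2 = rho * s1 * s2 / s1**2
M1, K1 = m1 - phi1 * k1, k1 - phi2 * m1
M2, K2 = m2 - phi1 * k2, k2 - phi2 * m2

# Matrix A of Equation (9); its determinant should equal the closed form
# (m1*k2 - m2*k1)^2 * (1 - rho^2) / (s1^2 * s2^2).
A = [[M1 / s1**2 * m1 + K1 / s2**2 * k1, M1 / s1**2 * m2 + K1 / s2**2 * k2],
     [M2 / s1**2 * m1 + K2 / s2**2 * k1, M2 / s1**2 * m2 + K2 / s2**2 * k2]]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
closed_form = (m1 * k2 - m2 * k1)**2 * (1 - rho**2) / (s1**2 * s2**2)
print(abs(detA - closed_form) < 1e-12)  # True

# Sanity check of (8): with noiseless measures y = m.a and z = k.a,
# the estimator A^{-1}(T1, T2) recovers the action exactly.
a1, a2 = 0.8, -0.3
y = m1 * a1 + m2 * a2
z = k1 * a1 + k2 * a2
T1 = M1 / s1**2 * y + K1 / s2**2 * z   # Equation (5)
T2 = M2 / s1**2 * y + K2 / s2**2 * z   # Equation (6)
a1_hat = (A[1][1] * T1 - A[0][1] * T2) / detA    # 2x2 inverse by hand
a2_hat = (-A[1][0] * T1 + A[0][0] * T2) / detA
print(abs(a1_hat - a1) < 1e-10 and abs(a2_hat - a2) < 1e-10)  # True
```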

As the two performance measures y and z are not perfectly aligned $(m_1 k_2 - m_2 k_1 \neq 0)$, the maximum likelihood estimators $(a_1^*, a_2^*)$ are in one-to-one correspondence with the jointly sufficient statistics $(T_1(y, z), T_2(y, z))$. Thus, the maximum likelihood estimators $(a_1^*, a_2^*)$ are sufficient by Point 2. As the maximum likelihood estimators are now sufficient, they are minimal jointly sufficient statistics by Point 1.

As there is a one-to-one relation (8) between the jointly sufficient statistics T 1 ( y , z ) (5) and T 2 ( y , z ) (6) and the maximum likelihood estimators ( a 1 * , a 2 * ) , there exist two (linear) functions α ( ⋅ ) and α − 1 ( ⋅ ) such that:

$$(a_1^*, a_2^*) = \alpha(T_1(y, z), T_2(y, z)), \quad (10)$$

$$(T_1(y, z), T_2(y, z)) = \alpha^{-1}(a_1^*, a_2^*). \quad (11)$$

As the maximum likelihood estimators ( a 1 * , a 2 * ) are minimal jointly sufficient statistics, the function α ( ⋅ ) in (10) is consistent with Definition 2. Because the maximum likelihood estimators ( a 1 * , a 2 * ) can be represented as a function of any other sufficient statistics, the jointly sufficient statistics T 1 ( y , z ) (5) and T 2 ( y , z ) (6) can also be represented as a function of any other sufficient statistics by the function α − 1 ( ⋅ ) in (11).

A2. Proof of Proposition 1

This proof is by contradiction. Suppose that there exists a one-dimensional sufficient statistic $\tau(y, z)$ of the basic measures y and z:

$$\tau(y, z) = r_1 y + r_2 z, \quad (12)$$

where r 1 and r 2 are non-zero constants. Given that the jointly sufficient statistics T 1 ( y , z ) (5) and T 2 ( y , z ) (6) are minimal (Lemma 1), the jointly sufficient statistics T 1 ( y , z ) (5) and T 2 ( y , z ) (6) must be, by Definition 2, represented as a function of the one-dimensional sufficient statistic τ ( y , z ) (12). That is, there exist functions l 1 ( ⋅ ) and l 2 ( ⋅ ) such that:

$$T_1(y, z) = l_1(r_1 y + r_2 z), \quad (13)$$

$$T_2(y, z) = l_2(r_1 y + r_2 z). \quad (14)$$

As the jointly sufficient statistics T 1 ( y , z ) (5) and T 2 ( y , z ) (6) and the one-dimensional sufficient statistic τ ( y , z ) (12) are all linear in the basic measures y and z, (13) and (14) can be, without loss of generality, rewritten for some non-zero constants c 1 and c 2 as follows:

$$\frac{m_1 - \phi_1 k_1}{\sigma_1^2}\, y + \frac{k_1 - \phi_2 m_1}{\sigma_2^2}\, z = c_1(r_1 y + r_2 z), \quad c_1 \in \mathbb{R}, \quad (15)$$

$$\frac{m_2 - \phi_1 k_2}{\sigma_1^2}\, y + \frac{k_2 - \phi_2 m_2}{\sigma_2^2}\, z = c_2(r_1 y + r_2 z), \quad c_2 \in \mathbb{R}, \quad (16)$$

where $\phi_1 = \rho_{\varepsilon\delta}\sigma_1\sigma_2/\sigma_2^2$ and $\phi_2 = \rho_{\varepsilon\delta}\sigma_1\sigma_2/\sigma_1^2$. Matching the coefficients of y and z in (15) and (16) yields:

$$\frac{m_1 - \phi_1 k_1}{m_2 - \phi_1 k_2} = \frac{k_1 - \phi_2 m_1}{k_2 - \phi_2 m_2}, \quad (17)$$

which leads to a contradiction:

$$(m_1 k_2 - m_2 k_1)(1 - \rho_{\varepsilon\delta}^2) = 0, \quad (18)$$

because the basic measures are not perfectly aligned $(m_1 k_2 - m_2 k_1 \neq 0)$ and the error terms are not perfectly correlated $(\rho_{\varepsilon\delta}^2 < 1)$. Hence, no one-dimensional sufficient statistic exists. ∎
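The contradiction can also be illustrated numerically: the coefficient vectors of $T_1$ and $T_2$ in $(y, z)$ are linearly dependent, as a single linear aggregate would require, exactly when $(m_1 k_2 - m_2 k_1)(1 - \rho_{\varepsilon\delta}^2) = 0$. A sketch with illustrative parameters (not from the paper):

```python
def cross_determinant(m1, m2, k1, k2, s1, s2, rho):
    """Determinant of the 2x2 matrix of coefficients of (T1, T2) in (y, z);
    algebraically equal to (m1*k2 - m2*k1) * (1 - rho^2) / (s1^2 * s2^2).
    It vanishes exactly when one linear aggregate could carry both T1 and T2."""
    phi1 = rho * s1 / s2
    phi2 = rho * s2 / s1
    M1, K1 = m1 - phi1 * k1, k1 - phi2 * m1
    M2, K2 = m2 - phi1 * k2, k2 - phi2 * m2
    return (M1 / s1**2) * (K2 / s2**2) - (M2 / s1**2) * (K1 / s2**2)

# Not perfectly aligned: the determinant is nonzero, so no one-dimensional
# aggregate can be sufficient.
print(abs(cross_determinant(1.0, 0.5, 0.4, 1.0, 2.0, 1.5, 0.3)) > 1e-9)  # True

# Perfectly aligned measures (k proportional to m): the determinant is ~0,
# and a one-dimensional aggregate suffices.
print(abs(cross_determinant(1.0, 0.5, 2.0, 1.0, 2.0, 1.5, 0.3)) < 1e-12)  # True
```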