This work has successfully shown that the optimum of a quadratic response function with zero coefficients except that of the quadratic term lies at the origin. This was achieved by using the optimal designs technique for solving unconstrained optimization problems with quadratic surfaces. In just one move, the objective of the work, namely $x_{\min}^* = 0$, was realized.

This paper seeks to show that given a quadratic univariate response function with zero coefficients except that of the quadratic term, the optimum lies at the origin.

Traditional solution techniques for solving unconstrained optimization problems with a single variable abound. These techniques require many iterations involving very tedious computations.

This section seeks to prove that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term is located at the origin.

Let the quadratic univariate response function, $f(x)$, having zero response parameters except that of the quadratic term, be

$$f(x) = bx^2$$

We are required to show that $x_{\min}^* = 0$. This is done using the following algorithm.

Initialization: Select $N$ support points such that $3r \le N \le 4r$, i.e. $6 \le N \le 8$, where $r = 2$ is the number of partitioned groups. Choosing $N$ arbitrarily, form an initial design matrix

$$X = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{bmatrix}$$

Step 1: Let the optimal starting point computed from $X$ be $x_1^*$.

Step 2: Partition $X$ into $r = 2$ groups to obtain the design matrices $X_i$, $i = 1, 2$, as well as the information matrices $M_i = X_i^T X_i$ and their inverses $M_i^{-1}$.

Step 3: Obtain the following:

1) The matrices of the interaction effect of the univariate for the groups

$$X_1^I = \begin{bmatrix} x_{11}^2 \\ x_{12}^2 \\ \vdots \\ x_{1k}^2 \end{bmatrix} \quad \text{and} \quad X_2^I = \begin{bmatrix} x_{2(k+1)}^2 \\ x_{2(k+2)}^2 \\ \vdots \\ x_{2N}^2 \end{bmatrix}, \quad \text{where } k = \frac{N}{2}.$$

2) Interaction vector of the response parameter,

$$g = [b]$$

3) Interaction vectors for the groups,

$$I_i = M_i^{-1} X_i^T X_i^I g$$

4) Matrices of mean square error for the groups

$$\bar{M}_i = M_i^{-1} + I_i I_i^T = \begin{bmatrix} \bar{v}_{i11} & \bar{v}_{i12} \\ \bar{v}_{i21} & \bar{v}_{i22} \end{bmatrix}$$

5) Matrices of coefficient of convex combinations of the matrices of mean square error

$$H_i = \operatorname{diag}\left\{ \frac{\bar{v}_{i11}}{\sum_i \bar{v}_{i11}}, \frac{\bar{v}_{i22}}{\sum_i \bar{v}_{i22}} \right\} = \operatorname{diag}\{ h_{i1}, h_{i2} \}$$

and by normalizing $H_i$ such that $\sum_i H_i^* H_i^{*T} = I$, we have

$$H_i^* = \operatorname{diag}\left\{ \frac{h_{i1}}{\sqrt{\sum_i h_{i1}^2}}, \frac{h_{i2}}{\sqrt{\sum_i h_{i2}^2}} \right\}$$

6) The average information matrix

$$M(\xi_N) = \sum_i H_i^* M_i H_i^{*T} = \begin{bmatrix} \bar{m}_{11} & \bar{m}_{12} \\ \bar{m}_{21} & \bar{m}_{22} \end{bmatrix}$$

Step 4: Obtain the response vector

$$z = \begin{bmatrix} z_0 \\ z_1 \end{bmatrix}$$

where $z_0 = f(\bar{m}_{21})$ and $z_1 = f(\bar{m}_{22})$, and hence the direction vector

$$d = \begin{bmatrix} d_0 \\ d_1 \end{bmatrix} = M^{-1}(\xi_N)\, z$$

which gives $d^* = d_1$.

Step 5: We now make a move to the point

$$x_2^* = x_1^* - \rho_1 d_1$$

where $\rho_1$ is the step length. The value of the response function at this point is

$$f(x_2^*) = b(x_1^* - \rho_1 d_1)^2 = b\left[ x_1^{*2} - 2x_1^*\rho_1 d_1 + \rho_1^2 d_1^2 \right]$$

Differentiating with respect to the step length and equating to zero,

$$\frac{df(x_2^*)}{d\rho_1} = -2bx_1^* d_1 + 2b\rho_1 d_1^2 = 0$$

which gives

$$\rho_1 = \frac{x_1^*}{d_1}$$

and hence

$$x_2^* = x_1^* - \frac{x_1^*}{d_1}(d_1) = 0$$

Step 6: Since the true value of $x_1^*$ in $\left| f(x_2^*) - f(x_1^*) \right| = \left| 0 - bx_1^{*2} \right| = bx_1^{*2}$ is unknown, we assume that $bx_1^{*2} > \varepsilon$ and hence make a second move as follows:

$$x_3^* = x_2^* - \rho_2 d_2 = 0 - \rho_2 d_2 = -\rho_2 d_2$$

and

$$f(x_3^*) = b\rho_2^2 d_2^2$$

$$\frac{df(x_3^*)}{d\rho_2} = 2b\rho_2 d_2^2 = 0$$

But $b$ and $d_2$ cannot be zero, which means that $\rho_2 = 0$. Since $\rho_2 = 0$, there was no need for the second move, showing that the optimal solution was obtained at the first move.

Therefore,

$$x_2^* = x_{\min} = 0 \quad \text{and} \quad f(x_{\min}) = 0$$
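
The one-move argument above can be checked numerically. A minimal sketch in plain Python (the particular values of $x_1^*$ and $d_1$ below are arbitrary illustrations, not taken from the paper's example):

```python
# For f(x) = b*x**2, the exact line search along any nonzero direction d1
# yields the step length rho1 = x1/d1, so the single move always lands at 0.
def one_move(x1, d1):
    rho1 = x1 / d1            # from df(x1 - rho*d1)/drho = 0
    return x1 - rho1 * d1     # x2 = x1 - (x1/d1)*d1 = 0

# arbitrary starting points and directions, for illustration only
for x1, d1 in [(1.9364, 8565.0), (-3.0, 0.7), (100.0, -2.0)]:
    assert abs(one_move(x1, d1)) < 1e-12
print("a single exact move reaches x = 0")
```

Note that the coefficient $b$ cancels out of the step length, which is why it does not appear in the code.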

Consider the quadratic univariate response function,

$$f(x) = 4x^2$$

We are required to show that $x_{\min}^* = 0$. This is done as follows:

Initialization: Select $N$ support points such that $6 \le N \le 8$. Choosing $N = 6$, we form an initial design matrix

$$X = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \\ 1 & 6 \end{bmatrix}$$

Step 1: Compute the optimal starting point,

$$x_1^* = \sum_{m=1}^{6} u_m^* x_m^T, \qquad u_m^* > 0, \qquad \sum_{m=1}^{6} u_m^* = 1$$

where

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \qquad a_m = x_m x_m^T, \qquad m = 1, 2, \cdots, 6$$

$$a_1 = \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = 2, \quad a_1^{-1} = 0.5, \qquad a_2 = \begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = 5, \quad a_2^{-1} = 0.2$$

$$a_3 = \begin{bmatrix} 1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = 10, \quad a_3^{-1} = 0.1, \qquad a_4 = \begin{bmatrix} 1 & 4 \end{bmatrix}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = 17, \quad a_4^{-1} = 0.0588$$

$$a_5 = \begin{bmatrix} 1 & 5 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \end{bmatrix} = 26, \quad a_5^{-1} = 0.0385, \qquad a_6 = \begin{bmatrix} 1 & 6 \end{bmatrix}\begin{bmatrix} 1 \\ 6 \end{bmatrix} = 37, \quad a_6^{-1} = 0.027$$

$$\sum_{m=1}^{6} a_m^{-1} = 0.9243$$

Since

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \qquad m = 1, 2, \cdots, 6$$

then

$$u_1^* = \frac{0.5}{0.9243} = 0.5409, \quad u_2^* = \frac{0.2}{0.9243} = 0.2164, \quad u_3^* = \frac{0.1}{0.9243} = 0.1082,$$

$$u_4^* = \frac{0.0588}{0.9243} = 0.0636, \quad u_5^* = \frac{0.0385}{0.9243} = 0.0417, \quad u_6^* = \frac{0.027}{0.9243} = 0.0292$$

Hence, the optimal starting point is

$$x_1^* = \sum_{m=1}^{6} u_m^* x_m^T = 0.5409\begin{bmatrix}1\\1\end{bmatrix} + 0.2164\begin{bmatrix}1\\2\end{bmatrix} + 0.1082\begin{bmatrix}1\\3\end{bmatrix} + 0.0636\begin{bmatrix}1\\4\end{bmatrix} + 0.0417\begin{bmatrix}1\\5\end{bmatrix} + 0.0292\begin{bmatrix}1\\6\end{bmatrix} = \begin{bmatrix}1.0000\\1.9364\end{bmatrix}$$

That is,

$$x_1^* = 1.9364$$
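
Step 1 can be reproduced in a few lines of code. This sketch recomputes the weights $u_m^*$ from $a_m = 1 + m^2$; keeping full precision throughout, it recovers the paper's rounded starting point 1.9364 to three decimal places:

```python
# Optimal starting point for support points x_m = (1, m), m = 1..6:
# a_m = x_m x_m^T = 1 + m^2 and u_m = a_m^{-1} / sum_j a_j^{-1}.
a_inv = [1.0 / (1 + m * m) for m in range(1, 7)]   # 0.5, 0.2, 0.1, ...
total = sum(a_inv)                                  # ≈ 0.9243
u = [v / total for v in a_inv]                      # weights summing to 1
x1_star = sum(um * m for um, m in zip(u, range(1, 7)))
print(round(total, 4), round(x1_star, 3))           # 0.9243 1.936
```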

Step 2: By partitioning $X$ into two groups, we obtain the design matrices

$$X_1 = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix} \quad \text{and} \quad X_2 = \begin{bmatrix} 1 & 4 \\ 1 & 5 \\ 1 & 6 \end{bmatrix}$$

The respective information matrices are

$$M_1 = X_1^T X_1 = \begin{bmatrix} 3 & 6 \\ 6 & 14 \end{bmatrix} \quad \text{and} \quad M_2 = X_2^T X_2 = \begin{bmatrix} 3 & 15 \\ 15 & 77 \end{bmatrix}$$

and their inverses are

$$M_1^{-1} = \begin{bmatrix} 2.3333 & -1 \\ -1 & 0.5 \end{bmatrix} \quad \text{and} \quad M_2^{-1} = \begin{bmatrix} 12.8333 & -2.5 \\ -2.5 & 0.5 \end{bmatrix}$$
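
The information matrices and their inverses in Step 2 can be verified directly; a sketch using plain 2×2 arithmetic:

```python
# M_i = X_i^T X_i for the two partitioned groups of support points.
def info_matrix(points):
    n, sx, sxx = len(points), sum(points), sum(x * x for x in points)
    return [[n, sx], [sx, sxx]]

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M1 = info_matrix([1, 2, 3])   # [[3, 6], [6, 14]]
M2 = info_matrix([4, 5, 6])   # [[3, 15], [15, 77]]
M1_inv = inv2(M1)             # [[2.3333..., -1.0], [-1.0, 0.5]]
M2_inv = inv2(M2)             # [[12.8333..., -2.5], [-2.5, 0.5]]
```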

Step 3: Obtain the following:

1) The matrices of the interaction effect for the groups

$$X_1^I = \begin{bmatrix} 1 \\ 4 \\ 9 \end{bmatrix} \quad \text{and} \quad X_2^I = \begin{bmatrix} 16 \\ 25 \\ 36 \end{bmatrix}$$

2) Interaction vector of the response parameter,

$$g = [4]$$

3) Interaction vectors for the groups,

$$I_1 = \begin{bmatrix} -13.3333 \\ 16.0000 \end{bmatrix} \quad \text{and} \quad I_2 = \begin{bmatrix} -97.3333 \\ 40.0000 \end{bmatrix}$$

4) Matrices of mean square error for the groups

$$\bar{M}_1 = \begin{bmatrix} 180.1111 & -214.3333 \\ -214.3333 & 256.5000 \end{bmatrix} \quad \text{and} \quad \bar{M}_2 = \begin{bmatrix} 9486.6 & -3895.8 \\ -3895.8 & 1600.5 \end{bmatrix}$$

5) Matrices of coefficient of convex combinations of the matrices of mean square error

$$H_1 = \operatorname{diag}\left\{ \frac{180.1111}{180.1111 + 9486.6}, \frac{256.5}{256.5 + 1600.5} \right\} = \operatorname{diag}\{0.0186, 0.1381\}$$

$$H_2 = I - H_1 = \operatorname{diag}\{0.9814, 0.8619\}$$

and by normalization, we have

$$H_1^* = \operatorname{diag}\left\{ \frac{0.0186}{\sqrt{0.0186^2 + 0.9814^2}}, \frac{0.1381}{\sqrt{0.1381^2 + 0.8619^2}} \right\} = \operatorname{diag}\{0.0189, 0.1582\}$$

$$H_2^* = \operatorname{diag}\left\{ \frac{0.9814}{\sqrt{0.0186^2 + 0.9814^2}}, \frac{0.8619}{\sqrt{0.1381^2 + 0.8619^2}} \right\} = \operatorname{diag}\{0.9998, 0.9874\}$$

6) The average information matrix

$$M(\xi_N) = \begin{bmatrix} 2.9999 & 14.8260 \\ 14.8260 & 75.4222 \end{bmatrix}$$
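
Items 3) through 6) of Step 3 can be reproduced numerically. The sketch below recomputes the interaction vectors, the mean-square-error matrices, the normalized convex-combination weights, and the average information matrix in plain Python; small last-digit differences from the figures above come from the rounding of intermediate values in the worked example:

```python
import math

# Inputs from Step 2: M_i, M_i^{-1} (exact fractions), and
# X_i^T X_i^I for X1I = [1, 4, 9] and X2I = [16, 25, 36].
M1, M2 = [[3, 6], [6, 14]], [[3, 15], [15, 77]]
M1_inv, M2_inv = [[14 / 6, -1.0], [-1.0, 0.5]], [[77 / 6, -2.5], [-2.5, 0.5]]
XtXI = {1: [14, 36], 2: [77, 405]}
g = 4.0                              # interaction vector g = [b] = [4]

def mv(m, v):  # 2x2 matrix times a 2-vector
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

# 3) interaction vectors I_i = M_i^{-1} X_i^T X_i^I g
I1 = [g * c for c in mv(M1_inv, XtXI[1])]   # ≈ [-13.3333, 16.0]
I2 = [g * c for c in mv(M2_inv, XtXI[2])]   # ≈ [-97.3333, 40.0]

# 4) mean-square-error matrices Mbar_i = M_i^{-1} + I_i I_i^T
Mb1 = [[M1_inv[r][c] + I1[r] * I1[c] for c in (0, 1)] for r in (0, 1)]
Mb2 = [[M2_inv[r][c] + I2[r] * I2[c] for c in (0, 1)] for r in (0, 1)]

# 5) convex-combination weights from the Mbar diagonals, then normalization
h1 = [Mb1[j][j] / (Mb1[j][j] + Mb2[j][j]) for j in (0, 1)]  # H1 diagonal
h2 = [1.0 - v for v in h1]                                   # H2 = I - H1
h1s = [h1[j] / math.hypot(h1[j], h2[j]) for j in (0, 1)]     # ≈ [0.0190, 0.1582]
h2s = [h2[j] / math.hypot(h1[j], h2[j]) for j in (0, 1)]     # ≈ [0.9998, 0.9874]

# 6) average information matrix M(xi_N) = sum_i H_i* M_i H_i*^T;
# with diagonal H_i*, entry (r, c) is M_i[r][c] * h_i[r] * h_i[c].
M_avg = [[M1[r][c] * h1s[r] * h1s[c] + M2[r][c] * h2s[r] * h2s[c]
          for c in (0, 1)] for r in (0, 1)]
# ≈ [[3.0, 14.826], [14.826, 75.422]]
```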

Step 4: Obtain the response vector

$$z = \begin{bmatrix} f(14.8260) \\ f(75.4222) \end{bmatrix} = \begin{bmatrix} 879.2411 \\ 22754.0330 \end{bmatrix}$$

and hence, the direction vector

$$d = \begin{bmatrix} -42039 \\ 8565 \end{bmatrix}$$

which gives $d^* = 8565$.

Step 5: We now make a move to the point

$$x_2^* = x_1^* - \rho_1 d^*$$

where $\rho_1$ is the step length. The value of the response function at this point is

$$f(x_2^*) = b(x_1^* - \rho_1 d^*)^2 = b\left[ x_1^{*2} - 2x_1^*\rho_1 d^* + \rho_1^2 d^{*2} \right]$$

Differentiating with respect to the step length and equating to zero,

$$\frac{df(x_2^*)}{d\rho_1} = -2bx_1^* d^* + 2b\rho_1 d^{*2} = 0$$

which gives

$$\rho_1 = \frac{x_1^*}{d^*} = 0.0002260828$$

since $d^* = 8565$ and $x_1^* = 1.9364$.

Hence

$$x_2^* = x_1^* - \rho_1 d^* = 1.9364 - 0.0002260828(8565) \cong 0$$
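
Steps 4 and 5 can likewise be checked in code. This sketch recomputes $z$, the direction vector $d = M^{-1}(\xi_N)z$, and the single move; the small differences from the quoted $d^* = 8565$ come from rounding in the intermediate matrices:

```python
# Direction vector from the average information matrix and z = [f(m21), f(m22)]
# with f(x) = 4x^2, then the exact line-search move with rho1 = x1*/d*.
M = [[2.9999, 14.8260], [14.8260, 75.4222]]
f = lambda x: 4 * x ** 2
z = [f(M[1][0]), f(M[1][1])]                   # ≈ [879.24, 22754.03]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]    # 2x2 determinant
d0 = (M[1][1] * z[0] - M[0][1] * z[1]) / det   # ≈ -4.2e4
d1 = (M[0][0] * z[1] - M[1][0] * z[0]) / det   # d* ≈ 8.56e3 (paper: 8565)

x1 = 1.9364
rho1 = x1 / d1                                 # ≈ 0.000226
x2 = x1 - rho1 * d1
print(abs(x2) < 1e-12)                         # True: one move reaches the origin
```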

Step 6: Since $\left| f(x_2^*) - f(x_1^*) \right| = \left| 0 - 14.9986 \right| = 14.9986 > \varepsilon = 0.0001$, we make a second move as follows:

$$x_3^* = x_2^* - 8565\rho_2 = 0 - 8565\rho_2 = -8565\rho_2$$

and

$$f(x_3^*) = 293436900\,\rho_2^2$$

$$\frac{df(x_3^*)}{d\rho_2} = 586873800\,\rho_2 = 0$$

which gives $\rho_2 = 0$. Since $\rho_2 = 0$, there was no need for the second move, showing that the optimal solution was obtained at the first move.

Therefore,

$$x_2^* = x_{\min} = 0 \quad \text{and} \quad f(x_{\min}) = 0$$

We set out to show in this work that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term is located at the origin. By using the optimal designs technique for solving unconstrained optimization problems with univariate quadratic surfaces, this primary objective has been successfully achieved. In the course of the proof, we saw that the optimum, $x_2^* = x_{\min} = 0$, was obtained in just one move, with $f(x_{\min}) = 0$.

Etukudo, I. (2017) The Optimum of a Quadratic Univariate Response Function Is Located at the Origin. American Journal of Operations Research, 7, 323-330. https://doi.org/10.4236/ajor.2017.76024