The Optimum of a Quadratic Univariate Response Function Is Located at the Origin



1. Introduction

This paper seeks to show that, given a quadratic univariate response function whose coefficients are all zero except that of the quadratic term, the optimum lies at the origin. [1] and [2] noted that although few real-life problems require managers to take decisions involving only one decision variable, this kind of study is justified because it forms the basis of simple extensions that play a cardinal role in the development of a general multivariate algorithm (see [3] ).

Traditional solution techniques for unconstrained optimization problems in a single variable abound. These techniques require many iterations involving very tedious computations [4] . The line search techniques in this group include the Fibonacci and Golden Section Search techniques. They merely identify the interval of uncertainty containing the optimum and seek to shrink this interval, without actually locating the exact optimum point, and the computational effort required to do so is enormous. For instance, the procedure in the Fibonacci Search technique follows the numerical sequence known as the Fibonacci numbers, as shown by [1] .

As stated by [5] and [6] , the Golden Section Search technique is another efficient method for determining the interval of uncertainty in which the desired optimum must lie. [7] shows the superiority of the Golden Section technique over the Fibonacci Search technique, since the latter requires a priori specification of the resolution factor as well as the number of iterations, neither of which is needed in the former; nevertheless, [8] and [9] posited, and indeed proved, that the Fibonacci Search technique is the best traditional technique for the problem under consideration.

However, [10] presented a new technique for obtaining the exact optimum of unconstrained optimization problems with univariate quadratic surfaces. This technique derives from the super convergent line series algorithm, which uses the principles of optimal designs of experiments [11] [12] , as modified by [13] (see also [14] and [15] ). The algorithmic procedure used to realize our objective in this work is as given by [10] .

2. The Optimum of a Quadratic Univariate Response Function with Zero Coefficients except That of the Quadratic Term Is Located at the Origin

This section seeks to prove that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term is located at the origin.

Let the quadratic univariate response function, f(x) having zero response parameters except that of the quadratic term be

$f\left(x\right)=b{x}^{2}$

We are required to show that ${x}_{\mathrm{min}}^{*}=0$ . This is done using the algorithm as given by [10] .

Initialization: Select N support points such that 3r ≤ N ≤ 4r, that is, 6 ≤ N ≤ 8, where r = 2 is the number of partitioned groups. Choosing N arbitrarily within this range, form the initial design matrix

$X=\left[\begin{array}{cc}1& {x}_{1}\\ 1& {x}_{2}\\ \vdots & \vdots \\ 1& {x}_{N}\end{array}\right]$

Step 1: Let the optimal starting point computed from X be ${x}_{1}^{*}$ .

Step 2: Partition X into r = 2 groups to obtain the design matrices ${X}_{i}$, i = 1, 2, as well as the information matrices ${M}_{i}={X}_{i}^{\text{T}}{X}_{i}$ and their inverses ${M}_{i}^{-1}$.

Step 3: Obtain the following:

1) The matrices of the interaction effect of the univariate for the groups

${X}_{1I}=\left[\begin{array}{c}{x}_{11}^{2}\\ {x}_{12}^{2}\\ \vdots \\ {x}_{1k}^{2}\end{array}\right]$ and ${X}_{2I}=\left[\begin{array}{c}{x}_{2\left(k+1\right)}^{2}\\ {x}_{2\left(k+2\right)}^{2}\\ \vdots \\ {x}_{2N}^{2}\end{array}\right]$ where $k=\frac{N}{2}$ .

2) Interaction vector of the response parameter,

$g=\left[b\right]$

3) Interaction vectors for the groups,

${I}_{i}={M}_{i}^{-1}{X}_{i}^{\text{T}}{X}_{iI}g$

4) Matrices of mean square error for the groups

${\bar{M}}_{i}={M}_{i}^{-1}+{I}_{i}{I}_{i}^{\text{T}}=\left[\begin{array}{cc}{\bar{v}}_{i11}& {\bar{v}}_{i21}\\ {\bar{v}}_{i12}& {\bar{v}}_{i22}\end{array}\right]$

5) Matrices of coefficient of convex combinations of the matrices of mean square error

${H}_{i}=diag\left\{\frac{{\bar{v}}_{i11}}{\sum {\bar{v}}_{i11}},\frac{{\bar{v}}_{i22}}{\sum {\bar{v}}_{i22}}\right\}=diag\left\{{h}_{i1},{h}_{i2}\right\}$

and by normalizing ${H}_{i}$ such that $\sum {H}_{i}^{*}{H}_{i}^{*\text{T}}=I$ , we have

${H}_{i}^{*}=diag\left\{\frac{{h}_{i1}}{\sqrt{\sum {h}_{i1}^{2}}},\frac{{h}_{i2}}{\sqrt{\sum {h}_{i2}^{2}}}\right\}$

6) The average information matrix

$M\left({\xi}_{N}\right)=\sum {H}_{i}^{*}{M}_{i}{H}_{i}^{*\text{T}}=\left[\begin{array}{cc}{\bar{m}}_{11}& {\bar{m}}_{12}\\ {\bar{m}}_{21}& {\bar{m}}_{22}\end{array}\right]$

Step 4: Obtain the response vector

$z=\left[\begin{array}{c}{z}_{0}\\ {z}_{1}\end{array}\right]$

where ${z}_{0}=f\left({\bar{m}}_{21}\right)$ and ${z}_{1}=f\left({\bar{m}}_{22}\right)$ , and hence the direction vector

$d=\left[\begin{array}{c}{d}_{0}\\ {d}_{1}\end{array}\right]={M}^{-1}\left({\xi}_{N}\right)z$

which gives ${d}^{*}={d}_{1}$ .

Step 5: We now make a move to the point

${x}_{2}^{*}={x}_{1}^{*}-{\rho}_{1}{d}_{1}$

where ${\rho}_{1}$ is the step length. The value of the response function at this point is

$f\left({x}_{2}^{*}\right)=b{\left({x}_{1}^{*}-{\rho}_{1}{d}_{1}\right)}^{2}=b\left[{x}_{1}^{*2}-2{x}_{1}^{*}{\rho}_{1}{d}_{1}+{\rho}_{1}^{2}{d}_{1}^{2}\right]$

$\frac{\text{d}f\left({x}_{2}^{*}\right)}{\text{d}{\rho}_{1}}=-2b{x}_{1}^{*}{d}_{1}+2b{\rho}_{1}{d}_{1}^{2}=0$

which gives

${\rho}_{1}=\frac{{x}_{1}^{*}}{{d}_{1}}$

and hence

${x}_{2}^{*}={x}_{1}^{*}-\frac{{x}_{1}^{*}}{{d}_{1}}\left({d}_{1}\right)=0$

Step 6: Since the true value of ${x}_{1}^{*}$ in $\left|f\left({x}_{2}^{*}\right)-f\left({x}_{1}^{*}\right)\right|=\left|0-b{x}_{1}^{*2}\right|=b{x}_{1}^{*2}$ is unknown, we assume that $b{x}_{1}^{*2}>\epsilon $ and hence, we make a second move as follows:

${x}_{3}^{*}={x}_{2}^{*}-{\rho}_{2}{d}_{2}=0-{\rho}_{2}{d}_{2}=-{\rho}_{2}{d}_{2}$

and

$f\left({x}_{3}^{*}\right)=b{\rho}_{2}^{2}{d}_{2}^{2}$

$\frac{\text{d}f\left({x}_{3}^{*}\right)}{\text{d}{\rho}_{2}}=2b{\rho}_{2}{d}_{2}^{2}=0$

But b and ${d}_{2}$ cannot be zero, which means that ${\rho}_{2}=0$ . Since ${\rho}_{2}=0$ , the second move was unnecessary, showing that the optimal solution was obtained at the first move.

Therefore,

${x}_{2}^{*}={x}_{\mathrm{min}}=0\quad \text{and}\quad f\left({x}_{\mathrm{min}}\right)=0$
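The one-move argument above can be sanity-checked numerically. The sketch below is our own illustration (not code from [10]): it applies the exact step length ${\rho}_{1}={x}_{1}^{*}/{d}_{1}$ from Step 5 and confirms that the iterate lands at the origin for any nonzero coefficient b, starting point, and direction.

```python
# Sketch of Steps 5-6: for f(x) = b*x^2, one exact line-search move
# from any start x1 along any nonzero direction d reaches x = 0.
def one_exact_move(b, x1, d):
    # Setting df(x1 - rho*d)/drho = -2*b*x1*d + 2*b*rho*d**2 to zero
    # gives the optimal step length rho = x1/d.
    rho = x1 / d
    return x1 - rho * d  # equals 0 up to floating-point rounding

for b, x1, d in [(4.0, 1.9364, 8565.0), (-2.0, 7.5, 0.3), (1.0, -3.0, 2.0)]:
    print(one_exact_move(b, x1, d))
```

Note that b cancels out of the step length, which is why the result holds for every quadratic of this form.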

3. Numerical Illustration

Consider the quadratic univariate response function,

$f\left(x\right)=4{x}^{2}$

We are required to show that ${x}_{\mathrm{min}}^{*}=0$ . This is done as follows:

Initialization: Select N support points such that 6 ≤ N ≤ 8 and by choosing N = 6, we make an initial design matrix

$X=\left[\begin{array}{cc}1& 1\\ 1& 2\\ 1& 3\\ 1& 4\\ 1& 5\\ 1& 6\end{array}\right]$

Step 1: Compute the optimal starting point,

${x}_{1}^{*}={\sum }_{m=1}^{6}{u}_{m}^{*}{x}_{m}^{\text{T}},\quad {u}_{m}^{*}>0$

${\sum }_{m=1}^{6}{u}_{m}^{*}=1$

${u}_{m}^{*}=\frac{{a}_{m}^{-1}}{{\sum }_{m=1}^{6}{a}_{m}^{-1}},\quad m=1,2,\cdots ,6$

${a}_{m}={x}_{m}{x}_{m}^{\text{T}},\quad m=1,2,\cdots ,6$

${a}_{1}=\left[\begin{array}{cc}1& 1\end{array}\right]\left[\begin{array}{c}1\\ 1\end{array}\right]=2,\text{\hspace{0.17em}}{a}_{1}^{-1}=0.5$ , ${a}_{2}=\left[\begin{array}{cc}1& 2\end{array}\right]\left[\begin{array}{c}1\\ 2\end{array}\right]=5,\text{\hspace{0.17em}}{a}_{2}^{-1}=0.2$

${a}_{3}=\left[\begin{array}{cc}1& 3\end{array}\right]\left[\begin{array}{c}1\\ 3\end{array}\right]=10,\text{\hspace{0.17em}}{a}_{3}^{-1}=0.1$ , ${a}_{4}=\left[\begin{array}{cc}1& 4\end{array}\right]\left[\begin{array}{c}1\\ 4\end{array}\right]=17,\text{\hspace{0.17em}}{a}_{4}^{-1}=0.0588$

${a}_{5}=\left[\begin{array}{cc}1& 5\end{array}\right]\left[\begin{array}{c}1\\ 5\end{array}\right]=26,\text{\hspace{0.17em}}{a}_{5}^{-1}=0.0385$ , ${a}_{6}=\left[\begin{array}{cc}1& 6\end{array}\right]\left[\begin{array}{c}1\\ 6\end{array}\right]=37,\text{\hspace{0.17em}}{a}_{6}^{-1}=0.027$

${\sum }_{m=1}^{6}{a}_{m}^{-1}=0.9243$

Since

${u}_{m}^{*}=\frac{{a}_{m}^{-1}}{{\sum }_{m=1}^{6}{a}_{m}^{-1}},\quad m=1,2,\cdots ,6$

then

${u}_{1}^{*}=\frac{0.5}{0.9243}=0.5409$ , ${u}_{2}^{*}=\frac{0.2}{0.9243}=0.2164$ , ${u}_{3}^{*}=\frac{0.1}{0.9243}=0.1082$ ,

${u}_{4}^{*}=\frac{0.0588}{0.9243}=0.0636$ , ${u}_{5}^{*}=\frac{0.0385}{0.9243}=0.0417$ , ${u}_{6}^{*}=\frac{0.027}{0.9243}=0.0292$

Hence, the optimal starting point is

$\begin{array}{c}{x}_{1}^{*}={\sum }_{m=1}^{6}{u}_{m}^{*}{x}_{m}^{\text{T}}=0.5409\left[\begin{array}{c}1\\ 1\end{array}\right]+0.2164\left[\begin{array}{c}1\\ 2\end{array}\right]+0.1082\left[\begin{array}{c}1\\ 3\end{array}\right]\\ \quad +0.0636\left[\begin{array}{c}1\\ 4\end{array}\right]+0.0417\left[\begin{array}{c}1\\ 5\end{array}\right]+0.0292\left[\begin{array}{c}1\\ 6\end{array}\right]\\ =\left[\begin{array}{c}1.0000\\ 1.9364\end{array}\right]\end{array}$

That is,

${x}_{1}^{*}=1.9364$
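Step 1 can be reproduced in a few lines of NumPy. The sketch below is our own illustration (not code from [10]): it recomputes the weights ${u}_{m}^{*}$ and the starting point ${x}_{1}^{*}=1.9364$ from the design matrix above.

```python
import numpy as np

# Step 1 sketch: weights u_m* proportional to 1/a_m, where a_m = x_m x_m^T
# is the squared norm of the m-th design row; x1* is their convex combination.
X = np.array([[1, m] for m in range(1, 7)], dtype=float)  # 6 support points
a = np.einsum('ij,ij->i', X, X)     # a_m = 1 + m^2 -> [2, 5, 10, 17, 26, 37]
u = (1.0 / a) / (1.0 / a).sum()     # u_m* = a_m^{-1} / sum(a_m^{-1})
x1_star = u @ X                     # -> approximately [1.0000, 1.9364]
print(x1_star)
```

The first component is exactly 1 because every design row has a leading 1 and the weights sum to 1; only the second component, 1.9364, is the starting point of interest.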

Step 2: By partitioning X into r = 2 groups, we obtain the design matrices

${X}_{1}=\left[\begin{array}{cc}1& 1\\ 1& 2\\ 1& 3\end{array}\right]$ and ${X}_{2}=\left[\begin{array}{cc}1& 4\\ 1& 5\\ 1& 6\end{array}\right]$

The respective information matrices are

${M}_{1}={X}_{1}^{\text{T}}{X}_{1}=\left[\begin{array}{cc}3& 6\\ 6& 14\end{array}\right]$ and ${M}_{2}={X}_{2}^{\text{T}}{X}_{2}=\left[\begin{array}{cc}3& 15\\ 15& 77\end{array}\right]$

and their inverses are

${M}_{1}^{-1}=\left[\begin{array}{cc}2.3333& -1\\ -1& 0.5\end{array}\right]$ and ${M}_{2}^{-1}=\left[\begin{array}{cc}12.8333& -2.5\\ -2.5& 0.5\end{array}\right]$
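Step 2 is a routine matrix computation; the NumPy sketch below (our own illustration) reproduces ${M}_{1}$, ${M}_{2}$ and their inverses.

```python
import numpy as np

# Step 2 sketch: partitioned design matrices, information matrices
# M_i = X_i^T X_i, and their inverses (matching the values quoted in the text).
X1 = np.array([[1., 1.], [1., 2.], [1., 3.]])
X2 = np.array([[1., 4.], [1., 5.], [1., 6.]])
M1, M2 = X1.T @ X1, X2.T @ X2        # [[3, 6], [6, 14]] and [[3, 15], [15, 77]]
M1_inv = np.linalg.inv(M1)           # [[2.3333, -1], [-1, 0.5]]
M2_inv = np.linalg.inv(M2)           # [[12.8333, -2.5], [-2.5, 0.5]]
print(M1_inv)
print(M2_inv)
```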

Step 3: Obtain the following:

1) The matrices of the interaction effect for the groups

${X}_{1I}=\left[\begin{array}{c}1\\ 4\\ 9\end{array}\right]$ and ${X}_{2I}=\left[\begin{array}{c}16\\ 25\\ 36\end{array}\right]$

2) Interaction vector of the response parameter,

$g=\left[4\right]$

3) Interaction vectors for the groups,

${I}_{1}=\left[\begin{array}{c}-13.3333\\ 16.0000\end{array}\right]$

${I}_{2}=\left[\begin{array}{c}-97.3333\\ 40.0000\end{array}\right]$

4) Matrices of mean square error for the groups

${\bar{M}}_{1}=\left[\begin{array}{cc}180.1111& -214.3333\\ -214.3333& 256.5000\end{array}\right]$

${\bar{M}}_{2}=\left[\begin{array}{cc}9486.6& -3895.8\\ -3895.8& 1600.5\end{array}\right]$

5) Matrices of coefficient of convex combinations of the matrices of mean square error

$\begin{array}{c}{H}_{1}=diag\left\{\frac{180.1111}{180.1111+9486.6},\frac{256.5}{256.5+1600.5}\right\}\\ =diag\left\{0.0186,0.1381\right\}\end{array}$

${H}_{2}=I-{H}_{1}=diag\left\{0.9814,0.8619\right\}$

and by normalization, we have

$\begin{array}{c}{H}_{1}^{*}=diag\left\{\frac{0.0186}{\sqrt{{0.0186}^{2}+{0.9814}^{2}}},\frac{0.1381}{\sqrt{{0.1381}^{2}+{0.8619}^{2}}}\right\}\\ =diag\left\{0.0189,0.1582\right\}\end{array}$

$\begin{array}{c}{H}_{2}^{*}=diag\left\{\frac{0.9814}{\sqrt{{0.0186}^{2}+{0.9814}^{2}}},\frac{0.8619}{\sqrt{{0.1381}^{2}+{0.8619}^{2}}}\right\}\\ =diag\left\{0.9998,0.9874\right\}\end{array}$

6) The average information matrix

$M\left({\xi}_{N}\right)=\left[\begin{array}{cc}2.9999& 14.8260\\ 14.8260& 75.4222\end{array}\right]$
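Steps 3(3) through 3(6) chain together; the sketch below (our own illustration, not code from [10]) rebuilds the interaction vectors, the mean square error matrices, the convex-combination coefficients, and the average information matrix from scratch, matching the quoted values up to rounding.

```python
import numpy as np

# Steps 3(3)-3(6) sketch: interaction vectors I_i = M_i^{-1} X_i^T X_iI g,
# mean square error matrices M_i^{-1} + I_i I_i^T, convex-combination
# coefficients H_i, normalised H_i*, and the average information matrix
# sum_i H_i* M_i H_i*^T.
g = 4.0                                      # g = [b] for f(x) = 4x^2
Ms, Mbars = [], []
for rows in ([1, 2, 3], [4, 5, 6]):
    X = np.array([[1.0, x] for x in rows])
    XI = np.array([x * x for x in rows], dtype=float)  # interaction column x_m^2
    M = X.T @ X
    I = np.linalg.inv(M) @ X.T @ XI * g      # [-13.33, 16] and [-97.33, 40]
    Ms.append(M)
    Mbars.append(np.linalg.inv(M) + np.outer(I, I))
v = np.array([np.diag(Mb) for Mb in Mbars])  # diagonal entries (v_i11, v_i22)
H = v / v.sum(axis=0)                        # coefficients of the convex combination
Hs = H / np.sqrt((H ** 2).sum(axis=0))       # normalised so sum H_i* H_i*^T = I
M_avg = sum(np.diag(Hs[i]) @ Ms[i] @ np.diag(Hs[i]) for i in range(2))
print(M_avg)   # approximately [[3.0, 14.83], [14.83, 75.42]]
```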

Step 4: Obtain the response vector

$z=\left[\begin{array}{c}f\left(14.8260\right)\\ f\left(75.4222\right)\end{array}\right]=\left[\begin{array}{c}879.2411\\ 22754.0330\end{array}\right]$

and hence, the direction vector

$d=\left[\begin{array}{c}-42039\\ 8565\end{array}\right]$

which gives ${d}^{*}=8565$ .
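Step 4 evaluates $f\left(x\right)=4{x}^{2}$ at the second-row entries of the average information matrix and solves a 2 × 2 linear system for the direction. The sketch below (our own illustration) does so with the rounded matrix quoted above, so its components agree with −42039 and 8565 only up to rounding.

```python
import numpy as np

# Step 4 sketch: response vector z = [f(m21), f(m22)] and direction
# d = M^{-1} z, using the rounded average information matrix from the text.
M_avg = np.array([[2.9999, 14.8260], [14.8260, 75.4222]])
f = lambda x: 4 * x ** 2
z = np.array([f(M_avg[1, 0]), f(M_avg[1, 1])])   # [879.24, 22754.03]
d = np.linalg.solve(M_avg, z)
print(d)   # close to the text's [-42039, 8565], up to rounding
```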

Step 5: We now make a move to the point

${x}_{2}^{*}={x}_{1}^{*}-{\rho}_{1}{d}^{*}$

where ${\rho}_{1}$ is the step length. The value of the response function at this point is

$f\left({x}_{2}^{*}\right)=b{\left({x}_{1}^{*}-{\rho}_{1}{d}^{*}\right)}^{2}=b\left[{x}_{1}^{*2}-2{x}_{1}^{*}{\rho}_{1}{d}^{*}+{\rho}_{1}^{2}{d}^{*2}\right]$

$\frac{\text{d}f\left({x}_{2}^{*}\right)}{\text{d}{\rho}_{1}}=-2b{x}_{1}^{*}{d}^{*}+2b{\rho}_{1}{d}^{*2}=0$

which gives

${\rho}_{1}=\frac{{x}_{1}^{*}}{{d}^{*}}=0.0002260828$

since ${d}^{*}=8565$ and ${x}_{1}^{*}=1.9364$ .

Hence

${x}_{2}^{*}={x}_{1}^{*}-{\rho}_{1}{d}^{*}=1.9364-0.0002260828\left(8565\right)\cong 0$
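The numeric move in Step 5 can be checked directly; the sketch below (our own illustration) recomputes the step length, the new iterate, and the convergence-test gap $\left|f\left({x}_{2}^{*}\right)-f\left({x}_{1}^{*}\right)\right|=14.9986$ used in Step 6.

```python
# Step 5 sketch: exact step length rho_1 = x1*/d* and the resulting move,
# with f(x) = 4x^2 and the values quoted in the text.
f = lambda x: 4 * x ** 2
x1, d_star = 1.9364, 8565.0
rho1 = x1 / d_star                 # 0.0002260828...
x2 = x1 - rho1 * d_star            # essentially 0
gap = abs(f(x2) - f(x1))           # 14.9986, which exceeds epsilon = 0.0001
print(rho1, x2, gap)
```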

Step 6: Since $\left|f\left({x}_{2}^{*}\right)-f\left({x}_{1}^{*}\right)\right|=\left|0-14.9986\right|=14.9986>\epsilon =0.0001$ we make a second move as follows:

${x}_{3}^{*}={x}_{2}^{*}-8565{\rho}_{2}=0-8565{\rho}_{2}=-8565{\rho}_{2}$

and

$f\left({x}_{3}^{*}\right)=293436900{\rho}_{2}^{2}$

$\frac{\text{d}f\left({x}_{3}^{*}\right)}{\text{d}{\rho}_{2}}=586873800{\rho}_{2}=0$

which gives ${\rho}_{2}=0$ . Since ${\rho}_{2}=0$ , the second move was unnecessary, showing that the optimal solution was obtained at the first move.

Therefore,

${x}_{2}^{*}={x}_{\mathrm{min}}=0\quad \text{and}\quad f\left({x}_{\mathrm{min}}\right)=0$

4. Conclusion

We set out in this work to show that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term is located at the origin. By using the optimal designs technique for solving unconstrained optimization problems with univariate quadratic surfaces, this primary objective has been achieved. In the course of the proof, we saw that the optimum ${x}_{2}^{*}={x}_{\mathrm{min}}=0$ was obtained in just one move, with $f\left({x}_{\mathrm{min}}\right)=0$ .

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper: *American Journal of Operations Research*, **7**, 323-330. doi: 10.4236/ajor.2017.76024.

[1] Eiselt, H.A., Pederzoli, G. and Sandblom, C.L. (1987) Continuous Optimization Models. Walter de Gruyter & Co., Berlin.

[2] Taha, H.A. (2005) Operations Research: An Introduction. 7th Edition, Pearson Education, Singapore Pte. Ltd., Indian Branch, Delhi.

[3] Etukudo, I. (2017) Optimal Designs Technique for Locating the Optimum of a Second Order Response Function. American Journal of Operations Research, 7, 263-271. https://doi.org/10.4236/ajor.2017.75018

[4] Singh, S.K., Yadav, P. and Mukherjee. (2015) Line Search Techniques by Fibonacci Search. International Journal of Mathematics and Statistics Invention, 3, 27-29.

[5] Winston, W.L. (1994) Operations Research: Applications and Algorithms. 3rd Edition, Duxbury Press, Wadsworth Publishing Company, Belmont, CA.

[6] Gerald, C.F. and Wheatley, P. (2004) Applied Numerical Analysis. 7th Edition, Addison-Wesley, Boston.

[7] Taha, H.A. (2007) Operations Research: An Introduction. 8th Edition, Asoke K. Ghosh, Prentice Hall of India, Delhi.

[8] Subasi, M., Yildirim, N. and Yildirim, B. (2004) An Improvement on Fibonacci Search Method in Optimization Theory. Applied Mathematics and Computation, 147, 893-901.

[9] Hassin, R. (1981) On Maximizing Functions by Fibonacci Search.

[10] Etukudo, I.A. (2017) Optimal Designs Technique for Solving Unconstrained Optimization Problems with Univariate Quadratic Surfaces. American Journal of Computational and Applied Mathematics, 7, 33-36.

[11] Onukogu, I.B. (2002) Super Convergent Line Series in Optimal Design on Experimental and Mathematical Programming. AP Express Publisher, Nigeria.

[12] Onukogu, I.B. (1997) Foundations of Optimal Exploration of Response Surfaces. Ephrata Press, Nsukka.

[13] Etukudo, I.A. and Umoren, M.U. (2008) A Modified Super Convergent Line Series Algorithm for Solving Linear Programming Problems. Journal of Mathematical Sciences, 19, 73-88.

[14] Umoren, M.U. and Etukudo, I.A. (2010) A Modified Super Convergent Line Series Algorithm for Solving Unconstrained Optimization Problems. Journal of Modern Mathematics and Statistics, 4, 115-122. https://doi.org/10.3923/jmmstat.2010.115.122

[15] Umoren, M.U. and Etukudo, I.A. (2009) A Modified Super Convergent Line Series Algorithm for Solving Quadratic Programming Problems. Journal of Mathematical Sciences, 20, 55-66.

Copyright © 2019 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.