Optimal Designs Technique for Locating the Optimum of a Second Order Response Function
1. Introduction
The problem of locating the optimum of a second order response function has already been addressed by a method known as response surface methodology (RSM). RSM is a collection of mathematical and statistical techniques useful for analyzing problems in which several independent variables influence a dependent variable or response. The main objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which operating requirements are satisfied [1]. See also [2] [3] [4] [5] and [6]. For instance, a chemical engineer may be interested in optimizing a process yield which is influenced by two variables: reaction time, x1, and reaction temperature, x2. The observed response can be represented as a function of the two independent variables as
y = f(x1, x2) + ε (1)

where ε is the random error term, while the expected response function is

E(y) = f(x1, x2) = η (2)
When the mathematical form of Equation (2) is not known, the expected response function can be approximated within the experimental region by a first order or a second order response function [7].
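As a concrete illustration of approximating an unknown response by a second order function, the sketch below fits the full second order model in two variables by ordinary least squares. NumPy is assumed, and the data and true surface are entirely hypothetical; this is not the paper's design or example.

```python
import numpy as np

def fit_second_order(x1, x2, y):
    """Least-squares fit of the full second order model
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical yield data from the (unknown to the experimenter) surface
# 50 + 2*x1 + 3*x2 - x1^2 - x2^2 + 0.5*x1*x2, observed with random error.
rng = np.random.default_rng(0)
x1 = rng.uniform(-2, 2, 30)
x2 = rng.uniform(-2, 2, 30)
y = 50 + 2*x1 + 3*x2 - x1**2 - x2**2 + 0.5*x1*x2 + rng.normal(0, 0.01, 30)
beta = fit_second_order(x1, x2, y)
```

The fitted coefficients recover the generating surface closely, which is the sense in which a second order polynomial approximates the expected response within the experimental region.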
According to [1], the initial estimate of the optimum operating conditions for the system is frequently far from the actual optimum. When this happens, the objective of the experimenter is to move rapidly to the general vicinity of the optimum, and the actual step size or step length is determined by the experimenter based on experience. Determining by experience or trial and error the step length that could guarantee rapid movement to the vicinity of the optimum is a pitfall. To advance the existing RSM procedure, [8] proposed a modification which fuses the Newton-Raphson and mean-centre algorithms for obtaining the optimum and for exploring near-optimal settings within the optimal region. The problem with this modification is that it retains over 90% of the steps of the previous method and then introduces several other steps, thereby increasing computer time and computer storage space, only to obtain a selection of near-optimal factor settings which is itself iterative in nature. To circumvent this pitfall, this article solves the problem by making use of the principles of optimal design of experiments. By designing an experiment optimally, we mean selecting N support points within the experimental region so that the aim of the experimenter can be realized. Unlike RSM, where the step length is obtained by trial and error, [9] modified an algorithm by [10] to solve unconstrained optimization problems using the principles of optimal design of experiments, where the step length is obtained by taking the derivative of the response function. In [9], a well-defined method of handling interactive effects in the case of quadratic surfaces has also been provided. Since this new technique is a line search algorithm, it relies on a well-defined method of determining the direction of search, as given by [11].
The algorithmic procedure given in the next section requires that the optimal support points that form the initial design matrix, obtained from the entire experimental region, be partitioned into r groups. However, [12] has shown that optimal solutions are obtained with r = 2. This method of locating the optimum of a second order response function is an exact solution method, as opposed to the iterative solution methods used in RSM and other traditional methods.
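The idea of obtaining the step length from the derivative of the response function, rather than by trial and error, can be sketched for a quadratic response f(x) = ½x'Hx + g'x: setting df(x + ρd)/dρ = 0 gives ρ = −(∇f(x)'d)/(d'Hd). The sketch below is a minimal illustration assuming NumPy; the example matrices and points are hypothetical, not taken from the paper.

```python
import numpy as np

def exact_step_length(H, g, x, d):
    """For the quadratic f(x) = 0.5*x'Hx + g'x, setting
    d f(x + rho*d)/d rho = 0 gives rho = -(grad f(x)'d) / (d'Hd)."""
    grad = H @ x + g
    return -(grad @ d) / (d @ H @ d)

# Hypothetical example: f(x) = x1^2 + x2^2 - 2*x1 - 4*x2, minimizer at (1, 2).
H = np.array([[2., 0.], [0., 2.]])
g = np.array([-2., -4.])
x0 = np.zeros(2)
grad = H @ x0 + g
d = -grad / np.linalg.norm(grad)     # normalized steepest-descent direction
rho = exact_step_length(H, g, x0, d)
x1 = x0 + rho * d
```

Here H is a multiple of the identity, so a single exact step along the steepest-descent direction lands precisely on the minimizer (1, 2); no trial-and-error tuning of the step length is involved.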
2. The Algorithm
The sequential steps involved in this algorithm are given below:
Initialization: Let the second order response function,
be defined as
Select N support points such that
where r = 2 is the number of partitioned groups and by choosing N arbitrarily, make an initial design matrix
Step 1: Compute the optimal starting point,
Step 2: Partition X into r = 2 groups and calculate
1)
2)
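Step 2's group quantities can be illustrated with the usual definition of the information matrix of a design, M_i = X_i'X_i, together with its inverse. The design below is a hypothetical stand-in (two replicates of a 2² factorial with an intercept column), not the design of the paper's numerical example.

```python
import numpy as np

# Hypothetical 2^2 factorial design with an intercept column.
X = np.array([[1., -1., -1.],
              [1., -1.,  1.],
              [1.,  1., -1.],
              [1.,  1.,  1.]])

# Partition into r = 2 groups; here the two groups are two replicates
# of the same factorial so that each group information matrix is invertible.
X1, X2 = X, X.copy()

# Information matrix of each group and its inverse (these feed the mean
# square error and average information matrices of the later steps).
M1, M2 = X1.T @ X1, X2.T @ X2
M1_inv, M2_inv = np.linalg.inv(M1), np.linalg.inv(M2)
```

Because the factorial columns are mutually orthogonal, each information matrix is diagonal, which makes the inverses immediate.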
Step 3: Calculate the following:
1) The matrices of the interaction effect of the variables, X1I and X2I
2) Interaction vector of the response parameter,
3) Interaction vectors for the groups are
4) Matrices of mean square error for the groups are
5) The Hessian matrices, Hi and normalized Hessian matrices,
6) The average information matrix,
Step 4: Obtain the response vector, z, and the direction vector, d.
Normalize d to have d*.
Step 5: Make a move to the point
for a minimization problem or
for a maximization problem where
is the step length obtained from
Step 6: Termination criterion. Is
where ε = 0.0001?
1) Yes. Stop and set
or
as the case may be.
2) No. Replace
by
and return to Step 5. If
, then implement Step 6(1).
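The computations of Steps 2-4 (interaction vectors, mean square error matrices, Hessians, average information matrix) follow [9] and [11] and are not reproduced here. As a sketch of the outer loop only (Steps 5 and 6), the following assumes a quadratic response f(x) = ½x'Hx + g'x and, purely as a stand-in for the direction vector d of Step 4, uses the Newton direction with the exact step length from the derivative of f. All names and values are hypothetical; NumPy is assumed.

```python
import numpy as np

def line_search_optimize(H, g, x0, eps=1e-4, max_iter=50):
    """Outer loop: move x_{k+1} = x_k + rho*d* with the exact step length
    rho from d f(x + rho*d)/d rho = 0 for f(x) = 0.5*x'Hx + g'x, and stop
    (Step 6) when the step length falls below eps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        grad = H @ x + g
        d = -np.linalg.solve(H, grad)       # stand-in direction (Newton)
        n = np.linalg.norm(d)
        if n == 0:
            break                           # gradient is zero: at the optimizer
        d = d / n                           # normalized direction d*
        rho = -(grad @ d) / (d @ H @ d)     # exact step length (Step 5)
        if abs(rho) < eps:
            break                           # Step 6(1): stop
        x = x + rho * d                     # Step 5: make a move
    return x

# Hypothetical quadratic with minimizer at (0.8, 1.4).
x_star = line_search_optimize(np.array([[2., 1.], [1., 3.]]),
                              np.array([-3., -5.]), [0., 0.])
```

For a quadratic response this loop terminates after a single move, because the exact step length along the Newton direction carries the iterate straight to the stationary point, after which the next computed step length is zero.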
3. Numerical Illustration
In this section, we give a numerical illustration of locating the optimum of a second order response function by the optimal designs technique.
Solution
Initialization: Given the response function,
, select N support points such that
where r = 2 is the number of partitioned groups and by choosing N arbitrarily, make an initial design matrix
Step 1: Compute the optimal starting point,
Since
then
Hence, the optimal starting point is
That is,
Step 2: Partitioning X into 2 groups with equal numbers of support points, we obtain the design matrices,
and
The respective information matrices are
and
and their inverses are
and
Step 3: Calculate the following:
1) The matrices of the interaction effect of the variables for the groups as
2) Interaction vector of the response parameter,
3) Interaction vectors for the groups are
4) Matrices of mean square error for the groups are
5) Matrices of coefficient of convex combinations of the matrices of mean square error are
and by normalizing Hi such that
, we have
6) The average information matrix is
Step 4: Obtain the response vector
where
and hence, the direction vector
and by normalizing d such that
, we have
Step 5: Obtain the step length,
from
That is,
and
Hence,
and by making a move to the next point, we have
Step 6: Since
, we make another move and replace
by
.
The new step length,
is obtained as follows:
That is,
which gives
Since the new step length,
is zero, an optimizer has been located at the first move, and hence
and
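The behaviour observed in the illustration, where the first move lands on the optimizer and the second computed step length is zero, triggering Step 6(1), can be checked on a hypothetical quadratic (not the paper's numerical example; NumPy is assumed):

```python
import numpy as np

# Hypothetical quadratic response f(x) = 0.5*x'Hx + g'x with
# minimizer x* = -H^{-1} g = (2, 2).
H = np.array([[4., 0.], [0., 2.]])
g = np.array([-8., -4.])

x = np.array([0., 0.])                   # assumed starting point
for _ in range(10):
    grad = H @ x + g
    d = -np.linalg.solve(H, grad)        # stand-in direction (Newton)
    n = np.linalg.norm(d)
    if n == 0:
        break                            # gradient zero: step length is zero
    d = d / n
    rho = -(grad @ d) / (d @ H @ d)      # exact step length
    if abs(rho) < 1e-4:
        break                            # Step 6(1): new step length is zero
    x = x + rho * d                      # Step 5: make a move
```

The first pass moves x from the origin to (2, 2); on the second pass the gradient vanishes, so the computed step length is zero and the loop stops, mirroring the one-move termination of the illustration.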
4. Conclusion
By using the optimal designs technique, we have been able to locate the optimum of a second order response function in just one move. This method circumvents a noticeable pitfall of RSM by obtaining the step length from the derivative of the response function rather than by intuition or trial and error. A numerical illustration which gives
and
in just one move compares favourably with other known methods, such as the Newton-Raphson method, which requires more than one iteration to reach the optimizer.