On Second-Order Duality in Nondifferentiable Continuous Programming
1. Introduction
Second-order duality in mathematical programming has been extensively investigated in the literature. In [1], Chen formulated a second-order dual for a constrained variational problem and established various duality results under an involved invexity-like assumption. Subsequently, Husain et al. [2] presented Mond-Weir type second-order duality for the problem of [3] and, by introducing a continuous-time version of second-order invexity and generalized second-order invexity, validated various duality results. Recently, Husain and Masoodi [4] formulated a Wolfe type dual for a nondifferentiable variational problem and proved the usual duality theorems under a second-order pseudoinvexity condition.
In this research, in order to relax the requirement of second-order pseudoinvexity, we formulate a Mond-Weir type second-order dual to a class of nondifferentiable continuous programming problems in which nondifferentiability enters through the square root of a certain quadratic form appearing in the integrand of the objective functional. The popularity of this type of problem seems to originate from the fact that, even though the objective and/or constraint functions are nonsmooth, a simple representation of the dual problem may be found. The theory of nonsmooth mathematical programming deals with more general classes of functions by means of generalized subdifferentials; however, the square root of a positive semi-definite quadratic form is one of the few nondifferentiable functions for which the sub- or quasi-differentials can be written down explicitly. Here, various duality theorems for this pair of Mond-Weir type dual problems are validated under second-order pseudoinvexity and second-order quasi-invexity conditions. A pair of Mond-Weir type dual variational problems with natural boundary values rather than fixed end points is also formulated, and the proofs of its duality results are briefly indicated. It is further shown that our second-order duality results can be considered as dynamic generalizations of the corresponding (static) second-order duality results established for nondifferentiable nonlinear programming problems by Zhang and Mond [5].
2. Pre-Requisites
Let $I = [a, b]$ be a real interval, and let $f: I \times R^n \times R^n \to R$ and $\psi: I \times R^n \times R^n \to R^m$ be twice continuously differentiable functions. In order to consider $f(t, x(t), \dot{x}(t))$, where $x: I \to R^n$ is differentiable with derivative $\dot{x}$, denote by $f_x$ and $f_{\dot{x}}$ the first-order derivatives of $f$ with respect to $x(t)$ and $\dot{x}(t)$ respectively, that is,
$$f_x = \left( \frac{\partial f}{\partial x^1}, \ldots, \frac{\partial f}{\partial x^n} \right)^T, \qquad f_{\dot{x}} = \left( \frac{\partial f}{\partial \dot{x}^1}, \ldots, \frac{\partial f}{\partial \dot{x}^n} \right)^T.$$
Denote by $f_{xx}$ the $n \times n$ Hessian matrix of $f$ with respect to $x(t)$, that is, $f_{xx} = \left( \frac{\partial^2 f}{\partial x^i \partial x^j} \right)$, $i, j = 1, 2, \cdots, n$, and by $\psi_x$ the $m \times n$ Jacobian matrix of $\psi$ with respect to $x(t)$. The symbols $f_{\dot{x}\dot{x}}$, $f_{x\dot{x}}$ and $\psi_{\dot{x}}$ have analogous representations.
Designate by $X$ the space of piecewise smooth functions $x: I \to R^n$, with the norm $\|x\| = \|x\|_\infty + \|Dx\|_\infty$, where the differentiation operator $D$ is given by
$$u = Dx \;\Longleftrightarrow\; x(t) = x(a) + \int_a^t u(s)\, ds.$$
Thus $D = \dfrac{d}{dt}$ except at discontinuities.
We incorporate the following definitions which are required in the subsequent analysis.
Definition 1. (Second-Order Invex): If there exists a vector function $\eta = \eta(t, x, u) \in R^n$, where $\eta: I \times R^n \times R^n \to R^n$ and with $\eta = 0$ at $t = a$ and $t = b$, such that for a scalar function $f(t, x, \dot{x})$ the functional $\int_I f(t, x, \dot{x})\, dt$, where $f: I \times R^n \times R^n \to R$, satisfies
$$\int_I f(t, x, \dot{x})\, dt - \int_I \left\{ f(t, u, \dot{u}) - \tfrac{1}{2}\, p(t)^T G\, p(t) \right\} dt \;\geq\; \int_I \left\{ \eta^T f_x(t, u, \dot{u}) + (D\eta)^T f_{\dot{x}}(t, u, \dot{u}) + \eta^T G\, p(t) \right\} dt,$$
then $\int_I f(t, x, \dot{x})\, dt$ is second-order invex with respect to $\eta$, where
$$G = f_{xx}(t, u, \dot{u}) - 2 D f_{x\dot{x}}(t, u, \dot{u}) + D^2 f_{\dot{x}\dot{x}}(t, u, \dot{u}),$$
and $p \in C(I, R^n)$, the space of $n$-dimensional continuous vector functions.
Definition 2. (Second-Order Pseudoinvex): If the functional $\int_I f(t, x, \dot{x})\, dt$ satisfies
$$\int_I \left\{ \eta^T f_x(t, u, \dot{u}) + (D\eta)^T f_{\dot{x}}(t, u, \dot{u}) + \eta^T G\, p(t) \right\} dt \geq 0 \;\Longrightarrow\; \int_I f(t, x, \dot{x})\, dt \geq \int_I \left\{ f(t, u, \dot{u}) - \tfrac{1}{2}\, p(t)^T G\, p(t) \right\} dt,$$
then $\int_I f(t, x, \dot{x})\, dt$ is said to be second-order pseudoinvex with respect to $\eta$.
Definition 3. (Second-Order Quasi-Invex): If the functional $\int_I f(t, x, \dot{x})\, dt$ satisfies
$$\int_I f(t, x, \dot{x})\, dt \leq \int_I \left\{ f(t, u, \dot{u}) - \tfrac{1}{2}\, p(t)^T G\, p(t) \right\} dt \;\Longrightarrow\; \int_I \left\{ \eta^T f_x(t, u, \dot{u}) + (D\eta)^T f_{\dot{x}}(t, u, \dot{u}) + \eta^T G\, p(t) \right\} dt \leq 0,$$
then $\int_I f(t, x, \dot{x})\, dt$ is said to be second-order quasi-invex with respect to $\eta$.
Remark 1. If f does not depend explicitly on t, then the above definitions reduce to those given in [5] for static cases.
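As a simple illustration of Definition 1 (an example added here for concreteness; the function and the choice of $\eta$ are ours and are not taken from [2] or [5]), let $n = 1$ and $f(t, x, \dot{x}) = x(t)^2$. Then $f_x(t, u, \dot{u}) = 2u(t)$, $f_{\dot{x}} = 0$ and $G = f_{xx} = 2$. With $\eta = x - u$ (which vanishes at $t = a$ and $t = b$ whenever $x$ and $u$ satisfy the same boundary values), the defining inequality of second-order invexity reduces pointwise to
$$x(t)^2 - u(t)^2 + p(t)^2 \;\geq\; 2u(t)\bigl(x(t) - u(t)\bigr) + 2p(t)\bigl(x(t) - u(t)\bigr),$$
that is, $\bigl(x(t) - u(t) - p(t)\bigr)^2 \geq 0$, which always holds. Hence $\int_I x(t)^2\, dt$ is second-order invex, and consequently also second-order pseudoinvex and second-order quasi-invex, with respect to this $\eta$.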
Consider the following class of nondifferentiable continuous programming problems studied in [6]:
(VP): Minimize
$$\int_I \left\{ f(t, x, \dot{x}) + \left( x(t)^T B(t)\, x(t) \right)^{1/2} \right\} dt$$
Subject to
$$x(a) = 0 = x(b),$$
$$g(t, x, \dot{x}) \leq 0, \quad t \in I,$$
$$h(t, x, \dot{x}) = 0, \quad t \in I,$$
where 1) $f$, $g$ and $h$ are twice differentiable functions from $I \times R^n \times R^n$ into $R$, $R^m$ and $R^k$ respectively, and 2) $B(t)$ is a positive semi-definite $n \times n$ matrix with $B(\cdot)$ continuous on $I$.
The following proposition gives the Fritz John optimality conditions derived by Chandra et al. [6].
Proposition 1. (Fritz John Optimality Conditions): If (VP) attains a local minimum at $\bar{x} \in X$ and if the equality constraint map $x \mapsto h(\cdot, x(\cdot), \dot{x}(\cdot))$ maps $X$ onto a closed subspace of $C(I, R^k)$, then there exist a Lagrange multiplier $\tau \in R$, piecewise smooth $\bar{y}: I \to R^m$ and $\bar{\lambda}: I \to R^k$, not all zero, and also piecewise smooth $\bar{w}: I \to R^n$ satisfying, for all $t \in I$,
$$\tau \left( f_x(t, \bar{x}, \dot{\bar{x}}) + B(t)\bar{w}(t) \right) + \bar{y}(t)^T g_x(t, \bar{x}, \dot{\bar{x}}) + \bar{\lambda}(t)^T h_x(t, \bar{x}, \dot{\bar{x}}) = D\left( \tau f_{\dot{x}}(t, \bar{x}, \dot{\bar{x}}) + \bar{y}(t)^T g_{\dot{x}}(t, \bar{x}, \dot{\bar{x}}) + \bar{\lambda}(t)^T h_{\dot{x}}(t, \bar{x}, \dot{\bar{x}}) \right),$$
$$\bar{y}(t)^T g(t, \bar{x}, \dot{\bar{x}}) = 0,$$
$$\bar{x}(t)^T B(t)\bar{w}(t) = \left( \bar{x}(t)^T B(t)\bar{x}(t) \right)^{1/2}, \qquad \bar{w}(t)^T B(t)\bar{w}(t) \leq 1,$$
$$\tau \geq 0, \qquad \bar{y}(t) \geq 0.$$
If the equality constraint map is surjective, then $\tau$ and $\bar{y}$ are not both zero. The following Schwartz inequality has been used in deriving the above optimality conditions and will also be required in the forthcoming analysis.
Lemma 1 (Schwartz Inequality): For $x(t), w(t) \in R^n$ and $B(t)$ positive semi-definite,
$$x(t)^T B(t)\, w(t) \;\leq\; \left( x(t)^T B(t)\, x(t) \right)^{1/2} \left( w(t)^T B(t)\, w(t) \right)^{1/2}, \qquad (1)$$
with equality in (1) if (and only if) $B(t)\left( x(t) - q(t)\, w(t) \right) = 0$ for some $q(t) \geq 0$.
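As a quick numerical sanity check of (1) (an illustration added here; the randomly generated test data are hypothetical and not part of the original development), the following Python sketch samples positive semi-definite matrices B and vectors x, w and verifies the inequality:

import numpy as np

rng = np.random.default_rng(0)

def check_generalized_schwartz(n=4, trials=1000, tol=1e-10):
    # Verify x'Bw <= (x'Bx)^(1/2) (w'Bw)^(1/2) for randomly generated PSD matrices B.
    for _ in range(trials):
        A = rng.standard_normal((n, n))
        B = A @ A.T                      # positive semi-definite by construction
        x = rng.standard_normal(n)
        w = rng.standard_normal(n)
        lhs = x @ B @ w
        rhs = np.sqrt(x @ B @ x) * np.sqrt(w @ B @ w)
        assert lhs <= rhs + tol, (lhs, rhs)
    return True

print(check_generalized_schwartz())      # expected output: True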
Remark 2. The Fritz John necessary optimality conditions in Proposition 1 for (VP) become the Karush-Kuhn-Tucker type optimality conditions if $\tau = 1$. For $\tau = 1$ it suffices that Slater's constraint qualification hold, i.e., that there exist a feasible $\hat{x} \in X$ with $g(t, \hat{x}(t), \dot{\hat{x}}(t)) < 0$ for all $t \in I$.
3. Mond-Weir Type Second-Order Duality
Consider the following continuous programming problem (CP), obtained by ignoring the equality constraint $h(t, x, \dot{x}) = 0$, $t \in I$, in the problem (VP):
(CP): Minimize
$$\int_I \left\{ f(t, x, \dot{x}) + \left( x(t)^T B(t)\, x(t) \right)^{1/2} \right\} dt$$
Subject to
$$x(a) = 0 = x(b), \qquad (2)$$
$$g(t, x, \dot{x}) \leq 0, \quad t \in I. \qquad (3)$$
In the spirit of Zhang and Mond [5], we formulate the following Mond-Weir type second-order dual continuous programming problem (M-WCD):
(M-WCD): Maximize
$$\int_I \left\{ f(t, u, \dot{u}) + u(t)^T B(t)\, w(t) - \tfrac{1}{2}\, p(t)^T F\, p(t) \right\} dt$$
Subject to
$$u(a) = 0 = u(b), \qquad (4)$$
$$f_u(t, u, \dot{u}) + B(t) w(t) + \left( y(t)^T g(t, u, \dot{u}) \right)_u - D\left( f_{\dot{u}}(t, u, \dot{u}) + \left( y(t)^T g(t, u, \dot{u}) \right)_{\dot{u}} \right) + (F + G)\, p(t) = 0, \quad t \in I, \qquad (5)$$
$$\int_I \left\{ y(t)^T g(t, u, \dot{u}) - \tfrac{1}{2}\, p(t)^T G\, p(t) \right\} dt \geq 0, \qquad (6)$$
$$w(t)^T B(t)\, w(t) \leq 1, \quad t \in I, \qquad (7)$$
$$y(t) \geq 0, \quad t \in I, \qquad (8)$$
where
$$F = f_{xx}(t, u, \dot{u}) - 2 D f_{x\dot{x}}(t, u, \dot{u}) + D^2 f_{\dot{x}\dot{x}}(t, u, \dot{u})$$
and
$$G = \left( y(t)^T g(t, u, \dot{u}) \right)_{xx} - 2 D \left( y(t)^T g(t, u, \dot{u}) \right)_{x\dot{x}} + D^2 \left( y(t)^T g(t, u, \dot{u}) \right)_{\dot{x}\dot{x}}.$$
If $B(t) = 0$ for all $t \in I$, then the problems (CP) and (M-WCD) constitute the pair of problems treated by Husain et al. [2].
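The term $u(t)^T B(t) w(t)$ in the dual objective and the constraint $w(t)^T B(t) w(t) \leq 1$ in (7) reflect the standard identity $(x^T B x)^{1/2} = \max\{ x^T B w : w^T B w \leq 1 \}$ for a positive semi-definite matrix $B$, which is how the square-root term of (CP) is handled without differentiability. The following Python sketch (an illustration added here, using hypothetical random data at a fixed $t$) checks this identity numerically:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = A @ A.T                              # positive semi-definite matrix B(t) at a fixed t
x = rng.standard_normal(n)

# maximize x'Bw subject to w'Bw <= 1, written as minimizing the negative objective
res = minimize(lambda w: -(x @ B @ w), np.zeros(n), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda w: 1.0 - w @ B @ w}])

print(np.sqrt(x @ B @ x), -res.fun)      # the two values should agree to solver tolerance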
Theorem 1. (Weak Duality): Assume that
(A1): $x$ is feasible for (CP) and $(u, y, w, p)$ is feasible for (M-WCD); and
(A2): $\int_I \left\{ f(t, \cdot, \cdot) + (\cdot)^T B(t)\, w(t) \right\} dt$ is second-order pseudoinvex and $\int_I y(t)^T g(t, \cdot, \cdot)\, dt$ is second-order quasi-invex with respect to the same $\eta$.
Then, infimum (CP) ≥ supremum (M-WCD).
Proof: Since $x$ is feasible for (CP) and $(u, y, w, p)$ is feasible for (M-WCD), the constraints (3) and (8) give $\int_I y(t)^T g(t, x, \dot{x})\, dt \leq 0$, and hence, by (6),
$$\int_I y(t)^T g(t, x, \dot{x})\, dt \;\leq\; \int_I \left\{ y(t)^T g(t, u, \dot{u}) - \tfrac{1}{2}\, p(t)^T G\, p(t) \right\} dt.$$
By the second-order quasi-invexity of $\int_I y(t)^T g(t, \cdot, \cdot)\, dt$, this implies
$$\int_I \left\{ \eta^T \left( y(t)^T g(t, u, \dot{u}) \right)_u + (D\eta)^T \left( y(t)^T g(t, u, \dot{u}) \right)_{\dot{u}} + \eta^T G\, p(t) \right\} dt \leq 0,$$
and integrating by parts, using $\eta = 0$ at $t = a$ and $t = b$, this implies
$$\int_I \eta^T \left\{ \left( y(t)^T g(t, u, \dot{u}) \right)_u - D \left( y(t)^T g(t, u, \dot{u}) \right)_{\dot{u}} + G\, p(t) \right\} dt \leq 0,$$
which by using the equality constraint (5) yields
$$\int_I \eta^T \left\{ f_u(t, u, \dot{u}) + B(t) w(t) - D f_{\dot{u}}(t, u, \dot{u}) + F\, p(t) \right\} dt \geq 0.$$
By integration by parts, again using $\eta = 0$ at $t = a$ and $t = b$, from this we have
$$\int_I \left\{ \eta^T \left( f_u(t, u, \dot{u}) + B(t) w(t) \right) + (D\eta)^T f_{\dot{u}}(t, u, \dot{u}) + \eta^T F\, p(t) \right\} dt \geq 0.$$
This, because of the second-order pseudoinvexity of $\int_I \left\{ f(t, \cdot, \cdot) + (\cdot)^T B(t) w(t) \right\} dt$, implies
$$\int_I \left\{ f(t, x, \dot{x}) + x(t)^T B(t) w(t) \right\} dt \;\geq\; \int_I \left\{ f(t, u, \dot{u}) + u(t)^T B(t) w(t) - \tfrac{1}{2}\, p(t)^T F\, p(t) \right\} dt.$$
Since $w(t)^T B(t) w(t) \leq 1$ by (7), the generalized Schwartz inequality (1) gives $x(t)^T B(t) w(t) \leq \left( x(t)^T B(t) x(t) \right)^{1/2}$, and the above inequality yields
$$\int_I \left\{ f(t, x, \dot{x}) + \left( x(t)^T B(t) x(t) \right)^{1/2} \right\} dt \;\geq\; \int_I \left\{ f(t, u, \dot{u}) + u(t)^T B(t) w(t) - \tfrac{1}{2}\, p(t)^T F\, p(t) \right\} dt.$$
This implies infimum (CP) ≥ supremum (M-WCD).
Theorem 2. (Strong Duality): If $\bar{x}$ is an optimal solution of (CP) and is also normal, then there exist piecewise smooth functions $\bar{y}: I \to R^m$ and $\bar{w}: I \to R^n$ such that $(\bar{x}, \bar{y}, \bar{w}, p = 0)$ is a feasible solution of (M-WCD) and the two objective values are equal. Furthermore, if the hypotheses of Theorem 1 hold, then $(\bar{x}, \bar{y}, \bar{w}, p = 0)$ is an optimal solution of the problem (M-WCD).
Proof: Since $\bar{x}$ is optimal and normal, from Proposition 1 (applied with $\tau = 1$ and the equality constraint absent) there exist piecewise smooth functions $\bar{y}: I \to R^m$ and $\bar{w}: I \to R^n$ such that, for all $t \in I$,
$$f_x(t, \bar{x}, \dot{\bar{x}}) + B(t)\bar{w}(t) + \bar{y}(t)^T g_x(t, \bar{x}, \dot{\bar{x}}) = D\left( f_{\dot{x}}(t, \bar{x}, \dot{\bar{x}}) + \bar{y}(t)^T g_{\dot{x}}(t, \bar{x}, \dot{\bar{x}}) \right), \qquad (9)$$
$$\bar{y}(t)^T g(t, \bar{x}, \dot{\bar{x}}) = 0, \qquad (10)$$
$$\bar{x}(t)^T B(t) \bar{w}(t) = \left( \bar{x}(t)^T B(t) \bar{x}(t) \right)^{1/2}, \qquad (11)$$
$$\bar{w}(t)^T B(t) \bar{w}(t) \leq 1, \qquad (12)$$
$$\bar{y}(t) \geq 0. \qquad (13)$$
With $p(t) = 0$, $t \in I$, the relation (9) shows that the equality constraint (5) of (M-WCD) holds, while (10) gives $\int_I \bar{y}(t)^T g(t, \bar{x}, \dot{\bar{x}})\, dt = 0$, so that (6) holds; (12) and (13) are precisely (7) and (8).
Hence $(\bar{x}, \bar{y}, \bar{w}, p = 0)$ satisfies the constraints of the problem (M-WCD). Using (11), we have
$$\int_I \left\{ f(t, \bar{x}, \dot{\bar{x}}) + \left( \bar{x}(t)^T B(t) \bar{x}(t) \right)^{1/2} \right\} dt = \int_I \left\{ f(t, \bar{x}, \dot{\bar{x}}) + \bar{x}(t)^T B(t) \bar{w}(t) \right\} dt,$$
that is, the objective values of (CP) and (M-WCD) are equal at $\bar{x}$ and $(\bar{x}, \bar{y}, \bar{w}, p = 0)$, respectively.
In view of the hypotheses of Theorem 1, this implies that $(\bar{x}, \bar{y}, \bar{w}, p = 0)$ is an optimal solution of (M-WCD).
Theorem 3. (Converse Duality): Assume that
(A1): $(\bar{u}, \bar{y}, \bar{w}, \bar{p})$ is an optimal solution of (M-WCD);
(A2): the vectors $\{F_i, G_i,\ i = 1, 2, 3, \cdots, n\}$ are linearly independent, where $F_i$ and $G_i$ are the $i$th rows of $F$ and $G$ respectively;
(A3): …; and
(A4): either … and …, or … and ….
Then $\bar{u}(t)$ is feasible for (CP) and the two objective functionals have the same value. Also, if Theorem 1 holds for all feasible solutions of (CP) and (M-WCD), then $\bar{u}$ is an optimal solution of (CP).
Proof: Since $(\bar{u}, \bar{y}, \bar{w}, \bar{p})$ is an optimal solution of (M-WCD), by Proposition 1 there exist scalars $\tau$ and $\gamma$ and piecewise smooth functions $\theta: I \to R^n$ and $\eta: I \to R^m$ such that the following Fritz John optimality conditions are satisfied at this solution:
(14)
(15)
(16)
(17)
(18)
(19)
(20)
(21)
(22)
Using the hypothesis (A1), Equation (3) yields
(23)
(24)
Using (5), (23) and (24) in (14), we have
(25)
Let $\gamma = 0$; then (24) implies $\theta(t) = 0$, $t \in I$, and (10) implies $\tau p(t) = 0$, $t \in I$. Thus (25) gives
(26)
This, because of the hypothesis (A5), gives $\tau = 0$. The equation (15) implies $\eta(t) = 0$, $t \in I$. Using $\tau = 0$ and $\theta(t) = 0$, $t \in I$, in (17), we have $\theta(t)^T B(t) w(t) = 0$, $t \in I$, which together with (20) yields …
Consequently, $(\tau, \gamma, \theta(t), \eta(t)) = 0$, $t \in I$, a contradiction to (22). Hence $\tau = \gamma > 0$.
Premultiplying (15) by y(t) and using (19), we have

Using (18), this gives

which reduces to

This, in view of the hypothesis (A4), implies $p(t) = 0$, $t \in I$.
Consequently (23) or (24) gives $\theta(t) = 0$, $t \in I$.
Using $\theta(t) = 0$ along with $\tau > 0$, (17) implies
(27)
Hence the Schwartz inequality (1) along with (27) gives
(28)
If
Then
.
So, (28) gives,
(29)
If ϕ(t) = 0, then (27) implies B(t)x(t) = 0, t ∈ I.
So we still obtain

Therefore from (29) and p(t) = 0, we have

If, for all feasible $(x, u, y, w, p)$, $\int_I \left\{ f(t, \cdot, \cdot) + (\cdot)^T B(t) w(t) \right\} dt$ is second-order pseudoinvex and $\int_I y(t)^T g(t, \cdot, \cdot)\, dt$ is second-order quasi-invex with respect to the same $\eta$, then by Theorem 1 it follows that $\bar{u}$ is an optimal solution of the problem (CP).
Theorem 4. (Strict Converse Duality): Assume that
(C1): $\int_I \left\{ f(t, \cdot, \cdot) + (\cdot)^T B(t) \bar{w}(t) \right\} dt$ is second-order strictly pseudoinvex and $\int_I \bar{y}(t)^T g(t, \cdot, \cdot)\, dt$ is second-order quasi-invex with respect to the same $\eta$; and
(C2): $\bar{x}$ is an optimal solution of (CP).
If $(\bar{u}, \bar{y}, \bar{w}, \bar{p})$ is an optimal solution of (M-WCD), then $\bar{u} = \bar{x}$; that is, $\bar{u}$ is an optimal solution of (CP) and the two objective values are equal.
Proof: We assume that $\bar{u} \neq \bar{x}$ and exhibit a contradiction. Since $\bar{x}$ is an optimal solution of (CP), it follows from Theorem 2 that there exist piecewise smooth $y$ and $w$ such that $(\bar{x}, y, w, p = 0)$ is an optimal solution of (M-WCD) whose objective value equals that of (CP) at $\bar{x}$. Since $(\bar{u}, \bar{y}, \bar{w}, \bar{p})$ is also an optimal solution of (M-WCD), it follows that
$$\int_I \left\{ f(t, \bar{x}, \dot{\bar{x}}) + \left( \bar{x}(t)^T B(t) \bar{x}(t) \right)^{1/2} \right\} dt = \int_I \left\{ f(t, \bar{u}, \dot{\bar{u}}) + \bar{u}(t)^T B(t) \bar{w}(t) - \tfrac{1}{2}\, \bar{p}(t)^T F\, \bar{p}(t) \right\} dt,$$
and hence, by the Schwartz inequality (1) together with $\bar{w}(t)^T B(t) \bar{w}(t) \leq 1$,
$$\int_I \left\{ f(t, \bar{x}, \dot{\bar{x}}) + \bar{x}(t)^T B(t) \bar{w}(t) \right\} dt \;\leq\; \int_I \left\{ f(t, \bar{u}, \dot{\bar{u}}) + \bar{u}(t)^T B(t) \bar{w}(t) - \tfrac{1}{2}\, \bar{p}(t)^T F\, \bar{p}(t) \right\} dt.$$
This, because of the second-order strict pseudoinvexity of $\int_I \left\{ f(t, \cdot, \cdot) + (\cdot)^T B(t) \bar{w}(t) \right\} dt$ and $\bar{x} \neq \bar{u}$, gives
$$\int_I \left\{ \eta^T \left( f_u(t, \bar{u}, \dot{\bar{u}}) + B(t) \bar{w}(t) \right) + (D\eta)^T f_{\dot{u}}(t, \bar{u}, \dot{\bar{u}}) + \eta^T F\, \bar{p}(t) \right\} dt < 0. \qquad (30)$$
Also, from the constraints of (CP) and (M-WCD),
$$\int_I \bar{y}(t)^T g(t, \bar{x}, \dot{\bar{x}})\, dt \;\leq\; 0 \;\leq\; \int_I \left\{ \bar{y}(t)^T g(t, \bar{u}, \dot{\bar{u}}) - \tfrac{1}{2}\, \bar{p}(t)^T G\, \bar{p}(t) \right\} dt.$$
From the second-order quasi-invexity of $\int_I \bar{y}(t)^T g(t, \cdot, \cdot)\, dt$, the above inequality implies
$$\int_I \left\{ \eta^T \left( \bar{y}(t)^T g(t, \bar{u}, \dot{\bar{u}}) \right)_u + (D\eta)^T \left( \bar{y}(t)^T g(t, \bar{u}, \dot{\bar{u}}) \right)_{\dot{u}} + \eta^T G\, \bar{p}(t) \right\} dt \leq 0. \qquad (31)$$
Combining (30) and (31) and integrating by parts (using $\eta = 0$ at $t = a$ and $t = b$), we obtain
$$\int_I \eta^T \left\{ f_u(t, \bar{u}, \dot{\bar{u}}) + B(t)\bar{w}(t) + \left( \bar{y}(t)^T g(t, \bar{u}, \dot{\bar{u}}) \right)_u - D\left( f_{\dot{u}}(t, \bar{u}, \dot{\bar{u}}) + \left( \bar{y}(t)^T g(t, \bar{u}, \dot{\bar{u}}) \right)_{\dot{u}} \right) + (F + G)\, \bar{p}(t) \right\} dt < 0,$$
which contradicts the equality constraint (5) of (M-WCD). Hence $\bar{u} = \bar{x}$.
4. Natural Boundary Values
In this section, we formulate a pair of nondifferentiable second-order dual variational problems with natural boundary values rather than fixed end points.
(CP0): Minimize
$$\int_I \left\{ f(t, x, \dot{x}) + \left( x(t)^T B(t)\, x(t) \right)^{1/2} \right\} dt$$
Subject to
$$g(t, x, \dot{x}) \leq 0, \quad t \in I.$$
(CD0): Maximize
$$\int_I \left\{ f(t, u, \dot{u}) + u(t)^T B(t)\, w(t) - \tfrac{1}{2}\, p(t)^T F\, p(t) \right\} dt$$
Subject to the constraints (5)-(8) of (M-WCD) together with
$$\left. \left( f_{\dot{u}}(t, u, \dot{u}) + y(t)^T g_{\dot{u}}(t, u, \dot{u}) \right) \right|_{t=a} = 0, \qquad (32)$$
$$\left. \left( f_{\dot{u}}(t, u, \dot{u}) + y(t)^T g_{\dot{u}}(t, u, \dot{u}) \right) \right|_{t=b} = 0. \qquad (33)$$
The conditions (32) and (33) are popularly known as natural boundary conditions in calculus of variations.
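For example (a standard fact from the calculus of variations, recalled here for the reader's convenience), if the end value $x(b)$ is left free in the problem of minimizing $\int_I f(t, x, \dot{x})\, dt$, the first variation contains the boundary term $\eta(b)^T f_{\dot{x}}(b, x(b), \dot{x}(b))$ with $\eta(b)$ arbitrary, so stationarity forces the natural boundary condition
$$f_{\dot{x}}(t, x, \dot{x}) \big|_{t=b} = 0,$$
and similarly at $t = a$; the conditions (32) and (33) play this role for the dual pair formulated above.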
We shall not repeat the proofs of theorems of the preceding section for these problems as these proofs follow analogously except with some slight modifications.
5. Nondifferentiable Nonlinear Programming Problems
If all functions in the problems (CP0) and (CD0) are independent of $t$ and $b - a = 1$, then these problems reduce to the following pair of dual nondifferentiable nonlinear programming problems treated by Zhang and Mond [5]:
(NP): Minimize $f(x) + \left( x^T B x \right)^{1/2}$
Subject to
$$g(x) \leq 0.$$
(ND): Maximize $f(u) + u^T B w - \tfrac{1}{2}\, p^T \nabla^2 f(u)\, p$
Subject to
$$\nabla f(u) + B w + \nabla \left( y^T g(u) \right) + \left( \nabla^2 f(u) + \nabla^2 y^T g(u) \right) p = 0,$$
$$y^T g(u) - \tfrac{1}{2}\, p^T \nabla^2 y^T g(u)\, p \geq 0,$$
$$w^T B w \leq 1, \qquad y \geq 0,$$
where $\nabla^2 f(u)$ and $\nabla^2 y^T g(u)$ are, respectively, the Hessians of $f$ and $y^T g$ at $u$, the static counterparts of $F$ and $G$.
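As a small worked instance of (NP) (the data below are hypothetical and chosen only for illustration), take $f(x) = \|x - c\|^2$ with $c = (1, 2)^T$, $B = I$ and $g(x) = x_1 + x_2 - 1$, so that the objective $f(x) + (x^T B x)^{1/2} = \|x - c\|^2 + \|x\|$ is nondifferentiable at $x = 0$. A direct numerical solution in Python:

import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0])

def primal_objective(x):
    # f(x) + (x'Bx)^(1/2) with f(x) = ||x - c||^2 and B = I
    return np.sum((x - c) ** 2) + np.sqrt(x @ x)

res = minimize(primal_objective, np.array([0.2, 0.2]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}])
print(res.x, res.fun)                    # approximate minimizer and optimal value of this (NP) instance

By weak duality, the objective value of any feasible solution of (ND) gives a lower bound on the value computed here.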
6. Conclusions
In this research, we have discussed a class of nondifferentiable continuous programming problems treated in [6] and formulated a Mond-Weir type second-order dual variational problem in the spirit of the dual given by Zhang and Mond [5] for a nondifferentiable nonlinear programming problem.
Under second-order pseudoinvexity and second-order quasi-invexity, we established weak, strong, strict converse and converse duality theorems. When the functions occurring in the formulations of the problems do not depend explicitly on t, our results reduce to those of Zhang and Mond [5].
Thus our results can be regarded as dynamic generalizations of the results in [5]. The problems of this research may also be investigated in a multiobjective setting.