1. Introduction
A function is smooth (of the first order) if it is differentiable and its derivatives are continuous. The nth-order smoothness is defined analogously, that is, a function is nth-order smooth if its -derivatives are smooth. So, the infinite smoothness refers to continuous derivatives of all orders. From this perspective a non-smooth function only has a negative description―it lacks some degree of properties traditionally relied upon in analysis. One could get the impression that non-smooth optimization is a subject dedicated to overcoming handicaps which have to be faced in miscellaneous circumstances where mathematical structure might be poorer than what one would like, but this is far from right. Instead, non-smooth optimization typically deals with highly structured problems, but problems which arise differently, or are modeled or cast differently, from ones for which many of the mainline numerical methods, involving gradient vectors and Hessian matrices, have been designed. Moreover, non-smooth analysis, which refers to differential analysis in the absence of differentiability and can be regarded as a subfield of nonlinear analysis, has grown rapidly in the past decades. In fact, in recent years, non-smooth analysis has come to play a vital role in functional analysis, optimization, mechanics, differential equations, etc.
Among those who have participated in its development are Clarke [1], Ioffe [2], Mordukhovich [3] and Rockafellar [4], but many more have contributed as well. During the early 1960s there was a growing realization that a large number of optimization problems appearing in applications involved the minimization of non-differentiable functions. One of the important areas where such problems appear is optimal control. The subject of non-smooth analysis arises out of the need to develop a theory for the minimization of non-smooth functions.
Recent research on non-smooth analysis mainly focuses on Lipschitz functions. Properties of the generalized derivatives of Lipschitz functions are summarized in the following results of Clarke (1983) [1]. A function $f$ is said to be Lipschitz on a set $U$ if there is a positive real number $K$ such that
$$|f(x)-f(y)|\le K\|x-y\| \quad \text{for all } x,y\in U.$$
A function $f$ that is Lipschitz in a neighborhood of a point $x$ is not necessarily differentiable at $x$, but for locally Lipschitz functions the following expressions exist:
$$f^{+}(x;v)=\limsup_{t\downarrow 0}\frac{f(x+tv)-f(x)}{t}$$
and
$$f^{-}(x;v)=\liminf_{t\downarrow 0}\frac{f(x+tv)-f(x)}{t}.$$
The symbols $f^{+}(x;v)$ and $f^{-}(x;v)$ denote the upper Dini and lower Dini directional derivatives of $f$ at $x$ in the direction $v$. Hence, we may consider either of these derivatives as a generalized derivative for locally Lipschitz functions. However, they suffer from an important drawback: both are in general not convex in the direction. A simple example is $f(x)=-|x|$ on $\mathbb{R}$, for which $f^{+}(0;v)=-|v|$ is concave rather than convex in $v$. Thus they lack the most important property of the directional derivative of a convex function. Clarke observed that if, in the definition of the upper Dini derivative, one also moves the base point, i.e., takes the upper limit through points that converge to $x$, then one obtains a generalized directional derivative which is sublinear in the direction. Thus we arrive at the definition of the Clarke generalized directional derivative.
Definition 1.1. Let $f:\mathbb{R}^{n}\to\mathbb{R}$ be a locally Lipschitz function. Then the Clarke generalized directional derivative of $f$ at $x$ in the direction $v$ is given by
$$f^{\circ}(x;v)=\limsup_{y\to x,\; t\downarrow 0}\frac{f(y+tv)-f(y)}{t}.$$
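For intuition, the following small Python experiment estimates both quantities for the one-dimensional example $f(x)=-|x|$ at $x=0$, $v=1$, by sampling difference quotients on a grid; the function, grid sizes, and ranges are illustrative choices, not data from the paper. Moving the base point $y$, as in Definition 1.1, flips the estimate from $-1$ to $+1$.

```python
# Hedged numerical illustration (not from the paper): estimate the upper Dini
# derivative and the Clarke generalized directional derivative of f(x) = -|x|
# at x = 0 in direction v = 1 by sampling difference quotients.
import numpy as np

f = lambda x: -np.abs(x)
v = 1.0
t = np.logspace(-6, -1, 200)            # step sizes t -> 0+

# Upper Dini derivative: limsup over t of (f(x + t v) - f(x)) / t at x = 0.
dini_plus = np.max((f(0.0 + t * v) - f(0.0)) / t)

# Clarke derivative: limsup over base points y -> 0 as well as t -> 0+.
y = np.linspace(-0.1, 0.1, 401)
quot = (f(y[:, None] + t[None, :] * v) - f(y[:, None])) / t[None, :]
clarke = np.max(quot)

print(f"Dini   f^+(0;1) ~ {dini_plus:.3f}  (exact: -1)")
print(f"Clarke f^o(0;1) ~ {clarke:.3f}  (exact: +1)")
```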
Clarke also defined the following notion of a generalized gradient, or subdifferential [1].
Definition 1.2. Let $f:\mathbb{R}^{n}\to\mathbb{R}$ be locally Lipschitz. Then the Clarke generalized gradient or subdifferential of $f$ at $x$, denoted by $\partial f(x)$, is given by
$$\partial f(x)=\operatorname{co}\Big\{\lim_{i\to\infty}\nabla f(x_{i}) : x_{i}\to x,\; x_{i}\in D_{f}\Big\}.$$
Here $\nabla f$ denotes the gradient of $f$, $D_{f}$ denotes the set of points at which $f$ is differentiable, and $\operatorname{co}$ denotes the convex hull of a set. For example, for $f(x)=|x|$ on $\mathbb{R}$ one obtains $\partial f(0)=\operatorname{co}\{-1,1\}=[-1,1]$. It is shown in [5] that the mapping $x\mapsto\partial f(x)$ is upper semicontinuous and bounded on every bounded set.
However, calculating the Clarke subdifferential from first principles is not always a simple task. In this manuscript, we develop a definition of the GFD of non-smooth functions, which can be seen as a refinement of the one given in [6], and use it to derive generalized higher-order derivatives (GHODs) of non-smooth functions simultaneously.
The paper is organized as follows. Section 2 presents the basic definitions and facts needed in what follows. Our results are then stated and proved in Section 3. We develop the definitions of the GFD and GSD of non-smooth functions, which can be seen as refinements of what is given in Theorems 2.1 and 2.7, to derive GHODs of non-smooth functions simultaneously (Theorem 3.5). Section 4 presents some illustrative examples of the results of the paper.
2. GFD and GSD of Nonsmooth Functions
In this section, we present definitions and results concerning the GFD and GSD, which are needed in the remainder of this paper. To develop this approach, some tools from nonsmooth analysis are used, especially generalized derivatives. Since the early 1960s, many different generalized derivatives have been proposed, for instance by Rockafellar [4] [7], Clarke [1], Clarke et al. [8] and Mordukhovich [3] [9]-[12]. These papers and their results involve some restrictions, for example:
1) The function must be locally Lipschitz or convex.
2) One must know that the function is non-differentiable at a fixed point.
3) The generalized derivative of the function at a point is a set, which may be empty or may contain several members.
4) The directional derivative is used to introduce the generalized derivative.
5) The concepts limsup and liminf are applied to obtain the generalized derivative, and their calculation is usually hard and complicated.
6) To obtain the second derivative, the gradient of the function must first be computed, which is hard in some cases.
It is commonly recognized that these GDs are not practical and applicable for solving problems. We mainly use the new GD of Kamyad et al. [6] for nonsmooth functions. This kind of GD is particularly helpful and practical when dealing with nonsmooth continuous and discontinuous functions, and it does not have the above restrictions and difficulties.
Given a continuous function $f$ on an interval $I$, consider the following functional optimization problem:
(1)
where the admissible set and the auxiliary data are as specified in [6], and the parameters appearing in (1) are sufficiently small positive numbers.
Theorem 2.1. (Kamyad et al., 2011) Let $f$ be continuously differentiable on $I$ and let $v^{*}$ be the optimal solution of the functional optimization problem (1). Then $v^{*}=f'$.
Definition 2.2. (Kamyad et al., 2011) Let $f$ be a nonsmooth continuous function on the interval $I$ and let $v^{*}$ be the optimal solution of the functional optimization problem (1). The generalized first derivative (GFD) of $f$ is denoted by $\mathrm{GD}f$ and defined as $\mathrm{GD}f:=v^{*}$.
Remark 2.3. Note that if $f$ is a smooth function, then $\mathrm{GD}f$ as introduced in Definition 2.2 is equal to $f'$. Further, if $f$ is a nonsmooth function, then the GFD of $f$ is an approximation of the first derivative of $f$.
In what follows, problem (1) is approximated by the following finite-dimensional problem (Kamyad et al. [6]):
(2)
where $N$ is a given large number. The discretization data are chosen as in [6]; in addition, the authors choose the evaluation points arbitrarily in $I$.
By the trapezoidal and midpoint integration rules, the last problem can be written as the following problem, in which the discretized values are its unknown variables:
(3)
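As a hedged illustration of the quadrature step just mentioned, the following Python fragment applies the trapezoidal and midpoint rules to the sample integrand $|x-1/2|$ on $[0,1]$; the grid and the integrand are our own demonstration choices, not the data of problem (3).

```python
# Minimal sketch of the two quadrature rules used in the discretization.
import numpy as np

a, b, N = 0.0, 1.0, 100
x = np.linspace(a, b, N + 1)
h = (b - a) / N
f = np.abs(x - 0.5)                                      # sample integrand

trap = h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])    # trapezoidal rule
mid = h * np.abs((x[:-1] + x[1:]) / 2 - 0.5).sum()       # midpoint rule
print(trap, mid)   # both ~ 0.25 = int_0^1 |x - 1/2| dx
```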
Lemma 2.4. Let the indicated pairs of variables be an optimal solution of the following linear programming (LP) problem:
Then they yield optimal solutions of the following nonlinear programming (NLP) problem:
where I is a compact set.
Proof. See [13] .
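The device behind lemmas of this type is the classical reformulation of an absolute-value objective as a linear program. The following Python sketch uses hypothetical data; the variable split $h=u-v$ with $u,v\ge 0$ is the standard trick, offered here as an illustration rather than the exact construction of [13].

```python
# Hedged sketch: minimize sum_i |a_i^T x - b_i| as an LP by writing
# A x - b = u - v with u, v >= 0 and minimizing sum(u + v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))          # hypothetical data
b = rng.normal(size=8)
m, n = A.shape

# Decision vector z = [x, u, v]; the objective counts only u and v.
c = np.concatenate([np.zeros(n), np.ones(m), np.ones(m)])
A_eq = np.hstack([A, -np.eye(m), np.eye(m)])   # enforces A x - u + v = b
bounds = [(None, None)] * n + [(0, None)] * (2 * m)

res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds)
x = res.x[:n]
print("L1 residual at optimum:", res.fun)      # equals sum |A x - b|
```

At an optimal solution at most one of $u_{i},v_{i}$ is positive, so $u_{i}+v_{i}=|u_{i}-v_{i}|$ and the LP value equals the original absolute-value objective.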
Now, by Lemma 2.4, problem (3) can be converted to the following equivalent LP problem (see [14] [15]):
(4)
where the listed quantities, over the indicated index ranges, are the decision variables of the problem, and the remaining parameters are sufficiently small positive numbers.
Remark 2.5. Note that the parameters in (4) are sufficiently small numbers and the evaluation points can be chosen as arbitrary numbers in $I$.
Remark 2.6. Let $v^{*}$ be the optimal solution of problem (1). Then the components of the optimal solution of problem (4) approximate $v^{*}$ at the chosen points.
Based on the GFD of nonsmooth functions, Kamyad et al. [6] introduced an optimization problem, similar to problem (1), whose optimal solution is the second-order derivative of a smooth function on an interval. To attain this goal they work with a suitable class of admissible functions. In addition, suppose that two sufficiently small positive numbers are given. For a given continuous function $f$ on $I$, they defined the following functional optimization problem:
(5)
where the constraints hold for all admissible arguments, and the boundary data are expressed in terms of the GFD of $f$ (see [6]).
Theorem 2.7. Let $f$ be twice continuously differentiable on $I$ and let $w^{*}$ be the optimal solution of the functional optimization problem (5). Then $w^{*}=f''$.
Proof. See [16] .
Definition 2.8. Let $f$ be a continuous nonsmooth function on the interval $I$ and let $w^{*}$ be the optimal solution of the functional optimization problem (5). We denote the GSD of $f$ by $\mathrm{GD}^{2}f$ and define $\mathrm{GD}^{2}f:=w^{*}$.
Note that if $f$ is a smooth function then $\mathrm{GD}^{2}f$ is $f''$. Also, if $f$ is a nonsmooth function, then the GSD of $f$ is an approximation of the second-order derivative of $f$. Thus, according to the statements above for the GFD and the generalized results, problem (5) is converted to the following equivalent linear programming problem:
(6)
where the listed quantities, over the indicated index ranges, are the decision variables of the problem, and the remaining parameters are sufficiently small positive numbers.
Remark 2.9. Let the optimal solution of problem (6) be given. Then its components approximate $\mathrm{GD}^{2}f$ at the chosen points.
3. Main Approach
In this section, we propose a method based on the GD of Kamyad et al. to obtain simultaneously the GFD and GSD of smooth and nonsmooth functions, where the functions involved are Riemann integrable.
Theorem 3.1. Let $f$ satisfy the two integral conditions below for every admissible test function. Then the functions appearing in those conditions coincide with the first and second derivatives of $f$.
Proof. By Theorem 2.2 in [6], we have the required representation. Now we use integration by parts and the first condition:
(7)
Since the stated conditions hold by assumption, applying integration by parts once more, we have:
where the indicated terms are defined for each admissible index. Since, by assumption, (7) yields the required identity, the corresponding integral representation holds.
By Lemma 2.1 of [6], the first conclusion follows. Thus, by Theorem 2.2 of [6], we conclude the second.
Corollary 3.2. Suppose that the conditions of Theorem 3.1 hold. Then both conclusions of Theorem 3.1 remain valid.
Let $V$ denote the set of test functions introduced above. Then its elements satisfy the required boundary conditions. We note that $V$ is a total set in the underlying function space. Hence, for any admissible function, there exist coefficients such that it can be expanded as
Theorem 3.3. Let the two integral conditions of Theorem 3.1 hold for every element of $V$. Then the same conclusions on the first and second derivatives follow.
Theorem 3.4. Let a sufficiently small positive number be given. Then there exist coefficients such that the stated approximation bounds hold for all admissible arguments.
Proof. The result is clear by Theorem 2.4 of [6] and properties of the integral.
Let two sufficiently small positive numbers be given. For a given function $f$ on $I$, the following functional optimization problem defines the extension generalized derivative (EGD):
(8)
where the constraints hold for all admissible arguments.
Theorem 3.5. Let $f$ be twice continuously differentiable and let the pair $(v^{*},w^{*})$ be the optimal solution of the functional optimization problem (8). Then $v^{*}=f'$ and $w^{*}=f''$.
Proof. The result follows from Theorems 3.3 and 3.4.
Definition 3.6. Let $f$ be a nonsmooth continuous function on the interval $I$ and let $(v^{*},w^{*})$ be the optimal solution of the functional optimization problem (8). The extension generalized first and second derivatives (EGDs) of $f$ are denoted by $\mathrm{GD}f$ and $\mathrm{GD}^{2}f$, respectively, and defined as $\mathrm{GD}f:=v^{*}$ and $\mathrm{GD}^{2}f:=w^{*}$.
Definition 3.7. Let $f$ be a nonsmooth Lebesgue integrable function on the interval $I$ and let $(v^{*},w^{*})$ be the Lebesgue integrable optimal solutions of the functional optimization problem (8). The EGDs of $f$ are denoted by $\mathrm{GD}f$ and $\mathrm{GD}^{2}f$, respectively, and defined as $\mathrm{GD}f:=v^{*}$ and $\mathrm{GD}^{2}f:=w^{*}$.
Note that if $f$ is a smooth function, then $\mathrm{GD}f$ in Definition 3.6 is equal to $f'$ and $\mathrm{GD}^{2}f$ is equal to $f''$. Further, if $f$ is a nonsmooth function, then the GFD of $f$ is an approximation of the first derivative and the GSD of $f$ is an approximation of the second derivative of $f$.
We approximate the obtained generalized derivatives of a nonsmooth function with a Fourier series whose coefficients satisfy the usual Euler-Fourier relations. From Fourier analysis [17], the coefficients tend to zero as the frequency index grows. Hence there exists a single constant bounding all of the coefficients.
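For concreteness, a standard truncated expansion of this type on an interval $[a,b]$ reads as follows; this is a hedged reconstruction using the usual normalization, and the paper's exact choice of basis may differ:
$$
g(x)\approx \frac{a_{0}}{2}+\sum_{k=1}^{M}\left(a_{k}\cos\frac{2k\pi(x-a)}{b-a}+b_{k}\sin\frac{2k\pi(x-a)}{b-a}\right),
$$
$$
a_{k}=\frac{2}{b-a}\int_{a}^{b}g(x)\cos\frac{2k\pi(x-a)}{b-a}\,dx,\qquad
b_{k}=\frac{2}{b-a}\int_{a}^{b}g(x)\sin\frac{2k\pi(x-a)}{b-a}\,dx,
$$
and by the Riemann-Lebesgue lemma $a_{k},b_{k}\to 0$ as $k\to\infty$, so there is a constant $C$ with $|a_{k}|,|b_{k}|\le C$ for all $k$.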
Note that we can also approximate the generalized derivatives of a nonsmooth function with polynomials of degree $M$; for this purpose the truncated trigonometric expansion is replaced by a polynomial ansatz in the same way.
Hence, in what follows, by using the methods of [6], we convert problem (8) to the corresponding finite-dimensional problem:
(9)
where $N$ is a given large number. With the grid data and coefficient bounds chosen as above, and by the trapezoidal and midpoint integration rules, problem (9) can be written as the following problem:
(10)
Now, problem (10) can be converted to the following equivalent linear programming problem (see [14] [15]):
(11)
where the listed quantities, over the indicated index ranges, are the decision variables of problem (11), and the remaining parameters are sufficiently small positive numbers.
Remark 3.8. Let higher-order derivatives be required. Then, by the previous EGD and using the above method together with a Taylor expansion of order $n$, we can simultaneously calculate all derivatives of $f$ up to $n$th order.
Remark 3.9. By the definition of the EGD, each partial derivative of a multi-variable function can be treated through a one-variable section of that function. So, to obtain the EGD of a multi-variable function with respect to one of its variables, we employ the LP problem (11) and obtain the generalized first and second derivatives of the corresponding one-variable function.
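A minimal Python sketch of this reduction follows; the helper egd_1d is hypothetical, standing in for the machinery of problem (11), and the sample function is our own choice.

```python
# Hedged illustration of Remark 3.9: a partial EGD of a multi-variable
# function is obtained by freezing the other variables and differentiating
# the resulting one-variable slice. The helper `egd_1d` is hypothetical.
def slice_wrt_x(f, y_fixed):
    """Return the one-variable function t -> f(t, y_fixed)."""
    return lambda t: f(t, y_fixed)

f = lambda x, y: abs(x - 0.5) * (1.0 + y * y)   # sample nonsmooth function
g = slice_wrt_x(f, y_fixed=2.0)                  # slice in the x-direction
# egd_1d(g, a=0.0, b=1.0)   # hypothetical call: would return GD g, GD^2 g
print(g(0.25))               # the slice is an ordinary one-variable function
```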
4. Numerical Results
At this stage, we compute the GFD and GSD of smooth and nonsmooth functions in several examples using problem (11). Here the grid data and parameters are fixed as above. Problem (11) is solved for the functions in these examples using the simplex method in MATLAB. Note that in our approach the evaluation points are selected arbitrarily, and by selecting enough of these points we can cover the interval. In these examples, $\mathrm{GD}f$ denotes the GFD and $\mathrm{GD}^{2}f$ the GSD of $f$.
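To make the mechanism reproducible, the following Python sketch recovers a generalized first derivative of $f(x)=|x-0.5|$ on $[0,1]$ from the integration-by-parts identity $\int f\varphi_{j}'\,dx=-\int g\varphi_{j}\,dx$ against test functions $\varphi_{j}(x)=\sin(j\pi x)$, which vanish at the endpoints. For transparency it solves the discretized identities by least squares (numpy.linalg.lstsq) as a stand-in for the LP (11); the grid size, the number of test functions, and $f$ itself are illustrative choices, not the paper's exact data.

```python
# Hedged numerical sketch of the weak-derivative recovery behind the GFD.
import numpy as np

N, M = 201, 40                                   # grid points, test functions
x = np.linspace(0.0, 1.0, N)
w = np.full(N, x[1] - x[0])                      # trapezoidal weights
w[0] *= 0.5
w[-1] *= 0.5
f = np.abs(x - 0.5)

j = np.arange(1, M + 1)[:, None]
phi = np.sin(j * np.pi * x[None, :])             # phi_j at the grid points
dphi = j * np.pi * np.cos(j * np.pi * x[None, :])

rhs = -(dphi * f[None, :] * w[None, :]).sum(axis=1)   # - int f phi_j' dx
B = phi * w[None, :]                                  # row j: int (.) phi_j dx
g, *_ = np.linalg.lstsq(B, rhs, rcond=None)           # min-norm grid values

print("g(0.25) =", g[50], " (exact derivative: -1)")
print("g(0.75) =", g[150], "(exact derivative: +1)")
```

The recovered grid function is close to $\operatorname{sign}(x-0.5)$ away from the kink, up to the usual truncation oscillations of a finite test-function basis.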
Example 4.1. Consider the nonsmooth function illustrated in Figure 1. Figure 2 shows the GFD of this function, which is the optimal solution of optimization problem (4).
Example 4.2. Consider the smooth function illustrated in Figure 3. Figure 4 shows the GSD of this function, which has been obtained by using problem (6).
Figure 2. GFD of Example 4.1, obtained using problem (4).
Example 4.3. Consider a nonsmooth function that is not differentiable at $x=0.25,\,0.5,\,0.75$. According to problem (11), the GFD and GSD of this function are shown in Figure 5.
Example 4.4. Consider a nonsmooth function that is not differentiable at several points. This function and its EGD are shown in Figure 6.
Example 4.5. Consider a function that is non-differentiable at infinitely many points of its interval. The function and its EGD are shown in Figure 7.
5. Conclusions
We have proposed a strategy for approximating the generalized first and second derivatives of a nonsmooth function. The class of functions we treat is different from what is usually found in the literature.
Figure 4. GSD of Example 4.2, obtained using problem (6).
Figure 5. EGD of Example 4.3, obtained using problem (11).
Figure 6. EGD of Example 4.4, obtained using problem (11).
Figure 7. EGD of Example 4.5, obtained using problem (11).
In particular, the function is not required to be locally Lipschitz or convex. These generalized derivatives are computed by solving a linear programming problem whose solution provides the first and second order derivatives simultaneously. Besides simplicity and practicality, the advantages of our generalized derivatives with respect to other approaches are as follows:
1) The generalized derivative of a non-smooth function obtained by our approach does not depend on the points of non-smoothness of the function. Thus we can use this GD in many cases where we do not know the points of non-differentiability of the function.
2) The generalized derivative of a non-smooth function obtained by our approach gives a good global approximation of the derivative on the whole domain of the function, whereas in the other approaches the GDs are calculated at one given point.
3) The generalized derivative in our approach is defined for non-smooth piecewise continuous functions, whereas the other approaches are usually defined for locally Lipschitz or convex functions.
4) Our approach gives the first and second derivatives of a nonsmooth function simultaneously; moreover, the second derivative is obtained directly from the primary function, without the need to compute the first derivative.
5) The generalized derivative in our approach is valid for functions that are merely integrable; continuity and local Lipschitz continuity need not be assumed.
Indeed, we show that if $f$ is continuously differentiable and $\varphi$ is a smooth function that vanishes at the end points of the interval, then $\int f\varphi'\,dx=-\int f'\varphi\,dx$. The statement also holds in the following form, where continuity need not be assumed: if $f$ is an integrable function such that $\int f\varphi'\,dx=-\int g\varphi\,dx$ for some integrable $g$ and for all infinitely many times differentiable, or smooth, functions $\varphi$ vanishing at the end points, then $f$ agrees almost everywhere with an absolutely continuous function; moreover, $f'=g$ almost everywhere in this case.
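As a closing illustration, the following hedged Python check verifies this identity numerically for $f(x)=|x-1/2|$ on $[0,1]$, whose weak derivative is $g(x)=\operatorname{sign}(x-1/2)$; the test functions $\sin(j\pi x)$ and the grid are illustrative choices.

```python
# Hedged numerical check: int f phi' dx should equal -int g phi dx for every
# smooth phi with phi(0) = phi(1) = 0, when g is the weak derivative of f.
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]
trap = lambda y: h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoid

f = np.abs(x - 0.5)
g = np.sign(x - 0.5)
for j in (1, 2, 3):
    phi = np.sin(j * np.pi * x)                  # vanishes at both endpoints
    dphi = j * np.pi * np.cos(j * np.pi * x)
    print(j, trap(f * dphi), -trap(g * phi))     # the two columns agree
```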