Fractionalization of a Class of Semi-Linear Differential Equations

Abstract

The dynamics of a fractionalized semi-linear scalar differential equation with a Caputo fractional derivative is considered. By using a symbolic operational method, a fractional order initial value problem is converted into an equivalent Volterra integral equation of the second kind. A brief discussion is included to show that fractional order derivatives and integrals incorporate a fading memory (also known as long memory) and that the order of the fractional derivative can be regarded as an index of memory. A variation of constants formula is established for the fractionalized version, and it is shown by using the Fourier integral theorem that this formula reduces to that of the integer order differential equation as the fractional order approaches an integer. The global existence of a unique solution and the global Mittag-Leffler stability of an equilibrium are established by exploiting the complete monotonicity of the one and two parameter Mittag-Leffler functions. The method and the analysis employed in this article can be used for the study of more general systems of fractional order differential equations.

Share and Cite:

Leung, I. and Gopalsamy, K. (2017) Fractionalization of a Class of Semi-Linear Differential Equations. Applied Mathematics, 8, 1715-1744. doi: 10.4236/am.2017.811124.

1. Introduction

Fractional differential equations are those in which an unknown function appears under a fractional (or non-integer) order differentiation operator. Interest in the applications of fractional order differential equations has been increasing steadily during the last few decades. Differential equations of fractional (non-integer) order and the related fractional integrals have found applications in diverse areas such as chemistry, biology, physics, finance, pharmacology, electrical circuits, heat transfer, diffusion processes, material science and viscoelasticity. One of the main advantages of a fractional (non-integer) order derivative is that it provides a mechanism for the incorporation of memory and hereditary properties of various materials and phenomena, and this aspect is briefly considered in the next section. While an integer order derivative of a function reflects only the local behaviour of the function, a fractional (non-integer) order derivative depends on the previous history of the function values (see for instance the books and survey articles in the literature).

Fractionalization of a dynamical system governed by integer order differential equations means a reformulation of such a system as a system containing one or more fractional order (non-integer order) derivatives. In the analysis of the dynamical characteristics of a fractionalized system, we have to choose a suitable definition of a fractional derivative. In fractional calculus there are several definitions (see the next section below) of a fractional derivative, usually named after pioneers of fractional calculus; to name a few, we have the Grünwald-Letnikov, Riemann-Liouville, Weyl, Riesz and Caputo derivatives. In integer order calculus the derivative of a constant is zero. We have a similar result for the Caputo derivative, but not so in some other cases. Also, in initial value problems of integer order calculus, the initial values correspond to the respective initial values of the state variables, and this is the case for initial value problems with Caputo fractional derivatives. However, this is not the case for initial value problems with Riemann-Liouville fractional derivatives. Riemann-Liouville derivatives are widely studied in the mathematical literature, due mainly to their role in the development of fractional calculus. We note that the fractional order Riemann-Liouville derivative of a constant is not zero. We remark that when the initial values of the state variables are zero, the Caputo fractional derivative of a function is the same as the Riemann-Liouville derivative, under some smoothness assumptions on the functions involved.

The purpose of this article is to formulate a fractionalized version of a simple dynamical system defined by a scalar semi-linear integer order differential equation. While there are many ways in which one can fractionalize an integer order differential equation, we propose one of the simplest methods, by which an ordinary integer order derivative is replaced by a fractional (non-integer) order one (more details are presented in section 5 below), and then study the global existence of a unique solution along with the stability of its equilibrium solution when it exists. Our method is based on an application of a symbolic operational calculus to derive an analogue of the variation of constants formula well known in the theory of integer order differential equations. Such a variation of constants formula for the fractionalized initial value problem is shown to reduce to the well known formula of the integer order system when the fractional order approaches an integer; we use the Fourier integral theorem for this purpose. We apply a novel method for the existence and stability of solutions by exploiting the complete monotonicity of the relevant Mittag-Leffler functions. In the opinion of these authors, this appears to be the first time that such a method has been proposed in the study of the stability of fractional (non-integer) order differential equations; we remark that a comparison principle and the positivity of certain Mittag-Leffler functions have been used elsewhere in stability considerations of fractional differential equations.

2. Some Preliminaries

In this section we recall and collect, for the convenience of the reader, a number of definitions and properties of the fractional order derivatives used in the analysis of our fractionalized differential equation of interest. For more details of fractional order differential equations and their applications we refer to the literature cited in the references.

Definition 1 The fractional integral of order $\alpha >0$ of a Lebesgue measurable function $f:\left[0,\infty \right)\to \left(-\infty ,\infty \right)$ is defined by the operator ${J}^{\alpha }$ given by

${J}^{\alpha }f\left(t\right)=\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}f\left(s\right)\text{d}s,\text{\hspace{0.17em}}\text{\hspace{0.17em}}t>0,\text{\hspace{0.17em}}\alpha >0.$ (1)

provided the integral exists pointwise on $\left[0,\infty \right)$ where $\Gamma \left(.\right)$ denotes the gamma function.
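For readers who wish to experiment numerically, the integral in (1) can be approximated by a product-rectangle rule in which the singular kernel is integrated exactly on each subinterval. The following sketch is our illustration (not part of the original analysis); it checks the elementary closed form ${J}^{\alpha }1={t}^{\alpha }/\Gamma \left(\alpha +1\right)$.

```python
import math

def frac_integral(f, t, alpha, n=1000):
    """Approximate the fractional integral J^alpha f(t) of Definition 1.

    Product-rectangle rule: f is frozen at the midpoint of each subinterval
    and the kernel (t - s)^(alpha - 1) is integrated exactly there, which
    handles the integrable singularity at s = t.
    """
    h = t / n
    total = 0.0
    for j in range(n):
        s0, s1 = j * h, (j + 1) * h
        # exact integral of (t - s)^(alpha - 1) over [s0, s1]
        w = ((t - s0) ** alpha - (t - s1) ** alpha) / alpha
        total += w * f(0.5 * (s0 + s1))
    return total / math.gamma(alpha)

# Check against the closed form J^alpha 1 = t^alpha / Gamma(alpha + 1)
t, alpha = 2.0, 0.5
exact = t ** alpha / math.gamma(alpha + 1.0)
approx = frac_integral(lambda s: 1.0, t, alpha)
```

For a constant integrand the rule is exact up to rounding, since the only approximation made is in freezing $f$ on each subinterval.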

We remark that a sentence similar to “fractional order integrals and fractional order derivatives provide excellent instruments for the description of memory and hereditary properties of various materials and processes” appears in numerous publications. The precise meaning of such a statement is seldom made clear, and so we provide the following discussion of the memory aspects of the fractional integral operator.

The existence of memory in a dynamic system means that if the state of the system at time $s$ is given by $f\left(s\right)$, then the evolution of the system at a later time $t>s$ depends on the past states through a convolution integral of the form

$F\left(t\right)={\int }_{0}^{t}\text{ }\text{ }m\left(t-s\right)f\left(s\right)\text{d}s$

in which $m\left(.\right)$ is known as the memory kernel. We can write the fractional integral ${J}^{\alpha }f\left(t\right)$ in (1) in the form

${J}^{\alpha }f\left(t\right)={\int }_{0}^{t}\frac{{s}^{\alpha -1}}{\Gamma \left(\alpha \right)}f\left(t-s\right)\text{d}s={\int }_{0}^{t}\text{ }\text{ }{\Psi }_{\alpha }\left(s\right)f\left(t-s\right)\text{d}s$

which expresses the integral as a weighted average of past states $f\left(t-s\right)$. The weight or memory kernel ${\Psi }_{\alpha }\left(s\right)$ changes with the order $\alpha$ of the integral operator. When $\alpha >1$ and $s$ is large with $t-s>0$, the weight ${\Psi }_{\alpha }\left(s\right)$ is large, implying that the distant past states enhance or increase the memory; this is not natural in realistic systems in the opinion of these authors. On the other hand, if $\alpha \in \left(0,1\right)$, the distant past effects, with $s$ large and $t-s>0$, are given smaller weights in comparison with the more recent ones. This illustrates that when $\alpha \in \left(0,1\right)$ the memory is fading and the kernel demonstrates a “fading memory”. When $\alpha =1$ the memory kernel is identically equal to one, indicating that the integral ${J}^{1}f\left(t\right)$ has perfect memory, giving the same weight to all previous states in the integral. Now when the order $\alpha$ of the integral tends to zero, the singularity of the term ${s}^{\alpha -1}$ near zero is neutralized by the growth of the Gamma function $\Gamma \left(\alpha \right)$ as $\alpha \to 0$. Thus as $\alpha \to 0$, ${\Psi }_{\alpha }\left(s\right)$ tends to a Dirac delta function $\delta \left(s\right)$, leading to

$\underset{\alpha \to 0}{\mathrm{lim}}{J}^{\alpha }f\left(t\right)={\int }_{0}^{t}\text{ }\text{ }\delta \left(s\right)f\left(t-s\right)\text{d}s=f\left(t\right)$

which indicates that when $\alpha \to 0$ the system retains no memory of its past history except the current state. We have already seen that when $\alpha \in \left(0,1\right)$ the system has partial or fading memory, in the sense that at each instant the current state is given the maximum weight and the past states are given weights decreasing like the power law ${s}^{\alpha -1}$ in the lag $s$. We conclude our discussion of the memory effects of a fractional integral with the comment that the order of a fractional integral is an “index of memory” of the respective fractional operator, and the order $\alpha \in \left(0,1\right)$ interpolates between a perfect memory ( $\alpha =1$ ) and absolutely no memory ( $\alpha =0$ ) of the dynamic process.
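The fading-memory behaviour of the kernel ${\Psi }_{\alpha }\left(s\right)={s}^{\alpha -1}/\Gamma \left(\alpha \right)$ is easy to see numerically. The sketch below is our illustration (the chosen lags are arbitrary): it tabulates the weights for $\alpha =1/2$, which decay with the lag, against the constant weight obtained at $\alpha =1$.

```python
import math

def memory_weight(s, alpha):
    """Weight Psi_alpha(s) = s^(alpha - 1) / Gamma(alpha) given to the
    state at lag s in the fractional integral J^alpha."""
    return s ** (alpha - 1.0) / math.gamma(alpha)

lags = [0.5, 1.0, 2.0, 4.0, 8.0]
fading = [memory_weight(s, 0.5) for s in lags]   # decays with the lag
perfect = [memory_weight(s, 1.0) for s in lags]  # constant weight 1
```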

Definition 2 The Caputo fractional derivative of order $\alpha$, $0<\alpha <1$, is defined by the operator ${D}_{C}^{\alpha }\left(.\right)$ where

${D}_{C}^{\alpha }f\left(t\right)=\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{-\alpha }{f}^{\left(1\right)}\left(s\right)\text{ }\text{d}s,\text{ }t>0,\text{ }{f}^{\left(1\right)}\left(s\right)=\frac{\text{d}f\left(s\right)}{\text{d}s}.$ (2)

We note that the Caputo derivative of a constant is zero, as is the case with integer order derivatives. One requires the integrability of the first integer order derivative for the existence of the Caputo derivative. This is not the case for another fractional order derivative known as the Riemann-Liouville fractional derivative ${D}_{RL}^{\alpha }f\left(t\right)$ defined by

${D}_{RL}^{\alpha }f\left(t\right)=\frac{1}{\Gamma \left(1-\alpha \right)}\frac{\text{d}}{\text{d}t}{\int }_{0}^{t}{\left(t-s\right)}^{-\alpha }f\left(s\right)\text{d}s,\text{ }t>0,\text{ }\alpha \in \left(0,1\right).$ (3)

The Riemann-Liouville derivative is more general in that a function need not even be continuous for the existence of this derivative. We will not consider the Riemann-Liouville derivative in this article. It is seen from the definitions of fractional derivatives that the expressions contain convolution integrals with power law kernels; such convolutions account for the memory effects of the processes modelled by the fractional derivatives. Integer order derivatives have a local character, whereas fractional order derivatives have a non-local nature in the sense that the derivative at some value of the independent variable depends on the past history of the function values. This is one of the major differences between a fractional derivative and its integer order counterpart. As mentioned in the introduction, fractional order derivatives incorporate memory into the systems they represent. By memory we mean that a current state of a system depends on some or all of the previous states of the system, with certain weights attached to the past states. Fractional derivatives with respect to time are characterized by the power law decay of memory manifested by the kernels in the convolution integrals in their definitions. For example, in the definition of the Caputo derivative in (2) the memory kernel is given by $\frac{{t}^{-\alpha }}{\Gamma \left(1-\alpha \right)}$ and this corresponds to a fading memory, since the kernel decays with time. Thus a fractional derivative incorporates a fading memory, although the remnants of the memory linger on forever with decreasing intensity; this has been interpreted as fractional derivatives having infinite memory. Memory is a characteristic of all living organisms including humans; however, many (non-living) materials also have some kind of memory of their past behaviour, and this has been examined in an article entitled “Dead matter has memory”. We will now show that the fractional order $\alpha \in \left(0,1\right)$ indicates the strength of memory, in the sense that when $\alpha \to 1-$ the fractional order derivative tends to the integer order derivative, which has no memory.

We shall briefly examine the behavior of $\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}$ as $\alpha \to 1-$ where $\alpha \in \left(0,1\right)$ . We suppose that $\frac{{\text{d}}^{2}x\left(t\right)}{\text{d}{t}^{2}}$ exists for $t>0$ in the following derivation.

$\begin{array}{c}\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{-\alpha }{x}^{\left(1\right)}\left(s\right)\text{d}s,\text{ }t>0\\ =\frac{{t}^{1-\alpha }}{\Gamma \left(2-\alpha \right)}{x}^{\left(1\right)}\left(0\right)+\frac{1}{\Gamma \left(2-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{1-\alpha }{x}^{\left(2\right)}\left(s\right)\text{d}s.\end{array}$ (4)

Since the integrand in the integral of (4) is not singular, we have the following:

$\underset{\alpha \to 1-}{\mathrm{lim}}\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=\underset{\alpha \to 1-}{\mathrm{lim}}\left[\frac{{t}^{1-\alpha }{x}^{\left(1\right)}\left(0\right)}{\Gamma \left(2-\alpha \right)}+\frac{1}{\Gamma \left(2-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{-\alpha +1}{x}^{\left(2\right)}\left(s\right)\text{d}s\right]$ (5)

$={x}^{\left(1\right)}\left(0\right)+{\int }_{0}^{t}\text{ }\text{ }{x}^{\left(2\right)}\left(s\right)\text{d}s$ (6)

$=\frac{\text{d}x\left(t\right)}{\text{d}t},\text{ }t>0.$ (7)

Thus if the fractional order $\alpha$ corresponds to the memory in the fractional derivative, such memory is lost when $\alpha \to 1-$ as the fractional derivative becomes an integer order derivative. It is not known to the authors how to relax the restrictive requirement of the existence of the second integer order derivative in the above derivation of the limiting consistency of the fractional order derivative.
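The limiting behaviour in (5)-(7) can be illustrated on a concrete function: for $x\left(t\right)={t}^{2}$, applying (2) to ${x}^{\left(1\right)}\left(s\right)=2s$ gives the closed form ${D}_{C}^{\alpha }{t}^{2}=2{t}^{2-\alpha }/\Gamma \left(3-\alpha \right)$, which tends to $2t$, the integer order derivative, as $\alpha \to 1-$. The short sketch below is our numerical illustration of this limit.

```python
import math

def caputo_t2(t, alpha):
    """Caputo derivative of x(t) = t^2, from the closed form
    D_C^alpha t^2 = 2 t^(2 - alpha) / Gamma(3 - alpha)."""
    return 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)

t = 1.5
# As alpha -> 1-, the values approach the integer derivative 2 t = 3.0
values = [caputo_t2(t, a) for a in (0.5, 0.9, 0.99, 0.999)]
```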

The well known rules for integer order differentiation such as the product rule and chain rule are not applicable for the fractional order differentiation; although such rules exist for fractional order differentiation in the form of infinite series, they are not as useful as in integer order differentiation. In integer order differential and integral calculus, derivatives can represent a rate of change, direction of increase or decrease of a function; an integral can mean an area under the graph of a function. Such interpretations are not applicable for fractional order derivatives and integrals. For the convenience of readers uninitiated in fractional calculus, this article is formatted in a self-contained fashion at the cost of minimal repetition.

One of the properties of fractional integrals we will use in the proof of the equivalence of a fractional differential equation and an associated fractional integral equation is the following semigroup property of fractional integrals.

${J}^{p}\left({J}^{q}\text{ }f\left(t\right)\right)={J}^{p+q}f\left(t\right),\text{ }p>0,\text{ }q>0.$ (8)

This is easily seen as follows: we first recall a relation between Beta and Gamma functions given by

$B\left(p,q\right)={\int }_{0}^{1}{\left(1-x\right)}^{p-1}{x}^{q-1}\text{d}x=\frac{\Gamma \left(p\right)\Gamma \left(q\right)}{\Gamma \left(p+q\right)},\text{ }p,q>0$

in which the familiar integral representation of the Gamma function

$\Gamma \left(p\right)={\int }_{0}^{\infty }{\text{e}}^{-t}{t}^{p-1}\text{d}t,\text{ }p>0$

of elementary calculus is used. Using the definition of the fractional integral, we have for $p>0,q>0$ (with the substitution $s=u+v\left(t-u\right)$ in the inner integral):

$\begin{array}{c}{J}^{p}\left({J}^{q}f\left(t\right)\right)=\frac{1}{\Gamma \left(p\right)}{\int }_{0}^{t}{\left(t-s\right)}^{p-1}\left[{J}^{q}f\left(s\right)\right]\text{d}s\\ =\frac{1}{\Gamma \left(p\right)}{\int }_{0}^{t}{\left(t-s\right)}^{p-1}\left(\frac{1}{\Gamma \left(q\right)}{\int }_{0}^{s}{\left(s-u\right)}^{q-1}f\left(u\right)\text{d}u\right)\text{d}s\\ =\frac{1}{\Gamma \left(p\right)\Gamma \left(q\right)}{\int }_{0}^{t}{\left(t-s\right)}^{p-1}\left({\int }_{0}^{s}{\left(s-u\right)}^{q-1}f\left(u\right)\text{d}u\right)\text{d}s\\ =\frac{1}{\Gamma \left(p\right)\Gamma \left(q\right)}{\int }_{0}^{t}\text{ }\text{ }f\left(u\right)\left({\int }_{u}^{t}{\left(t-s\right)}^{p-1}{\left(s-u\right)}^{q-1}\text{d}s\right)\text{d}u\end{array}$ (9)

$\begin{array}{l}=\frac{1}{\Gamma \left(p\right)\Gamma \left(q\right)}{\int }_{0}^{t}\text{ }\text{ }f\left(u\right)\text{d}u\left({\int }_{0}^{1}{\left(t-u\right)}^{p+q-1}{\left(1-v\right)}^{p-1}{v}^{q-1}\text{ }dv\right)\\ =\frac{B\left(p,q\right)}{\Gamma \left(p\right)\Gamma \left(q\right)}{\int }_{0}^{t}{\left(t-u\right)}^{p+q-1}f\left(u\right)\text{d}u\\ =\frac{1}{\Gamma \left(p+q\right)}{\int }_{0}^{t}{\left(t-u\right)}^{p+q-1}f\left(u\right)\text{d}u\\ ={J}^{p+q}f\left(t\right),\text{ }t>0.\end{array}$

The proof of (8) is complete. It follows from the definition of the Caputo derivative that for arbitrary constants $a$ and $b$,

${D}_{C}^{\alpha }\left[af\left(t\right)+bg\left(t\right)\right]=a{D}_{C}^{\alpha }f\left(t\right)+b{D}_{C}^{\alpha }g\left(t\right)$

indicating that ${D}_{C}^{\alpha }\left(.\right)$ is a linear operator.
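The semigroup property (8) can be checked on monomials. The Beta-function computation above yields the closed form ${J}^{p}{t}^{\mu }=\frac{\Gamma \left(\mu +1\right)}{\Gamma \left(\mu +1+p\right)}{t}^{\mu +p}$, and applying ${J}^{p}$ after ${J}^{q}$ reproduces ${J}^{p+q}$ exactly. The sketch below is our illustration using this closed form (the values of $\mu ,p,q,t$ are arbitrary).

```python
import math

def J_monomial(mu, p, t):
    """Closed form of J^p applied to s -> s^mu:
    Gamma(mu + 1) / Gamma(mu + 1 + p) * t^(mu + p)."""
    return math.gamma(mu + 1.0) / math.gamma(mu + 1.0 + p) * t ** (mu + p)

mu, p, q, t = 2.0, 0.3, 0.9, 1.7
# J^q t^mu is a constant multiple of t^(mu + q); apply J^p to that monomial
c = math.gamma(mu + 1.0) / math.gamma(mu + 1.0 + q)
nested = c * J_monomial(mu + q, p, t)   # J^p (J^q t^mu)
direct = J_monomial(mu, p + q, t)       # J^(p+q) t^mu
```

The intermediate Gamma factors cancel, so the two values agree to machine precision, mirroring the cancellation of $\Gamma \left(\mu +1+q\right)$ in the analytic proof.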

Another important property of ${D}_{C}^{\alpha }\left(.\right)$ which we use later is that for $\alpha \in \left(0,1\right)$ ,

${J}^{\alpha }{D}_{C}^{\alpha }f\left(t\right)=f\left(t\right)-f\left(0\right),\text{ }t>0$ (10)

This can be verified as follows:

$\begin{array}{c}{J}^{\alpha }\left[{D}_{C}^{\alpha }x\left(t\right)\right]=\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}\left[\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{s}\frac{{x}^{\left(1\right)}\left(u\right)}{{\left(s-u\right)}^{\alpha }}\text{d}u\right]\text{d}s\\ ={\int }_{0}^{t}\frac{{x}^{\left(1\right)}\left(u\right)\text{d}u}{\Gamma \left(\alpha \right)\Gamma \left(1-\alpha \right)}{\int }_{u}^{t}{\left(t-s\right)}^{\alpha -1}{\left(s-u\right)}^{-\alpha }\text{ }\text{d}s\\ =\frac{1}{\Gamma \left(\alpha \right)\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}\text{ }{x}^{\left(1\right)}\left(u\right)\text{d}u{\int }_{0}^{1}{\left(1-v\right)}^{\alpha -1}{v}^{-\alpha }\text{d}v\\ =\frac{B\left(\alpha ,1-\alpha \right)}{\Gamma \left(\alpha \right)\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}\text{ }{x}^{\left(1\right)}\left(u\right)\text{d}u,\text{ }\left(\text{on}\text{\hspace{0.17em}}\text{letting}\text{\hspace{0.17em}}s=u+v\left(t-u\right)\right)\\ ={\int }_{0}^{t}{x}^{\left(1\right)}\left(u\right)\text{d}u=x\left(t\right)-x\left(0\right).\end{array}$ (11)

It is found from the above that ${J}^{\alpha }$ is not a left inverse of ${D}_{C}^{\alpha }$ if $x\left(0\right)\ne 0$ .

We shall now show that ${D}_{C}^{\alpha }\left(.\right)$ is the left inverse of ${J}^{\alpha }\left(.\right)$ and this fact will be useful later in the conversion of a fractional order differential equation to its equivalent integral equation. First we will verify the following:

${J}^{1-\alpha }\left[\frac{1}{\Gamma \left(\alpha \right)}{t}^{\alpha -1}\right]=1,\text{ }0<\alpha <1,\text{ }t>0.$ (12)

In fact we have from the definition of ${J}^{\alpha }\left(.\right)$ ,

$\begin{array}{c}{J}^{1-\alpha }\left({t}^{\alpha -1}\right)=\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{1-\alpha -1}{s}^{\alpha -1}\text{d}s\\ =\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{-\alpha }{s}^{\alpha -1}\text{ }\text{d}s,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{let}\text{\hspace{0.17em}}s=tv\\ =\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{1}{\left(1-v\right)}^{-\alpha }{v}^{\alpha -1}\text{ }\text{d}v\\ =\frac{1}{\Gamma \left(1-\alpha \right)}B\left(1-\alpha ,\alpha \right)=\Gamma \left(\alpha \right),\end{array}$

so that (12) follows on dividing by $\Gamma \left(\alpha \right)$.

Lemma 1 The fractional integral ${J}^{\alpha }$ and the Caputo derivative ${D}_{C}^{\alpha }$ satisfy the following:

${D}_{C}^{\alpha }\left({J}^{\alpha }\varphi \right)=\varphi .$ (13)

We let

$x\left(t\right)={J}^{\alpha }\varphi \left(t\right)=\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}\varphi \left(s\right)\text{d}s$

and note that

$\frac{\text{d}x\left(t\right)}{\text{d}t}=\frac{\text{d}}{\text{d}t}\left[\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}\varphi \left(s\right)\text{d}s\right]$

$\begin{array}{c}=\frac{\text{d}}{\text{d}t}\left[\frac{\varphi \left(0\right){t}^{\alpha }}{\alpha \Gamma \left(\alpha \right)}+\frac{1}{\alpha \Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha }{\varphi }^{\left(1\right)}\left(s\right)\text{d}s\right]\\ =\varphi \left(0\right)\frac{{t}^{\alpha -1}}{\Gamma \left(\alpha \right)}+\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{\varphi }^{\left(1\right)}\left(s\right)\text{d}s\\ =\varphi \left(0\right)\frac{{t}^{\alpha -1}}{\Gamma \left(\alpha \right)}+{J}^{\alpha }\left({\varphi }^{\left(1\right)}\right).\end{array}$

Now operate on both sides of the above by ${J}^{1-\alpha }\left(.\right)$ so that

$\begin{array}{c}{J}^{1-\alpha }\frac{\text{d}x\left(t\right)}{\text{d}t}={J}^{1-\alpha }\left[\frac{\varphi \left(0\right){t}^{\alpha -1}}{\Gamma \left(\alpha \right)}\right]+{J}^{1-\alpha }{J}^{\alpha }{\varphi }^{\left(1\right)}\\ =\varphi \left(0\right)+JD\varphi =\varphi \left(0\right)+\left[\varphi \left(t\right)-\varphi \left(0\right)\right]=\varphi \left(t\right)\end{array}$

and hence

${J}^{1-\alpha }\frac{\text{d}}{\text{d}t}\left({J}^{\alpha }\varphi \right)={D}_{c}^{\alpha }\left({J}^{\alpha }\varphi \right)=\varphi \left(t\right),\text{ }t>0$

and the proof is complete.

Throughout the following, we consider only the Caputo derivative and we use for convenience the notation

${D}_{c}^{\alpha }x\left(t\right)=\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }},\text{ }0<\alpha <1.$

3. Mittag-Leffler Functions

The exponential function ${\text{e}}^{\lambda x}={\sum }_{n=0}^{\infty }\frac{{\left(\lambda x\right)}^{n}}{n!},\text{\hspace{0.17em}}x\in \left(-\infty ,\infty \right)$, where $\lambda$ is a constant, appears naturally in the study of integer order differential equations due to the easily verifiable property

$\frac{\text{d}}{\text{d}x}{\text{e}}^{\lambda x}=\lambda {\text{e}}^{\lambda x},\text{ }x\in \left(-\infty ,\infty \right).$ (14)

In the study of fractional order differential equations, Mittag-Leffler functions play a prominent role similar to that of the exponential function in the case of integer order differential equations; they also serve as a tool for discrete fractional modelling. The one parameter Mittag-Leffler function ${E}_{\alpha }\left(.\right)$ with real parameter $\alpha >0$ is defined by means of a power series for complex $z$ given by

${E}_{\alpha }\left(z\right)=\underset{n=0}{\overset{\infty }{\sum }}\frac{{z}^{n}}{\Gamma \left(\alpha n+1\right)},\text{ }\alpha >0$ (15)

with the Gamma function $\Gamma :\left(0,\infty \right)\to \left(0,\infty \right)$ given by

$\Gamma \left(p\right)={\int }_{0}^{\infty }{x}^{p-1}{\text{e}}^{-x}\text{d}x,\text{ }p>0.$

The two parameter Mittag-Leffler function ${E}_{\alpha ,\beta }\left(z\right)$ and the three parameter Mittag-Leffler function ${E}_{\alpha ,\beta }^{\gamma }\left(z\right)$ are respectively defined by

${E}_{\alpha ,\beta }\left(z\right)=\underset{k=0}{\overset{\infty }{\sum }}\frac{{z}^{k}}{\Gamma \left(\alpha k+\beta \right)},\text{ }\alpha >0,\beta >0$ (16)

${E}_{\alpha ,\beta }^{\gamma }\left(z\right)=\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left(\gamma \right)}_{k}}{\Gamma \left(\alpha k+\beta \right)}\frac{{z}^{k}}{k!},\text{ }\alpha >0,\beta >0,\gamma >0.$ (17)

where ${\left(\gamma \right)}_{n}$ denotes the Pochhammer symbol defined by

${\left(\gamma \right)}_{n}=\frac{\Gamma \left(\gamma +n\right)}{\Gamma \left(\gamma \right)}=\left\{\begin{array}{l}1,\text{ }n=0,\gamma \ne 0\\ \gamma \left(\gamma +1\right)\left(\gamma +2\right)\cdots \left(\gamma +n-1\right),\text{ }n\in N,\gamma \in C.\end{array}$
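For moderate arguments the series (15)-(16) can be summed directly by truncation. The sketch below is our illustration (the truncation length is ad hoc, and the summation is stopped before the Gamma function overflows in double precision); it recovers the classical special cases ${E}_{1}\left(z\right)={\text{e}}^{z}$ and ${E}_{1,2}\left(z\right)=\left({\text{e}}^{z}-1\right)/z$.

```python
import math

def mittag_leffler(alpha, beta, z, terms=300):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha k + beta)."""
    total = 0.0
    for k in range(terms):
        g = alpha * k + beta
        if g > 170.0:  # math.gamma overflows beyond ~171
            break
        total += z ** k / math.gamma(g)
    return total

z = 1.3
e_alpha = mittag_leffler(1.0, 1.0, z)    # E_1(z) = exp(z)
e_one_two = mittag_leffler(1.0, 2.0, z)  # E_{1,2}(z) = (exp(z) - 1)/z
```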

One can verify that the functions ${E}_{\alpha }\left(.\right),{E}_{\alpha ,\beta }\left(.\right)$ and ${E}_{\alpha ,\beta }^{\gamma }\left(.\right)$ are entire functions, defined for all complex $z$, by using the ratio test for convergence together with the fact that a ratio of gamma functions satisfies the asymptotic estimate

$\frac{\Gamma \left(z+a\right)}{\Gamma \left(z+b\right)}={z}^{a-b}\left[1+\frac{\left(a-b\right)\left(a+b-1\right)}{2z}+O\left(\frac{1}{{z}^{2}}\right)\right]$ (18)

Our analysis below requires Laplace transforms of fractional derivatives and of Mittag-Leffler functions. Let $\mathcal{L}\left[f\left(t\right)\right]$ denote the Laplace transform of a function $f\left(t\right)$ defined for $t\ge 0$, given by

$\mathcal{L}\left[f\left(t\right)\right]={\int }_{0}^{\infty }{\text{e}}^{-st}f\left(t\right)\text{d}t.$ (19)

We derive a few known results on Laplace transforms which we will use below in the stability analysis of our fractionalized system.

We recall some elementary and known facts regarding Laplace transforms. If $x\left(t\right)$ is a differentiable function whose Laplace transform exists, then

$\begin{array}{c}\mathcal{L}\left[\frac{\text{d}x\left(t\right)}{\text{d}t}\right]={\int }_{0}^{\infty }{\text{e}}^{-st}\frac{\text{d}x\left(t\right)}{\text{d}t}\text{d}t,\text{ }s>0\\ =s\mathcal{L}\left[x\left(t\right)\right]-x\left(0\right).\end{array}$

$\mathcal{L}\left[{t}^{-\alpha }\right]={\int }_{0}^{\infty }{\text{e}}^{-st}{t}^{-\alpha }\text{ }\text{d}t=\frac{\Gamma \left(1-\alpha \right)}{{s}^{1-\alpha }},\text{ }\text{ }s>0$

$\mathcal{L}\left[\frac{{t}^{-\alpha }}{\Gamma \left(1-\alpha \right)}\right]=\frac{1}{{s}^{1-\alpha }},\text{ }s>0.$

$\begin{array}{c}\mathcal{L}\left[{D}_{c}^{\alpha }x\left(t\right)\right]=\mathcal{L}\left[\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}{\left(t-u\right)}^{-\alpha }{x}^{\left(1\right)}\left(u\right)\text{d}u\right],\text{ }0<\alpha <1\\ =\mathcal{L}\left[\left(\frac{{t}^{-\alpha }}{\Gamma \left(1-\alpha \right)}\right)\ast \left({x}^{\left(1\right)}\left(t\right)\right)\right]\text{ }\text{where}\text{\hspace{0.17em}}g\ast h={\int }_{0}^{t}\text{ }g\left(t-s\right)h\left(s\right)\text{d}s\\ ={s}^{\alpha -1}\left[s\mathcal{L}\left(x\left(t\right)\right)-x\left(0\right)\right]\end{array}$

$\mathcal{L}\left[{D}_{c}^{\alpha }x\left(t\right)\right]={s}^{\alpha }\mathcal{L}\left[x\left(t\right)\right]-{s}^{\alpha -1}x\left(0\right).$
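This transform rule can be sanity-checked on $x\left(t\right)={t}^{2}$: there ${D}_{c}^{\alpha }{t}^{2}=2{t}^{2-\alpha }/\Gamma \left(3-\alpha \right)$, whose transform is $2/{s}^{3-\alpha }$, while the right-hand side gives ${s}^{\alpha }\cdot 2/{s}^{3}-{s}^{\alpha -1}\cdot 0=2/{s}^{3-\alpha }$ as well. The sketch below is our numerical illustration of this agreement (the values of $s$, $\alpha$ and the quadrature grid are arbitrary choices).

```python
import math

def laplace_numeric(f, s, T=60.0, n=6000):
    """Composite Simpson approximation of the Laplace transform
    integral_0^T exp(-s t) f(t) dt; the tail beyond T is negligible
    for the decaying integrands used here."""
    h = T / n
    total = f(0.0) + math.exp(-s * T) * f(T)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3.0

s, alpha = 1.0, 0.6
caputo_of_t2 = lambda t: 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)
lhs = laplace_numeric(caputo_of_t2, s)            # L[D_c^alpha t^2]
rhs = s ** alpha * (2.0 / s ** 3)                 # s^alpha L[t^2], x(0) = 0
```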

We consider the Laplace transform of the Mittag-Leffler function ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ by using the uniform convergence of the series in the definition of the function as follows:

$\mathcal{L}\left[{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)\right]=\mathcal{L}\left[\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left(-\lambda \right)}^{k}{t}^{\alpha k}}{\Gamma \left(\alpha k+1\right)}\right]=\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left(-\lambda \right)}^{k}\mathcal{L}\left({t}^{\alpha k}\right)}{\Gamma \left(\alpha k+1\right)}=\frac{1}{s}\underset{k=0}{\overset{\infty }{\sum }}{\left(\frac{-\lambda }{{s}^{\alpha }}\right)}^{k}$

$\mathcal{L}\left[{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)\right]=\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda },\text{\hspace{0.17em}}\text{\hspace{0.17em}}0<\alpha <1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\lambda >0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}|\frac{\lambda }{{s}^{\alpha }}|<1.$

One can derive in a similar way that

$\mathcal{L}\left[{t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-\lambda {t}^{\alpha }\right)\right]=\frac{1}{{s}^{\alpha }+\lambda },\text{\hspace{0.17em}}\text{\hspace{0.17em}}0<\alpha <1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\lambda >0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}|\frac{\lambda }{{s}^{\alpha }}|<1.$
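These transform pairs can also be verified numerically: truncating the series for ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ and integrating ${\text{e}}^{-st}{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ by quadrature reproduces ${s}^{\alpha -1}/\left({s}^{\alpha }+\lambda \right)$. The rough sketch below is our illustration; the truncation length, the quadrature grid and the parameter values are ad hoc choices.

```python
import math

def ml_one(alpha, z, terms=300):
    """Truncated one-parameter Mittag-Leffler series E_alpha(z)."""
    total = 0.0
    for k in range(terms):
        g = alpha * k + 1.0
        if g > 170.0:  # math.gamma overflows beyond ~171
            break
        total += z ** k / math.gamma(g)
    return total

alpha, lam, s = 0.6, 1.0, 1.0
T, n = 30.0, 6000
h = T / n
# composite trapezoid rule for the Laplace integral on [0, T]
vals = [math.exp(-s * k * h) * ml_one(alpha, -lam * (k * h) ** alpha)
        for k in range(n + 1)]
numeric = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
closed = s ** (alpha - 1.0) / (s ** alpha + lam)
```

Since $0\le {E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)\le 1$ for $\lambda >0$ (by complete monotonicity), the truncated tail of the integral beyond $T=30$ is bounded by ${\text{e}}^{-30}$ and is negligible here.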

By analytic continuation, the Laplace transform $\mathcal{L}\left[f\left(t\right)\right]=F\left(s\right)$ is usually extended to complex values of $s$ over the entire complex plane, except for a set of singular points; in some cases the function $F\left(s\right)$ turns out to be multi-valued with branch point singularities, as in the case of the function

$F\left(s\right)=\frac{1}{{s}^{\alpha }+\lambda },\text{ }0<\alpha <1,\text{\hspace{0.17em}}s\text{\hspace{0.17em}}\text{being a complex number}.$

If $F\left(s\right)=\mathcal{L}\left[f\left(t\right)\right]$ is known, then one can recover the function $f\left(t\right)$ by means of the complex inversion formula

$f\left(t\right)={\mathcal{L}}^{-1}\left[F\left(s\right)\right]=\frac{1}{2\text{π}i}{\int }_{\gamma -i\infty }^{\gamma +i\infty }\text{ }\text{ }{\text{e}}^{st}F\left(s\right)\text{d}s$

in which the integration is performed along a vertical line $s=\gamma$ in the complex plane where $s=x+iy$; the real number $\gamma$ is chosen such that the line $s=\gamma$ lies to the right of all the singularities, including the branch points, poles and essential singularities, but is otherwise arbitrary; the remarkable property of this integral is that its value is independent of the number $\gamma$. We choose a closed contour in the complex plane known as the Hankel-Bromwich contour, shown in Figure 1 below, so that the theory of residues can be used for the evaluation of the inversion integral above. We have from Cauchy’s residue theorem that

Figure 1. Hankel-Bromwich contour.

$\frac{1}{2\text{π}i}{\oint }_{GHKABCDEFG}\text{ }\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s=0,\text{ }\alpha \in \left(0,1\right)$

since there are no singularities of the integrand in the interior of the contour, as discussed in the following. We note that the only singularities of $\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }$ lie on the negative real axis, with two branch points, at the origin and at infinity along the negative real axis. For instance, let us set $s=r{\text{e}}^{i\theta },0<\theta <\text{π}$. Then

${s}^{\alpha }+\lambda ={r}^{\alpha }{\text{e}}^{i\theta \alpha }+\lambda =0⇒{r}^{\alpha }\mathrm{cos}\left(\theta \alpha \right)+\lambda =0\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}{r}^{\alpha }\mathrm{sin}\left(\theta \alpha \right)=0;$

since $0<\theta \alpha <\text{π}$ , we have ${r}^{\alpha }\mathrm{sin}\left(\theta \alpha \right)\ne 0$ , which implies that there are no poles in the upper half of the complex plane. Similarly for $-\text{π}<\theta <0$ we have $-\text{π}<\theta \alpha <0$ , implying that $\mathrm{sin}\left(\theta \alpha \right)\ne 0$ and hence there are no poles in the lower half of the complex plane. For $\theta =0$ , we have $\theta \alpha =0$ , which implies that

${r}^{\alpha }\mathrm{cos}\left(\theta \alpha \right)+\lambda ={r}^{\alpha }+\lambda \ne 0$ since $\lambda >0$ . Thus it follows that the only singularities of $\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }$ are

on the negative real axis of the complex plane. One can then determine the inverse Laplace transform by the method of residues and Cauchy’s theorem

from complex analysis. Since the origin is a branch point of $\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda },\alpha \in \left(0,1\right)$

due to the multivalued nature of the function, we cannot use the usual Bromwich contour. We have to make a branch cut along the negative real axis: beginning from $-\infty$ just above the negative real axis, going around the origin in a clockwise sense, and proceeding back to $-\infty$ just below the negative real axis, before following a Bromwich type contour. Such a contour is known as a Hankel-Bromwich contour.

We proceed to evaluate the limits of certain line integrals involved in the computations of the inverse Laplace transform of ${s}^{\alpha -1}/\left[{s}^{\alpha }+\lambda \right]$ by means of the complex contour integral. We begin with

$\underset{T\to \infty }{lim}|{\int }_{HK}\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{ }\text{d}s|.$

On HK, we let $s=x+iT,x\in \left[0,\gamma \right],T>0$ and obtain

$\begin{array}{c}\underset{T\to \infty }{lim}|{\int }_{HK}\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s|\le \underset{T\to \infty }{lim}{\int }_{0}^{\gamma }|\frac{{\left(x+iT\right)}^{\alpha -1}}{{\left(x+iT\right)}^{\alpha }+\lambda }||{\text{e}}^{\left(x+iT\right)t}|\text{d}x\\ \le \underset{T\to \infty }{lim}{\int }_{0}^{\gamma }|\frac{{\left(x+iT\right)}^{\alpha }}{{\left(x+iT\right)}^{\alpha }+\lambda }||\frac{1}{x+iT}||{\text{e}}^{\left(x+iT\right)t}|\text{d}x\\ \le \underset{T\to \infty }{lim}\frac{1}{T}{\int }_{0}^{\gamma }{\text{e}}^{xt}\text{d}x=0.\end{array}$

In a similar way one can show that

$\underset{T\to \infty }{lim}|{\int }_{FG}\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s|=0.$

Next we consider the integral along the arc KA as follows:

$\begin{array}{c}\underset{T\to \infty }{lim}|{\int }_{KA}\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s|\le \underset{T\to \infty }{lim}{\int }_{\text{π}/2}^{\text{π}}|\frac{{\left(T{\text{e}}^{i\theta }\right)}^{\alpha -1}}{{\left(T{\text{e}}^{i\theta }\right)}^{\alpha }+\lambda }||{\text{e}}^{\left(T{\text{e}}^{i\theta }\right)t}||iT{\text{e}}^{i\theta }|\text{d}\theta \\ \le \underset{T\to \infty }{lim}{\int }_{\text{π}/2}^{\text{π}}\frac{{T}^{\alpha }}{{T}^{\alpha }-\lambda }{\text{e}}^{Tt\mathrm{cos}\theta }\text{ }\text{d}\theta \\ =0,\text{ }\text{since}\text{\hspace{0.17em}}\mathrm{cos}\left(\theta \right)<0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{on}\text{\hspace{0.17em}}\left(\text{π}/2,\text{π}\right).\end{array}$

In a similar way one can show that

$\underset{T\to \infty }{\mathrm{lim}}|{\int }_{EF}\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s|=0.$

The only singularities of the integrand are the poles on the negative real axis, which has been cut out of the complex plane, and the branch points $s=0$ and $s=-\infty$ , which are also excluded from the integration along the Hankel path in the contour integral above. Thus we have from the above equation that

$\frac{1}{2\text{π}i}{\int }_{GH}\text{ }{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s=-\frac{1}{2\text{π}i}\left[\left({\int }_{AB}+{\int }_{BCD}+{\int }_{DE}\right)\right]{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s.$

One can show that the integral along the arc BCD goes to zero as the radius $ϵ$ of this arc goes to 0. We are thus left with the contributions along the line segments AB and DE, and we have from the above that

$\begin{array}{l}\underset{T\to \infty }{\mathrm{lim}}\left[\frac{1}{2\text{π}i}{\int }_{\gamma -iT}^{\gamma +iT}{\text{e}}^{st}\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\text{d}s\right]\\ =-\frac{1}{2\text{π}i}\left\{{\int }_{\infty }^{0}\frac{{\left(x{\text{e}}^{i\text{π}}\right)}^{\alpha -1}}{{\left(x{\text{e}}^{i\text{π}}\right)}^{\alpha }+\lambda }{\text{e}}^{i\text{π}}{\text{e}}^{\left(x{\text{e}}^{i\text{π}}\right)t}\text{ }\text{d}x+{\int }_{0}^{\infty }\frac{{\left(x{\text{e}}^{-i\text{π}}\right)}^{\alpha -1}}{{\left(x{\text{e}}^{-i\text{π}}\right)}^{\alpha }+\lambda }{\text{e}}^{-i\text{π}}{\text{e}}^{\left(x{\text{e}}^{-i\text{π}}\right)t}\text{d}x\right\}\\ =-\frac{1}{2\text{π}i}\left\{{\int }_{0}^{\infty }\text{ }{\text{e}}^{-xt}\frac{{\left(x{\text{e}}^{i\text{π}}\right)}^{\alpha -1}}{{\left(x{\text{e}}^{i\text{π}}\right)}^{\alpha }+\lambda }\text{d}x-{\int }_{0}^{\infty }{\text{e}}^{-xt}\frac{{\left(x{\text{e}}^{-i\text{π}}\right)}^{\alpha -1}}{{\left(x{\text{e}}^{-i\text{π}}\right)}^{\alpha }+\lambda }\text{d}x\right\}\\ =-\frac{1}{2\text{π}i}{\int }_{0}^{\infty }\left[2i{\left(Im\left(\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\right)\right)}_{s=x{\text{e}}^{i\text{π}}}\right]{\text{e}}^{-xt}\text{ }\text{d}x\\ =-\frac{1}{\text{π}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-xt}Im{\left[\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\right]}_{s=x{\text{e}}^{i\text{π}}}\text{ }\text{d}x\\ =\frac{1}{\text{π}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-xt}\frac{\lambda {x}^{\alpha -1}\mathrm{sin}\left(\alpha \text{π}\right)}{{x}^{2\alpha }+2{x}^{\alpha }\lambda \mathrm{cos}\left(\alpha \text{π}\right)+{\lambda }^{2}}\text{d}x.\end{array}$

Thus we have that

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)=\underset{T\to \infty }{lim}\left[\frac{1}{2\text{π}i}{\int }_{\gamma -iT}^{\gamma +iT}\text{ }{\text{e}}^{st}\mathcal{L}\left[{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)\right]\text{d}s\right]={\int }_{0}^{\infty }{\text{e}}^{-xt}{K}_{\alpha }\left(x\right)\text{d}x$ (20)

where

${K}_{\alpha }\left(x\right)=-\frac{1}{\text{π}}Im{\left[\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\right]}_{s=x{\text{e}}^{i\text{π}}}=\frac{1}{\text{π}}\frac{\lambda {x}^{\alpha -1}sin\left(\alpha \text{π}\right)}{{x}^{2\alpha }+2\lambda {x}^{\alpha }cos\left(\alpha \text{π}\right)+{\lambda }^{2}}.$

This formula is referred to as “Titchmarsh’s theorem” in the literature on fractional calculus (see   ). However, in the literature on integral transforms there is a result known as “Titchmarsh’s theorem” (see  , p. 68) which is different from the result that goes by that name in fractional calculus.

It follows from the above that when $\alpha \in \left(0,1\right)$

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)={\mathcal{L}}^{-1}\left[\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\right]=\frac{1}{\text{π}}{\int }_{0}^{\infty }\frac{{\text{e}}^{-xt}}{x}\frac{\lambda {x}^{\alpha }\mathrm{sin}\left(\alpha \text{π}\right)}{{x}^{2\alpha }+2{x}^{\alpha }\lambda \mathrm{cos}\left(\alpha \text{π}\right)+{\lambda }^{2}}\text{d}x,\text{ }t>0.$ (21)
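As a numerical illustration added here (not part of the original derivation), the spectral representation (21) can be checked against the power series definition of the Mittag-Leffler function; for $\alpha =1/2$ there is also the classical closed form ${E}_{1/2}\left(-z\right)={\text{e}}^{{z}^{2}}\mathrm{erfc}\left(z\right)$ . The parameter values below are our own choices.

```python
import math

def ml_series(alpha, z, terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z) by its power series."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def ml_spectral(alpha, lam, t, n=120_000, cut=12.0):
    """E_alpha(-lam * t**alpha) via the spectral integral (21), midpoint rule.
    The substitution x = u*u removes the integrable x**(alpha-1) singularity
    at the origin (written here for the test case alpha = 1/2)."""
    s, c = math.sin(alpha * math.pi), math.cos(alpha * math.pi)
    h = cut / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        x = u * u
        kernel = lam * x ** alpha * s / (x ** (2 * alpha) + 2 * lam * x ** alpha * c + lam ** 2)
        total += math.exp(-x * t) * (kernel / x) * 2.0 * u * h   # dx = 2u du
    return total / math.pi

alpha, lam, t = 0.5, 1.0, 1.0
series_val = ml_series(alpha, -lam * t ** alpha)
spectral_val = ml_spectral(alpha, lam, t)
closed_form = math.e * math.erfc(1.0)   # E_{1/2}(-1) = e * erfc(1)
```

For these parameters all three values agree to several decimal places, as the spectral kernel predicts.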

We shall briefly consider the limiting case of the formula (21) as $\alpha \to 1-$ . First we let ${x}^{\alpha }=u$ in (21), so that $\text{d}x=\frac{1}{\alpha }{u}^{\frac{1}{\alpha }-1}\text{d}u$ , and obtain

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)=\frac{1}{\alpha \text{π}}{\int }_{0}^{\infty }{\text{e}}^{-t{u}^{\frac{1}{\alpha }}}\frac{\lambda \mathrm{sin}\left(\alpha \text{π}\right)}{{\left[u+\lambda \mathrm{cos}\left(\alpha \text{π}\right)\right]}^{2}+{\left[\lambda \mathrm{sin}\left(\alpha \text{π}\right)\right]}^{2}}\text{d}u.$ (22)

A direct attempt to evaluate the limit of the right side of (22) as $\alpha \to 1-$ leads to an indeterminacy. We will follow an indirect procedure to evaluate this limit. We use the formula

$\frac{1}{\text{π}}{\int }_{-\infty }^{\infty }\frac{q{\text{e}}^{itx}}{{\left(x+p\right)}^{2}+{q}^{2}}\text{d}x={\text{e}}^{-q|t|-ipt},\text{ }p\in \mathbb{R},q>0,t\in \mathbb{R}$ (23)

where $i=\sqrt{-1}$ and the Fourier integral formula (see Sneddon ) in the form

$f\left(x\right)=\frac{1}{2\text{π}}{\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-ix\xi }\left\{{\int }_{-\infty }^{\infty }f\left(u\right){\text{e}}^{iu\xi }\text{ }\text{d}u\right\}\text{ }\text{d}\xi ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\in \mathbb{R}$ (24)

for a suitable function $f\left(.\right)$ . One can have from (24) that

$f\left(x\right)={\int }_{-\infty }^{\infty }f\left(u\right)\left\{\frac{1}{2\text{π}}{\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-i\xi \left(x-u\right)}\text{ }\text{d}\xi \right\}\text{ }\text{d}u,\text{ }x\in \mathbb{R}.$ (25)

We note from (25) that the inner integral behaves like that of the Dirac delta function (see  ) and thus we can write

$\frac{1}{\text{2π}}{\int }_{-\infty }^{\infty }{\text{e}}^{-i\xi \left(x-u\right)}\text{ }\text{d}\xi =\delta \left(x-u\right)$ (26)

where $\delta \left(.\right)$ denotes the Dirac delta function. For convenience in the following we define ${f}_{\lambda ,\alpha }\left(.\right)$ as follows

${f}_{\lambda ,\alpha }\left(u\right)=\left\{\begin{array}{ll}\frac{\lambda \mathrm{sin}\left(\alpha \text{π}\right)}{{\left[u+\lambda \mathrm{cos}\left(\alpha \text{π}\right)\right]}^{2}+{\left[\lambda \mathrm{sin}\left(\alpha \text{π}\right)\right]}^{2}},\hfill & u\ge 0\hfill \\ 0,\hfill & u<0\hfill \end{array}\right.$ (27)

and obtain from (24) that

${f}_{\lambda ,\alpha }\left(u\right)=\frac{1}{2\text{π}}{\int }_{-\infty }^{\infty }{\text{e}}^{-iu\xi }\left\{{\int }_{-\infty }^{\infty }{\text{e}}^{iv\xi }{f}_{\lambda ,\alpha }\left(v\right)\text{d}v\right\}\text{d}\xi$ (28)

which together with (22) and (23) leads to

$\begin{array}{c}{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)=\frac{1}{\alpha \text{π}}{\int }_{0}^{\infty }{\text{e}}^{-t{u}^{\frac{1}{\alpha }}}{f}_{\lambda ,\alpha }\left(u\right)\text{d}u\\ =\frac{1}{\alpha \text{π}}{\int }_{0}^{\infty }{\text{e}}^{-t{u}^{\frac{1}{\alpha }}}\left[\frac{1}{2}{\int }_{-\infty }^{\infty }{\text{e}}^{-iu\xi }\left\{{\text{e}}^{-\lambda |\xi |\mathrm{sin}\left(\alpha \text{π}\right)-i\lambda \xi \mathrm{cos}\left(\alpha \text{π}\right)}\right\}\text{d}\xi \right]\text{d}u.\end{array}$ (29)

We are now ready to evaluate the limit of (21) as $\alpha \to 1-$ by using the representation in (29).

$\begin{array}{c}\underset{\alpha \to 1-}{lim}{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)=\underset{\alpha \to 1-}{lim}\frac{1}{\alpha }{\int }_{-\infty }^{\infty }{\text{e}}^{-t{u}^{\frac{1}{\alpha }}}\left[\frac{1}{2\text{π}}{\int }_{-\infty }^{\infty }{\text{e}}^{-i\xi \left(u+\lambda \mathrm{cos}\left(\alpha \text{π}\right)\right)-|\xi |\lambda \mathrm{sin}\left(\alpha \text{π}\right)}\text{d}\xi \right]\text{d}u\\ ={\int }_{-\infty }^{\infty }{\text{e}}^{-tu}\left[\frac{1}{2\text{π}}{\int }_{-\infty }^{\infty }{\text{e}}^{-i\xi \left(u-\lambda \right)}\text{ }\text{d}\xi \right]\text{d}u\\ ={\int }_{-\infty }^{\infty }{\text{e}}^{-tu}\delta \left(u-\lambda \right)\text{d}u\\ ={\text{e}}^{-\lambda t},\text{ }t\in \left(0,\infty \right).\end{array}$ (30)

Thus we recover the result, otherwise known from the definition of ${E}_{1}\left(-\lambda t\right)$ , that for $t>0$

${E}_{1}\left(-\lambda t\right)={\text{e}}^{-\lambda t}.$
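The limit just established can also be observed numerically. The following Python sketch (an illustration added here, with parameter values of our own choosing) evaluates the Lorentzian-kernel form of (22), including the factor $1/\alpha$ produced by the Jacobian of the substitution ${x}^{\alpha }=u$ , and compares it with ${\text{e}}^{-\lambda t}$ for $\alpha$ close to 1:

```python
import math

def ml_series(alpha, z, terms=80):
    """E_alpha(z) by its defining power series."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def ml_lorentzian(alpha, lam, t, n=300_000, cut=30.0):
    """E_alpha(-lam * t**alpha) from the Lorentzian-kernel representation (22)
    (with the 1/alpha Jacobian of x**alpha = u), by the midpoint rule."""
    s, c = math.sin(alpha * math.pi), math.cos(alpha * math.pi)
    h = cut / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += math.exp(-t * u ** (1.0 / alpha)) * lam * s / ((u + lam * c) ** 2 + (lam * s) ** 2)
    return total * h / (alpha * math.pi)

lam, t = 1.0, 1.0
e90 = ml_lorentzian(0.90, lam, t)
e99 = ml_lorentzian(0.99, lam, t)
err90 = abs(e90 - math.exp(-lam * t))
err99 = abs(e99 - math.exp(-lam * t))
```

The Lorentzian kernel concentrates into a Dirac delta at $u=\lambda$ as $\alpha \to 1-$ , so the discrepancy with ${\text{e}}^{-\lambda t}$ shrinks as $\alpha$ increases toward 1.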

4. Complete Monotonicity

The function ${E}_{\alpha }\left(\lambda t\right)$ provides a generalization of the exponential function ${\text{e}}^{\lambda t}$ , since we have from the definition of ${E}_{\alpha }\left(.\right)$ that

${E}_{1}\left(\lambda t\right)=\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left(\lambda t\right)}^{k}}{\Gamma \left(k+1\right)}={\text{e}}^{\lambda t}.$ (31)

We will see below that ${E}_{\alpha }\left(-\lambda t\right),\lambda \in \left(0,\infty \right),t>0$ preserves an interesting property of the exponential function ${\text{e}}^{-\lambda t}$ for $\lambda >0,t>0$ namely

${\text{e}}^{-\lambda t}>0,\text{ }{\left(-1\right)}^{n}\frac{{\text{d}}^{n}}{\text{d}{t}^{n}}{\text{e}}^{-\lambda t}={\lambda }^{n}{\text{e}}^{-\lambda t}>0$ (32)

for $n=0,1,2,\cdots$ . The function ${\text{e}}^{-\lambda t}$ is completely monotonic for $\lambda >0$ and $t\in \left(0,\infty \right)$ . In general a smooth function $f\left(t\right)$ for $t>0$ is said to be completely monotonic if $f\left(.\right)$ is positive and its integer order derivatives satisfy the condition

${\left(-1\right)}^{n}\frac{{\text{d}}^{n}f\left(t\right)}{\text{d}{t}^{n}}>0\text{ }\text{for}\text{\hspace{0.17em}}t>0,\text{\hspace{0.17em}}n=0,1,2,\cdots .$ (33)

The following result characterises completely monotonic functions (see for instance      ).

Theorem 1 (Hausdorff-Bernstein-Widder) A smooth function $f\left(x\right)$ for $x>0$ is completely monotonic if and only if

$f\left(x\right)={\int }_{0}^{\infty }{\text{e}}^{-xt}\text{d}\mu \left(t\right),\text{ }x>0$ (34)

where $\mu \left(t\right)$ is a non-negative Borel measure on $\left[0,\infty \right)$ ; in particular, when $\mu$ has a density, there exists a non-negative distribution $K\left(t\right)\ge 0$ for $t>0$ satisfying the relation

$f\left(x\right)=\mathcal{L}\left[K\left(t\right)\right]={\int }_{0}^{\infty }{\text{e}}^{-xt}K\left(t\right)\text{d}t,\text{ }x>0$ (35)

in which the integral converges and is infinitely differentiable in the region of convergence of the Laplace transform.

Often one uses the above representability for proving the complete monotonicity of a function by finding the relevant function $K\left(.\right)$ . In fact a sufficient condition for the complete monotonicity of a function $f\left(x\right)$ is the existence of a positive locally integrable function $K\left(t\right),t>0$ such that a relation of the type in (35) holds (see  , pp. 61-72). Such a function $K\left(.\right)$ is known as the spectral distribution of the function $f\left(.\right)$ . In our analysis below we use the positivity of the functions ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ and ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-\lambda {t}^{\alpha }\right)$ for $\alpha \in \left(0,1\right),\lambda \in \left(0,\infty \right),t\in \left(0,\infty \right)$ . For this purpose we obtain the spectral representation of these two functions from their respective inverse Laplace transformations. For instance we have from the previous section that

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)={\int }_{0}^{\infty }{\text{e}}^{-xt}{K}_{\alpha }\left(x\right)\text{d}x$ (36)

where

${K}_{\alpha }\left(x\right)=-\frac{1}{\text{π}}Im{\left[\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\right]}_{s=x{\text{e}}^{i\text{π}}}$ (37)

$=\frac{1}{\text{π}}\frac{\lambda {x}^{\alpha -1}\mathrm{sin}\left(\text{π}\alpha \right)}{{x}^{2\alpha }+2\lambda {x}^{\alpha }\mathrm{cos}\left(\text{π}\alpha \right)+{\lambda }^{2}},\text{ }x\ge 0,\alpha \in \left(0,1\right),\lambda >0.$ (38)

It is not difficult to verify that ${x}^{2\alpha }+2\lambda {x}^{\alpha }\mathrm{cos}\left(\text{π}\alpha \right)+{\lambda }^{2}={\left[{x}^{\alpha }+\lambda \mathrm{cos}\left(\alpha \text{π}\right)\right]}^{2}+{\left[\lambda \mathrm{sin}\left(\alpha \text{π}\right)\right]}^{2}>0$ for $x\ge 0$ . It follows that ${K}_{\alpha }\left(x\right)>0$ for $x>0,\alpha \in \left(0,1\right)$ and $\lambda \in \left(0,\infty \right)$ , which by Theorem 1 implies that ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right),t\ge 0$ is completely monotonic.
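The alternating sign pattern (33) for ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ can be observed numerically by differentiating the defining power series term by term (legitimate, since the series is entire). The following Python sketch, an illustration added here with parameter values of our own choosing, checks the first four derivatives:

```python
import math

def ml_deriv(alpha, lam, t, n, terms=120):
    """n-th derivative in t of E_alpha(-lam * t**alpha), by term-by-term
    differentiation of the power series (valid since the series is entire)."""
    total = 0.0
    for k in range(terms):
        coeff = (-lam) ** k / math.gamma(alpha * k + 1)
        fall = 1.0   # (alpha k)(alpha k - 1)...(alpha k - n + 1)
        for j in range(n):
            fall *= alpha * k - j
        if fall:
            total += coeff * fall * t ** (alpha * k - n)
    return total

alpha, lam, t = 0.7, 1.0, 1.0
derivs = [ml_deriv(alpha, lam, t, n) for n in range(4)]
signs = [math.copysign(1.0, d) for d in derivs]
```

The signs alternate as $+,-,+,-$ , so ${\left(-1\right)}^{n}\frac{{\text{d}}^{n}}{\text{d}{t}^{n}}{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ stays positive for these orders, consistent with complete monotonicity.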

Next we consider a special case of the two parameter Mittag-Leffler function ${E}_{\alpha ,\alpha }\left(z\right)$ and show that ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-\lambda {t}^{\alpha }\right),t\ge 0,\lambda \in \left(0,\infty \right),\alpha \in \left(0,1\right)$ is completely monotonic. We begin from the observation that

$\mathcal{L}\left[{t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-\lambda {t}^{\alpha }\right)\right]=\frac{1}{{s}^{\alpha }+\lambda },\text{ }s>0$ (39)

which can be extended for complex s by analytic continuation. As before by using the Hankel-Bromwich contour, we can derive that for $0<\alpha <1$ ,

${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-\lambda {t}^{\alpha }\right)={\mathcal{L}}^{-1}\left[\frac{1}{{s}^{\alpha }+\lambda }\right]$ (40)

$={\int }_{0}^{\infty }{\text{e}}^{-xt}{H}_{\alpha ,\lambda }\left(x\right)\text{d}x$ (41)

where for $x>0,\alpha \in \left(0,1\right),\lambda \in \left(0,\infty \right)$ we have the following:

${H}_{\alpha ,\lambda }\left(x\right)=-\frac{1}{\text{π}}Im\left[\frac{1}{{\left(x{\text{e}}^{i\text{π}}\right)}^{\alpha }+\lambda }\right]$ (42)

$=\frac{1}{\text{π}}\frac{{x}^{\alpha }\mathrm{sin}\left(\alpha \text{π}\right)}{{x}^{2\alpha }+2\lambda {x}^{\alpha }\mathrm{cos}\left(\text{π}\alpha \right)+{\lambda }^{2}}>0.$ (43)

The complete monotonicity of ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-\lambda {t}^{\alpha }\right)$ follows, with its spectral distribution given by (43). In fact ${H}_{\alpha ,\lambda }\left(x\right)$ vanishes when $\alpha$ is an integer and $\lambda >0$ , while ${H}_{\alpha ,\lambda }\left(x\right)>0$ for all $x>0$ when $\alpha \in \left(0,1\right),\lambda >0$ . One of the consequences of the integral representation of ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ is the derivation of an asymptotic estimate of ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ for large values of $t>0$ ; for instance we derive the following asymptotic estimate as $t\to \infty$ :

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)=\frac{1}{\text{π}}{\int }_{0}^{\infty }\frac{{\text{e}}^{-xt}}{x}\frac{\lambda {x}^{\alpha }\mathrm{sin}\left(\alpha \text{π}\right)}{{x}^{2\alpha }+2\lambda {x}^{\alpha }\mathrm{cos}\left(\alpha \text{π}\right)+{\lambda }^{2}}\text{d}x$

$=\frac{1}{\text{π}}{\int }_{0}^{\infty }{\text{e}}^{-u}{u}^{\alpha -1}\frac{{t}^{\alpha }\lambda \mathrm{sin}\left(\alpha \text{π}\right)}{{u}^{2\alpha }+2\lambda {u}^{\alpha }{t}^{\alpha }\mathrm{cos}\left(\alpha \text{π}\right)+{\lambda }^{2}{t}^{2\alpha }}\text{d}u$ (44)

$\approx \frac{1}{\text{π}}\frac{\lambda \mathrm{sin}\left(\alpha \text{π}\right)}{{\lambda }^{2}{t}^{\alpha }}{\int }_{0}^{\infty }{\text{e}}^{-u}{u}^{\alpha -1}\text{d}u$

$=\frac{\mathrm{sin}\left(\alpha \text{π}\right)}{\text{π}\lambda {t}^{\alpha }}\Gamma \left(\alpha \right)\text{ }\text{as}\text{\hspace{0.17em}}t\to \infty .$ (45)

Thus we have that as $t\to \infty$ for $\alpha \in \left(0,1\right)$

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)\approx \frac{1}{\lambda }\frac{\mathrm{sin}\left(\alpha \text{π}\right)}{\text{π}}\frac{1}{{t}^{\alpha }}\Gamma \left(\alpha \right)=\frac{{t}^{-\alpha }}{\lambda \Gamma \left(1-\alpha \right)},\text{ }\text{for}\text{\hspace{0.17em}}\alpha \in \left(0,1\right)$ (46)

on using Euler’s reflection formula

$\frac{\text{π}}{sin\left(\alpha \text{π}\right)}=\Gamma \left(\alpha \right)\Gamma \left(1-\alpha \right),\text{ }\alpha \in \left(0,1\right)$

which indicates that the Mittag-Leffler function ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ decays for large values of t as a power law; the decay is not exponential, as it would be in the case of integer order derivatives. We note that the asymptotic estimate in (46) can be rigorously justified by using Watson’s lemma for loop integrals as discussed at the end of this article (see also for instance pp. 82-83 of  ).
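The power-law estimate (46) can be illustrated numerically: for $\alpha =1/2$ and $\lambda =1$ one has the closed form ${E}_{1/2}\left(-\sqrt{t}\right)={\text{e}}^{t}\mathrm{erfc}\left(\sqrt{t}\right)$ , and the ratio of the exact value to ${t}^{-\alpha }/\left[\lambda \Gamma \left(1-\alpha \right)\right]$ approaches 1 from below. This sketch and its parameter values are illustrative additions:

```python
import math

def exact_half(t):
    """E_{1/2}(-t**0.5) in closed form: exp(t) * erfc(sqrt(t))."""
    return math.exp(t) * math.erfc(math.sqrt(t))

def power_law(t, lam=1.0, alpha=0.5):
    """Right side of (46): t**(-alpha) / (lam * Gamma(1 - alpha))."""
    return t ** (-alpha) / (lam * math.gamma(1.0 - alpha))

ratios = [exact_half(t) / power_law(t) for t in (10.0, 100.0, 400.0)]
```

The ratios increase monotonically toward 1, reflecting the correction term $-1/\left(2t\right)$ in the full asymptotic expansion of ${\text{e}}^{t}\mathrm{erfc}\left(\sqrt{t}\right)$ .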

5. A Variation of Constants Formula

One of the well known and widely used methods of studying initial value problems for integer order differential equations is to convert the differential system into an equivalent Volterra integral system and subsequently study the integral system. This method has been followed by many authors in the analysis of non-integer order differential systems. However, the equivalence of the fractional order differential system and the corresponding integral system is not satisfactorily established in many cases, as has been pointed out in the survey article by  . In this section we derive a variation of constants formula for a fractionalized version of a class of dynamical systems governed by semi-linear differential equations. It is elementary to show that the solution of the initial value problem for the integer order system

$\begin{array}{l}\frac{\text{d}x\left(t\right)}{\text{d}t}=-ax\left(t\right)+bf\left(x\left(t\right)\right),\text{ }t>0\\ x\left(0\right)={x}_{o}\end{array}\right\}$ (47)

where $a,b$ are real constants and $f\left(.\right)$ is a real valued continuous function of the real state variable $x\left(.\right)$ is given by

$x\left(t\right)={x}_{o}{\text{e}}^{-at}+b{\int }_{0}^{t}\text{ }{\text{e}}^{-a\left(t-s\right)}f\left(x\left(s\right)\right)\text{d}s,\text{ }t>0.$ (48)

The Equation (48) is known as a variation of constants formula for (47). The term ${\text{e}}^{-a\left(t-s\right)}$ appearing under the integral in (48) arises from the solution ${\text{e}}^{-at}$ of the linear homogeneous initial value problem

$\frac{\text{d}x\left(t\right)}{\text{d}t}=-ax\left(t\right),\text{ }t>0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\left(0\right)=1$ (49)

and the solution of (49) has the semi-group property

${\text{e}}^{a{t}_{1}}{\text{e}}^{a{t}_{2}}={\text{e}}^{a\left({t}_{1}+{t}_{2}\right)}\text{ }\text{for}\text{\hspace{0.17em}}a,{t}_{1},{t}_{2}\in \left(-\infty ,\infty \right).$ (50)
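As a quick sanity check of (48) (an illustrative addition: we take $f\left(x\right)=x$ , for which the exact solution of (47) is ${x}_{o}{\text{e}}^{-\left(a-b\right)t}$ , and the parameter values are arbitrary), the integral in the variation of constants formula can be evaluated by quadrature:

```python
import math

a, b, x0, t = 1.2, 0.4, 1.0, 2.0

def x_exact(s):
    # with f(x) = x, (47) reads dx/dt = -(a - b) x
    return x0 * math.exp(-(a - b) * s)

# right side of (48) by the midpoint rule
n = 20_000
h = t / n
integral = sum(math.exp(-a * (t - (i + 0.5) * h)) * x_exact((i + 0.5) * h) * h
               for i in range(n))
rhs = x0 * math.exp(-a * t) + b * integral
lhs = x_exact(t)
```

The two sides agree to quadrature accuracy, as (48) requires.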

The solution of the fractional order initial value problem

$\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-ax\left(t\right),\text{ }t>0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\left(0\right)={x}_{o},\text{\hspace{0.17em}}\alpha \in \left(0,1\right)$ (51)

when $a\in \left(-\infty ,\infty \right)$ is given by the one parameter Mittag-Leffler function ${E}_{\alpha }\left(-a{t}^{\alpha }\right)$ . For instance one can verify this as follows by using the definition of

$\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}\left(\text{ }\right)$ and the uniform convergence of the series defining the Mittag-Leffler

function ${E}_{\alpha }\left(\text{ }\right)$ .

$\begin{array}{c}\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}{E}_{\alpha }\left(-a{t}^{\alpha }\right)=\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left(-a{t}^{\alpha }\right)}^{k}}{\Gamma \left(\alpha k+1\right)}\\ =\underset{k=1}{\overset{\infty }{\sum }}\frac{{\left(-a\right)}^{k}}{\Gamma \left(\alpha k+1\right)}\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}\left({t}^{\alpha k}\right)\\ =\underset{k=1}{\overset{\infty }{\sum }}\frac{{\left(-a\right)}^{k}}{\Gamma \left(\alpha k+1\right)}\frac{1}{\Gamma \left(1-\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{-\alpha }\alpha k{s}^{\alpha k-1}\text{d}s\\ =\underset{k=1}{\overset{\infty }{\sum }}\frac{{\left(-a\right)}^{k}}{\Gamma \left(\alpha k+1\right)}\frac{1}{\Gamma \left(1-\alpha \right)}\alpha k{t}^{\alpha \left(k-1\right)}{\int }_{0}^{1}{\left(1-v\right)}^{-\alpha }{v}^{\alpha k-1}\text{d}v\end{array}$

$\begin{array}{l}=-a\text{ }\underset{k=1}{\overset{\infty }{\sum }}\frac{{\left(-a\right)}^{k-1}{t}^{\alpha \left(k-1\right)}\alpha kB\left(1-\alpha ,\alpha k\right)}{\Gamma \left(1-\alpha \right)\Gamma \left(\alpha k+1\right)}\\ =-a\underset{k=1}{\overset{\infty }{\sum }}\frac{{\left(-a{t}^{\alpha }\right)}^{k-1}\alpha k\Gamma \left(1-\alpha \right)\Gamma \left(\alpha k\right)}{\Gamma \left(\alpha k+1\right)\Gamma \left(1-\alpha \right)\Gamma \left(\alpha \left(k-1\right)+1\right)}\\ =-a\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left(-a{t}^{\alpha }\right)}^{k}}{\Gamma \left(\alpha k+1\right)}\\ =-a{E}_{\alpha }\left(-a{t}^{\alpha }\right).\end{array}$

We remark that we have used the following relation between Beta and Gamma functions

$B\left(1-\alpha ,\alpha k\right)=\frac{\Gamma \left(1-\alpha \right)\Gamma \left(\alpha k\right)}{\Gamma \left(\alpha k+1-\alpha \right)}$

in the evaluation of the fractional derivative and subsequent simplification. When $\alpha \ne 1$ , the function ${E}_{\alpha }\left(x\right)$ does not have the semi-group property (50) and it is known that (see  )

${E}_{\alpha }\left(a{\left(t+s\right)}^{\alpha }\right)\ne {E}_{\alpha }\left(a{t}^{\alpha }\right){E}_{\alpha }\left(a{s}^{\alpha }\right),\text{ }t,s\ge 0,\text{ }\text{\hspace{0.17em}}a\in \left(-\infty ,\infty \right).$ (52)

As noted by Diethelm  , the absence of a formula of the type ${\text{e}}^{x-y}={\text{e}}^{x}/{\text{e}}^{y}$ for Mittag-Leffler functions ${E}_{\alpha }\left(x\right),\alpha \ne 1$ has been one of the stumbling blocks in the development of a comprehensive theory of linear fractional differential equations.
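The failure of the semigroup property (52) is easy to exhibit numerically from the series definition; the following Python sketch (an illustrative addition with arbitrarily chosen values $\alpha =1/2$ , $a=1$ , $t=s=1$ ) also confirms that the property is restored at $\alpha =1$ :

```python
import math

def ml(alpha, z, terms=80):
    """E_alpha(z) by its power series."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

alpha, a, t, s = 0.5, 1.0, 1.0, 1.0
lhs = ml(alpha, a * (t + s) ** alpha)                        # E_alpha(a (t+s)^alpha)
rhs = ml(alpha, a * t ** alpha) * ml(alpha, a * s ** alpha)  # product of the two factors
gap = abs(lhs - rhs)
# the semigroup property is restored at alpha = 1:
gap1 = abs(ml(1.0, a * (t + s)) - ml(1.0, a * t) * ml(1.0, a * s))
```

For these values the two sides of (52) differ by an order-one amount, while the corresponding gap at $\alpha =1$ vanishes to machine precision.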

It is the purpose of this section to obtain a variation of constants formula of the type in (48) for the fractional dynamic system governed by the fractional order semi-linear initial value problem,

$\begin{array}{l}\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-ax\left(t\right)+bf\left(x\left(t\right)\right),\text{ }t>0\\ x\left(0\right)={x}_{o}\end{array}\right\}$ (53)

We will use a method of symbolic calculus for this purpose similar to that of Babenko’s method (see Podlubny ). We apply the fractional integral operator ${J}^{\alpha }$ to both sides of (53) so that on using (11)

${J}^{\alpha }\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-a{J}^{\alpha }x\left(t\right)+b{J}^{\alpha }f\left(x\left(t\right)\right)$ (54)

$x\left(t\right)-x\left(0\right)=-a{J}^{\alpha }x\left(t\right)+b{J}^{\alpha }f\left(x\left(t\right)\right)$ (55)

and hence we have an operational equation of the form

$\left(I+a{J}^{\alpha }\right)x\left(t\right)=x\left(0\right)+b{J}^{\alpha }f\left(x\left(t\right)\right)$ (56)

in which I denotes an identity operator. We use the following operational identity (see for instance the Equations (2.7) and (2.8) in  )

${\left(I+a{J}^{\alpha }\right)}^{-1}x=\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}x$ (57)

and observe that

${\left(I+a{J}^{\alpha }\right)}^{-1}\left(I+a{J}^{\alpha }\right)x=\left(I+a{J}^{\alpha }\right){\left(I+a{J}^{\alpha }\right)}^{-1}x=x.$ (58)

We can verify (58) as follows by using (9)

$\begin{array}{l}{\left(I+a{J}^{\alpha }\right)}^{-1}\left(I+a{J}^{\alpha }\right)x\\ =\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}\left(I+a{J}^{\alpha }\right)x\\ =\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}x+\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}\left(a{J}^{\alpha }\right)x\\ =\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}x-\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k+1}x\\ =Ix.\end{array}$ (59)

The other identity

$\left(I+a{J}^{\alpha }\right){\left(I+a{J}^{\alpha }\right)}^{-1}x=x$

can be similarly established. We have from (55) that

${\left(I+a{J}^{\alpha }\right)}^{-1}\left(I+a{J}^{\alpha }\right)x\left(t\right)={\left(I+a{J}^{\alpha }\right)}^{-1}x\left(0\right)+b{\left(I+a{J}^{\alpha }\right)}^{-1}{J}^{\alpha }f\left(x\left(t\right)\right),$

$\begin{array}{c}x\left(t\right)=\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}x\left(0\right)+b\underset{k=0}{\overset{\infty }{\sum }}{\left(-a{J}^{\alpha }\right)}^{k}{J}^{\alpha }f\left(x\left(t\right)\right)\\ =\underset{k=0}{\overset{\infty }{\sum }}{\left(-a\right)}^{k}{J}^{\alpha k}x\left(0\right)+b\underset{k=0}{\overset{\infty }{\sum }}{\left(-a\right)}^{k}{J}^{\alpha k+\alpha }f\left(x\left(t\right)\right)\\ =\underset{k=0}{\overset{\infty }{\sum }}{\left(-a\right)}^{k}\frac{{t}^{\alpha k}}{\Gamma \left(\alpha k+1\right)}x\left(0\right)+b\underset{k=0}{\overset{\infty }{\sum }}{\left(-a\right)}^{k}\frac{1}{\Gamma \left(\alpha k+\alpha \right)}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha k+\alpha -1}f\left(x\left(s\right)\right)\text{d}s\\ =x\left(0\right){E}_{\alpha }\left(-a{t}^{\alpha }\right)+b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}\underset{k=0}{\overset{\infty }{\sum }}\frac{{\left[-a{\left(t-s\right)}^{\alpha }\right]}^{k}}{\Gamma \left(\alpha k+\alpha \right)}f\left(x\left(s\right)\right)\text{d}s\\ =x\left(0\right){E}_{\alpha }\left(-a{t}^{\alpha }\right)+b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)f\left(x\left(s\right)\right)\text{d}s\end{array}$

where the interchange of the order of summation and integration is justified by the uniform convergence of the series on finite intervals when $f\left(.\right)$ is assumed to be continuous. One can also show, by reversing the above sequence of steps, that if $x\left(t\right)$ is a solution of this integral equation then $x\left(t\right)$ is a solution of the fractional order initial value problem (53). A solution of the fractional order integral equation satisfies the initial value of the fractional order initial value problem; for this it is sufficient to verify that

$\underset{t\to 0+}{\mathrm{lim}}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)f\left(x\left(s\right)\right)\text{d}s=0\text{ }\text{since}\text{\hspace{0.17em}}\text{ }{E}_{\alpha }\left(0\right)=1.$

One can verify by direct computation that

$\frac{1}{a}\frac{\text{d}}{\text{d}s}{E}_{\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)={\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right).$

By the continuity of $f\left(.\right)$ , the complete monotonicity of ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)$ and the mean value theorem of integral calculus we have

$\begin{array}{l}{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)f\left(x\left(s\right)\right)\text{d}s\\ =f\left(x\left(\theta t\right)\right){\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)\text{d}s\\ =f\left(x\left(\theta t\right)\right){\int }_{0}^{t}\frac{1}{a}\frac{\text{d}}{\text{d}s}{E}_{\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)\text{d}s\\ =\frac{1}{a}f\left(x\left(\theta t\right)\right){{E}_{\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)|}_{s=0}^{s=t}\\ =\frac{1}{a}f\left(x\left(\theta t\right)\right)\left[{E}_{\alpha }\left(0\right)-{E}_{\alpha }\left(-a{t}^{\alpha }\right)\right]\\ \to 0\text{ }\text{ }\text{as}\text{\hspace{0.17em}}\text{ }t\to 0\end{array}$

where $\theta$ is some number in the interval (0,1). Since ${E}_{\alpha }\left(0\right)=1$ , any solution of the fractional integral equation satisfies the initial condition of the initial value problem (53).
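The operational identities (57)-(59) can be exercised concretely: on polynomials in ${t}^{\alpha }$ the operator ${J}^{\alpha }$ acts by ${J}^{\alpha }{t}^{\alpha k}=\frac{\Gamma \left(\alpha k+1\right)}{\Gamma \left(\alpha k+\alpha +1\right)}{t}^{\alpha \left(k+1\right)}$ , so the truncated Neumann series applied to $x\left(t\right)=1$ should reproduce the series of ${E}_{\alpha }\left(-a{t}^{\alpha }\right)$ , and $\left(I+a{J}^{\alpha }\right)$ should map the sum back to 1 up to the truncation tail. The following Python sketch (an illustrative addition; $\alpha$ and $a$ are arbitrary choices) represents such polynomials as dictionaries mapping $k$ to the coefficient of ${t}^{\alpha k}$ :

```python
import math

ALPHA, A = 0.6, 1.3

def J_alpha(p):
    """Apply J^ALPHA to sum_k p[k] * t**(ALPHA*k), using
    J^alpha t^(alpha k) = Gamma(alpha k + 1)/Gamma(alpha k + alpha + 1) t^(alpha(k+1))."""
    return {k + 1: c * math.gamma(ALPHA * k + 1) / math.gamma(ALPHA * (k + 1) + 1)
            for k, c in p.items()}

def add(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(p, c):
    return {k: c * v for k, v in p.items()}

# truncated Neumann series (57) applied to x(t) = 1
term = {0: 1.0}
total = dict(term)
for _ in range(60):
    term = scale(J_alpha(term), -A)
    total = add(total, term)

# expected coefficients: the series of E_ALPHA(-A t^ALPHA)
expected = {k: (-A) ** k / math.gamma(ALPHA * k + 1) for k in range(20)}

# (I + A J^alpha) applied to the sum should give back 1 (plus a tiny tail term)
back = add(total, scale(J_alpha(total), A))
```

The coefficient cancellation in `back` mirrors the telescoping in (59).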

It will follow that if $x\left(t\right),t>0$ is a solution of the more general initial value problem

$\begin{array}{l}\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-ax\left(t\right)+bg\left(t,x\left(t\right)\right),\text{ }t>0\\ x\left(0\right)={x}_{o}\end{array}\right\}$ (60)

then it is a solution of the singular Volterra integral equation

$x\left(t\right)=x\left(0\right){E}_{\alpha }\left(-a{t}^{\alpha }\right)+b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)g\left(s,x\left(s\right)\right)\text{d}s,t>0$ (61)

and vice versa. The Formula (61) is the variation of constants formula for fractional order systems of the type (60). It is well known that if the forcing term is independent of the state variable, one can use the method of Laplace transforms to derive a formula similar to (61). We note that one can use the above procedure to derive a variation of constants formula for more general semi-linear systems.
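As a numerical illustration of (61) (added here; we take $g\left(t,x\right)=x$ , for which the exact solution of (60) is ${x}_{o}{E}_{\alpha }\left(-\left(a-b\right){t}^{\alpha }\right)$ , and the parameter values are arbitrary), the singular kernel can be tamed with the substitution $u={\left(t-s\right)}^{\alpha }$ , under which ${\left(t-s\right)}^{\alpha -1}\text{d}s$ becomes $\frac{1}{\alpha }\text{d}u$ :

```python
import math

def ml(alpha, z, terms=60):
    """E_alpha(z) by its power series."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def ml2(alpha, beta, z, terms=60):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z)."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

alpha, a, b, x0, t = 0.5, 1.0, 0.3, 1.0, 1.0

def x_exact(s):
    # with g(t, x) = x, (60) reads d^alpha x/dt^alpha = -(a - b) x
    return x0 * ml(alpha, -(a - b) * s ** alpha)

# right side of (61); u = (t - s)**alpha turns the integral into a regular one
n = 2000
h = t ** alpha / n
integral = 0.0
for i in range(n):
    u = (i + 0.5) * h
    integral += ml2(alpha, alpha, -a * u) * x_exact(t - u ** (1.0 / alpha)) * h
rhs = x0 * ml(alpha, -a * t ** alpha) + b * integral / alpha
lhs = x_exact(t)
```

The exact solution satisfies the integral equation to quadrature accuracy, which is consistent with the claimed equivalence of (60) and (61).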

It is of some interest to examine whether the Formula (61) leads to the usual variation of constants formula of the integer order differential equation corresponding to the case $\alpha =1$ . We have already seen from (30) that

${E}_{\alpha }\left(-a{t}^{\alpha }\right)\to {\text{e}}^{-at},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\alpha \to 1-\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}t\in \left(0,\infty \right).$

We have to examine the limiting value of ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)$ as $\alpha \to 1-$ in the integral in (61). It has already been shown at the end of Section 2 that the

fractional derivative in (60) approaches the integer order derivative $\frac{\text{d}x\left(t\right)}{\text{d}t}$

when $\alpha \to 1-$ . It will follow from (40)-(43) that

${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)=\frac{1}{\text{π}}{\int }_{0}^{\infty }{\text{e}}^{-xt}\frac{{x}^{\alpha }\mathrm{sin}\left(\alpha \text{π}\right)}{{\left[{x}^{\alpha }+a\mathrm{cos}\left(\alpha \text{π}\right)\right]}^{2}+{\left[a\mathrm{sin}\left(\alpha \text{π}\right)\right]}^{2}}\text{d}x$ (62)

$=\frac{1}{a\alpha \text{π}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-t{u}^{\frac{1}{\alpha }}}\frac{{u}^{\frac{1}{\alpha }}a\mathrm{sin}\left(\alpha \text{π}\right)}{{\left[u+a\mathrm{cos}\left(\alpha \text{π}\right)\right]}^{2}+{\left[a\mathrm{sin}\left(\alpha \text{π}\right)\right]}^{2}}\text{d}u.$ (63)

Proceeding as in the examination of $\underset{\alpha \to 1-}{\mathrm{lim}}{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ in Section 3 (see Equation (3.17)), we obtain

$\underset{\alpha \to 1-}{lim}{t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)=\frac{1}{a}{\int }_{0}^{\infty }{\text{e}}^{-tu}u\delta \left(u-a\right)\text{d}u$ (64)

$={\text{e}}^{-at},\text{ }t\in \left(0,\infty \right)$ (65)

so that the Formula (61) coincides with

$x\left(t\right)=x\left(0\right){\text{e}}^{-at}+b{\int }_{0}^{t}{\text{e}}^{-a\left(t-s\right)}g\left(s,x\left(s\right)\right)\text{d}s,\text{ }t>0$ (66)

of the integer order differential equation

$\frac{\text{d}x\left(t\right)}{\text{d}t}=-ax\left(t\right)+bg\left(t,x\left(t\right)\right),\text{ }t>0.$

Thus the variation of constants formula of the fractionalized semi-linear differential equation reduces to the usual variation of constants formula of the integer order differential equation when the order of the fractional derivative approaches an integer.
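The two limits underlying this reduction, ${E}_{\alpha }\left(-a{t}^{\alpha }\right)\to {\text{e}}^{-at}$ and ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)\to {\text{e}}^{-at}$ as $\alpha \to 1-$ , can be observed numerically. The Python sketch below (illustrative values $a=t=1$ ; the series evaluation is adequate for these small arguments) shows both quantities approaching ${\text{e}}^{-1}$ .

```python
from math import gamma, exp

def ml(alpha, beta, z, terms=120):
    # two-parameter Mittag-Leffler series E_{alpha,beta}(z)
    return sum(z**k / gamma(alpha*k + beta) for k in range(terms))

a, t = 1.0, 1.0
for alpha in (0.9, 0.99, 0.999):
    e1 = ml(alpha, 1.0, -a*t**alpha)                      # E_alpha(-a t^alpha)
    e2 = t**(alpha - 1.0) * ml(alpha, alpha, -a*t**alpha) # t^(alpha-1) E_{alpha,alpha}(-a t^alpha)
    # both differences shrink as alpha -> 1-
    print(alpha, e1 - exp(-a*t), e2 - exp(-a*t))
```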

6. Fractionalization, Global Existence and Stability

In this section we consider the dynamics of a fractionalized version of the initial value problem

$\frac{\text{d}x\left(t\right)}{\text{d}t}=-ax\left(t\right)+bf\left(x\left(t\right)\right),\text{ }x\left(0\right)={x}_{o}.$

It is elementary that the above integer order differential equation is equivalent to the integral equation

$x\left(t\right)=x\left(0\right)-a{\int }_{0}^{t}x\left(s\right)\text{d}s+b{\int }_{0}^{t}\text{ }f\left(x\left(s\right)\right)\text{d}s,\text{ }t>0.$

One of the methods of fractionalization of the above initial value problem is to replace the above integral equation by a fractional integral equation of the form

$x\left(t\right)=x\left(0\right)-a{\int }_{0}^{t}\frac{{\left(t-s\right)}^{\alpha -1}}{\Gamma \left(\alpha \right)}x\left(s\right)\text{d}s+b{\int }_{0}^{t}\frac{{\left(t-s\right)}^{\alpha -1}}{\Gamma \left(\alpha \right)}f\left(x\left(s\right)\right)\text{d}s,\text{ }t>0$

which is the same as

$x\left(t\right)=x\left(0\right)-a{J}^{\alpha }x\left(t\right)+b{J}^{\alpha }f\left(x\left(t\right)\right)$

and, by virtue of Lemma 2.3, this is equivalent to

$\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}x\left(t\right)=\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}x\left(0\right)-a\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}{J}^{\alpha }x\left(t\right)+b\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}{J}^{\alpha }f\left(x\left(t\right)\right)$

that is,

$\frac{{\text{d}}^{\alpha }}{\text{d}{t}^{\alpha }}x\left(t\right)=-ax\left(t\right)+bf\left(x\left(t\right)\right).$

It is seen that this method amounts to the replacement of the integer order derivative by the fractional order derivative. Such a method has been predominantly used in the literature on fractional order differential equations. We remark that the integral equation

$x\left(t\right)=x\left(0\right)-a{\int }_{0}^{t}\text{ }x\left(s\right)\text{d}s+b{\int }_{0}^{t}\text{ }f\left(x\left(s\right)\right)\text{d}s$

can also be fractionalized by the fractional integral equation

$x\left(t\right)=x\left(0\right)-a{\int }_{0}^{t}\frac{{\left(t-s\right)}^{\alpha -1}}{\Gamma \left(\alpha \right)}x\left(s\right)\text{d}s+b{\int }_{0}^{t}\frac{{\left(t-s\right)}^{\beta -1}}{\Gamma \left(\beta \right)}f\left(x\left(s\right)\right)\text{d}s$

where $\alpha \in \left(0,1\right),\beta \in \left(0,1\right)$ and $\alpha \ne \beta$ and this fractional integral equation is the same as that of

$x\left(t\right)=x\left(0\right)-a{J}^{\alpha }x\left(t\right)+b{J}^{\beta }f\left(x\left(t\right)\right)$

and this is not equivalent to the replacement of the integer order derivative by a fractional order derivative. Thus there is no unique way of fractionalizing initial value problems with integer order differential equations. In this article we study the dynamics of the fractionalized version with $\alpha =\beta$ ; the more general case $\alpha \ne \beta$ will be considered in a future article.
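The fractional integral ${J}^{\alpha }$ appearing in these fractionalizations can be approximated directly. The short Python sketch below (illustrative values; the substitution $w={\left(t-s\right)}^{\alpha }$ again removes the weak singularity) checks the classical power rule ${J}^{\alpha }t=\frac{\Gamma \left(2\right)}{\Gamma \left(2+\alpha \right)}{t}^{1+\alpha }$ .

```python
from math import gamma

def frac_integral(alpha, f, t, n=4000):
    # Riemann-Liouville fractional integral (J^alpha f)(t); with w = (t-s)^alpha
    # it becomes the non-singular integral
    #   (1/Gamma(alpha+1)) * int_0^{t^alpha} f(t - w^(1/alpha)) dw,
    # approximated here by the midpoint rule
    h = t**alpha / n
    total = sum(f(t - ((j + 0.5)*h)**(1.0/alpha)) for j in range(n))
    return total * h / gamma(alpha + 1.0)

# power rule check with f(s) = s
alpha, t = 0.5, 2.0
num = frac_integral(alpha, lambda s: s, t)
exact = gamma(2.0) / gamma(2.0 + alpha) * t**(1.0 + alpha)
print(num, exact)  # agree to quadrature accuracy
```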

One of the methods used in the theory of integer order differential equations to establish the existence of globally defined solutions is based on an application of a technique proposed by  . In this method one usually modifies a norm in a suitable Banach space and uses the induced metric in the application of the well known contraction mapping principle. The modified norm is equivalent to the original norm, and a contraction in the new metric is also a contraction in the original metric. This method has been used by several authors (see     ).

Let $B=B\left[0,T\right]$ denote the linear space of real valued continuous functions defined on $\left[0,T\right]$ where T denotes an arbitrary positive number. For $x\in B$ , we define a norm ${‖\text{ }.\text{ }‖}_{\lambda }$ given by

${‖x‖}_{\lambda }=\underset{t\in \left[0,T\right]}{\mathrm{sup}}\left[{\text{e}}^{-\lambda t}|x\left(t\right)|\right],\text{ }x\in B$ (67)

for a suitable positive number $\lambda$ which will be determined below. We remark that the norm ${‖\text{ }.\text{ }‖}_{\lambda }$ is equivalent to the usual supremum norm defined by $‖\text{ }.\text{ }‖$ where

$‖x‖=\underset{t\in \left[0,T\right]}{\mathrm{sup}}|x\left(t\right)|,\text{ }x\in B$ (68)

and hence with the metric induced by ${‖\text{ }.\text{ }‖}_{\lambda }$ the space $B=B\left[0,T\right]$ becomes a complete metric space. We recall that by the variation of constants formula, every solution of the fractional order initial value problem

$\begin{array}{l}\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-ax\left(t\right)+bf\left(x\left(t\right)\right)\\ x\left(0\right)={x}_{o}\end{array}\right\},\text{\hspace{0.17em}}\text{\hspace{0.17em}}t>0,\text{\hspace{0.17em}}\alpha \in \left(0,1\right)$ (69)

is a solution of the fractional order integral equation

$x\left(t\right)={x}_{o}{E}_{\alpha }\left(-a{t}^{\alpha }\right)+b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)f\left(x\left(s\right)\right)\text{d}s,\text{ }t>0$ (70)

and every solution of (70) is a solution of (69). We can now formulate the following result on the global existence of solutions of (69).

Theorem 2 Let a be a positive number and let $\alpha \in \left(0,1\right)$ . Let b be any real number. Suppose that $f\left(.\right)$ is continuous and satisfies a global Lipschitz condition such that

$|f\left(x\right)-f\left(y\right)|\le K|x-y|,\text{ }x,y\in \left(-\infty ,\infty \right)$ (71)

for some positive number K. Then the integral equation (70) has a unique solution in B.

Proof. We define an operator F acting on the functions of B by the following:

$\left(Fx\right)\left(t\right)={x}_{o}{E}_{\alpha }\left(-a{t}^{\alpha }\right)+b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)f\left(x\left(s\right)\right)\text{d}s,\text{\hspace{0.17em}}t>0.$ (72)

From the properties of ${E}_{\alpha }\left(.\right),{E}_{\alpha ,\alpha }\left(.\right)$ and the continuity of $f\left(.\right)$ it will follow that F maps B into B. We have for all $x,y\in B$

$Fx-Fy=b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)\left[f\left(x\left(s\right)\right)-f\left(y\left(s\right)\right)\right]\text{d}s.$ (73)

By using the positivity of ${E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)$ and the Lipschitz continuity of f, we have from (73) that

$|Fx-Fy|\le |b|{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)K|x\left(s\right)-y\left(s\right)|\text{d}s$ (74)

$\begin{array}{l}{\text{e}}^{-\lambda t}|Fx-Fy|\\ \le |b|K{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{\text{e}}^{-\lambda \left(t-s\right)}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right){\text{e}}^{-\lambda s}|x\left(s\right)-y\left(s\right)|\text{d}s\\ \le |b|K{‖x-y‖}_{\lambda }{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{\text{e}}^{-\lambda \left(t-s\right)}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)\text{d}s\\ \le |b|K{‖x-y‖}_{\lambda }M{\int }_{0}^{t}\text{ }{s}^{\alpha -1}{\text{e}}^{-\lambda s}\text{ }\text{d}s\\ \le |b|K{‖x-y‖}_{\lambda }M{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\lambda s}{s}^{\alpha -1}\text{ }\text{d}s\end{array}$ (75)

${‖Fx-Fy‖}_{\lambda }\le |b|K{‖x-y‖}_{\lambda }\frac{M\Gamma \left(\alpha \right)}{{\lambda }^{\alpha }}$ (76)

in which we have used the boundedness of the function ${E}_{\alpha ,\alpha }\left(.\right)$ on the positive real line so that

$M=\underset{t\in \left(0,\infty \right)}{\mathrm{sup}}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right).$

We choose the real number $\lambda$ such that

$\frac{|b|KM\Gamma \left(\alpha \right)}{{\lambda }^{\alpha }}<1$ (77)

from which it will follow that $F:B↦B$ is a contraction and by the well known contraction mapping principle, $F:B↦B$ has a unique fixed point which is a solution of (70) and hence that of (69). Since T is a positive, otherwise unrestricted number, the solution of (70) and hence that of (69) is defined on $\left[0,\infty \right)$ . This completes the proof.
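The contraction argument above is constructive: Picard iteration of the operator F converges to the solution of (70). In the Python sketch below (hypothetical, illustrative choices: $f=\mathrm{sin}$ with Lipschitz constant $K=1$ , $a=2$ , $b=0.5$ ), the kernel is integrated exactly over each subinterval by means of the derivative identity $\frac{\text{d}}{\text{d}s}{E}_{\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)=a{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)$ used in Section 5, and the sup-distances between successive iterates decay roughly geometrically, as the contraction estimate predicts.

```python
from math import gamma, sin

def ml(alpha, z, terms=100):
    # one-parameter Mittag-Leffler series E_alpha(z)
    return sum(z**k / gamma(alpha*k + 1.0) for k in range(terms))

# hypothetical data: f = sin (K = 1) with a > K|b|
alpha, a, b, x0 = 0.7, 2.0, 0.5, 1.0
T, n = 1.0, 40
h = T / n
E = [ml(alpha, -a*(i*h)**alpha) for i in range(n + 1)]  # E_alpha(-a t_i^alpha)

def F(x):
    # one application of the operator (72); the kernel is integrated exactly
    # on each subinterval, f(x(s)) is held constant at the left endpoint
    out = [x0]
    for i in range(1, n + 1):
        acc = x0*E[i]
        for j in range(i):
            acc += b*sin(x[j]) * (E[i - j - 1] - E[i - j]) / a
        out.append(acc)
    return out

x = [x0]*(n + 1)
dists = []
for _ in range(6):
    xn = F(x)
    dists.append(max(abs(u - v) for u, v in zip(x, xn)))
    x = xn
print(dists)  # successive sup-distances shrink by a roughly constant factor
```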

The asymptotic behaviour as $t\to \infty$ of solutions of differential equations and the stability of equilibria of dynamic systems have been well studied in the case of integer order differential equations. There are several articles devoted to the general analysis of the stability of equilibria of non-integer order differential equations (see for instance the articles by  -  ) and to applications of delay fractional differential equations in epidemic analysis (see   ). The asymptotic behaviour of solutions of fractional order systems can be quite different from that of the corresponding integer order systems. For instance, an unstable integer order system can become stable when the system is fractionalized; also, new asymptotic behaviour not found in the integer order system can emerge in fractionalized systems (see   ). We shall now proceed to consider the asymptotic behaviour of solutions of the semi-linear fractional order system

$\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-ax\left(t\right)+bf\left(x\left(t\right)\right),\text{ }t>0,\text{\hspace{0.17em}}\alpha \in \left(0,1\right).$ (78)

We suppose that there exists a unique real constant, say ${x}^{*}\ne 0$ , such that

$a{x}^{*}=bf\left({x}^{*}\right).$ (79)

Since the Caputo derivative of a constant is zero, the constant ${x}^{*}$ is an equilibrium of (78). Although several authors have investigated the stability of equilibrium solutions of fractional order differential equations, there are not many investigations dealing with the global asymptotic stability of nonlinear systems. In the following we use once again the complete monotonicity of the one and two parameter Mittag-Leffler functions ${E}_{\alpha }\left(-{t}^{\alpha }\right)$ and ${E}_{\alpha ,\alpha }\left(-{t}^{\alpha }\right),t>0$ in our analysis of the stability of the equilibrium ${x}^{*}$ .

Definition 3 The equilibrium ${x}^{*}$ of (78) with $0<\alpha <1$ is said to be globally stable if for arbitrary $ϵ>0$ , every solution $x\left(t\right)$ of (78) satisfies

$|x\left(t\right)-{x}^{*}|<ϵ,\text{ }t>0.$

The equilibrium ${x}^{*}$ is said to be globally asymptotically stable if it is stable and furthermore satisfies

$|x\left(t\right)-{x}^{*}|\to 0\text{ }\text{as}\text{\hspace{0.17em}}t\to \infty .$

In the case of fractional order differential equations there is another concept of stability, which is different from the exponential asymptotic stability common in the theory of integer order differential equations.

Definition 4 The equilibrium ${x}^{*}$ of (78) is said to be Mittag-Leffler stable if every solution of (78) satisfies

$|x\left(t\right)-{x}^{*}|\le {\left[m\left(x\left(0\right)\right){E}_{\alpha }\left(-a{t}^{\alpha }\right)\right]}^{p},\text{ }t>0$ (80)

where $m\left(.\right)$ is locally Lipschitz with $m\left({x}^{*}\right)=0,m\left(x\right)\ge 0$ and p is some positive number.

Theorem 3 Suppose ${x}^{*}$ is a unique equilibrium of (78) with

$a>K|b|\text{ }\text{and}\text{ }\alpha \in \left(0,1\right).$ (81)

Then the equilibrium ${x}^{*}$ of (78) is globally Mittag-Leffler stable.

Proof. We have from

$\begin{array}{l}\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}=-ax\left(t\right)+bf\left(x\left(t\right)\right)\\ \frac{{\text{d}}^{\alpha }{x}^{*}}{\text{d}{t}^{\alpha }}=-a{x}^{*}+bf\left({x}^{*}\right)\end{array}\right\}$ (82)

that

$\frac{{\text{d}}^{\alpha }\left[x\left(t\right)-{x}^{*}\right]}{\text{d}{t}^{\alpha }}=-a\left[x\left(t\right)-{x}^{*}\right]+b\left[f\left(x\left(t\right)\right)-f\left({x}^{*}\right)\right],\text{ }\text{ }t>0$ (83)

and hence by the variation of constants formula

$\begin{array}{c}x\left(t\right)-{x}^{*}=\left[x\left(0\right)-{x}^{*}\right]{E}_{\alpha }\left(-a{t}^{\alpha }\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+b{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)\left[f\left(x\left(s\right)\right)-f\left({x}^{*}\right)\right]\text{d}s.\end{array}$ (84)

Now we use the non-negative and complete monotone character of the functions ${E}_{\alpha }\left(-a{t}^{\alpha }\right)$ and ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)$ and obtain from (84) that

$\begin{array}{c}|x\left(t\right)-{x}^{*}|\le |x\left(0\right)-{x}^{*}|{E}_{\alpha }\left(-a{t}^{\alpha }\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+|b|{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)K|x\left(s\right)-{x}^{*}|\text{d}s\end{array}$ (85)

Let $M\left(t\right),t\in \left[0,\infty \right)$ be a real valued and non-negative function such that

$\begin{array}{c}|x\left(t\right)-{x}^{*}|=|x\left(0\right)-{x}^{*}|{E}_{\alpha }\left(-a{t}^{\alpha }\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+|b|{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{\left(t-s\right)}^{\alpha }\right)\left[K|x\left(s\right)-{x}^{*}|-M\left(s\right)\right]\text{d}s.\end{array}$ (86)

By using the convolution theorem of Laplace transforms, we derive from (86) that

$\begin{array}{l}\mathcal{L}\left[|u\left(t\right)|\right]\\ =|u\left(0\right)|\mathcal{L}\left[{E}_{\alpha }\left(-a{t}^{\alpha }\right)\right]+|b|\mathcal{L}\left[{t}^{\alpha -1}{E}_{\alpha ,\alpha }\left(-a{t}^{\alpha }\right)\right]\left[K\mathcal{L}\left(|u\left(t\right)|\right)-\mathcal{L}\left[M\left(t\right)\right]\right]\end{array}$ (87)

where $\mathcal{L}\left[.\right]$ denotes the Laplace transform operator and

$u\left(t\right)\equiv x\left(t\right)-{x}^{*},\text{ }t\ge 0.$ (88)

We have from the known Laplace transforms of the one and two parameter Mittag-Leffler functions that

$\mathcal{L}\left[|u\left(t\right)|\right]=|u\left(0\right)|\frac{{s}^{\alpha -1}}{{s}^{\alpha }+a}+|b|\frac{1}{{s}^{\alpha }+a}\left\{K\mathcal{L}\left[|u\left(t\right)|\right]-\mathcal{L}\left[M\left(t\right)\right]\right\}$ (89)

which simplifies to

$\mathcal{L}\left[|u\left(t\right)|\right]=|u\left(0\right)|\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\left(a-K|b|\right)}-|b|\frac{1}{{s}^{\alpha }+\left(a-K|b|\right)}\mathcal{L}\left[M\left(t\right)\right]$ (90)

We can invert the Laplace transforms in (90) and obtain

$\begin{array}{c}|u\left(t\right)|=|u\left(0\right)|{E}_{\alpha }\left[-\left(a-K|b|\right){t}^{\alpha }\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-|b|{\int }_{0}^{t}{\left(t-s\right)}^{\alpha -1}{E}_{\alpha ,\alpha }\left[-\left(a-K|b|\right){\left(t-s\right)}^{\alpha }\right]M\left(s\right)\text{d}s\end{array}$ (91)

On invoking the complete monotonicity (in particular, the non-negativity) of ${t}^{\alpha -1}{E}_{\alpha ,\alpha }\left[-\left(a-K|b|\right){t}^{\alpha }\right]$ and the non-negativity of $M\left(t\right)$ , we derive from (91) that

$|u\left(t\right)|\le |u\left(0\right)|{E}_{\alpha }\left[-\left(a-K|b|\right){t}^{\alpha }\right],\text{ }t>0$ (92)

which is the same as

$|x\left(t\right)-{x}^{*}|\le |x\left(0\right)-{x}^{*}|{E}_{\alpha }\left[-\left(a-K|b|\right){t}^{\alpha }\right],\text{ }t>0$ (93)

from which the result will follow. The proof is complete.

We note that when $\alpha \to 1-$ the above inequality leads to the well known exponential asymptotic stability of the corresponding integer order differential equation.
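The Mittag-Leffler bound (93) can be observed numerically by solving (78) through Picard iteration of the variation of constants formula (70). The Python sketch below uses hypothetical, illustrative data ( $f=\mathrm{cos}$ with $K=1$ , $a=2$ , $b=1$ , so that $a>K|b|$ and Theorem 3 applies); the small tolerance in the final check absorbs discretization error.

```python
from math import gamma, cos

def ml(alpha, z, terms=120):
    # one-parameter Mittag-Leffler series E_alpha(z)
    return sum(z**k / gamma(alpha*k + 1.0) for k in range(terms))

# hypothetical data: f = cos (K = 1) and a > K|b|
alpha, a, b, x0 = 0.8, 2.0, 1.0, 2.0
xstar = 0.0
for _ in range(100):                 # equilibrium: a x* = b cos(x*)
    xstar = b*cos(xstar)/a

T, n = 2.0, 80
h = T / n
E = [ml(alpha, -a*(i*h)**alpha) for i in range(n + 1)]

def F(x):
    # the operator of the variation of constants formula (70), with the
    # kernel integrated exactly on each subinterval
    out = [x0]
    for i in range(1, n + 1):
        acc = x0*E[i]
        for j in range(i):
            acc += b*cos(x[j]) * (E[i - j - 1] - E[i - j]) / a
        out.append(acc)
    return out

x = [x0]*(n + 1)
for _ in range(30):                  # Picard iteration converges by Theorem 2
    x = F(x)

# Mittag-Leffler bound (93) with a - K|b| = 1
bound = [abs(x0 - xstar)*ml(alpha, -(a - abs(b))*(i*h)**alpha) for i in range(n + 1)]
ok = all(abs(x[i] - xstar) <= bound[i] + 1e-3 for i in range(n + 1))
print(ok)
```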

7. Some Remarks

We shall briefly consider the power law decay nature of the function ${E}_{\alpha }\left[-\left(a-K|b|\right){t}^{\alpha }\right]$ for $\alpha \in \left(0,1\right)$ by using Watson’s lemma for loop contours (see for instance  , pp. 82-83 or  , pp. 250-252) suitable for asymptotic expansions. We let for convenience $\lambda =a-K|b|$ and note that

$\begin{array}{c}\mathcal{L}\left[{E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)\right]=\frac{{s}^{\alpha -1}}{{s}^{\alpha }+\lambda }\\ =\underset{k=1}{\overset{n}{\sum }}\frac{{\left(-1\right)}^{k-1}{s}^{k\alpha -1}}{{\lambda }^{k}}+\frac{{\left(-1\right)}^{n}{s}^{\left(n+1\right)\alpha -1}}{{\lambda }^{n}\left({s}^{\alpha }+\lambda \right)},\text{ }s\to 0.\end{array}$ (94)

It will follow from the inversion of Laplace transforms along loop contours that

${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)=\frac{1}{2\text{π}i}{\int }_{-\infty }^{0+}\text{ }{\text{e}}^{st}\left\{\underset{k=1}{\overset{n}{\sum }}\frac{{\left(-1\right)}^{k-1}{s}^{k\alpha -1}}{{\lambda }^{k}}+\frac{{\left(-1\right)}^{n}{s}^{\left(n+1\right)\alpha -1}}{{\lambda }^{n}\left({s}^{\alpha }+\lambda \right)}\right\}\text{d}s$

$=\underset{k=1}{\overset{n}{\sum }}\frac{{\left(-1\right)}^{k-1}}{{\lambda }^{k}}\frac{{t}^{-k\alpha }}{\Gamma \left(1-k\alpha \right)}+O\left({t}^{-\left(n+1\right)\alpha }\right),\text{ }t\to \infty .$ (95)

The notation ${\int }_{-\infty }^{0+}$ denotes a loop integral: the path of integration starts at $s=-\infty$ with $arg\text{ }s=-\text{π}$ , encircles the origin once in the counterclockwise direction and returns to $s=-\infty$ with $arg\text{ }s=\text{π}$ . We have from (93) and (95) that

$|x\left(t\right)-{x}^{*}|\le |x\left(0\right)-{x}^{*}|\left\{\underset{k=1}{\overset{n}{\sum }}\frac{{\left(-1\right)}^{k-1}}{{\left(a-K|b|\right)}^{k}}\frac{{t}^{-k\alpha }}{\Gamma \left(1-k\alpha \right)}+O\left({t}^{-\left(n+1\right)\alpha }\right)\right\}$

which for $n=1$ reads

$|x\left(t\right)-{x}^{*}|\le \frac{|x\left(0\right)-{x}^{*}|}{a-K|b|}\frac{{t}^{-\alpha }}{\Gamma \left(1-\alpha \right)}\text{ }\text{as}\text{\hspace{0.17em}}\text{ }t\to \infty ,\text{\hspace{0.17em}}\alpha \in \left(0,1\right).$

The non-exponential and power law nature of the convergence of solutions to the equilibrium displayed by the above relation is ascribed to the long memory effect of the fractional derivative.
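The power law decay can also be observed numerically. Since the defining series of ${E}_{\alpha }$ is numerically unusable for large negative arguments, the Python sketch below evaluates ${E}_{\alpha }\left(-\lambda {t}^{\alpha }\right)$ from its real integral (spectral) representation, the one parameter analogue of (62); this representation is standard but is quoted here without derivation, and the parameter values are illustrative.

```python
from math import gamma, exp, sin, cos, pi

# spectral representation of E_alpha (one parameter analogue of (62)); after the
# substitution w = r^alpha it reads
#   E_alpha(-lam t^alpha) = (lam sin(alpha pi)/(alpha pi)) *
#       int_0^inf exp(-t w^(1/alpha)) dw / ((w + lam cos(alpha pi))^2 + (lam sin(alpha pi))^2)
alpha, lam, t = 0.6, 1.0, 50.0
W, n = 5.0, 20000                    # truncation and midpoint steps
h = W / n
c, s = cos(alpha*pi), sin(alpha*pi)
I = sum(exp(-t*((j + 0.5)*h)**(1.0/alpha)) /
        (((j + 0.5)*h + lam*c)**2 + (lam*s)**2) for j in range(n))
E = lam*s/(alpha*pi) * h * I

# truncated asymptotic expansion (95)
lead = t**(-alpha) / (lam*gamma(1.0 - alpha))                   # one term
two  = lead - t**(-2*alpha) / (lam**2 * gamma(1.0 - 2*alpha))   # two terms
print(E, lead, two)  # the two-term expansion is markedly closer
```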

We conclude with the following observation regarding the fractionalization of systems of the form

$\frac{\text{d}x\left(t\right)}{\text{d}t}=-ax\left(t\right)+bf\left(x\left(t\right)\right)+p.$ (96)

For instance the following are some of the fractionalized versions of (96):

$A\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}+B\frac{{\text{d}}^{\beta }x\left(t\right)}{\text{d}{t}^{\beta }}=-ax\left(t\right)+bf\left(x\left(t\right)\right)+p$ (97)

where $\alpha ,\beta \in \left(0,1\right)$ , $A,B$ are constants; a further generalized fractionalization of (96) is given by

${\int }_{0}^{1}\text{ }\text{ }h\left(\alpha \right)\frac{{\text{d}}^{\alpha }x\left(t\right)}{\text{d}{t}^{\alpha }}\text{d}\alpha =-ax\left(t\right)+bf\left(x\left(t\right)\right)+p$ (98)

where $h:\left[0,1\right]\to \left[0,\infty \right)$ is a continuous function. General results on the existence and stability of solutions of equations of the form (97) and (98) would be useful and interesting additions to the existing literature on fractional order dynamic systems.

Acknowledgements

The project was supported by Small Research Grant, 2015-2016, Hong Kong Institute of Education.

Conflicts of Interest

The authors declare no conflicts of interest.