
A Comparison Study of ADI and LOD Methods on Option Pricing Models


1. Introduction

Imagine that an option contract on two underlying assets, with present prices x and y and strike prices ${X}_{1}$ and ${X}_{2}$ at the maturity time T, has been signed. If the volatilities of the first and second assets are denoted by
${\sigma}_{1},{\sigma}_{2}$, $\rho $ is the correlation coefficient between the prices of the two underlying assets, and r is the interest rate, then the price
$u\left(x,y,t\right)$ of this contract satisfies the following equation, which is called the two-dimensional Black-Scholes equation.

$\begin{array}{l}\frac{\partial u}{\partial t}=\frac{1}{2}{\sigma}_{1}^{2}{x}^{2}\frac{{\partial}^{2}u}{\partial {x}^{2}}+\frac{1}{2}{\sigma}_{2}^{2}{y}^{2}\frac{{\partial}^{2}u}{\partial {y}^{2}}+\rho {\sigma}_{1}{\sigma}_{2}xy\frac{{\partial}^{2}u}{\partial x\partial y}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+rx\frac{\partial u}{\partial x}+ry\frac{\partial u}{\partial y}-ru.\end{array}$ (1)

where $\left(x,y,t\right)\in \Omega \times \left(\mathrm{0,}T\right]$ and $u=u\left(x,y,t\right)$ is a function of three variables: the time t and the two spatial variables x and y. By the two-dimensional Black-Scholes equation we mean that there are two spatial variables in this equation. Although option pricing problems are defined on an unbounded set such as

$\Omega \times \left(0,T\right]=\left\{\left(x,y,t\right)|\left(x,y\right)\in \Omega ,t\in \left(0,T\right]\right\}$

To use numerical methods, a bounded domain is considered instead: Equation (1) is posed on $\Omega =\left[0,L\right]\times \left[0,M\right]$, where L and M are chosen so large that the error in the price u becomes negligible [1] . Boundary conditions are necessary to solve the above equation, and various forms of boundary conditions can be artificially added to the problem. These conditions are:

(1) linear boundary conditions

$\frac{{\partial}^{2}u}{\partial {x}^{2}}\left(0,y,t\right)=0,\text{\hspace{1em}}0\le y\le M,\text{\hspace{1em}}t\ge 0,$

$\frac{{\partial}^{2}u}{\partial {x}^{2}}\left(L,y,t\right)=0,\text{\hspace{1em}}0\le y\le M,\text{\hspace{1em}}t\ge 0,$

$\frac{{\partial}^{2}u}{\partial {y}^{2}}\left(x,0,t\right)=0,\text{\hspace{1em}}0\le x\le L,\text{\hspace{1em}}t\ge 0,$

$\frac{{\partial}^{2}u}{\partial {y}^{2}}\left(x,M,t\right)=0,\text{\hspace{1em}}0\le x\le L,\text{\hspace{1em}}t\ge 0.$

(2) Dirichlet boundary conditions

$u\left(x,0,t\right)=\Lambda \left(x,0\right),\text{\hspace{1em}}0\le x\le L,\text{\hspace{1em}}t\ge 0,$

$u\left(x,M,t\right)=\Lambda \left(x,M\right),\text{\hspace{1em}}0\le x\le L,\text{\hspace{1em}}t\ge 0,$

$u\left(0,y,t\right)=\Lambda \left(0,y\right),\text{\hspace{1em}}0\le y\le M,\text{\hspace{1em}}t\ge 0,$

$u\left(L,y,t\right)=\Lambda \left(L,y\right),\text{\hspace{1em}}0\le y\le M,\text{\hspace{1em}}t\ge 0.$

The third type is the Neumann boundary condition, which is not used in the present paper. A combination of the first and second conditions might also be used [2] .

2. Weighted Methods for the Two-Dimensional Black-Scholes Equation

The two-dimensional Black-Scholes equation can be discretized in the same way as the one-dimensional equation. To simplify the computations, the discrete operator L is defined as follows [3] .

$\begin{array}{c}L{u}_{ij}^{n}=\frac{{\sigma}_{1}^{2}}{2}{\left(i\Delta x\right)}^{2}\frac{{u}_{i+1,j}^{n}-2{u}_{i,j}^{n}+{u}_{i-1,j}^{n}}{{\left(\Delta x\right)}^{2}}+\frac{{\sigma}_{2}^{2}}{2}{\left(j\Delta y\right)}^{2}\frac{{u}_{i,j+1}^{n}-2{u}_{i,j}^{n}+{u}_{i,j-1}^{n}}{{\left(\Delta y\right)}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\rho {\sigma}_{1}{\sigma}_{2}\left(i\Delta x\right)\left(j\Delta y\right)\frac{{u}_{i+1,j+1}^{n}-{u}_{i,j+1}^{n}-{u}_{i+1,j}^{n}+{u}_{i,j}^{n}}{\Delta x\Delta y}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+r\left(i\Delta x\right)\frac{{u}_{i+1,j}^{n}-{u}_{i-1,j}^{n}}{2\Delta x}+r\left(j\Delta y\right)\frac{{u}_{i,j+1}^{n}-{u}_{i,j-1}^{n}}{2\Delta y}-r{u}_{i,j}^{n}.\end{array}$ (2)

The explicit method is as follows

$\frac{{u}_{ij}^{n+1}-{u}_{ij}^{n}}{\Delta t}=L{u}_{ij}^{n},$ (3)

and the implicit method is as follows

$\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n}}{\Delta t}=L{u}_{i,j}^{n+1}.$

In its general form, the Crank-Nicolson (weighted) method is as follows:

$\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n}}{\Delta t}=\left(1-\theta \right)L{u}_{i,j}^{n}+\theta L{u}_{i,j}^{n+1}.$

In the explicit method no linear system needs to be solved, since

${u}_{ij}^{n+1}={u}_{ij}^{n}+\Delta tL{u}_{ij}^{n}.$

However, the explicit method is only conditionally stable, and its stability domain is much smaller than in the one-dimensional case; in fact, as the number of dimensions increases, the stability domain of the explicit method shrinks. The implicit method is unconditionally stable but requires solving the linear system

$\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n}}{\Delta t}=L{u}_{i,j}^{n+1}.$

The dimension of this system equals the number of grid points. In the two-dimensional case the number of unknowns is the product of the numbers of points in each direction, so we have a large system of dimension ${N}_{x}{N}_{y}$, which is too expensive to solve. The same holds for the Crank-Nicolson method for $0<\theta \le 1$. Therefore these methods are not recommended. The ADI and LOD methods will be proposed in the next sections; both are unconditionally stable and have low computational cost. In these methods, by splitting the operator L into an x-operator and a y-operator at each step, a number of tridiagonal systems of dimension ${N}_{x}$ or ${N}_{y}$ are solved in order to advance to the next step [4] .
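Before turning to those methods, the discrete operator (2) and the explicit update (3) can be sketched in Python (a minimal illustration; the nested-list grid storage and all function names are assumptions made here, not part of the original text):

```python
def L_op(u, i, j, dx, dy, sigma1, sigma2, rho, r):
    """Discrete Black-Scholes operator (2) at an interior node (i, j)."""
    x, y = i * dx, j * dy
    uxx = (u[i+1][j] - 2*u[i][j] + u[i-1][j]) / dx**2
    uyy = (u[i][j+1] - 2*u[i][j] + u[i][j-1]) / dy**2
    # one-sided cross difference for the mixed derivative
    uxy = (u[i+1][j+1] - u[i][j+1] - u[i+1][j] + u[i][j]) / (dx * dy)
    ux = (u[i+1][j] - u[i-1][j]) / (2*dx)   # centered first derivatives
    uy = (u[i][j+1] - u[i][j-1]) / (2*dy)
    return (0.5 * sigma1**2 * x**2 * uxx + 0.5 * sigma2**2 * y**2 * uyy
            + rho * sigma1 * sigma2 * x * y * uxy
            + r * x * ux + r * y * uy - r * u[i][j])

def explicit_step(u, dt, dx, dy, sigma1, sigma2, rho, r):
    """One explicit time step (3); boundary values are left unchanged."""
    new = [row[:] for row in u]
    for i in range(1, len(u) - 1):
        for j in range(1, len(u[0]) - 1):
            new[i][j] = u[i][j] + dt * L_op(u, i, j, dx, dy,
                                            sigma1, sigma2, rho, r)
    return new
```

For the test function u = xy the stencils are exact, so L u reduces to ρσ₁σ₂xy + rxy at every interior node, which gives a convenient correctness check.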

3. ADI Method for the Two-Dimensional Black-Scholes Model

To obtain the Alternating Direction Implicit (ADI) method, the time derivative is approximated as

$\frac{\partial u}{\partial t}\cong \frac{{{\displaystyle u}}_{i,j}^{n+1}-{{\displaystyle u}}_{i,j}^{n}}{\Delta t}=\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n+\frac{1}{2}}+{u}_{i,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n}}{\Delta t}=\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n+\frac{1}{2}}}{\Delta t}+\frac{{u}_{i,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n}}{\Delta t}.$ (4)

In fact, we added and subtracted the value at an intermediate step ${t}_{n+\frac{1}{2}}$ and split the

time derivative into two parts. Now, using the splitting of L into ${L}_{x}$ and ${L}_{y}$, we arrive at the following equation.

$\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n+\frac{1}{2}}}{\Delta t}+\frac{{u}_{i,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n}}{\Delta t}={L}_{x}{u}_{i,j}^{n+\frac{1}{2}}+{L}_{y}{u}_{i,j}^{n+1}.$ (5)

We will explain below how this equation is obtained. To advance from step n to step $n+1$ in the ADI method, we first solve

$\frac{{u}_{i,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n}}{\Delta t}={L}_{x}{u}_{i,j}^{n+\frac{1}{2}}$ (6)

for $j=1,\cdots ,{N}_{y}$; each of these is a tridiagonal system of dimension ${N}_{x}$. This yields the values of u at stage $n+\frac{1}{2}$. Then, to obtain u at stage $n+1$, we solve

$\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n+\frac{1}{2}}}{\Delta t}={L}_{y}{u}_{i,j}^{n+1}$ (7)

for $i=1,\cdots ,{N}_{x}$; each of these is a system of dimension ${N}_{y}$. Since solving a tridiagonal system has $O\left(N\right)$ cost, the cost of advancing from step n to step $n+1$ is of the order of the number of grid points:

${N}_{y}O\left({N}_{x}\right)+{N}_{x}O\left({N}_{y}\right)=O\left({N}_{x}{N}_{y}\right)$

This is the least possible computational cost. Now we explain how ${L}_{x}$ and ${L}_{y}$ and the corresponding systems are obtained. First, the time step $\left[{t}_{n},{t}_{n+1}\right]$ is divided into the

two half-steps $\left[{t}_{n},{t}_{n+\frac{1}{2}}\right]$ and $\left[{t}_{n+\frac{1}{2}},{t}_{n+1}\right]$. In the first half-step, the derivatives

with respect to x are treated implicitly and the derivatives with respect to y explicitly. In other words, with $\Delta x=\Delta y=h$, we write

$\begin{array}{c}\frac{{u}_{i,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n}}{\frac{1}{2}\Delta t}=\frac{{\sigma}_{1}^{2}}{2}{x}_{i}^{2}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-2{u}_{i,j}^{n+\frac{1}{2}}+{u}_{i-1,j}^{n+\frac{1}{2}}}{{h}^{2}}+\frac{{\sigma}_{2}^{2}}{2}{y}_{j}^{2}\frac{{u}_{i,j+1}^{n}-2{u}_{i,j}^{n}+{u}_{i,j-1}^{n}}{{h}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}\rho {\sigma}_{1}{\sigma}_{2}{x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n}+{u}_{i-1,j-1}^{n}-{u}_{i+1,j-1}^{n}-{u}_{i-1,j+1}^{n}}{4{h}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+r{x}_{i}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n+\frac{1}{2}}}{h}+r{y}_{j}\frac{{u}_{i,j+1}^{n}-{u}_{i,j}^{n}}{h}-r{u}_{i,j}^{n+\frac{1}{2}}.\end{array}$ (8)

In the second half-step, i.e. in $\left[{t}_{n+\frac{1}{2}}\mathrm{,}{t}_{n+1}\right]$, we act in reverse: the derivatives with respect to y are treated implicitly and those with respect to x explicitly. Therefore, we have

$\begin{array}{c}\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n+\frac{1}{2}}}{\frac{1}{2}\Delta t}=\frac{{\sigma}_{1}^{2}}{2}{x}_{i}^{2}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-2{u}_{i,j}^{n+\frac{1}{2}}+{u}_{i-1,j}^{n+\frac{1}{2}}}{{h}^{2}}+\frac{{\sigma}_{2}^{2}}{2}{y}_{j}^{2}\frac{{u}_{i,j+1}^{n+1}-2{u}_{i,j}^{n+1}+{u}_{i,j-1}^{n+1}}{{h}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}\rho {\sigma}_{1}{\sigma}_{2}{x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n+\frac{1}{2}}+{u}_{i-1,j-1}^{n+\frac{1}{2}}-{u}_{i+1,j-1}^{n+\frac{1}{2}}-{u}_{i-1,j+1}^{n+\frac{1}{2}}}{4{h}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+r{x}_{i}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n+\frac{1}{2}}}{h}+r{y}_{j}\frac{{u}_{i,j+1}^{n+1}-{u}_{i,j}^{n+1}}{h}-r{u}_{i,j}^{n+1}.\end{array}$ (9)

In the above equations, the mixed derivative $\frac{{\partial}^{2}u}{\partial x\partial y}$ is treated explicitly, while the y-derivatives in the last terms of Equation (9) are treated implicitly. Now, if we multiply both sides of Equation (8) by $\frac{1}{2}$ and define the resulting right-hand side as ${L}_{x}{u}_{i,j}^{n+\frac{1}{2}}$, and likewise multiply both sides of Equation (9) by $\frac{1}{2}$ and define the resulting right-hand side as ${L}_{y}{u}_{i,j}^{n+1}$, we recover Equations (6) and (7). Now, if j is held fixed in Equation (8) and the values

at step $n+\frac{1}{2}$ are moved to the left-hand side, the unknown values are

${u}_{i,j}^{n+\frac{1}{2}}\mathrm{,}{u}_{i-1,j}^{n+\frac{1}{2}}\mathrm{,}{u}_{i+1,j}^{n+\frac{1}{2}}$

In these equations, the coefficient of ${u}_{i-1,j}^{n+\frac{1}{2}}$ is

${\alpha}_{i}=-\left(\frac{{\sigma}_{1}^{2}}{4}\frac{\Delta t}{{h}^{2}}{x}_{i}^{2}\right)$

and the coefficient of ${u}_{i,j}^{n+\frac{1}{2}}$ is

${\beta}_{i}=\left(1+\frac{{\sigma}_{1}^{2}}{2}\frac{\Delta t}{{h}^{2}}{x}_{i}^{2}+\frac{1}{2}\frac{\Delta t}{h}r{x}_{i}+\frac{1}{2}r\Delta t\right)$

and the coefficient of ${u}_{i+1,j}^{n+\frac{1}{2}}$ is

${\gamma}_{i}=-\left(\frac{{\sigma}_{1}^{2}}{4}\frac{\Delta t}{{h}^{2}}{x}_{i}^{2}+\frac{r}{2}\frac{\Delta t}{h}{x}_{i}\right)$

If we move the values at step n to the right-hand side and define

$\begin{array}{c}{f}_{ij}:={u}_{i,j}^{n}+\frac{\Delta t}{4}{\left({\sigma}_{2}{y}_{j}\right)}^{2}\frac{{u}_{i,j+1}^{n}-2{u}_{i,j}^{n}+{u}_{i,j-1}^{n}}{{h}^{2}}+\frac{\Delta t}{2}r{y}_{j}\frac{{u}_{i,j+1}^{n}-{u}_{i,j}^{n}}{h}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{\Delta t}{2}\rho {\sigma}_{1}{\sigma}_{2}{x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n}+{u}_{i-1,j-1}^{n}-{u}_{i-1,j+1}^{n}-{u}_{i+1,j-1}^{n}}{4{h}^{2}}.\end{array}$

then we arrive at the equation

${\alpha}_{i}{u}_{i-1,j}^{n+\frac{1}{2}}+{\beta}_{i}{u}_{i,j}^{n+\frac{1}{2}}+{\gamma}_{i}{u}_{i+1,j}^{n+\frac{1}{2}}={f}_{i,j}.$

For fixed j and $i=1,2,\cdots ,{N}_{x}$, the above equation defines a tridiagonal system with the matrix

${A}_{x}=\left[\begin{array}{cccccc}{\beta}_{1}& {\gamma}_{1}& 0& \cdots & 0& 0\\ {\alpha}_{2}& {\beta}_{2}& {\gamma}_{2}& \cdots & 0& 0\\ 0& \ddots & \ddots & \ddots & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0& 0& 0& {\alpha}_{{N}_{x}-1}& {\beta}_{{N}_{x}-1}& {\gamma}_{{N}_{x}-1}\\ 0& 0& 0& \cdots & {\alpha}_{{N}_{x}}& {\beta}_{{N}_{x}}\end{array}\right]$
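As a quick check of the structure of ${A}_{x}$ (a sketch assuming the parameter values ${\sigma}_{1}=0.3$ and $r=0.03$ used in Section 6, with illustrative values for h, $\Delta t$, and ${N}_{x}$): with the implicit terms moved to the left-hand side, ${\beta}_{i}-\left(|{\alpha}_{i}|+|{\gamma}_{i}|\right)=1+r\Delta t/2>0$, so the matrix is strictly diagonally dominant and the Thomas algorithm needs no pivoting.

```python
# Coefficients of the first ADI half-step system A_x, with the implicit
# terms moved to the left-hand side.  sigma1 = 0.3 and r = 0.03 are the
# parameter values of Section 6; h, dt and Nx are illustrative assumptions.
sigma1, r = 0.3, 0.03
h, dt, Nx = 5.0, 0.5, 60

def adi_x_coeffs(i):
    """Return (alpha_i, beta_i, gamma_i) for row i of A_x."""
    x = i * h
    alpha = -(sigma1**2 / 4.0) * (dt / h**2) * x**2
    beta = (1.0 + (sigma1**2 / 2.0) * (dt / h**2) * x**2
            + 0.5 * (dt / h) * r * x + 0.5 * r * dt)
    gamma = -((sigma1**2 / 4.0) * (dt / h**2) * x**2
              + (r / 2.0) * (dt / h) * x)
    return alpha, beta, gamma

# Strict diagonal dominance margin: beta - (|alpha| + |gamma|) = 1 + r*dt/2.
margins = [adi_x_coeffs(i)[1] - abs(adi_x_coeffs(i)[0]) - abs(adi_x_coeffs(i)[2])
           for i in range(1, Nx + 1)]
```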

The term ${\alpha}_{1}{u}_{\mathrm{0,}j}^{n+\frac{1}{2}}$ in the first row and the term ${\gamma}_{{N}_{x}}{u}_{{N}_{x}+1,j}^{n+\frac{1}{2}}$ in the last

row are moved to the right-hand side and subtracted from ${f}_{\mathrm{1,}j}$ and ${f}_{{N}_{x}\mathrm{,}j}$, respectively. Their values are supplied by the boundary conditions on the left and right sides of the rectangular domain $\Omega =\left[0,L\right]\times \left[0,M\right]$. The new right-hand sides are

denoted by ${\stackrel{^}{f}}_{i\mathrm{,}j}$, and solving the systems yields the values of u at step $n+\frac{1}{2}$ [1] . The algorithm of

this half-step is

for $j=1:{N}_{y}$

for $i=1:{N}_{x}$

set ${\alpha}_{i},{\beta}_{i},{\gamma}_{i}$ and ${f}_{ij}$ .

end

solve ${A}_{x}{u}_{1:{N}_{x},j}^{n+\frac{1}{2}}={\stackrel{^}{f}}_{1:{N}_{x},j}$ by using Thomas algorithm.

end
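The Thomas algorithm called above is the standard O(N) tridiagonal solver; a minimal Python sketch (the function name and argument layout are assumptions for illustration):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back
    substitution in O(N).  a is the sub-diagonal (a[0] unused), b the
    main diagonal, c the super-diagonal (c[-1] unused), d the right-hand
    side.  No pivoting is performed, which is safe when the matrix is
    diagonally dominant, as A_x and A_y are here."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for k in range(1, n):
        m = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / m if k < n - 1 else 0.0
        dp[k] = (d[k] - a[k] * dp[k - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x
```

In the half-step above, a, b, c would hold the ${\alpha}_{i},{\beta}_{i},{\gamma}_{i}$ and d would hold ${\stackrel{^}{f}}_{1:{N}_{x},j}$.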

Considering Equation (9), we follow the same procedure for the second half-step. First, we define

$\text{\hspace{0.05em}}{\alpha}_{j}=-\left(\frac{{\sigma}_{2}^{2}}{4}\frac{\Delta t}{{h}^{2}}{y}_{j}^{2}\right)$

and

${\beta}_{j}=\left(1+\frac{{\sigma}_{2}^{2}}{2}\frac{\Delta t}{{h}^{2}}{y}_{j}^{2}+\frac{1}{2}\frac{\Delta t}{h}r{y}_{j}+\frac{1}{2}r\Delta t\right)$

and

${\gamma}_{j}=-\left(\frac{{\sigma}_{2}^{2}}{4}\frac{\Delta t}{{h}^{2}}{y}_{j}^{2}+\frac{r}{2}\frac{\Delta t}{h}{y}_{j}\right)$

${A}_{y}=\left[\begin{array}{cccccc}{\beta}_{1}& {\gamma}_{1}& 0& \cdots & 0& 0\\ {\alpha}_{2}& {\beta}_{2}& {\gamma}_{2}& \cdots & 0& 0\\ 0& \ddots & \ddots & \ddots & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0& 0& 0& {\alpha}_{{N}_{y}-1}& {\beta}_{{N}_{y}-1}& {\gamma}_{{N}_{y}-1}\\ 0& 0& 0& \cdots & {\alpha}_{{N}_{y}}& {\beta}_{{N}_{y}}\end{array}\right]$

which is a tridiagonal matrix. The term ${\alpha}_{1}{u}_{i,0}^{n+1}$ in the first row and the term ${\gamma}_{{N}_{y}}{u}_{i,{N}_{y}+1}^{n+1}$ in the last row are moved to the right-hand side, and their values are supplied by the boundary conditions on the bottom and top sides of the rectangular domain $\left[0,L\right]\times \left[0,M\right]$. The new right-hand sides are denoted by ${\stackrel{^}{g}}_{i\mathrm{,}j}$. The systems of dimension ${N}_{y}$ are solved for $i=1,2,\cdots ,{N}_{x}$, and the values of u at step $n+1$ are obtained, where

$\begin{array}{c}{g}_{ij}:={u}_{i,j}^{n+\frac{1}{2}}+\frac{\Delta t}{4}{\left({\sigma}_{1}{x}_{i}\right)}^{2}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-2{u}_{i,j}^{n+\frac{1}{2}}+{u}_{i-1,j}^{n+\frac{1}{2}}}{{h}^{2}}+\frac{\Delta t}{2}r{x}_{i}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n+\frac{1}{2}}}{h}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{\Delta t}{2}\rho {\sigma}_{1}{\sigma}_{2}{x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n+\frac{1}{2}}+{u}_{i-1,j-1}^{n+\frac{1}{2}}-{u}_{i-1,j+1}^{n+\frac{1}{2}}-{u}_{i+1,j-1}^{n+\frac{1}{2}}}{4{h}^{2}}.\end{array}$

Therefore, the following equation is obtained.

${\alpha}_{j}{u}_{i,j-1}^{n+1}+{\beta}_{j}{u}_{i,j}^{n+1}+{\gamma}_{j}{u}_{i,j+1}^{n+1}={g}_{i,j}.$ (10)

The second phase algorithm is as follows.

for $i=1:{N}_{x}$

for $j=1:{N}_{y}$

set ${\alpha}_{j},{\beta}_{j},{\gamma}_{j}$ and ${g}_{ij}$ .

end

solve ${A}_{y}{\left({u}_{i,1:{N}_{y}}^{n+1}\right)}^{T}={\left({g}_{i,1:{N}_{y}}\right)}^{T}$ by using Thomas algorithm.

end

As can be seen, in the first half-step ${N}_{y}$ tridiagonal systems of dimension ${N}_{x}$ are solved, and in the second half-step ${N}_{x}$ tridiagonal systems of dimension ${N}_{y}$ are solved. Therefore, the computational cost of advancing from step n to step $n+1$ is of order ${N}_{x}{N}_{y}$, that is, of the order of the number of grid points.

4. LOD Method

The LOD method, like the ADI method, divides each time step into two half-steps. The first half-step involves only derivatives with respect to x, and the second only derivatives with respect to y. The Black-Scholes model is rewritten as [5] ,

$\begin{array}{c}\frac{1}{2}\frac{\partial u}{\partial t}+\frac{1}{2}\frac{\partial u}{\partial t}=\left\{\frac{{\left({\sigma}_{1}x\right)}^{2}}{2}\frac{{\partial}^{2}u}{\partial {x}^{2}}+rx\frac{\partial u}{\partial x}+\frac{1}{2}{\sigma}_{1}{\sigma}_{2}\rho xy\frac{{\partial}^{2}u}{\partial x\partial y}-\frac{r}{2}u\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left\{\frac{{\left({\sigma}_{2}y\right)}^{2}}{2}\frac{{\partial}^{2}u}{\partial {y}^{2}}+ry\frac{\partial u}{\partial y}+\frac{1}{2}{\sigma}_{1}{\sigma}_{2}\rho xy\frac{{\partial}^{2}u}{\partial x\partial y}-\frac{r}{2}u\right\}.\end{array}$ (11)

in which the derivatives with respect to x and y are written separately. The mixed derivative

$\frac{{\partial}^{2}u}{\partial x\partial y}$ and the term ru are split evenly between the two groups. Now, the time step $\left[{t}_{n}\mathrm{,}{t}_{n+1}\right]$ is divided into the two half-steps $\left[{t}_{n}\mathrm{,}{t}_{n+\frac{1}{2}}\right]$ and $\left[{t}_{n+\frac{1}{2}}\mathrm{,}{t}_{n+1}\right]$. In the first half-step, $\left[{t}_{n}\mathrm{,}{t}_{n+\frac{1}{2}}\right]$, we solve the equation

$\frac{1}{2}\frac{\partial u}{\partial t}=\left\{\frac{{\left({\sigma}_{1}x\right)}^{2}}{2}\frac{{\partial}^{2}u}{\partial {x}^{2}}+rx\frac{\partial u}{\partial x}+\frac{1}{2}{\sigma}_{1}{\sigma}_{2}\rho xy\frac{{\partial}^{2}u}{\partial x\partial y}-\frac{r}{2}u\right\}.$ (12)

In the second half-step, $\left[{t}_{n+\frac{1}{2}}\mathrm{,}{t}_{n+1}\right]$, we solve the equation

$\frac{1}{2}\frac{\partial u}{\partial t}=\left\{\frac{{\left({\sigma}_{2}y\right)}^{2}}{2}\frac{{\partial}^{2}u}{\partial {y}^{2}}+ry\frac{\partial u}{\partial y}+\frac{1}{2}{\sigma}_{1}{\sigma}_{2}\rho xy\frac{{\partial}^{2}u}{\partial x\partial y}-\frac{r}{2}u\right\}.$ (13)

Both equations are discretized implicitly. To this end, the operators ${L}_{x}^{o}\mathrm{,}{L}_{y}^{o}$ are defined as follows.

$\begin{array}{c}{L}_{x}^{o}{u}_{ij}^{n+\frac{1}{2}}:=\frac{{\left({\sigma}_{1}{x}_{i}\right)}^{2}}{2}\frac{{u}_{i-1,j}^{n+\frac{1}{2}}-2{u}_{i,j}^{n+\frac{1}{2}}+{u}_{i+1,j}^{n+\frac{1}{2}}}{{h}^{2}}+r{x}_{i}\frac{{u}_{i+1,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n+\frac{1}{2}}}{h}-\frac{r}{2}{u}_{i,j}^{n+\frac{1}{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\sigma}_{1}{\sigma}_{2}\rho {x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n}+{u}_{i-1,j-1}^{n}-{u}_{i-1,j+1}^{n}-{u}_{i+1,j-1}^{n}}{4{h}^{2}}\end{array}$ (14)

and

$\begin{array}{c}{L}_{y}^{o}{u}_{ij}^{n+1}=\frac{{\left({\sigma}_{2}{y}_{j}\right)}^{2}}{2}\frac{{u}_{i,j-1}^{n+1}-2{u}_{i,j}^{n+1}+{u}_{i,j+1}^{n+1}}{{h}^{2}}+r{y}_{j}\frac{{u}_{i,j+1}^{n+1}-{u}_{i,j}^{n+1}}{h}-\frac{r}{2}{u}_{i,j}^{n+1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\sigma}_{1}{\sigma}_{2}\rho {x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n+\frac{1}{2}}+{u}_{i-1,j-1}^{n+\frac{1}{2}}-{u}_{i-1,j+1}^{n+\frac{1}{2}}-{u}_{i+1,j-1}^{n+\frac{1}{2}}}{4{h}^{2}}.\end{array}$ (15)

Now the two following systems are formed:

$\frac{{u}_{i,j}^{n+\frac{1}{2}}-{u}_{i,j}^{n}}{\Delta t}={L}_{x}^{o}{u}_{i,j}^{n+\frac{1}{2}},$ (16)

and

$\frac{{u}_{i,j}^{n+1}-{u}_{i,j}^{n+\frac{1}{2}}}{\Delta t}={L}_{y}^{o}{u}_{i,j}^{n+1},$ (17)

which are called the first-phase and second-phase systems, respectively. Therefore, the algorithm of the LOD method has two phases. In the first phase, Equation (12) is discretized and rearranged as

${\alpha}_{i}{u}_{i-1,j}^{n+\frac{1}{2}}+{\beta}_{i}{u}_{i,j}^{n+\frac{1}{2}}+{\gamma}_{i}{u}_{i+1,j}^{n+\frac{1}{2}}={f}_{i,j}$

The coefficients of ${\alpha}_{i}$ , ${\beta}_{i}$ , and ${\gamma}_{i}$ are defined as

${\alpha}_{i}=-\frac{{\sigma}_{1}^{2}{x}_{i}^{2}\Delta t}{2{h}^{2}},\text{\hspace{1em}}{\beta}_{i}=1+\frac{{\sigma}_{1}^{2}{x}_{i}^{2}\Delta t}{{h}^{2}}+r{x}_{i}\frac{\Delta t}{h}+\frac{r}{2}\Delta t,\text{\hspace{1em}}{\gamma}_{i}=-\left(\frac{{\sigma}_{1}^{2}{x}_{i}^{2}\Delta t}{2{h}^{2}}+r{x}_{i}\frac{\Delta t}{h}\right).$

Also, the right-hand side ${f}_{i\mathrm{,}j}$ is

${f}_{ij}={u}_{i,j}^{n}+\frac{\Delta t}{2}\rho {\sigma}_{1}{\sigma}_{2}{x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n}+{u}_{i-1,j-1}^{n}-{u}_{i-1,j+1}^{n}-{u}_{i+1,j-1}^{n}}{4{h}^{2}}.$

The only difference between ADI and LOD is in the right-hand side

${f}_{i\mathrm{,}j}$: in the LOD method, the discrete values of $\frac{{\partial}^{2}u}{\partial {y}^{2}}$ and $\frac{\partial u}{\partial y}$ do not appear. Therefore,

the first-phase algorithm of the LOD method is the same as that of the ADI method except for the right-hand side. In the second phase, discretizing Equation (13) gives

${\alpha}_{j}{u}_{i,j-1}^{n+1}+{\beta}_{j}{u}_{i,j}^{n+1}+{\gamma}_{j}{u}_{i,j+1}^{n+1}={g}_{i,j}$

The coefficients of ${\alpha}_{j}$ , ${\beta}_{j}$ , and ${\gamma}_{j}$ are defined as

$\begin{array}{l}{\alpha}_{j}=-\frac{{\sigma}_{2}^{2}{y}_{j}^{2}\Delta t}{2{h}^{2}},\text{\hspace{1em}}\\ {\beta}_{j}=1+\frac{{\sigma}_{2}^{2}{y}_{j}^{2}\Delta t}{{h}^{2}}+r{y}_{j}\frac{\Delta t}{h}+\frac{r}{2}\Delta t,\text{\hspace{1em}}\\ {\gamma}_{j}=-\left(\frac{{\sigma}_{2}^{2}{y}_{j}^{2}\Delta t}{2{h}^{2}}+r{y}_{j}\frac{\Delta t}{h}\right).\end{array}$

and the right-hand side ${g}_{i\mathrm{,}j}$ is

${g}_{ij}={u}_{i,j}^{n+\frac{1}{2}}+\frac{\Delta t}{2}\rho {\sigma}_{1}{\sigma}_{2}{x}_{i}{y}_{j}\frac{{u}_{i+1,j+1}^{n+\frac{1}{2}}+{u}_{i-1,j-1}^{n+\frac{1}{2}}-{u}_{i-1,j+1}^{n+\frac{1}{2}}-{u}_{i+1,j-1}^{n+\frac{1}{2}}}{4{h}^{2}}.$

This phase is similar to the second phase of the ADI method. The only difference is that the right-hand side does not include the discrete values of $\frac{{\partial}^{2}u}{\partial {x}^{2}}$ and $\frac{\partial u}{\partial x}$.
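Both LOD right-hand sides use the same four-point cross stencil for the mixed derivative. As a quick sanity check (a sketch, with u = xy chosen as an assumed test function, for which the stencil is exact and returns 1 for any h):

```python
def cross_diff(u, i, j, h):
    """Centered four-point approximation of d2u/dxdy on a uniform grid
    of spacing h, as used in the source terms f and g above."""
    return (u[i+1][j+1] + u[i-1][j-1]
            - u[i-1][j+1] - u[i+1][j-1]) / (4.0 * h * h)
```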

5. Measuring the Numerical Method Error

If x is the exact value of a numerical quantity and $\stackrel{^}{x}$ is an approximation of it, the absolute error is defined as

$\Delta x=\left|x-\stackrel{^}{x}\right|$ (18)

But if the purpose is approximating a function such as $u\left(x\mathrm{,}y\mathrm{,}T\right)$, then instead of a single number, the function values at the points $\left({x}_{i}\mathrm{,}{y}_{j}\right)$ with $1\le i\le {N}_{x},$ $1\le j\le {N}_{y}$ must be examined, and the definition (18) must be adapted accordingly. Denote the exact solution at $t=T$ by

${u}^{e}\left(x\mathrm{,}y\mathrm{,}T\right)\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\in \left[\mathrm{0,}L\right]\mathrm{,}\text{\hspace{0.17em}}y\in \left[\mathrm{0,}M\right]$

and the approximate values at the points $\left({x}_{i}\mathrm{,}{y}_{j}\right)$ by $\stackrel{^}{u}\left({x}_{i}\mathrm{,}{y}_{j}\mathrm{,}T\right)$. Then the error

${\Vert e\Vert}_{\infty}=\underset{1\le i\le {N}_{x}}{\mathrm{max}}\underset{1\le j\le {N}_{y}}{\mathrm{max}}\left|{u}_{ij}^{e}-{\stackrel{^}{u}}_{ij}\right|$

is called the “maximum-norm error” or “maximum error”. The “2-norm error” or “root mean square error” is defined as

${\Vert e\Vert}_{2}=\sqrt{\frac{1}{{N}_{x}{N}_{y}}{\displaystyle \underset{i=1}{\overset{{N}_{x}}{\sum}}}{\displaystyle \underset{j=1}{\overset{{N}_{y}}{\sum}}}{\left({u}^{e}\left({x}_{i},{y}_{j},T\right)-{\stackrel{^}{u}}_{ij}\right)}^{2}.}$ (19)
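These two error measures can be sketched as follows, assuming the exact and approximate solutions are stored as nested lists over the grid (the function names are illustrative assumptions):

```python
import math

def max_norm_error(u_exact, u_hat):
    """Maximum-norm error: the largest pointwise deviation on the grid."""
    return max(abs(e - a) for row_e, row_a in zip(u_exact, u_hat)
               for e, a in zip(row_e, row_a))

def rms_error(u_exact, u_hat):
    """Root mean square error (19) over all Nx*Ny grid points."""
    nx, ny = len(u_exact), len(u_exact[0])
    s = sum((e - a) ** 2 for row_e, row_a in zip(u_exact, u_hat)
            for e, a in zip(row_e, row_a))
    return math.sqrt(s / (nx * ny))
```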

When applying the numerical results in a real market, not all values ${u}_{i\mathrm{,}j}$ are used, since asset prices remain near their present values. If the present prices of the two underlying assets are ${X}_{1}$ and ${X}_{2}$, the error should be examined in a limited region around these prices. This region is called G [1] :

$G=\left\{\left(x,y\right)|0.9{X}_{1}\le x\le 1.1{X}_{1},0.9{X}_{2}\le y\le 1.1{X}_{2}\right\}$

This neighborhood is shown in Figure 1. Every grid point $\left({x}_{i}\mathrm{,}{y}_{j}\right)$ in G satisfies

$0.9{X}_{1}\le {x}_{i}\le 1.1{X}_{1}\Rightarrow \left[0.9\frac{{X}_{1}}{\Delta x}\right]\le i\le -\left[-1.1\frac{{X}_{1}}{\Delta x}\right],$

Similarly, we have

$\left[0.9\frac{{X}_{2}}{\Delta y}\right]\le j\le -\left[-1.1\frac{{X}_{2}}{\Delta y}\right].$

The number of pairs $\left(i\mathrm{,}j\right)$ in this neighborhood is

$N=\left(-\left[-1.1\frac{{X}_{1}}{\Delta x}\right]-\left[0.9\frac{{X}_{1}}{\Delta x}\right]+1\right)\left(-\left[-1.1\frac{{X}_{2}}{\Delta y}\right]-\left[0.9\frac{{X}_{2}}{\Delta y}\right]+1\right).$ (20)
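The brackets in (20) denote the floor function, so $-\left[-z\right]$ is the ceiling of z. A sketch of the count (the function name is a hypothetical helper); exact rational arithmetic is used because, e.g., `1.1*100/5` is slightly above 22 in binary floating point, which would shift the ceiling:

```python
import math
from fractions import Fraction

def n_points(X1, X2, dx, dy):
    """Number of grid index pairs (i, j) falling in the region G, Eq. (20)."""
    i_lo = math.floor(Fraction(9, 10) * X1 / Fraction(dx))
    i_hi = -math.floor(-Fraction(11, 10) * X1 / Fraction(dx))   # ceiling
    j_lo = math.floor(Fraction(9, 10) * X2 / Fraction(dy))
    j_hi = -math.floor(-Fraction(11, 10) * X2 / Fraction(dy))
    return (i_hi - i_lo + 1) * (j_hi - j_lo + 1)
```

For example, with ${X}_{1}={X}_{2}=100$ and $\Delta x=\Delta y=5$ the indices run from 18 to 22 in each direction, giving N = 25.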

We also use the root mean square error (RMSE) on a specific region. The RMSE is defined as

$RMSE=\sqrt{\frac{1}{N}{\displaystyle \underset{\left(i,j\right)\in G}{\sum}}{\left({u}_{i,j}^{e}-{\stackrel{^}{u}}_{ij}\right)}^{2}}={\Vert e\Vert}_{2,G}$

where N is the number of points in the gray region shown in Figure 1.

6. Two-Asset Cash-or-Nothing Option

First, the two-asset cash-or-nothing option will be considered. We assume that for two assets with prices x and y, the payoff of the option is as follows [3] .

Figure 1. The limited region of asset prices.

$\Lambda \left(x,y\right)=\left\{\begin{array}{ll}\text{cash}& x\ge {X}_{1},\text{\hspace{0.17em}}y\ge {X}_{2},\\ 0& \text{otherwise},\end{array}\right.$ (21)

where X_{1}, X_{2} are the strike prices for x and y.
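The payoff (21) can be sketched as follows (a minimal sketch; the default arguments are the parameter values cash = 1 and ${X}_{1}={X}_{2}=100$ used below):

```python
def payoff(x, y, X1=100.0, X2=100.0, cash=1.0):
    """Two-asset cash-or-nothing payoff: `cash` if both assets finish at
    or above their strikes, and 0 otherwise."""
    return cash if (x >= X1 and y >= X2) else 0.0
```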

The payoff function is shown in Figure 2.

The following values will be used for the numerical simulation of the parameters.

${\sigma}_{1}={\sigma}_{2}=0.3,\text{\hspace{0.17em}}\text{\hspace{0.17em}}r=0.03,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\rho =0.5,\text{\hspace{0.17em}}\text{cash}=1,\text{\hspace{0.17em}}{X}_{1}={X}_{2}=100.$

We consider the computational domain $\Omega =\left[0,300\right]\times \left[0,300\right]$ for this example. With the mentioned parameters, the exact solution is

$V\left(x,y\right)=\text{cash}\mathrm{exp}\left(-rt\right){\Psi}_{\rho}\left({Y}_{1}\left(x,y\right),{Y}_{2}\left(x,y\right)\right)$

${Y}_{1}\left(x,y\right)=\frac{1}{{\sigma}_{1}\sqrt{T}}\left(\mathrm{log}\frac{x}{{X}_{1}}+\left(r-\frac{{\sigma}_{1}^{2}}{2}\right)T\right),$

${Y}_{2}\left(x,y\right)=\frac{1}{{\sigma}_{2}\sqrt{T}}\left(\mathrm{log}\frac{y}{{X}_{2}}+\left(r-\frac{{\sigma}_{2}^{2}}{2}\right)T\right).$

Note that ${\Psi}_{\rho}$ is a bivariate normal cumulative distribution function, i.e.,

${\Psi}_{\rho}\left(a,b\right):=\frac{1}{2\text{\pi}\sqrt{1-{\rho}^{2}}}{\displaystyle {\int}_{-\infty}^{a}}{\displaystyle {\int}_{-\infty}^{b}}\mathrm{exp}\left(-\frac{{x}^{2}-2\rho xy+{y}^{2}}{2\left(1-{\rho}^{2}\right)}\right)\text{d}x\text{d}y.$
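${\Psi}_{\rho}$ can be checked numerically by brute-force midpoint-rule integration of this density (a sketch; the truncation point `lo` and the step size are assumptions chosen for roughly 1e-3 accuracy, not an efficient implementation):

```python
import math

def psi(a, b, rho, lo=-8.0, step=0.05):
    """Bivariate standard normal CDF Psi_rho(a, b) by midpoint quadrature
    of the density over [lo, a] x [lo, b]."""
    det = 1.0 - rho * rho
    norm = 1.0 / (2.0 * math.pi * math.sqrt(det))
    nx = int(round((a - lo) / step))
    ny = int(round((b - lo) / step))
    total = 0.0
    for i in range(nx):
        x = lo + (i + 0.5) * step
        for j in range(ny):
            y = lo + (j + 0.5) * step
            total += math.exp(-(x * x - 2.0 * rho * x * y + y * y)
                              / (2.0 * det))
    return norm * total * step * step
```

For $\rho =0$ this reproduces ${\Psi}_{0}\left(0,0\right)=1/4$ (independent standard normals), and more generally the orthant probability ${\Psi}_{\rho}\left(0,0\right)=1/4+\mathrm{arcsin}\left(\rho \right)/\left(2\text{\pi}\right)$.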

For this example, the errors of both ADI and LOD methods with different time and location step length are presented in Table 1 and Table 2.

Figure 2. The payoff function of the two-asset cash-or-nothing option.

7. Results

As shown in Table 1, the ADI method converges better than the LOD method for relatively large space step sizes. However, with smaller space step sizes (equivalently, relatively large time step sizes), the ADI method produces results with large errors, while the LOD method remains convergent. To investigate what causes the blow-up in the ADI scheme, we compare the solutions ${u}^{\frac{1}{2}}$ and ${u}^{1}$ and the source terms f and g generated by the ADI and LOD methods. We used the time step size $\Delta t=0.5$ and the space step size $h=5$. In Figure 3, the first and second columns show the numerical results at each step of the ADI and LOD methods for a two-asset cash-or-nothing option, respectively. As can be seen from the figure, the numerical result of the ADI method with a relatively large time step is oscillatory along the lines $x={X}_{1}$ and $y={X}_{2}$; the source terms in the first steps are also shown.

Table 1. The results of the ADI method.

Table 2. The results of the LOD method.

(a) Matrix f in ADI method. (b) Matrix f in LOD method. (c) Matrix ${u}^{\frac{1}{2}}$ in ADI method. (d) Matrix ${u}^{\frac{1}{2}}$ in LOD method. (e) Matrix g in ADI method. (f) Matrix g in LOD method. (g) Matrix ${u}^{1}$ in ADI method. (h) Matrix ${u}^{1}$ in LOD method.

Figure 3. Numerical results of the cash-or-nothing option using the ADI and LOD methods. (a) source term f at step 1, (b) solution ${u}^{\frac{1}{2}}$ at step 1, (c) source term g at step 2, and (d) solution ${u}^{1}$ at step 2.

The source term in the ADI method exhibits oscillation around $y={X}_{2}$, which comes from the y-derivatives in the source term. On the other hand, in the LOD method the y-derivatives do not appear in the source term, and the solution

${u}^{\frac{1}{2}}$ is monotone around $y={X}_{2}$. Therefore, the ADI method already has an oscillatory

solution at the first half-step. After one complete time step, the ADI result is a nonsmooth numerical solution, whereas the LOD method produces a smooth numerical solution [1] .

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper: *Journal of Mathematical Finance*, **7**, 275-290. doi: 10.4236/jmf.2017.72014.

[1] Jeong, D. and Kim, J. (2013) A Comparison Study of ADI and Operator Splitting Methods on Option Pricing Models. Journal of Computational and Applied Mathematics, 247, 162-171.

[2] Shreve, S.E. (2004) Stochastic Calculus for Finance I, II. Springer, New York.

[3] Black, F. and Scholes, M. (1973) The Pricing of Options and Corporate Liabilities. The Journal of Political Economy, 81, 637-654. https://doi.org/10.1086/260062

[4] Ikonen, S. and Toivanen, J. (2004) Operator Splitting Methods for American Option Pricing. Applied Mathematics Letters, 17, 809-814.

[5] Alexander, C. and Venkatramanan, A. (2012) Analytic Approximations for Multi Asset Option Pricing. Mathematical Finance, 22, 667-689. https://doi.org/10.1111/j.1467-9965.2011.00481.x

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.