Error Estimation and Assessment of an Approximation in a Wavelet Collocation Method

This article describes how to assess an approximation in a wavelet collocation method which minimizes the sum of squares of residuals. In a research project several different types of differential equations were approximated with this method. Several parameters must be adjusted in the method discussed here, for example the number of collocation points. In this article we show how to detect whether this parameter is too small and how to assess the error sum of squares of an approximation. In an example we see a correlation between the error sum of squares and a criterion for assessing the approximation.


Introduction
In wavelet theory a scaling function $\varphi$ is used whose properties are defined by the multiresolution analysis (MRA). Through the MRA we know that we can construct an orthonormal basis of a closed subspace $V_j$, where $V_j$ belongs to a nested sequence of subspaces with the property $V_j \subset V_{j+1}$ and $\overline{\bigcup_j V_j} = L^2(\mathbb{R})$. The advantage of calculating the approximation by minimizing $Q$ is that we can choose more collocation points $t_i$ than unknown coefficients, as shown in the following example; in that case we apply the least squares method to calculate the coefficients. Many simulations have shown that if $Q_{\min}$ was very small, then the approximation $y_j$ was good. An even better criterion for a good approximation is $Q_2$ (see (3)). Moreover, the equations have been ill-conditioned in several examples.
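To make the least-squares formulation concrete, here is a minimal Python sketch (our illustration, not the paper's implementation): we approximate the linear test problem $y' = -y$, $y(0) = 1$ on $[0, 1]$ with a basis of translated Shannon scaling functions $\varphi = \operatorname{sinc}$ and solve the overdetermined collocation system with `numpy.linalg.lstsq`. The level $j = 0$, the index range and the weighting of the initial condition are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of least-squares wavelet collocation for the linear
# test problem y' = -y, y(0) = 1 on [0, 1], with the Shannon scaling
# function phi = sinc. Level, index range and weights are illustrative.

def phi(x):                      # Shannon scaling function (normalized sinc)
    return np.sinc(x)

def phi_prime(x):                # derivative of np.sinc, safe at x = 0
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = (np.cos(np.pi * x[nz]) - np.sinc(x[nz])) / x[nz]
    return out

j = 0                                    # resolution level
ks = np.arange(-15, 16)                  # translation indices k_min..k_max
t = np.linspace(0.0, 1.0, 80)            # more collocation points than unknowns

# Residual of y' = -y is linear in the coefficients c_k:
#   sum_k c_k (2^j phi'(2^j t - k) + phi(2^j t - k)) = 0,  y_j(0) = 1.
A = 2.0**j * phi_prime(2.0**j * t[:, None] - ks) + phi(2.0**j * t[:, None] - ks)
b = np.zeros(len(t))

# Append the initial condition as an extra (weighted) equation.
w = 1e3
A = np.vstack([A, w * phi(2.0**j * 0.0 - ks)])
b = np.append(b, w * 1.0)

c, *_ = np.linalg.lstsq(A, b, rcond=None)
y_j = phi(2.0**j * t[:, None] - ks) @ c  # approximation on the grid
print("max error:", np.abs(y_j - np.exp(-t)).max())
```

As noted above, such systems can be ill-conditioned; a least-squares solver with a rank cutoff, as used here, is one way to keep the computation stable.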

$$Q = \sum_i \Big( y_j'(t_i) - f\big(t_i, y_j(t_i)\big) \Big)^2 \qquad (1)$$
Analogously we could use boundary conditions instead of the initial conditions. The method can even be used analogously for PDEs, ODEs of higher order, or DAEs. If the problem is an ODE system, then we use one approximation function of this form per component. For the $i$-th component of the solution $y$ we use the notation $y^{(i)}$ as usual, and for the $i$-th component of $y_j$ we use the notation $y_j^{(i)}$, in order not to cause confusion with the approximation $y_j$ out of $V_j$; so it will always be distinguished whether the approximation $y_j$ or its $i$-th component is used. We use equidistant collocation points $t_i$. Simulations have shown that even with a moderate index range $k_{\min}, \dots, k_{\max}$ we get good approximations. For the assessment of the approximation we use the value $Q_2$, which evaluates the residual at intermediate points between the collocation points, where the number of intermediate points $m$ is an integer. For big values we should weight with a constant $c > 0$, and analogously for smaller ones. In the examples we use the Shannon wavelet. Although it has no compact support and no high order, in many examples and simulations we got a much better approximation than using other wavelets (e.g. Daubechies wavelets of order 5 to 8), even with a small $n$. The Meyer wavelet yields good results, too.
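For reference, one common convention for the Shannon scaling function, wavelet and the associated spaces (standard facts, stated here for the reader's convenience):

$$\varphi(t) = \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}, \qquad \hat\varphi(\omega) = 1_{[-\pi,\pi]}(\omega),$$

$$\psi(t) = 2\operatorname{sinc}(2t) - \operatorname{sinc}(t), \qquad \hat\psi(\omega) = 1_{\{\pi < |\omega| \le 2\pi\}}(\omega),$$

$$V_j = \big\{ f \in L^2(\mathbb{R}) : \operatorname{supp} \hat f \subseteq [-2^j\pi,\, 2^j\pi] \big\}.$$

The missing compact support mentioned above is visible here: $\operatorname{sinc}$ decays only like $1/|t|$.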
We even get a good extrapolation outside the interval. Table 1 shows a regression of $\ln(\mathrm{sse})$ on $\ln(Q_2)$, which exhibits a linear dependency in our example, together with the graph of the linear regression function (Figure 1).
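The mechanics of this regression are simple; the following Python sketch computes the regression line, with placeholder values for $Q_2$ and sse (one pair per simulation run; these numbers are ours, not the paper's data).

```python
import numpy as np

# Sketch: linear regression of ln(sse) on ln(Q2) across several runs.
# q2_runs/sse_runs stand for values collected from simulations
# (placeholders, not the paper's data).
q2_runs  = np.array([1e-8, 1e-6, 1e-4, 1e-2])
sse_runs = np.array([3e-9, 2e-7, 4e-5, 3e-3])

x = np.log(q2_runs)
y = np.log(sse_runs)
slope, intercept = np.polyfit(x, y, 1)   # least-squares straight line
print(f"ln(sse) ~ {slope:.3f} * ln(Q2) + {intercept:.3f}")
```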
Figures 2 and 3 show the graphs of the functions $y$ and $y_j$ for $j = 0$, $k_{\max} = 15$ and $m = 30$ on the approximation interval $[0, 1]$; Figures 4 and 5 show them on a larger interval.

Error Estimation and Assessment of the Approximation
In the example we used the Shannon wavelet. For this wavelet we have additional information about the error in Fourier space from the Shannon sampling theorem. For a good approximation with a small $j$ the behaviour of $\hat y(\omega)$ with growing $|\omega|$ is important, because the error of the orthogonal projection of $y$ onto $V_j$ is determined by the tail of $\hat y$ outside $[-2^j\pi, 2^j\pi]$. With the Riemann–Lebesgue lemma we get $\hat y(\omega) \to 0$ for $|\omega| \to \infty$. For the approximation error the decay behaviour of the detail coefficients $d_{j,k}$ is decisive. On the other side: in many simulations with the Shannon wavelet we got better approximations (with the described collocation method) than with higher order wavelets.
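For the Shannon MRA this can be made explicit with Plancherel's theorem (a standard computation, assuming $y \in L^2(\mathbb{R})$): the orthogonal projection $P_j y$ keeps exactly the frequency content in $[-2^j\pi, 2^j\pi]$, so

$$\|y - P_j y\|_{L^2}^2 = \frac{1}{2\pi} \int_{|\omega| > 2^j\pi} |\hat y(\omega)|^2 \,\mathrm{d}\omega \;\longrightarrow\; 0 \quad (j \to \infty),$$

and the faster $\hat y$ decays, the smaller $j$ can be chosen.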
Remarks 2: 1) For a theoretical multiresolution analysis we could consider $y \cdot 1_I$ instead of $y$, because $y \cdot 1_I$ is in $L^2(\mathbb{R})$ if we need an approximation on $I$. Here $1_I$ is the indicator function of the interval $I$.
2) For interpolating wavelets there are a number of publications with error estimates, also for the approximation of the solutions of initial value problems and boundary value problems (for ordinary and partial differential equations), see [1,2]; the same applies to the sinc collocation method (see [3-5]) with special collocation points ("sinc grid points", see [5]).
Theorem 1 (for the decay behaviour): Let the wavelet $\psi$ have order $p$ and let $y^{(p-1)}$ be Lipschitz continuous. Then there exists a constant $C$ independent of $b$ with $|\langle y, \psi_{a,b}\rangle| \le C\, a^{p+1/2}$. A proof is in [6]. So we get for the detail coefficients $d_{j,k} = \langle y, \psi_{j,k}\rangle$ with $\psi_{j,k}(t) = 2^{j/2}\psi(2^j t - k)$ the appraisal $|d_{j,k}| \le C\, 2^{-j(p+1/2)}$, because $\psi_{j,k} = \psi_{a,b}$ with $a = 2^{-j}$ and $b = 2^{-j}k$. Now we see that the decay of the detail coefficients depends on the order of the wavelet.
From the theory of Gilbert Strang (see [7]) we additionally know an upper bound of the approximation error in dependence on the order $p$: if the wavelet is of order $p$, then the approximation error has the order $O(2^{-jp})$. If a wavelet is of order $p$, the scaling function $\varphi$ even has a reproduction property, because then we can construct the polynomials $1, t, \dots, t^{p-1}$ via a linear combination of the translates $\varphi(t-k)$ (see [7]). That is also a property of the so-called interpolating wavelets. For interpolating wavelets we find error estimations in [8] and [9].
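In its standard form (assuming $y \in H^p(\mathbb{R})$; this is our paraphrase of the cited theory) the estimate reads

$$\|y - P_j y\|_{L^2} \le C\, 2^{-jp}\, \|y^{(p)}\|_{L^2},$$

i.e. with mesh width $h = 2^{-j}$ the approximation error has order $O(h^p)$.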
Remarks 3: 1) Error estimations for the sinc collocation method with a transformation can be found in [4] and [5].
2) Although the approximation error depends on the order of a wavelet, in many simulations the Shannon wavelet led to much better approximations than Daubechies wavelets of higher order if the approximation function $y_j$ was calculated by minimizing the sum of squares of residuals $Q$. Even when comparing the extrapolations, the Shannon wavelet was significantly better.
The reason is that we do not calculate an orthogonal projection onto $V_j$ as in the appraisal above, and the function $y$ is in the general case not square integrable on $\mathbb{R}$ (we consider only a compact interval $I$).
The following appraisal takes into account that we calculate the approximation function by minimizing $Q$. We first need a theorem which follows from the Gronwall lemma.
Theorem 2: Assumptions: we have an initial value problem $y' = f(t, y)$, $y(t_0) = y_0$, where $f$ is Lipschitz continuous in $y$ with constant $L$, and $z$ is a differentiable function whose defect satisfies $|z'(t) - f(t, z(t))| \le \varepsilon$ on $[t_0, T]$. Then we get for $t \in [t_0, T]$:

$$|y(t) - z(t)| \le |y(t_0) - z(t_0)|\, e^{L(t-t_0)} + \frac{\varepsilon}{L}\Big(e^{L(t-t_0)} - 1\Big).$$

For a proof see [10].
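As a plausibility check, the following Python sketch evaluates the bound of Theorem 2 for a simple linear test problem. The test equation, the perturbed approximation $z$ and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Check the Gronwall-type defect bound of Theorem 2 for the test
# problem y' = -y + sin(t), y(0) = 1, which is Lipschitz in y with
# constant L = 1. z is a deliberately perturbed "approximation".

L_const = 1.0
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def f(t, y):
    return -y + np.sin(t)

# Exact solution of this linear ODE (standard formula).
y_exact = 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))

# A perturbed approximation z with a small, known defect.
z = y_exact + 1e-3 * np.sin(5 * t)
dz = np.gradient(z, dt)            # numerical derivative of z
defect = np.abs(dz - f(t, z))      # defect d(t) = z'(t) - f(t, z(t))
eps = defect.max()

# Bound: |y - z| <= |y(0)-z(0)| e^{Lt} + (eps/L)(e^{Lt} - 1)
bound = abs(y_exact[0] - z[0]) * np.exp(L_const * t) \
        + eps / L_const * np.expm1(L_const * t)

print("max error     :", np.abs(y_exact - z).max())
print("Gronwall bound:", bound.max())   # should dominate the error
```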

Theorem 3:
With the assumptions from Theorem 2 we get (if the error bound of Theorem 2 is applied at the points entering $\mathrm{sse}$) an upper bound for $\mathrm{sse}$ that is proportional to $Q_2$. So we get the following inequality for $\ln(\mathrm{sse})$, which is used in example 2: $\ln(\mathrm{sse}) \le \ln(Q_2) + c$ with a constant $c$. Since $Q_2$ is evaluated with the coefficients already obtained from the minimization over the collocation points (we get an approximation function $y_j$), we need not calculate a second minimization for the computation of $Q_2$.

The sum $\sum_i d(t_i)^2$ over the collocation points will in general (for $m > 1$) be much smaller than $Q_2$, because we use the collocation points $t_i$ in the minimization and so $d$ is very small at these points (see the next graphic).
$Q_{\min}$ was in many good simulations less than $10^{-16}$.
In many simulations $d$ is relatively big between two collocation points (or at the edge of $I$ if we start with $i = 1$ in the sum (1)).
In Figure 6 we see the graph of $d$ for such a case; here a too small $m$ results in a very bad approximation.
We see that $Q_{\min}$ can be very small for a too small $m$, but $Q_2$ is very big here. In the graph we see that $d$ is very small at the collocation points, but between them $d$ is very big. That is the reason why we could identify a worse approximation with $Q_2$ in all our simulations. On the other hand, a big $Q_2$ is indicative of a too small $j$.
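The following Python sketch summarizes how $Q$ and $Q_2$ can be computed for a given approximation; the function names and the uniform refinement by the integer factor $m$ are our assumptions about the construction described above.

```python
import numpy as np

# Sketch of the assessment criterion: Q sums the squared residuals at
# the collocation points; Q2 re-evaluates the residual
# d(t) = y_j'(t) - f(t, y_j(t)) on a finer grid (here: the collocation
# grid refined by the integer factor m, assuming equidistant points).

def residual(f, y_j, dy_j, t):
    """Residual d(t) = y_j'(t) - f(t, y_j(t)) of an approximation y_j."""
    return dy_j(t) - f(t, y_j(t))

def q_value(f, y_j, dy_j, t_colloc):
    """Sum of squared residuals at the collocation points (minimized)."""
    return np.sum(residual(f, y_j, dy_j, t_colloc) ** 2)

def q2_value(f, y_j, dy_j, t_colloc, m):
    """Q2: squared residuals on a grid refined by the factor m.

    A small Q together with a large Q2 signals that the residual is
    small only at the collocation points, i.e. the situation of
    Figure 6 with a too small m."""
    t_fine = np.linspace(t_colloc[0], t_colloc[-1],
                         m * (len(t_colloc) - 1) + 1)
    return np.sum(residual(f, y_j, dy_j, t_fine) ** 2)
```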
Here we get (see (6)) an upper bound for $\ln(\mathrm{sse})$: using Theorem 2 we derived this estimate (see Theorem 3). It was also shown how to detect a too great step size using $Q_2$. In example 2 we show that the deduced estimate represents a straight line in the coordinate system described below.

The sums in (1) and (3) could start with $i = 1$, too. (3) could also be used as a constraint if the initial value should be fulfilled. But in all good approximations the initial value was fulfilled very accurately even without this constraint.

In Figures 4 and 5 we see that we even get a good extrapolation.

Figure 1. Linear regression plot of $\ln(\mathrm{sse})$ against $\ln(Q_2)$.

In example 2 we set $\ln(Q_2/M)$ on the x-axis, so we have a comparison with example 1, where we set $\ln(Q_2)$ on the x-axis, and $\ln(\mathrm{sse})$ on the y-axis. The upper bound line runs approximately parallel to the regression line (it is only approximately parallel because the regression function is an estimation; theoretically it must be parallel, because it cannot cross the upper bound line). In a research project we got analogous results in many simulations.

Table 1. Linear regression table of $\ln(\mathrm{sse})$ on $\ln(Q_2)$.

With $Q_2$ we could assess the quality of an approximation in all simulations, and in the linear regressions from both examples we see the linear dependency.

Table 2. Regression table of $\ln(\mathrm{sse})$ against $\ln(Q_2/M)$.