Optimal Recovery of Holomorphic Functions from Inaccurate Information about Radial Integration ()
1. Introduction
Let W be a subset of a linear space X, let Z be a normed linear space, and T the linear operator
that we are trying to recover on
from given information. This information is provided by a linear operator
where Y is a normed linear space. For any
we know some
that is near
. That is, we know
such that
(1)
for some
. The value
is our inaccurate information. Now we try to approximate the value of
from
using an algorithm or method,
. Define a method to be any mapping
, and regard
as the approximation to
from the information
. Our goal is to minimize the difference of
and
in
, i.e. minimize 
However, the size of
varies since 
can be chosen to be any
satisfying (1). Furthermore
varies depending on the
chosen. So the error of any single method is defined as the worst case error

Now the optimal error is that of the method with the smallest error. Thus the error of optimal recovery is defined as
(2)
For the problems addressed in this paper, let
be linear spaces with semi-inner norms
and
linear operators,
. We want to recover
for

(where if
we let
), if we know the values
satisfying
for
.
Define the extremal problem
(3)
This problem is dual to (2).
2. Construction of Optimal Method and Error
The following results of G. G. Magaril-Il’yaev and K. Yu. Osipenko [1] are applied to several problems of optimal recovery.
Theorem 1: Assume that there exist
,
such that the solution of the extremal problem
(4)
is the same as in (3). Assume also that for each
there exists
which is a solution to
(5)
Then for all
,

and the method
(6)
is optimal.
Theorem 1 gives a constructive approach to finding an optimal method
from the information. It follows from results obtained in [1-7] (see also [8] where this theorem was proven for one particular case.)
In order to apply Theorem 1 the values of extremal problems (4) and the dual problem (3) must agree. The following result, also due to G. G. Magaril-Il’yaev and K. Yu Osipenko [1], provides conditions under which the solution of problems (3) and (4) will agree.
Typically, when one encounters extremal problems, one approach is to construct the Lagrange function
. For an extremal problem of the form of (4), the corresponding Lagrange function is

Furthermore,
is called an extremal element if
for
and thus admissible in (4)
and

Theorem 2: Let
and
be such that
for
and 1) 
2) 
Then
is an extremal element and

If we wish to combine Theorems 1 and 2 to determine an optimal error and method then we must show the posed problem is able to satisfy equating extremal problems (3) and (4). Through Theorem 2 we have such a means available.
3. Main Results
Consider the class of functions defined on the unit disc
given by
(7)
for
,
satisfying
(8)
and
(9)
Therefore, any
is holomorphic in the unit disc by (6). We define the semi-norm in
as

and
(10)
Let
,
, be a linear operator given by

That is,
is the radial integral of
. To see that
, by (7) we have for all but finitely many
,
for some
. Thus if
then
.
We assume to know
given with a level of accuracy. That is, for a given
, we know a
such that
(11)
The problem of optimal recovery is to find an optimal recovery method of the function
in the class
from the information
satisfying (9). The error of a given method is measured in the
norm defined by

Any method
is admitted as a recovery method. Let
be sequences of non-negative real numbers such that

Define
to be the convex hull. Define
for
by

Lemma 1: The piecewise linear function
with points of break
, with
for
given by
is such that
.
Proof. Assume that
It means that
. Since
and
as
there is a
such that
and
. Then the interval between
and
belongs to
. Consequently,
and
is not a point of break of
.
Assume that
. Since
the interval between
and
belongs to
. Geometrically, the line
to
will lie above the line
. It means that
contradicting that
is a point of break of
.
Note that as
then for any fixed
the slopes between points
and
also tends to 0 as

3.1. Inaccuracy in
Norm
Consider the points in
given by
and define the convex hull of the origin and this collection of points as
:
(12)
Let
(13)
thus
is a piecewise linear function. Let
,
be the points of break of
with
. By (7) the assumption for Lemma 1 is satisfied by
and
.
Theorem 3: Suppose that
with
. Let
(14)
Then the error of optimal recovery is
(15)
and
(16)
is an optimal method of recovery. If
then
and
is an optimal method.
Proof. Consider the dual extremal problem
(17)
which can be written as

where
. Define the corresponding Lagrange function as

Let the line segment between successive points
and
be given by
.
That is
. Thus
are given by (12). Take any
, then by definition of the function
we have

Thus for all 

and hence
for any
.
We proceed to the construction of a function
admissable in (15) that also satisfies
Assume
.
As
if and only if
and
then
if and only if
or
. Let
be the indices that satisfy

and
.
We let
for
, and choose
so that they satisfy the conditions

(18)
From these conditions let

(19)
and
(20)
Now if
with
or
and
the function
is admissible in (15) and
, that is
minimizes 
and condition 1) of Theorem 2 is satisfied. Furthermore, by construction,
satisfies condition 2) of Theorem 2.
If
, that is
and
, define
as in (17). Then as 

So let
and we have

Thus the function
is admissable in (15) and satisfies 1) and 2) of Theorem 2. It should be noted that in this case
are simply
and
.
Now we proceed to the extremal problem
(21)
This problem may be rewritten as

which has solution

So for
,
by Theorems 1 and 2, (14) is an optimal method and the error of optimal recovery is given by (13). If
then
and
is an optimal method. 
It should be noted that for fixed
, that is for a fixed
, the terms

will have the property,
and
as
. So
smooths approximate values of the coefficients of
by the filter
.
3.2. Inaccuracy in
Norm
Our next problem of optimal recovery remains to recover
from inaccurate information pertaining to the radial integral of f. However, the inaccurate information we are given are the values
such that

where
is the
coefficient of the radial integral
,

Denote

We again consider the space of functions
given by (5) and
and
defined by (10) and (11) respectively but now add the condition
(22)
The problem of optimal recovery on the class
given by (8) is to determine the optimal error
(23)
(24)
and an optimal method
obtaining this error.
Define
as the largest index such that
(25)
which by (7) exists, and
(26)
Theorem 4: Suppose
with
. If
let
and
.
Then the optimal error is
(27)
and
(28)
is an optimal method. If
then
and
is an optimal method.
If
and
then with
and
the error of optimal recovery is (22) and (23) is an optimal method. For
,
and
is an optimal method.
Proof. For the cases
with
we simply apply the same structure of proof as in Theorem 3. For the case
there remains some work.
Our construction will depend on whether or not
, that is whether or not
with
.
First we notice
. Assume not. Then if
we also know
since for all
we assumed
. Since
we know
. Then by definition of
we know for
,

and substituting
we have

which contradicts the definition of
. Therefore
and if
then
.
In either case,
or
, the dual problem is of the form
(29)

The corresponding Lagrange function is then

where
is the characteristic function of
.
Case 1): 
If
let
correspond to the index satisfying

To determine
let
be the line through the point
that is parallel to the line from the origin to
. That is, let
(30)
So for any point of break we have
and for any index
, we obtain

If
then

Thus for the chosen
and
and any
we have
.
To construct
admissable in (24), let
for
and define
by the system

and since
this becomes

So let
and
.
Then for
the function
is admissable in (24) with

Therefore
and by construction we have
and 
so that

and conditions (a) and (b) of Theorem 2 are satisfied.
Case 2): 
If
then
, and
, as this is the only point in the set
with a
-coordinate of
. Furthermore, as
is a point of break of
we know
for all
. Since
then by the definition of
we know
. As
(31)
then we obtain equality,
.
Define
by (25) so
and
. If we let
be
then

In addition
is admissable in extremal problem (24)
as
and
.
To justify
simply note that as
satisfies (26) and
for all
then
. So we have
. Since
then
minimizes
.
For both cases, we now consider extremal problem
(32)
This problem can be written as

which will have solution

So by Theorems 1 and 2 we have obtained the optimal error and an optimal method for all scenarios. In each case i and ii,
and
are given by (25). In each case, the error of optimal recovery is
which for case 2) simplifies to
. Also for each case, a method of optimal recovery is given by
where in case 2) this simplifies to
since in case 2),
.
One may be able to reduce the amount information needed without affecting the error of optimal recovery. Therefore, by reducing the number of terms in the optimal method we reduce the compututaions needed. The following ideas are in [9]. We consider the subset
,
as the set of all points whose slope to the origin is greater than the slope of
for
, that is the slope of the line segment between points
and
. Define the sets
(33)
for
where if
define
. Now consider the same problem as stated in Theorem 4 using only information
. For
, we have
and so
. In this situation,
with
, it was shown that the error of optimal recovery only involves the two points
then the reduction in information from
to
will not change the error. That is
and if
, an optimal method is
(34)
where
.
3.3. Varying Levels of Accuracy Termwise
In Theorems 3 and 4 the inaccuracy of the information given is a total inaccuracy. That is, the inaccuracy
is an upper bound on the sum total of the inaccuracies in each term, be it a finite or infinite sum. For Theorems 3 and 4 however, there is no way to tell how the inaccuracy is distributed. In particular, with regards to Theorem 4, the situations in which the given information
satisfies

or for some particular
satisfying 

are treated the same. For the next problem of optimal recovery we address this ambiguity. The problem of optimal recovery is to determine an optimal method and the optimal error of recovering
, from the information
satisfying

for some prescribed
and
.
To define
use conditions (6) and (20) as previously but impose an additional restriction. We add the condition

Define
where
are the levels of accuracy. If
define
(35)
So
and furthermore
. The case
will be treated seperately.
Theorem 5: If
let
(36)

then the error of optimal recovery is given by
(37)
and
(38)
is an optimal method.
If
then
and
is an optimal method.
Proof. The dual problem in this situation is
(39)
(40)
with the corresponding Lagrange function

The method of proof will be to first determine
with
and
admissable in (31) and satisfying 1) and 2) of Theorem 2.
If
, define
and
as follows:



To verify
assume
in which case
and hence

To show for the chosen
and any
,
, we consider the cases
or
.
For
we know by assumption
and hence

For 

Thus for any
,
. For the constructed
, it can be shown that
as desired. and thus
minimizes the Lagrange function.
To show
is admissable in (31) we can clearly see that for
,
. It remains to show
for
. Assume not, then

which occurs if and only if

which contradicts the definition of
unless
. If
then
and hence we no longer need the condition
in order for
to satisfy (31).
Furthermore

and so
is admissable in (31).
By the construction of
we also have the results
and
for
while 
for
. Thus
satisfies 2) of Theorem 2 as

We now proceed to the extremal problem

Notice the upper bound on the sum is
as
for any
. This extremal problem will have solution

Therefore the error of optimal recovery is given by

and

is an optimal method.
Now we proceed to the case
. Choose
and
for
. Then as
for all 

Thus
for all
. Let 
and
and notice
and clearly
so
is admissable in (31). Furthermore

and so
. Also,

Therefore
and 
is an optimal method. 
The optimal method may not use all of the information provided as
may be less than
. Thus increasing
may not change
and hence not change the error or the method. If
, then

and we can reduce the amount of information needed for a given optimal error.
If
we may be able to reduce the error of optimal recovery if we have more information available. Fix
. The greater number of terms we have of
then the better we may be able to approximate
, that is the smaller the optimal error of recovery. Let
(41)
and for 

for any
. If we know the first
terms with some errors, then further increasing the terms will not yield a decrease in the error of optimal recovery.
3.4. Applications: The Hardy-Sobolev and Bergman-Sobolev Classes
We now apply the general results to the Hardy-Sobolev and Bergman-Sobolev spaces of functions on the unit disc. Let
denote the set of functions holomorphic on the unit disc. Define the Hardy space of functions
as the set of all
,
with
where

The Hardy-Sobolev space of functions,
, are those
such that
and
is the class consisting of those 
with
. The Bergman space of functions
is the space of all
such that

That is,
is the space of all holomorphic functions in
. The Bergman-Sobolev space of functions,
, consists of
with
and
as the class of all
with
.
So each space can be considered as the space
with

For each space of functions we have the collection of points
. If
then for 

Therefore for 

In this case we consider the collection of points

It is easy to see that if
then the piecewise linear function
will have points of break
(42)
For the space
, the points to consider are

Again let
and thus the points of break of
will be precisely

For the special case of
, the function
has only a single point of break at the origin as

so that
for
. Furthermore,
does not satisfy (7) as

Thus, in the applications of the general results, this case will be treated separately.
For notational purposes, let
,
be the points of break of
for the space
.
Corollary 1. Let
or
. If
with
or
then the error of optimal recovery is given by (13) and (14) is an optimal method. If
and
then
and
is optimal.
Proof. For the spaces
or
,
if and only if
and
. Thus
if and only if
or
. Thus apply Theorem 3 to obtain the result for all spaces except
. The dual problem in the case
leads to a simple Lagrange function. The dual problem is specifically

Therefore the Lagrange function is simply given by

Now if we let
and
then
for any
. So now proceed as in Theorem 3. As any
will minimize
, choose
as in (18). The extremal problem (19) is solved similarly, and as
then
for
. 
It should be noted that the optimal method described is stable with respect to the inaccurate information data.
We now apply Theorem 4 to the Hardy-Sobolev spaces
and Bergman-Sobolev spaces
in which
is explicitly defined to be the smallest nonnegative integer satisfying

For the case
,
for all
. Thus
does not depend on
. So
and hence for any
we are in the case
.
Corollary 2. Let
or
. Suppose
with
. If
or
then let
be given by (12) and the optimal error is given by
(13) and (23) is an optimal method. If
and
then
and
is an optimal method.
Otherwise suppose
. If
or
then the optimal error is given by (13) and (23) is an optimal method with
and
. If
and
then
and
is an optimal method.
Proof. As previously stated, if
the only break point of
is
and furthermore as
then
given by (21) does not exist so we treat this special case. In this case, the dual extremal problem is

and the corresponding Lagrange function is simply

If
and
then
for any
. Now proceed as in the proof of Theorem 4 to obtain the result. 
We now apply Theorem 5 to the spaces
or
for
. In this situation
will be a non-decreasing sequence for all
. Also, for any
we have
and we are always in the case
. For
then for both the Hardy and Bergman spaces
and so the condition
will be satisfied if we know
satisfying

Corollary 3. Let
or
with
or
and
and
given by (27). Let
,
be given by (28). Then the error of optimal recovery is given by (29) and (38) is an optimal method. If
and
then
and
is an optimal method.
Proof. For Theorem 5 we simply used conditions (6) and (20), both of which are satisfied by
and
for all
.
As a direct consequence of Theorem 5, we consider the situation in which we have a uniform bound on the inaccuracy of each of the first
terms of
. That is we take
for every
. If
we define
similarly as

and the apriori information is given by the values
such that

Again we will only need the values
for an optimal method.
As previously noted, since the optimal method and error of optimal recovery only use up to the
term then any information beyond may be disregarded if
as additional information will not decrease the error of optimal recovery.