Error Analysis of ERM Algorithm with Unbounded and Non-Identical Sampling

A standard assumption in the learning theory literature is that samples are drawn independently from an identical distribution with uniformly bounded output. This excludes the common case of Gaussian distributions. In this paper we extend these assumptions to a more general setting: samples are drawn from a sequence of unbounded and non-identical probability distributions. By a drift error analysis and a Bennett inequality for unbounded random variables, we derive a satisfactory learning rate for the ERM algorithm.


Introduction
In learning theory we study the problem of finding a function, or an approximation of it, which reflects the relationship between input and output via samples. It can be considered as a mathematical analysis of artificial intelligence or machine learning. Since the exact distributions of the samples are usually unknown, we can only construct algorithms based on an empirical sample set. A typical mathematical setting of learning theory is the following: the input space $X$ is a compact metric space, and the output space is $Y \subseteq \mathbb{R}$ for regression. (When $Y = \{+1, -1\}$, it can be regarded as a binary classification problem.) Then $Z = X \times Y$ is the whole sample space. We assume a distribution $\rho$ on $Z$, which can be decomposed into two parts: the marginal distribution $\rho_X$ on $X$ and the conditional distribution $\rho(y|x)$ given some $x \in X$. This implies

$$\int_Z g(z)\,d\rho = \int_X \int_Y g(x,y)\,d\rho(y|x)\,d\rho_X(x)$$

for any integrable function $g(z)$ [1]. To evaluate the efficiency of a function $f: X \to Y$ we choose a loss function which measures the difference between the prediction $f(x)$ and the actual output $y$; it can be the hinge loss in SVM (support vector machine) or the pinball loss in quantile learning, etc. In this paper we focus on the classical least square loss, which gives the generalization error

$$\mathcal{E}(f) = \int_Z (f(x) - y)^2 \, d\rho.$$

[2] shows that

$$\mathcal{E}(f) - \mathcal{E}(f_\rho) = \|f - f_\rho\|_{L^2_{\rho_X}}^2, \qquad \text{where } f_\rho(x) = \int_Y y \, d\rho(y|x).$$

From this we can see the regression function $f_\rho$ is our goal: it minimizes the generalization error. The empirical risk minimization (ERM) algorithm aims to find a function which approximates the goal function $f_\rho$ well. While $\rho$ is always unknown beforehand, a sample set $\mathbf{z} = \{z_i = (x_i, y_i)\}_{i=1}^m$ is accessible. Then the ERM algorithm can be described as

$$f_{\mathbf{z}} = \arg\min_{f \in \mathcal{F}} \mathcal{E}_{\mathbf{z}}(f), \qquad \mathcal{E}_{\mathbf{z}}(f) = \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2,$$

where the function space $\mathcal{F}$ is the hypothesis space, which will be chosen to be a compact subset of $C(X)$. The error produced by the ERM algorithm is then $\mathcal{E}(f_{\mathbf{z}})$. We expect it to be close to the optimal one $\mathcal{E}(f_\rho)$, which means the excess generalization error $\mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_\rho)$ should be small as the sample size $m$ tends to infinity.
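To make the algorithm concrete, here is a minimal numerical sketch of least squares ERM over a toy hypothesis class; the target function, candidate class and noise level are illustrative assumptions, not taken from the paper.

```python
# Minimal least-squares ERM sketch: minimize the empirical risk over a
# finite (compact) candidate class; all specific choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def f_rho(x):
    # the regression function we hope to approximate (assumed for the demo)
    return np.sin(2 * np.pi * x)

# hypothesis space: frequencies c in a compact interval, f_c(x) = sin(2*pi*c*x)
candidates = np.linspace(0.5, 1.5, 101)

def erm(x, y):
    # f_z = argmin over candidates of (1/m) * sum_i (f(x_i) - y_i)^2
    risks = [np.mean((np.sin(2 * np.pi * c * x) - y) ** 2) for c in candidates]
    return candidates[int(np.argmin(risks))]

for m in (50, 500, 5000):
    x = rng.uniform(0, 1, m)
    y = f_rho(x) + rng.normal(0, 0.3, m)  # unbounded (Gaussian) output noise
    print(m, erm(x, y))                   # recovered frequency approaches 1.0
```

As $m$ grows the empirical minimizer concentrates around the best-in-class function, which is exactly the behavior the excess generalization error quantifies.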
Dependent sampling has been considered in the literature, for instance in [3] for concentration inequalities and in [4] [5] for learning. More recently, in [6] and [7], the authors studied learning with non-identical sampling and dependent sampling, and obtained satisfactory learning rates.
In this paper we concentrate on the non-identical setting in which each sample $z_i$ is drawn according to a different distribution $\rho^{(i)}$ on $Z$. We assume a polynomial convergence condition, with power index $b > 0$, for both the marginal distribution sequence $(\rho^{(i)}_X)$ and the conditional distribution sequence $(\rho^{(i)}(\cdot|x))$. The power index $b$ measures quantitatively the difference between the non-identical setting and the i.i.d. case: the distributions are more similar when $b$ is larger, and when $b = \infty$ it is indeed i.i.d. sampling, i.e., $\rho^{(i)} = \rho$ for every $i$.
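As an illustration of such a sequence, the sketch below draws the $i$-th input from a marginal whose distance to the target marginal decays like $i^{-b}$; the uniform family and the value of $b$ are assumptions made for the demo.

```python
# Non-identical sampling sketch: the i-th marginal is Uniform[i^{-b}, 1],
# whose total-variation distance to the target Uniform[0, 1] decays like i^{-b}.
import numpy as np

rng = np.random.default_rng(1)
b = 0.5  # power index, assumed small here so the drift is visible

def sample_x(i):
    return rng.uniform(i ** (-b), 1.0)

xs = np.array([sample_x(i) for i in range(1, 10_001)])
print("mean of first 100 samples:", xs[:100].mean())   # biased upward
print("mean of last 100 samples: ", xs[-100:].mean())  # close to 0.5
```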
On the other hand, most of the literature assumes that the output is uniformly bounded, that is, $|y| \le M$ for some positive constant $M$, almost surely with respect to $\rho$. A typical kernel-dependent result for the least squares regularization algorithm under this assumption is [9], where the authors obtain a learning rate close to 1 under a capacity condition on the hypothesis space. However, the most common distribution, the Gaussian distribution, is not bounded. The boundedness requirement stems from the bounded condition in the Bernstein inequality and limits the applicability of the algorithms. In [10]-[13], unbounded conditions on the output space are discussed in various forms, extending the classical bounded condition. Here we follow the latter form, which is more general and simpler in expression; this is the second novelty of this paper. We assume the moment incremental condition for the output space, an extension of the one we proposed in [11]: there exist constants $c > 0$ and $M > 0$ such that the output moments grow at most like

$$\int_Y |y|^\ell \, d\rho^{(i)}(y|x) \le c \, \ell! \, M^\ell$$

for every integer $\ell \ge 1$, uniformly in $i$ and $x$ (conditions (4) and (5)).
We can see that the Gaussian distribution satisfies this setting.
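A quick numerical sanity check of this claim: for a centered Gaussian with standard deviation $\sigma$, the absolute moments $\mathbb{E}|y|^p = \sigma^p \, 2^{p/2} \, \Gamma((p+1)/2)/\sqrt{\pi}$ stay below $p! \, \sigma^p$, i.e., the condition holds with $c = 1$ and $M = \sigma$ (constants chosen here for illustration).

```python
# Check that Gaussian absolute moments satisfy E|y|^p <= c * p! * M^p
# with c = 1, M = sigma (illustrative constants).
import math

sigma = 1.0
for p in range(1, 11):
    abs_moment = sigma**p * 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
    bound = math.factorial(p) * sigma**p
    print(p, f"{abs_moment:.3f} <= {bound:.1f}:", abs_moment <= bound)
```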
Next we introduce the covering number and the interpolation space.

Definition 1. The covering number $\mathcal{N}(\mathcal{F}, \eta)$, for a subset $\mathcal{F}$ of $C(X)$ and $\eta > 0$, is defined to be the minimal integer $N$ such that there exist $N$ balls with radius $\eta$ covering $\mathcal{F}$.
Let the hypothesis space $H \subseteq C(X)$ be a compact Banach space with the inclusion $H \hookrightarrow C(X)$ bounded and compact. We follow the assumption of [14] [15] that there exist constants $r > 0$ and $C_r > 0$ such that the hypothesis space satisfies the capacity condition

$$\log \mathcal{N}(B_1, \eta) \le C_r \left(\frac{1}{\eta}\right)^r \quad \text{for all } \eta > 0,$$

where $B_1 = \{f \in H : \|f\|_H \le 1\}$ is the unit ball of $H$. The capacity condition describes the size of the hypothesis space, i.e., how many functions it contains at a given resolution.
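The sketch below estimates covering numbers of a small parametric function class by a greedy procedure, to give a feeling for how $\log \mathcal{N}$ grows as $\eta$ shrinks; the class and the evaluation grid are illustrative assumptions, not the paper's $H$.

```python
# Greedy covering-number estimate in the sup norm for a toy function class.
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 200)
# a sample from a smooth one-parameter class (stand-in for a ball of H)
funcs = np.array([np.sin(2 * np.pi * a * grid) for a in rng.uniform(0.5, 1.5, 2000)])

def greedy_cover(fs, eta):
    # repeatedly pick a function and discard everything within sup-distance eta
    centers = 0
    remaining = fs
    while len(remaining):
        centers += 1
        far = np.max(np.abs(remaining - remaining[0]), axis=1) > eta
        remaining = remaining[far]
    return centers  # an upper estimate of N(class, eta)

for eta in (0.4, 0.2, 0.1, 0.05):
    print(eta, greedy_cover(funcs, eta))
```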
When the covering number of $H$ is larger (or, simply, when $H$ is larger), the sample error will increase but the approximation error will decrease. How to choose an appropriate hypothesis space is therefore the key problem of the ERM algorithm.
We will demonstrate this in our main theorem.
Definition 2. For $0 < \theta < 1$, the interpolation space $(L^2_{\rho_X}, H)_\theta$ consists of all $f \in L^2_{\rho_X}$ with $\sup_{t > 0} t^{-\theta} K(f, t) < \infty$, where $K(f, t) = \inf_{g \in H} \{\|f - g\|_{L^2_{\rho_X}} + t \|g\|_H\}$ is the K-functional.

The interpolation space is used to characterize the position of the regression function, and it is related to the approximation error. Now we can state our main result as follows.
Theorem 1. Assume $f_\rho$ lies in the interpolation space $(L^2_{\rho_X}, H)_\theta$ for some $0 < \theta < 1$, the capacity condition (6) holds with $r > 0$, and the moment incremental conditions (4) and (5) are satisfied. For any integer $p \ge 1$, choose the hypothesis space $\mathcal{F}$ to be the ball of $H$ centered at $0$ with a radius $R = R(m)$ specified in the proof. Moreover, we assume all functions in $H$ and $f_\rho$ are Hölder continuous of order $s$, i.e., there is a constant $C_0 > 0$ such that $|f(x) - f(u)| \le C_0 (d(x, u))^s$ for all $x, u \in X$ (assumption (7)). Then for any $0 < \delta < 1$, with confidence at least $1 - \delta$, we have

$$\mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_\rho) \le C \log\frac{3}{\delta} \, m^{-\alpha},$$

where the power index $\alpha = \alpha(b, \theta, r, s, p) > 0$ is made explicit in the proof. Here $C$ is a constant independent of $m$ and $\delta$.

Remark 1. In [6], the authors pointed out that if we choose the hypothesis space to be the reproducing kernel Hilbert space (RKHS) $\mathcal{H}_K$ on $X \subset \mathbb{R}^n$ with a kernel $K$ of suitable smoothness, then our assumption (7) will hold true. In particular, if the kernel is chosen to be the Gaussian kernel $K_\sigma$, then (7) holds for any $0 < s \le 1$. [16] discussed this in detail.
In summary, we extend the polynomial convergence condition to the conditional distribution sequence and, accordingly, impose the moment incremental condition on that sequence in the least squares ERM algorithm. By error decomposition, a truncation technique and an unbounded concentration inequality, we finally obtain the total error bound of Theorem 1.
Compared with the non-identical settings in [6] and [17], our setting is more general, since the conditional distribution sequence is also only polynomially convergent rather than identical as in their settings. This, together with the unbounded output $y$, is the main difficulty for the error analysis in this paper. For the classical i.i.d. and bounded conditions, [9] indicates a learning rate of order $m^{-1+\varepsilon}$ for any $\varepsilon > 0$. [17] shows that, under some conditions on the kernel and the target function $f_\rho$, an exponential convergence condition for the distribution sequence, and special choices of parameters, the rate of the online learning algorithm is close to the optimal one. In [6], the best case occurs when the distribution sequence converges fast, and the rate of the least squares regularization algorithm can then approach the i.i.d. rate. Our result implies that when $b > 1$, $\theta$ tends to 1 and the capacity index $r$ (determined together with the Hölder order $s$) tends to 0, then, since $p$ can be any integer, the learning rate can be arbitrarily close to $m^{-1}$, which is the same as in the i.i.d. case [9] and better than the former results in non-identical settings. With this result, we can extend the application of the learning algorithm to more situations while keeping the best learning rate. The explicit expression of $C$ in the theorem can be tracked through the proof below.

Error Decomposition
Our aim, the excess generalization error $\mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_\rho)$, is hard to bound directly, so we need a transitional function for the analysis. By the compactness of $\mathcal{F}$ and the continuity of the functional $\mathcal{E}$, we can denote

$$f_{\mathcal{F}} = \arg\min_{f \in \mathcal{F}} \mathcal{E}(f).$$

Then the excess generalization error can be written as

$$\mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_\rho) = \left\{ \mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_{\mathcal{F}}) \right\} + \left\{ \mathcal{E}(f_{\mathcal{F}}) - \mathcal{E}(f_\rho) \right\}.$$

The first term on the right-hand side is the sample error, and the second is the approximation error $\mathcal{A} = \mathcal{E}(f_{\mathcal{F}}) - \mathcal{E}(f_\rho)$, which is independent of the samples. [18] analyzed the approximation error by approximation theory; in the following we mainly study the sample error bound. We now break the sample error into parts which can be bounded using a truncation technique and an unbounded concentration inequality. Writing $\mathcal{E}^{(i)}(f) = \int_Z (f(x) - y)^2 \, d\rho^{(i)}$ and using $\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z}}) \le \mathcal{E}_{\mathbf{z}}(f_{\mathcal{F}})$, we use the error decomposition

$$\mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_{\mathcal{F}}) \le \left\{ \mathcal{E}(f_{\mathbf{z}}) - \frac{1}{m}\sum_{i=1}^m \mathcal{E}^{(i)}(f_{\mathbf{z}}) \right\} + \left\{ \frac{1}{m}\sum_{i=1}^m \mathcal{E}^{(i)}(f_{\mathbf{z}}) - \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z}}) \right\} + \left\{ \mathcal{E}_{\mathbf{z}}(f_{\mathcal{F}}) - \frac{1}{m}\sum_{i=1}^m \mathcal{E}^{(i)}(f_{\mathcal{F}}) \right\} + \left\{ \frac{1}{m}\sum_{i=1}^m \mathcal{E}^{(i)}(f_{\mathcal{F}}) - \mathcal{E}(f_{\mathcal{F}}) \right\}.$$

In the following, we call the first and fourth brackets the drift errors, and the remaining ones the sample errors. We will bound the two types of errors respectively in the following sections, and finally obtain the total error bound.
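To see the decomposition numerically, the toy script below (the same illustrative setup as before, taken i.i.d. for simplicity so that the drift brackets vanish) evaluates the sample and approximation parts and checks that the ERM bracket is nonpositive.

```python
# Numerical illustration of the error decomposition on a toy i.i.d. problem
# (so the drift errors vanish and only sample/approximation parts remain).
import numpy as np

rng = np.random.default_rng(3)
noise_var = 0.09                           # variance of the Gaussian noise
f_rho = lambda x: np.sin(2 * np.pi * x)
cands = np.linspace(0.6, 1.4, 41)          # compact hypothesis class (assumed)
hyp = lambda c, x: np.sin(2 * np.pi * c * x)

xg = np.linspace(0, 1, 20_001)             # fine grid approximating E(f)
E = lambda c: np.mean((hyp(c, xg) - f_rho(xg)) ** 2) + noise_var

m = 200
x = rng.uniform(0, 1, m)
y = f_rho(x) + rng.normal(0, 0.3, m)
Ez = lambda c: np.mean((hyp(c, x) - y) ** 2)

c_z = cands[np.argmin([Ez(c) for c in cands])]   # ERM output f_z
c_F = cands[np.argmin([E(c) for c in cands])]    # best-in-class f_F

print("sample errors:      ", E(c_z) - Ez(c_z), Ez(c_F) - E(c_F))
print("ERM bracket (<= 0): ", Ez(c_z) - Ez(c_F))
print("approximation error:", E(c_F) - noise_var)  # E(f_F) - E(f_rho)
```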

Drift Errors
Firstly we consider the drift error involving $f_{\mathcal{F}}$. To avoid handling the two polynomial convergence sequences simultaneously, we break the drift error into two parts, one controlled by the marginal sequence and one by the conditional sequence. Meanwhile, a truncation technique is used to deal with the unbounded outputs. Since $\mathcal{F}$ is a subset of $C(X)$, its functions are uniformly bounded, and from the definitions of $\mathcal{E}^{(i)}$ and $\mathcal{E}$ we can bound the first part as follows. For any $K \ge 1$ and any integer $p \ge 1$, the moment incremental condition yields a bound on the moments of the output truncated at level $K$; combining this with (12) in [6], we can bound the sum of the first part, and we then choose the truncation level $K$ to be an appropriate power of $m$ to balance the truncated term against the tail. For the second part, the polynomial convergence of the conditional sequence applies directly. Combining the two bounds gives precisely the proposition.
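The effect of the truncation step can be seen numerically: clipping a Gaussian output at level $K$ discards a tail whose mass and squared-loss contribution decay rapidly in $K$, which is why a slowly growing $K = K(m)$ suffices (the specific noise law below is an assumption for the demo).

```python
# Truncation sketch: clip unbounded outputs at level K and measure what is lost.
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(0.0, 1.0, 1_000_000)        # unbounded (Gaussian) outputs
for K in (1, 2, 4, 8):
    tail_prob = np.mean(np.abs(y) > K)                   # P(|y| > K)
    sq_loss = np.mean((y - np.clip(y, -K, K)) ** 2)      # E (y - y_K)^2
    print(f"K={K}:  tail prob {tail_prob:.2e}   squared tail loss {sq_loss:.2e}")
```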
For the drift error involving $f_{\mathbf{z}}$ we have the same result, since $f_{\mathbf{z}} \in \mathcal{F}$ as well; this is stated as Proposition 2, under the same assumptions as Proposition 1.

Sample Error Estimate
We devote this section to the analysis of the sample errors. For the sample error term involving $f_{\mathcal{F}}$, we will use the Bennett inequality as in [11] and [19], which was initially introduced in [20]. Since two polynomial convergence conditions are imposed on the marginal and conditional distribution sequences, we have to modify the Bennett inequality to fit our setting. Denoting $\mathbb{E}^{(i)}(g) = \int_Z g \, d\rho^{(i)}$ for an integrable function $g$, the lemma can be stated as follows.
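For orientation, the script below compares the classical (bounded, i.i.d.) Bennett tail bound with an empirical tail probability; the paper's version is a modification adapted to unbounded, non-identically distributed variables, so this is only a reference point.

```python
# Classical Bennett inequality vs. Monte Carlo tails for bounded i.i.d. means:
# P( (1/m) sum xi_i >= eps ) <= exp( -(m*var/M^2) * h(M*eps/var) ),
# with h(u) = (1+u)*log(1+u) - u, |xi_i| <= M, E xi_i = 0.
import numpy as np

rng = np.random.default_rng(5)
m, M, trials = 100, 1.0, 50_000
xi = rng.uniform(-M, M, (trials, m))   # mean-zero variables bounded by M
var = M**2 / 3                         # variance of Uniform[-M, M]

def bennett_bound(eps):
    u = M * eps / var
    return np.exp(-(m * var / M**2) * ((1 + u) * np.log(1 + u) - u))

for eps in (0.1, 0.2, 0.3):
    empirical = np.mean(xi.mean(axis=1) >= eps)
    print(f"eps={eps}:  empirical {empirical:.2e}   Bennett bound {bennett_bound(eps):.2e}")
```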
For our non-identical setting we obtain a similar result by the same idea of proof, with the single mean replaced by the averaged means $\frac{1}{m}\sum_{i=1}^m \mathbb{E}^{(i)}(g)$. We can now bound the sample error term by applying this lemma.
Proposition 3. Under the moment incremental conditions (4), (5) and the notations above, with probability at least $1 - \frac{\delta}{3}$ the sample error term involving $f_{\mathcal{F}}$ is bounded in terms of $m$, $\delta$ and the approximation error $\mathcal{A}$.

Proof. Apply the modified Bennett inequality to the random variables $\xi_i(z) = (f_{\mathcal{F}}(x) - y)^2$, whose empirical mean and averaged means produce exactly the sample error term involving $f_{\mathcal{F}}$; the moment and variance bounds are obtained in the same way. Then, from Lemma 2 above, setting the right-hand side equal to $\frac{\delta}{3}$ and solving for the deviation, we conclude that with confidence at least $1 - \frac{\delta}{3}$ the stated bound holds. This proves the proposition.

For the sample error term involving $f_{\mathbf{z}}$, the analysis is more involved, since we need a concentration inequality holding uniformly over a set of functions. Firstly we introduce the ratio inequality of [9].
Taking the same choice as in Lemma 2 and arguing as in the proof of the last proposition, we can conclude the stated bound, and the lemma is proved. Then we have the following result.
Lemma 4. For a finite set of functions $\{f_1, \ldots, f_N\} \subset \mathcal{F}$, with confidence at least $1 - \frac{\delta}{3}$ a deviation bound holds uniformly over the set.

Proof. Since each $f_j$ is an element of $\mathcal{F}$, Lemma 3 applies to it individually, and a union bound over the $N$ functions gives the uniform statement; setting the right-hand side equal to $\frac{\delta}{3}$ yields the claim with probability at least $1 - \frac{\delta}{3}$. For the first term, the moment incremental condition applies; for the third term, we need to bound the variance of the truncated losses, and setting the right-hand side equal to $\frac{\delta}{3}$ again, with confidence at least $1 - \frac{\delta}{3}$ the claimed bound follows.

Approximation Error and Total Error
Combining the results above, we can derive the error bound for the excess generalization error $\mathcal{E}(f_{\mathbf{z}}) - \mathcal{E}(f_\rho)$, with a constant independent of $m$ and $\delta$. For the approximation error $\mathcal{A}$, we can bound it by Theorem 3.1 of [18], since the hypothesis space is a ball of $H$ and $f_\rho$ lies in the interpolation space of Definition 2. Now, by a covering number argument, we can bound the sample error term involving $f_{\mathbf{z}}$.

The sample error term involving $f_{\mathbf{z}}$ can be bounded by Lemma 4 above. That is, with confidence at least $1 - \frac{\delta}{3}$ it satisfies the stated bound. Combining all the parts above, we have the following total bound, which holds with confidence at least $1 - \delta$.

Proposition 5. Under the moment conditions (4), (5) for the sampling distributions and the capacity condition for the hypothesis space $\mathcal{F}$, for any $0 < \delta < 1$ and any integer $p \ge 1$, with confidence at least $1 - \delta$, the total error bound stated in Theorem 1 holds.