Matrices Associated with Moving Least-Squares Approximation and Corresponding Inequalities

In this short article, some properties of the matrices arising in moving least-squares approximation are proven. The technique used is based on the singular value decomposition and on inequalities for singular values. Several inequalities for the norm of the coefficient vector of the linear approximation are also proven.


Statement
Let us recall the definition of moving least-squares approximation and a basic result. Let: (1) D be a bounded domain in R^d.
Following [1], [10], [11], [12], we will use the following definition. The moving least-squares approximation of order l at a fixed point x is the value p*(x), where p* ∈ P_l minimizes the least-squares error. The approximation is "local" if the weight function W decreases rapidly as its argument tends to infinity, and interpolation is achieved if W(0) = ∞. So we define an additional function w : [0, ∞) → [0, ∞) such that: Some examples of W(r) and w(r), r ≥ 0: Here and below: ‖·‖ = ‖·‖_2 is the 2-norm and ‖·‖_1 is the 1-norm in R^d; the superscript t denotes the transpose of a real matrix; I is the identity matrix.
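As an illustration of the definition, the following sketch computes the moving least-squares value p*(x) in the one-dimensional case by solving the weighted normal equations directly. The Gaussian-type weight, the bandwidth h, and the function name mls_approx are assumptions introduced only for this example; they are not taken from the article.

```python
import numpy as np

def mls_approx(x, nodes, values, l=3, h=0.3):
    """Moving least-squares value p*(x): p* in P_l minimizes
    sum_i w(|x - x_i|) (p(x_i) - f_i)^2 for a decaying weight w."""
    r = np.abs(nodes - x)
    w = np.exp(-(r / h) ** 2)                  # rapidly decaying -> "local"
    E = np.vander(nodes, l, increasing=True)   # m x l monomial basis at the nodes
    D = np.diag(w)
    # weighted normal equations: (E^t D E) c = E^t D f
    c = np.linalg.solve(E.T @ D @ E, E.T @ D @ values)
    return np.polyval(c[::-1], x)              # evaluate p* at x

nodes = np.linspace(-1.0, 1.0, 11)
f = np.sin(np.pi * nodes)
print(mls_approx(0.37, nodes, f))              # close to sin(0.37*pi)
```

Because the weight decays rapidly, only the nodes near x contribute significantly to the fit, which is exactly the "local" behaviour described above.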
We introduce the notations: Throughout the article, we assume the following conditions (H1): Theorem 1.1 (see [10]). Let the conditions (H1) hold true. Then: (2) the approximation defined by the moving least-squares method is L, where (3) if w(‖x_i − x‖) = 0 for all i = 1, . . . , m, then the approximation is interpolatory.
For the approximation order of the moving least-squares approximation (see [10] and [5]) it is not difficult to obtain (for convenience we suppose d = 1 and the standard polynomial basis, see [5]): and moreover (C = const.): It follows from (3) and (4) that the error of the moving least-squares approximation is bounded above via the 2-norm of the coefficients of the approximation (‖a‖_1 ≤ √m ‖a‖_2). That is why the goal of this short note is to discuss a method for a majorization of the form: Here the constants M and N depend on the singular values of the matrix E^t and on the numbers m and l (see Section 3). In Section 2 some properties of matrices associated with the approximation (symmetry, positive semidefiniteness, and norm majorization by σ_min(E^t) and σ_max(E^t)) are proven.
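The elementary bound ‖a‖_1 ≤ √m ‖a‖_2 invoked above is an instance of the Cauchy–Schwarz inequality; the following quick numerical check (the random sampling is only for illustration) also shows that equality is attained when all |a_i| coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 7
for _ in range(1000):
    a = rng.normal(size=m)
    # Cauchy-Schwarz: ||a||_1 = <|a|, 1> <= ||a||_2 * ||1||_2 = sqrt(m) ||a||_2
    assert np.abs(a).sum() <= np.sqrt(m) * np.linalg.norm(a) + 1e-12

# equality when all |a_i| are equal, e.g. a = (1, ..., 1)
a = np.ones(m)
print(np.abs(a).sum(), np.sqrt(m) * np.linalg.norm(a))
```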
The main result in Section 3 is formulated for the exp-moving least-squares approximation, but it is not hard to obtain analogous results in the other cases: Backus-Gilbert weight functions, McLain weight functions, etc.

Some Auxiliary Lemmas
Definition 2.1. We will call the matrices Lemma 2.2. (1) All eigenvalues of A_1 are 1 and 0, with geometric multiplicities l and m − l, respectively. (2) All eigenvalues of A_2 are 0 and −1, with geometric multiplicities l and m − l, respectively.
Proof. Part 1. We will prove that the dimension of the null-space of A_2, dim(null(A_2)), is at least l.
Using the definition of Part 2. We will prove that −1 is an eigenvalue of A_2 with geometric multiplicity m − l, or the system Obviously the systems and are equivalent. Indeed, if η_0 is a solution of (5), then On the other hand, if η_0 is a solution of (6), then i.e. η_0 is a solution of (5). Therefore Part 3. It follows from parts 1 and 2 of the proof that 0 is an eigenvalue of A_2 with multiplicity exactly l and −1 is an eigenvalue of A_2 with multiplicity exactly m − l.
It remains to prove that 1 is an eigenvalue of A_1 with multiplicity at least l, but this is analogous to the proven part 1, or it follows directly from the definition A_1 = A_2 + I.
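Since the displayed definitions of A_1 and A_2 were not reproduced above, the following sketch checks the eigenvalue statement on a plausible realization: A_1 is taken as the rank-l oblique projector E^t(E D E^t)^{-1} E D (an assumption for this example, consistent with A_1 = A_2 + I), and A_2 = A_1 − I.

```python
import numpy as np

rng = np.random.default_rng(1)
m, l = 8, 3
x = np.sort(rng.uniform(-1, 1, m))
E = np.vander(x, l, increasing=True).T      # l x m: rows 1, x, ..., x^{l-1} at the nodes
D = np.diag(rng.uniform(0.5, 2.0, m))       # positive weights

# hypothetical realization: A_1 an oblique projector of rank l, A_2 = A_1 - I
A1 = E.T @ np.linalg.solve(E @ D @ E.T, E @ D)
A2 = A1 - np.eye(m)

eig1 = np.sort(np.linalg.eigvals(A1).real)
eig2 = np.sort(np.linalg.eigvals(A2).real)
print(np.round(eig1, 10))   # m - l zeros and l ones
print(np.round(eig2, 10))   # m - l minus-ones and l zeros
```

Idempotence (A_1^2 = A_1) forces the spectrum {0, 1}, and shifting by −I moves it to {−1, 0}, exactly as in the lemma.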
The following two results are proven in [13].
Theorem 2.1 (see [13], Theorem 2.2). Suppose U, V are (m × m) Hermitian matrices and either U or V is positive semi-definite. Let denote the eigenvalues of U and V, respectively. Let: (1) π(U) be the number of positive eigenvalues of U; (2) ν(U) be the number of negative eigenvalues of U; (3) ξ(U) be the number of zero eigenvalues of U. Then: Corollary 2.1 (see [13], Corollary 2.4). Suppose U, V are (m × m) Hermitian positive definite matrices.
Then for any 1 ≤ k ≤ m As a result of Lemma 2.1, Lemma 2.2 and Theorem 2.1, we may prove the following lemma.
Obviously, U is a symmetric positive definite matrix (in fact, it is a diagonal matrix). Moreover, π(U) = m, ν(U) = ξ(U) = 0. The matrix V is symmetric, see Lemma 2.1. From the cited theorem, for any index k (k = 1, . . . , m = π(U)) we have In particular, if k = m: Let us suppose that there exists an index i_0 (i_0 = 1, . . . , m − 1) such that It follows from (8) and the positive definiteness of U that Therefore (see (7)) λ_m(A_1) < 0. This contradiction (see Lemma 2.2) proves that the matrix A_1 D^{-1} is positive semi-definite. If we set U = D, V = −A_2 D^{-1}, then by analogous arguments we see that the matrix −A_2 D^{-1} is positive semi-definite.
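The special case of Theorem 2.1 used in this proof, namely that for a symmetric positive definite U the product UV has the same counts π, ν, ξ of positive, negative, and zero eigenvalues as V (since UV is similar to U^{1/2} V U^{1/2}), can be checked numerically. The matrices below are random and serve only as an illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
B = rng.normal(size=(m, m))
U = B @ B.T + m * np.eye(m)              # symmetric positive definite
V = rng.normal(size=(m, m))
V = (V + V.T) / 2                        # symmetric, generally indefinite

def inertia(ev, tol=1e-10):
    # (number of positive, negative, zero eigenvalues)
    return (int(np.sum(ev > tol)), int(np.sum(ev < -tol)),
            int(np.sum(np.abs(ev) <= tol)))

ev_V = np.linalg.eigvalsh(V)
ev_UV = np.linalg.eigvals(U @ V).real    # real: UV is similar to U^{1/2} V U^{1/2}
print(inertia(ev_V), inertia(ev_UV))     # the two inertia triples coincide
```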
In the following, we will need some results related to inequalities for singular values. So, we will list some necessary inequalities in the next lemma.

Lemma 2.5. Let the conditions (H1) hold true and let
Then: Proof. The matrix A_1 D^{-1} is symmetric and positive semi-definite (see Lemma 2.3(1)). Using the second statement of Lemma 2.3 and Lemma 2.4, we obtain:
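Two singular-value inequalities of the kind used here can be verified numerically: sub-multiplicativity of the largest singular value, and the fact that for a symmetric positive semi-definite matrix the 2-norm equals the largest eigenvalue. The matrices below are arbitrary and only illustrate the statements:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 4))
B = rng.normal(size=(4, 5))

smax = lambda M: np.linalg.svd(M, compute_uv=False)[0]

# sub-multiplicativity: sigma_max(AB) <= sigma_max(A) * sigma_max(B)
assert smax(A @ B) <= smax(A) * smax(B) + 1e-12

# for symmetric positive semi-definite M, ||M||_2 = lambda_max(M)
M = A @ A.T
assert np.isclose(smax(M), np.linalg.eigvalsh(M)[-1])
print("singular-value inequalities verified")
```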

An Inequality for the Norm of Approximation Coefficients
We will use the following hypotheses: H2.1. The hypotheses (H1) hold true.
Then there exist constants M_1, M_2 > 0 such that: We have (obviously D = D(x), H = H(x), and c = c(x)): Therefore, the function a(x) satisfies the differential equation
We will use Lemma 2.4 to estimate the norm of A_0. Obviously A_0 E^t = A_1. Therefore, by (12), i.e.:
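The norm estimate obtained from the identity A_0 E^t = A_1 is of the form ‖A_0‖ ≤ ‖A_1‖ / σ_min(E^t). A numerical check on random matrices follows; the shapes and names are hypothetical, since the displayed formulas were lost in extraction:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 8
X = rng.normal(size=(m, m))     # plays the role of A_0 (shape hypothetical)
B = rng.normal(size=(m, m))     # plays the role of E^t, assumed invertible
C = X @ B                       # plays the role of A_1 = A_0 E^t

smin = np.linalg.svd(B, compute_uv=False)[-1]   # smallest singular value
spec = lambda M: np.linalg.norm(M, 2)           # spectral norm

# X = C B^{-1}  =>  ||X|| <= ||C|| * ||B^{-1}|| = ||C|| / sigma_min(B)
assert spec(X) <= spec(C) / smin + 1e-9
print(spec(X), spec(C) / smin)
```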
Let the constant M_12 be chosen such that and let M_1 = M_11 M_12.
Part 3. Finally, we have only to apply Lemma 4.1 from [7] to the equation (16): Remark 3.1. Let the hypotheses (H2) hold true and let, moreover, In such a case, we may replace the differentiation of the vector-function
The singular values of the matrix ∂ are 0, 1, . . . , l − 1. Therefore ‖∂‖ = l − 1. That is why we may choose Therefore, in such a case: If we suppose −1 ≤ x_1 ≤ x ≤ x_m ≤ 1, then obviously we may set
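The claim about the matrix ∂ can be checked directly on the standard monomial basis 1, x, . . . , x^{l−1} (assumed here, since the basis is not displayed above): the differentiation matrix has a single entry k per column, so its singular values are 0, 1, . . . , l − 1 and its 2-norm is l − 1.

```python
import numpy as np

l = 5
# matrix of d/dx on the monomial basis 1, x, ..., x^{l-1}:
# (d/dx) x^k = k x^{k-1}, so row k-1, column k holds the entry k
partial = np.diag(np.arange(1.0, l), k=1)
sv = np.sort(np.linalg.svd(partial, compute_uv=False))
print(sv)                            # 0, 1, ..., l-1
print(np.linalg.norm(partial, 2))    # largest singular value: l - 1

# sanity check: it differentiates x^3 into 3 x^2
c = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
print(partial @ c)                   # coefficients of 3 x^2
```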