
In this note, we experimentally demonstrate, on a variety of analytic and nonanalytic functions, the novel observation that if the squared error of a least squares polynomial approximation is reused as the weight in a second, now weighted, least squares approximation, then this second approximation is nearly perfect in the uniform sense, barely needing any further, say Remez, correction.

Finding the min-max, or best L ∞ , polynomial approximation to a function, in some standard interval, is of the greatest interest in numerical analysis [

The usual procedure [

In this note, we bring ample and varied computational evidence in support of the novel, and noteworthy, empirical numerical observation that taking the squared error distribution of a least squares, L 2 , best polynomial fit to a function as the weight in a second, weighted, least squares approximation results in an error distribution that is remarkably close to that of the best L ∞ , or uniform, approximation.

The monic Chebyshev polynomial

T 2 ( x ) = x 2 − 1 / 2 , − 1 ≤ x ≤ 1 (1)

is the solution of the min-max problem

min a max x | e ( x ) | , e ( x ) = x 2 − a , − 1 ≤ x ≤ 1. (2)

This min-max solution, the least error function in the L ∞ sense, is a polynomial that has two distinct roots, and oscillates with a constant amplitude in − 1 ≤ x ≤ 1 , e ( − 1 ) = − e ( 0 ) = e ( 1 ) . Indeed, say e 1 = x 2 + a 1 x + a 0 is such an equioscillating error polynomial, and e 2 = x 2 + p 1 x + p 0 is any other monic quadratic with max | e 2 | < max | e 1 | in the interval; then e 1 − e 2 would alternate in sign at the three extreme points x = − 1 , 0 , 1 of e 1 , and would thus vanish at two points, which is absurd, since e 1 − e 2 = ( a 1 − p 1 ) x + ( a 0 − p 0 ) is either identically zero or linear, with but the one root x = − ( a 0 − p 0 ) / ( a 1 − p 1 ) .

Thus, the monic Chebyshev polynomial of degree n is the least, uniform, or pointwise, error distribution in approximating x n by a polynomial of degree n − 1 .

To obtain a least squares, a best L 2 , approximation to T 2 ( x ) we first minimize I ( a )

I ( a ) = ∫ − 1 1 ( x 2 − a ) 2 d x , I ′ ( a ) = ∫ − 1 1 ( x 2 − a ) d x = 0 (3)

to have the value a = 1 / 3 = 0.3333 .

Minimizing next I ( p ) , under the weight ( x 2 − a ) 2 , a = 1 / 3

I ( p ) = ∫ − 1 1 ( x 2 − a ) 2 ( x 2 − p ) 2 d x , I ′ ( p ) = ∫ − 1 1 ( x 2 − a ) 2 ( x 2 − p ) d x = 0 (4)

now with respect to p , we obtain p = 11 / 21 = 0.5238 , which is remarkably much closer to the optimal value of one half.
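The two minimizations above can be verified exactly with rational arithmetic. The following is a minimal, dependency-free sketch in Python (our tooling, not the paper's); the polynomial helpers are ours:

```python
# Two-stage least squares fit of x^2 by a constant on [-1, 1],
# with the polynomial integrals done exactly via fractions.Fraction.
from fractions import Fraction as F

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci * cj
    return r

def integrate_sym(p):
    """Integrate a polynomial over [-1, 1]; odd powers drop out."""
    return sum(2 * c / (k + 1) for k, c in enumerate(p) if k % 2 == 0)

# Stage 1: minimize I(a) = int (x^2 - a)^2 dx  =>  int (x^2 - a) dx = 0.
a = integrate_sym([F(0), F(0), F(1)]) / integrate_sym([F(1)])   # 1/3

# Stage 2: with weight w = (x^2 - a)^2, minimize int w (x^2 - p)^2 dx
#   =>  p = int w x^2 dx / int w dx.
w = poly_mul([-a, F(0), F(1)], [-a, F(0), F(1)])
p = integrate_sym(poly_mul(w, [F(0), F(0), F(1)])) / integrate_sym(w)

print(a, p)   # 1/3 11/21
```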

We may replace the difficult L ∞ measure by the computationally easier L m measure for an even m ≫ 1 . Let a 0 be a good approximation, and a 1 = a 0 + δ an improved one. Minimization cum linearization produces the equation

∫ − 1 1 ( x 2 − a 0 ) n d x − n δ ∫ − 1 1 ( x 2 − a 0 ) n − 1 d x = 0 (5)

where n ≫ 1 is odd.

Starting with a 0 = 11 / 21 = 0.5238 , we obtain from the above equation, for n = 17 , the value a 1 = 0.495 , as compared with the optimal a = 0.5 .
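Equation (5) amounts to one Newton-like step for the root of ∫ ( x 2 − a ) n d x = 0. A sketch of this step, under the same assumptions as above (exact rational integration; the helpers are ours):

```python
# One linearized L^m correction step (m = n + 1 even), eq. (5):
# delta = int (x^2 - a0)^n dx / (n * int (x^2 - a0)^(n-1) dx).
from fractions import Fraction as F

def poly_mul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci * cj
    return r

def poly_pow(p, k):
    r = [F(1)]
    for _ in range(k):
        r = poly_mul(r, p)
    return r

def integrate_sym(p):
    # int_{-1}^{1} x^k dx = 2/(k+1) for even k, 0 for odd k
    return sum(2 * c / (k + 1) for k, c in enumerate(p) if k % 2 == 0)

n = 17
a0 = F(11, 21)                    # the weighted least squares value
base = [-a0, F(0), F(1)]          # x^2 - a0
delta = integrate_sym(poly_pow(base, n)) / (n * integrate_sym(poly_pow(base, n - 1)))
a1 = a0 + delta
print(float(a1))                  # about 0.495, versus the optimal 0.5
```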

Seeking to reproduce the optimal monic Chebyshev polynomial of degree three

T 3 ( x ) = x 3 − ( 3 / 4 ) x , − 1 ≤ x ≤ 1 (6)

we start by minimizing I ( a 1 )

I ( a 1 ) = ∫ − 1 1 ( x 3 − a 1 x ) 2 d x , I ′ ( a 1 ) = ∫ − 1 1 x ( x 3 − a 1 x ) d x = 0 (7)

and have a 1 = 3 / 5 = 0.6 .

Then we return to minimize the weighted I ( p 1 ) with respect to p 1

I ( p 1 ) = ∫ − 1 1 ( x 3 − a 1 x ) 2 ( x 3 − p 1 x ) 2 d x , I ′ ( p 1 ) = ∫ − 1 1 x ( x 3 − a 1 x ) 2 ( x 3 − p 1 x ) d x = 0 (8)

and obtain p 1 = 195 / 253 = 0.770751 , which is considerably closer to the optimal value of 0.75. See

We are ready now for a Remez-like correction to bring the error function closer to optimal. The minimum of e ( x ) = x 3 − 0.770751 x occurs at m = ( 0.770751 / 3 ) 1 / 2 = 0.50687 . We write a new tentative e ( x ) = x 3 − a 1 x and request that − e ( m ) = e ( 1 ) , by which we have

a 1 = ( 1 + m 3 ) / ( 1 + m ) = 0.750047 (9)

as compared with the Chebyshev optimal value of a 1 = 3 / 4 = 0.75 .
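The Remez-like step above is a two-line computation; a sketch in Python (our tooling):

```python
# Remez-like correction for the cubic case: locate the interior minimum
# m of e(x) = x^3 - p1*x, then re-balance via -e(m) = e(1).
import math

p1 = 195 / 253                  # the weighted least squares coefficient
m = math.sqrt(p1 / 3)           # e'(x) = 3x^2 - p1 = 0
a1 = (1 + m**3) / (1 + m)       # from -e(m) = e(1)
print(m, a1)                    # m about 0.50687, a1 about 0.750047
```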

Starting with

e ( x ) = x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 (10)

we minimize

I ( a 0 , a 1 , a 2 , a 3 ) = ∫ 0 1 e ( x ) 2 d x (11)

and obtain the best, in the L 2 sense, e ( x ) shown in

Then we return to minimize

I ( p 0 , p 1 , p 2 , p 3 ) = ∫ 0 1 e ( x ) 2 ( x 4 + p 3 x 3 + p 2 x 2 + p 1 x + p 0 ) 2 d x (12)

weighted by the previous e ( x ) squared, and obtain the new, nearly perfectly uniform e ( x ) of

By comparison, the amplitude of the monic Chebyshev polynomial of degree four in [0,1] is 1/128 = 0.0078125.
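Both stages for this quartic case are linear in the unknown coefficients, so each reduces to a 4 × 4 system of normal equations with polynomial moments ∫ 0 1 x k d x = 1 / ( k + 1 ). A sketch, again with exact rational arithmetic (the solver and helpers are ours; the error amplitudes are checked on a uniform grid of our choosing):

```python
# Two-stage fit for x^4 on [0, 1]: ordinary least squares of degree 3,
# then a second fit weighted by the square of the first error.
from fractions import Fraction as F

def solve(A, b):
    """Gaussian elimination with exact Fraction arithmetic."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

mom = lambda k: F(1, k + 1)          # int_0^1 x^k dx

# Stage 1: minimize int (x^4 + a3 x^3 + a2 x^2 + a1 x + a0)^2 dx.
A = [[mom(i + j) for j in range(4)] for i in range(4)]
b = [-mom(i + 4) for i in range(4)]
a = solve(A, b)                      # coefficients a0..a3

# Weight w(x) = e1(x)^2, e1(x) = x^4 + sum a_k x^k (degree 8 polynomial).
e1 = a + [F(1)]
w = [F(0)] * 9
for i, ci in enumerate(e1):
    for j, cj in enumerate(e1):
        w[i + j] += ci * cj
wmom = lambda k: sum(c * mom(k + i) for i, c in enumerate(w))

# Stage 2: minimize int w(x) (x^4 + p3 x^3 + p2 x^2 + p1 x + p0)^2 dx.
A2 = [[wmom(i + j) for j in range(4)] for i in range(4)]
b2 = [-wmom(i + 4) for i in range(4)]
p = solve(A2, b2)

err = lambda c, x: x**4 + sum(float(ck) * x**k for k, ck in enumerate(c))
grid = [i / 1000 for i in range(1001)]
e1_max = max(abs(err(a, x)) for x in grid)   # about 1/70 (shifted Legendre)
e2_max = max(abs(err(p, x)) for x in grid)
print(e1_max, e2_max)   # compare e2_max with 1/128 = 0.0078125
```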

To facilitate the integrations we use the approximation

e x = 1 + x + x 2 / 2 ! + x 3 / 3 ! + x 4 / 4 ! + x 5 / 5 ! + x 6 / 6 ! + x 7 / 7 ! (13)

and minimize

I ( a 0 , a 1 , a 2 , a 3 ) = ∫ 0 1 e ( x ) 2 d x , e ( x ) = e x + a 0 + a 1 x + a 2 x 2 + a 3 x 3 (14)

with respect to a 0 , a 1 , a 2 , a 3 . The best e ( x ) obtained from this minimization is shown in

Then we use the square of the minimal e ( x ) just obtained, as weight in the next minimization of

I ( p 0 , p 1 , p 2 , p 3 ) = ∫ 0 1 e ( x ) 2 ( e x + p 0 + p 1 x + p 2 x 2 + p 3 x 3 ) 2 d x (15)

with respect to p 0 , p 1 , p 2 , p 3 .

The nearly perfect result of this last minimization is shown in
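For a transcendental f such as e x the moments are no longer rational, so here is a purely numerical sketch of the same two-stage procedure. We integrate with a composite Simpson rule rather than the truncated series (13); the quadrature, solver, and grid resolution are our assumptions:

```python
# Two-stage least squares cubic fit to exp(x) on [0, 1].
import math

def simpson(g, a, b, n=2000):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit(weight):
    """Weighted least squares cubic to exp on [0, 1]; linear in a0..a3."""
    A = [[simpson(lambda x, i=i, j=j: weight(x) * x ** (i + j), 0, 1)
          for j in range(4)] for i in range(4)]
    b = [simpson(lambda x, i=i: -weight(x) * x ** i * math.exp(x), 0, 1)
         for i in range(4)]
    return solve(A, b)

e = lambda c, x: math.exp(x) + sum(ck * x ** k for k, ck in enumerate(c))

a = fit(lambda x: 1.0)                 # stage 1: plain least squares
p = fit(lambda x: e(a, x) ** 2)        # stage 2: weighted by e1(x)^2

grid = [i / 1000 for i in range(1001)]
e1_max = max(abs(e(a, x)) for x in grid)
e2_max = max(abs(e(p, x)) for x in grid)
print(e1_max, e2_max)                  # the second amplitude is smaller
```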

To facilitate the integrations we take

sin x = x − x 3 / 3 ! + x 5 / 5 ! − x 7 / 7 ! + x 9 / 9 ! (16)

and obtain the least squares error distribution as in

The subsequent nearly perfect weighted least squares error distribution is shown in

We start with

e ( x ) = x 1 / 2 − ( a 0 + a 1 x + a 2 x 2 ) , 0 ≤ x ≤ 1 (17)

under the condition

e ( 0 ) = − e ( 1 ) , a 0 = ( 1 − a 1 − a 2 ) / 2 (18)

and minimize

I ( a 1 , a 2 ) = ∫ 0 1 ( x 1 / 2 − 1 / 2 − a 1 ( x − 1 / 2 ) − a 2 ( x 2 − 1 / 2 ) ) 2 d x (19)

with respect to a 1 and a 2 , to have

e ( x ) = x 1 / 2 − ( 1 / 10 + ( 121 / 70 ) x − ( 13 / 14 ) x 2 ) , 0 ≤ x ≤ 1 (20)

shown as curve a in
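This constrained least squares stage can be checked exactly with rational arithmetic, since the half-integer moments ∫ 0 1 x k + 1 / 2 d x = 2 / ( 2 k + 3 ) are rational. A sketch in Python (our tooling), taking the approximated function as x 1 / 2 , which reproduces the coefficients of (20) exactly:

```python
# Constrained fit (18)-(19) for sqrt(x): with a0 = (1 - a1 - a2)/2
# eliminated, two normal equations remain in a1, a2.
from fractions import Fraction as F

mom  = lambda k: F(1, k + 1)        # int_0^1 x^k dx
momh = lambda k: F(2, 2 * k + 3)    # int_0^1 x^(k + 1/2) dx

# Basis u = x - 1/2, v = x^2 - 1/2; data g = sqrt(x) - 1/2.
uu = mom(2) - mom(1) + F(1, 4)                              # int u^2
uv = mom(3) - F(1, 2) * mom(2) - F(1, 2) * mom(1) + F(1, 4) # int u v
vv = mom(4) - mom(2) + F(1, 4)                              # int v^2
gu = momh(1) - F(1, 2) * momh(0) - F(1, 2) * mom(1) + F(1, 4)  # int g u
gv = momh(2) - F(1, 2) * momh(0) - F(1, 2) * mom(2) + F(1, 4)  # int g v

# Solve the 2x2 normal equations by Cramer's rule.
det = uu * vv - uv * uv
a1 = (gu * vv - gv * uv) / det
a2 = (uu * gv - uv * gu) / det
a0 = (1 - a1 - a2) / 2
print(a0, a1, a2)        # 1/10 121/70 -13/14
```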

Next we minimize

I ( p 1 , p 2 ) = ∫ 0 1 ( x 1 / 2 − 1 / 2 − p 1 ( x − 1 / 2 ) − p 2 ( x 2 − 1 / 2 ) ) 2 ⋅ ( x 1 / 2 − ( 1 / 10 + ( 121 / 70 ) x − ( 13 / 14 ) x 2 ) ) 2 d x (21)

and obtain

e ( x ) = x 1 / 2 − ( 0.064 + 1.949 x − 1.077 x 2 ) , 0 ≤ x ≤ 1 (22)

shown as graph b in

By comparison, the optimal, min-max, error distribution is

e ( x ) = x 1 / 2 − ( 0.0674385 + 1.93059 x − 1.06547 x 2 ) , 0 ≤ x ≤ 1. (23)

We start with

e ( x ) = x 1 / 4 + a 0 + a 1 x + a 2 x 2 + a 3 x 3 , 0 ≤ x ≤ 1 (24)

under the restriction e ( 0 ) = e ( 1 ) , or a 3 = − 1 − a 1 − a 2 , and minimize

I ( a 0 , a 1 , a 2 ) = ∫ 0 1 ( x 1 / 4 − x 3 + a 0 + a 1 ( x − x 3 ) + a 2 ( x 2 − x 3 ) ) 2 d x (25)

with respect to a 0 , a 1 , a 2 to have the minimal e ( x ) shown in

Then we minimize

I ( p 0 , p 1 , p 2 ) = ∫ 0 1 e ( x ) 2 ( x 1 / 4 − x 3 + p 0 + p 1 ( x − x 3 ) + p 2 ( x 2 − x 3 ) ) 2 d x (26)

and obtain the nearly optimal error distribution as in

We now look at the error distribution

e ( x ) = ln ( 1.001 + x ) − ( a 3 x 3 + a 2 x 2 + a 1 x + a 0 ) , − 1 ≤ x ≤ 1 (27)

under the condition that e ( 1 ) = e ( − 1 ) , or a 3 = 3.8007012 − a 1 .

Least squares minimization of e ( x ) yields the error distribution in

Next we minimize

I ( p 0 , p 1 , p 2 ) = ∫ − 1 1 e ( x ) 2 ( ln ( 1.001 + x ) − ( p 3 x 3 + p 2 x 2 + p 1 x + p 0 ) ) 2 d x (28)

under the restriction that p 3 = 3.8007012 − p 1 , and obtain the nearly perfect error distribution shown in
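With the constraint p 3 = 3.8007012 − p 1 eliminated, both stages again reduce to 3 × 3 linear normal equations over the basis { 1 , x − x 3 , x 2 } applied to the data g ( x ) = ln ( 1.001 + x ) − C x 3 , C = ln ( 2001 ) / 2 . A numerical sketch (the Simpson quadrature, its resolution, and the grid check are our assumptions):

```python
# Constrained two-stage fit of ln(1.001 + x) on [-1, 1] by a cubic.
import math

C = math.log(2001) / 2            # the constraint constant 3.8007012...

def simpson(g, a, b, n=20000):    # fine mesh: the integrand is steep near -1
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

basis = [lambda x: 1.0, lambda x: x - x**3, lambda x: x * x]
g = lambda x: math.log(1.001 + x) - C * x**3

def fit(weight):
    A = [[simpson(lambda x: weight(x) * bi(x) * bj(x), -1, 1) for bj in basis]
         for bi in basis]
    rhs = [simpson(lambda x: weight(x) * bi(x) * g(x), -1, 1) for bi in basis]
    return solve(A, rhs)

def e(c, x):
    return g(x) - sum(ck * b(x) for ck, b in zip(c, basis))

a = fit(lambda x: 1.0)            # stage 1: plain least squares
p = fit(lambda x: e(a, x) ** 2)   # stage 2: weighted by e1(x)^2

grid = [-1 + i / 500 for i in range(1001)]
e1m = max(abs(e(a, x)) for x in grid)
e2m = max(abs(e(p, x)) for x in grid)
print(e1m, e2m)                   # the weighted amplitude is smaller
```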

We have experimentally demonstrated, on a variety of continuous, analytic and nonanalytic functions, the remarkable observation that if the squared error of the least squares polynomial approximation is taken as the weight in a repeated, now weighted, least squares approximation, then this second approximation is nearly perfect in the sense of Chebyshev, barely needing any further correction procedure.

Fried, I. and Feng, Y. (2017) Weighted Least-Squares for a Nearly Perfect Min-Max Fit. Applied Mathematics, 8, 645-654. https://doi.org/10.4236/am.2017.85051