Open Journal of Statistics
Vol.08 No.01(2018), Article ID:82248,20 pages
10.4236/ojs.2018.81005

The Method of Finite Difference Regression

Arjun Banerjee

Computer Science Department, Purdue University, West Lafayette, IN, USA

Copyright © 2018 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 23, 2017; Accepted: January 30, 2018; Published: February 2, 2018

ABSTRACT

In this paper1 I present a novel polynomial regression method, called Finite Difference Regression, for a uniformly sampled sequence of noisy data points; the method determines the order of the best fitting polynomial and provides estimates of its coefficients. Unlike classical least-squares polynomial regression methods, which, when the order of the best fitting polynomial is unknown, must determine it from the $R^2$ value of the fit, I show how the t-test from statistics can be combined with the method of finite differences to yield a more sensitive and objective measure of the order of the best fitting polynomial. Furthermore, it is shown how the finite differences used in determining the order can be reemployed to produce excellent estimates of the coefficients of the best fitting polynomial. I show that not only are these coefficients unbiased and consistent, but also that the asymptotic properties of the fit get better with increasing degrees of the fitting polynomial.

Keywords:

Polynomial Regression, t-Test, Finite Differences

1. Introduction

In this paper I present a novel polynomial regression method for a uniformly sampled sequence of noisy data points that I call the method of Finite Difference Regression. I show how the statistical t-test2 can be combined with the method of finite differences to provide a more sensitive and objective measure of the order of the best fitting polynomial and produce unbiased and consistent estimates of its coefficients.

Regression methods, also known as fitting models to data, have had a long history. Starting from Gauss (see [1] pp. 249-273) who used it to fit astronomical data, it is one of the most useful techniques in the arsenal of science and technology. While historically it has been used in the “hard” sciences, both natural and biological, and in engineering, it is finding increasing use in all human endeavors, even in the “soft” sciences like social sciences (see for example [2] ), where data is analyzed in order to find an underlying model that may explain or describe it.

Polynomial regression methods are methods that determine the best polynomial describing a sequence of noisy data samples. In classical polynomial regression methods, when the order of the underlying model is known, the coefficients of the terms are estimated by minimizing the sum of squares of the residual errors of the fit. For example, in classical linear regression (see [2] pp. 675-730) one assumes that the order of the fitting polynomial is 1 and then proceeds to find the coefficients of the best fitting line as those that minimize the sum of the squares of its deviations from the noisy samples. The same method can be extended to polynomial regression. On the other hand, when the model order is unknown, the order of the best fitting polynomial must be determined before finding the coefficients of the polynomial terms; in classical regression methods this order is determined from the $R^2$ value of the fit. More on these two aspects of polynomial regression is provided below.

1) Determining the order of the best fitting polynomial (Model order unknown)

In classical regression, the goodness of the order of the fitting polynomial is given by the $R^2$ value of the best fitting polynomial: the greater the $R^2$ value, the better the fit. However, this means that one can keep fitting with higher-degree polynomials and thereby obtain ever-increasing $R^2$ values. Given the insensitivity of these $R^2$ values (see for example Section 2.5), it is indeed hard to find a stopping criterion. To get around this problem, heuristic methods like AIC, BIC, etc. (see [3] for example) that penalize too many fitting parameters have been proposed. In contrast, in the method of Finite Difference Regression described below, the order of the best fitting polynomial is determined using a t-test, namely by determining the index of a finite difference row (or column) that has a high probability (beyond a certain significance level) of being zero. The sensitivity of this method is high compared to classical regression methods, as I shall demonstrate in Section 2.5.

2) Determining the coefficients of the best fitting polynomial (Model order known/determined)

In classical polynomial regression, the polynomial coefficients of the fit that minimizes the sum of squares of the residual errors are found by a computation that is equivalent to inverting a square matrix. In contrast, in the method I describe below, I once again employ the finite differences used to find the order, this time to iteratively determine the polynomial coefficients. In Section 2.7, I show that the results of this method are remarkable: not only are the coefficients unbiased and consistent, but the asymptotic properties of the fit also get better with increasing degrees of the fitting polynomial.

Before describing the Finite Difference Regression method in Section 2, I briefly recapitulate the general method of finite differences, whose formulation can be traced all the way back to Sir Isaac Newton (see [4] ). The finite differences method (see [5] ) is used for curve fitting with polynomial models. On its 2nd row, Table 1 shows the y-value outputs for the sequence of regularly spaced x-value inputs at the points $1, 2, \ldots, 10$ shown on the 1st row. A plot of this data is shown in Figure 1. The problem is to determine the order of the polynomial that can generate these outputs and to find its coefficients.

The remaining rows in Table 1 show the successive differences of the y-values. The fact that the 3rd difference row in Table 1 is composed of all zeros implies that a quadratic polynomial best describes this data. In fact, this data was generated by the quadratic polynomial $y = 2x^2 + x + 3$.

I assume throughout this paper that the data are sampled at regular intervals. Without loss of generality these intervals can be assumed to be of unit length. Then the following general observations can be made:

1) If the $(m+1)$st difference row is all zero, then the data is best described by an mth order polynomial: $a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0$.

2) If the $(m+1)$st difference row is all zero, then $m!\,a_m$ is the constant in the mth difference row. From this observation the coefficient of the highest-order term can be found. For example, in Table 1, since the 3rd difference row is all zero, $2!\,a_2$ is the constant in the 2nd difference row. Since this constant is 4, it immediately follows that the highest-order term is $4x^2/2! = 2x^2$.

3) The lower-order terms can be found successively by back-substituting and subtracting the highest-order term: the data y in Table 1 is replaced by $y'$, where $y' = y - a_m x^m$, and the method of finite differences is then reapplied to the $y'$ data. Using this approach for the data in Table 1, one obtains the next term as follows:

Figure 1. Graph of data in Table 1.

In Table 2 the 2nd difference row is all zero. Hence $1!\,a_1$ is the constant in the 1st difference row. Since this constant is 1, the next-highest term is $1 \cdot x / 1! = x$.

Again, the final term can be found by replacing $y'$ in Table 2 by $y''$, where $y'' = y' - a_{m-1} x^{m-1}$:

The computation in Table 3 yields 3 as the constant term for the polynomial fit.

Hence, by successive applications of the finite difference method in this manner, the best-fit polynomial for the data in Table 1 is found to be $2x^2 + x + 3$.
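The back-substitution procedure above is mechanical enough to automate. The following short sketch (mine, not part of the original paper; written in Python with NumPy) recovers the order and all coefficients from noiseless, unit-spaced samples such as those in Table 1; the function name and the use of numpy.diff are illustrative choices.

```python
import math
import numpy as np

def fit_noiseless(y):
    """Recover polynomial coefficients from noiseless samples y(1), y(2), ..., y(n)."""
    x = np.arange(1, len(y) + 1)
    residual = np.asarray(y, dtype=float)
    # Order m: the (m+1)-st difference row is the first one that is all zeros.
    m = 0
    while not np.allclose(np.diff(residual, n=m + 1), 0.0):
        m += 1
    coeffs = {}
    for d in range(m, -1, -1):
        a_d = np.diff(residual, n=d)[0] / math.factorial(d)  # d-th row constant = d! * a_d
        coeffs[d] = a_d
        residual = residual - a_d * x**d                     # back-substitute the highest term
    return coeffs

# The y-values of Table 1 (y = 2x^2 + x + 3 at x = 1..10) give {2: 2.0, 1: 1.0, 0: 3.0}.
print(fit_noiseless([6, 13, 24, 39, 58, 81, 108, 139, 174, 213]))
```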

2. The Method of Finite Difference Regression

In this section I describe the Finite Difference Regression method, compare the results obtained from this method to those from classical regression, and show unbiasedness and consistency properties of the Finite Difference Regression coefficients.

Table 1. Finite differences of the y-value outputs for the sequence of regularly spaced x-value inputs.

Table 2. Second iteration of finding terms to fit the model.

Table 3. Final iteration of finding terms to fit the model.

The intuitive idea behind the method of Finite Difference Regression is simple. Suppose one were to add zero-mean, normally distributed noise independently to each of the y-values in Table 1, making them noisy. Then, unlike the noiseless situation, one cannot expect all entries in any difference row to be identically equal to zero. However, since differences of normally distributed random variables are also normally distributed (albeit with higher variance), the samples in each difference row (as in the rows of Table 1) remain normally distributed, with various means, until one reaches a difference row whose mean is zero with high probability. It is this highly probable zero-mean row that would be the all-zero difference row in the absence (or after elimination) of noise. The condition that the mean is zero with high probability can be tested objectively on each difference row using a t-test on the samples in that row, where the null hypothesis H0 is that the population mean is zero and the alternative hypothesis Ha is that it is non-zero. This is the main idea behind the method of Finite Difference Regression: the notion of an all-zero difference row in the noiseless case is replaced by the notion of a highly probable zero-mean row in the noisy case, and the statistical t-test is used to test the zero-population-mean hypothesis.
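As a concrete illustration of this idea, here is a minimal sketch (in Python with NumPy/SciPy; not taken from the paper) that applies a one-sample t-test to each successive difference row of a noisy quadratic like the one in Table 1. Note that scipy.stats.ttest_1samp uses degrees of freedom equal to the row length minus one, whereas the paper adjusts the degrees of freedom for the dependence among differences (Section 2.1), so this snippet is only an approximation of the full method.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
x = np.arange(1, 101)
y = 2 * x**2 + x + 3 + rng.normal(0.0, 30.0, x.size)  # noisy version of Table 1's quadratic

for m in range(1, 6):
    row = np.diff(y, n=m)                   # m-th successive differences
    result = ttest_1samp(row, popmean=0.0)  # H0: the row's population mean is zero
    print(m, result.pvalue)
```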

2.1. Statistical Independence and Its Relation to the Degrees of Freedom

While it is true that successive differences will produce normally distributed samples, not all of the samples so produced are statistically independent of one another. For example, as shown in Figure 2, the noisy data samples $y_1, y_2, y_3, y_4$ generate the next difference sequence $y'_1, y'_2, y'_3$, and not all of these values are independent. Since $y'_1$ and $y'_2$ both depend upon the value of $y_2$, they are not independent. Similarly, $y'_2$ and $y'_3$ are not independent because both depend upon $y_3$. However, $y'_1$ and $y'_3$, having no point in common, are independent!

Figure 2. Successive finite differences.

This observation can be generalized by the following process. Suppose I had K observations to begin with. If I were to produce successive differences by skipping over every other term, in the manner of a binary tree as shown in Figure 3, I would obtain independent, normally distributed samples. Each level of the tree would have independent, normally distributed finite differences, denoted in Figure 3 (shown with $K = 8$) by $y'$, $y''$, and so on. This method would retain all the properties of finite differences, in particular the highly probable zero-mean row, with the added benefit of not having any dependent samples. If this method were implemented, and a t-test invoked at every level of the binary tree, then the number of samples in the test would be halved at each level, and consequently the degrees of freedom would be affected: if k denotes the number of samples3 at a particular level of the tree, then the next level has k/2 samples, so according to the theory of t-tests the degrees of freedom at the first level would be $k - 1$, at the next level $k/2 - 1$, and so on. However, discarding good data would have negative consequences for coefficient estimation (see Section 2.6). While some of the data at every level of a successive finite difference table (Table 1, for example) are not independent, they can still be used for t-tests; the only thing to remember is that at every stage the number of independent samples is halved, and the degrees of freedom is therefore one less than this number.

Figure 3. A binary tree of successive differences.
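The binary-tree construction of Figure 3 is easy to express in code. The sketch below (my illustration, assuming unit-spaced samples whose count is a power of 2) forms each level from differences of disjoint pairs, so the samples within a level are mutually independent and their count, and hence the available degrees of freedom, halves at every step.

```python
import numpy as np

def binary_tree_differences(y):
    """Return the levels of Figure 3's binary tree of differences.
    Level m has len(y) / 2**m mutually independent samples, giving df = count - 1."""
    levels = []
    current = np.asarray(y, dtype=float)
    while current.size >= 2:
        current = current[1::2] - current[0::2]  # differences of non-overlapping pairs
        levels.append(current)
    return levels

# Example with K = 8 observations: levels have 4, 2, and 1 samples respectively.
print([level.size for level in binary_tree_differences(np.arange(8.0))])
```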

2.2. An Example of Order Finding

Table A1 in Appendix 2 gives an example of how the Finite Difference Regression method finds the order of the best fitting polynomial. The first two columns of Table A1 are the pivoted rows of Table 1 (the x-values and the noiseless y-values). Independent, zero-mean, normally distributed random noise with a standard deviation of 30.0 is generated in the 3rd column and added to the values of y in the 4th column; the 4th column thus represents the noisy values of the data y. A graph of this noisy data is shown in Figure 4. Finally, columns 5 through 9 represent the successive finite differences, from 1st to 5th, computed in the same way as in Table 1. The last 6 rows of Table A1, highlighted in yellow, show the statistical computations associated with the method. These are described below in Section 2.3.

2.3. The Statistical Computations

The last 6 rows of Table A1 use the one-sample t-test for a population mean as described in [2] pp. 547-548. Note that the null hypothesis is $H_0: \mu = 0$ because the hypothesized value is 0, and the alternative hypothesis is $H_a: \mu \neq 0$. The rows are described below, followed by a short computational sketch:

Figure 4. Graph of the noisy data in Table A1, with an inset showing a magnified view for $10 \le x \le 20$.

Header titled n: the number of samples.

Header titled $\bar{x}$: the sample mean.

Header titled s: the sample standard deviation.

Header titled t: the test statistic, $t = \dfrac{\bar{x} - \mu}{s/\sqrt{n}} = \dfrac{\bar{x}}{s/\sqrt{n}}$ (substituting $\mu = 0$).

Header titled df: the degrees of freedom, computed as detailed in Section 2.1.

Header titled P-value: $2 \times$ (area to the right of t with the given df) for $t > 0$; $2 \times$ (area to the left of t with the given df) for $t < 0$.
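For completeness, here is a sketch (mine, not the author's spreadsheet) of these per-column statistics in Python. The degrees-of-freedom rule follows my reading of Section 2.1: the number of independent samples, and hence df + 1, is halved with each successive difference, while the t statistic itself uses the full column as in the formula above.

```python
import numpy as np
from scipy import stats

def difference_column_statistics(y, max_diff=5):
    """Return (Diff m, n, sample mean, sample std, t, df, P-value) for each column."""
    n0 = len(y)
    col = np.asarray(y, dtype=float)
    rows = []
    for m in range(1, max_diff + 1):
        col = np.diff(col)                 # the m-th successive difference column
        n, xbar, s = col.size, col.mean(), col.std(ddof=1)
        t = xbar / (s / np.sqrt(n))        # test statistic under H0: mu = 0
        df = max(n0 // 2**m - 1, 1)        # halved independent-sample count, minus 1
        p = 2 * stats.t.sf(abs(t), df)     # two-sided P-value
        rows.append((m, n, xbar, s, t, df, p))
    return rows
```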

2.4. Conclusions from the t-test

One of the assumptions of the t-test is that the sample size n is large; typically $n \ge 30$ is required. This assumption is violated in some of the columns of Table A1. However, the table is merely for illustrative purposes. Even so, it shows the power of the method to correctly ascertain the order of the best fitting polynomial, because one can hardly fail to notice the precipitous drop in P-value in the 3rd difference column (titled Diff 3). The plot in Figure 5, showing the P-values of the 5 difference columns, graphically displays this drop.

The vanishingly small P-value in the 3rd difference column implies that, with high probability, the order of the best fitting polynomial is $m = 2$; i.e., a quadratic polynomial is the best-fit polynomial for the data in Table A1.

2.5. Comparison to Classical Regression

In classical polynomial regression, $R^2$ values, rather than P-values, represent the goodness of fit. Using Microsoft Excel's built-in routine for generating $R^2$ values of polynomial fits, I generated Table 4 by restricting the highest degree m of the best fitting polynomial $a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0$ for the noisy data in Table A1.

Figure 5. Graph of P-value vs. Diff.

Table 4. $R^2$ values in classical regression as a function of the degree m.

Table 4 shows that the $R^2$ values used in classical regression do not have the sensitivity that the P-values from the t-tests possess. The clear drop that occurs in the P-values, as shown in Figure 5, has no corresponding sharp rise in the $R^2$ values of Table 4. Also, unlike classical regression, which requires a separate fit (and $R^2$ value) for every candidate polynomial, Finite Difference Regression is a one-shot test that gives a clear estimate of the order of the best fitting polynomial by testing the significance of the probabilities (P-values): the first successive difference column whose P-value falls below the significance level (a belief threshold for a viable model) determines the best fitting polynomial model.

Unlike classical regression, however, this method requires a large sample size n, preferably one that is a power of 2. Also, because the degrees of freedom are halved at each successive difference and must remain positive, the highest-order polynomial that can be fit by this method is of order about $\log_2 n$. This is generally acceptable because the orders of best fitting polynomial models are, in practice, small constants; other classical model-order selection heuristics like AIC or BIC (see [3] ) would also heavily penalize high-order models.

2.6. Determining the Coefficients

Once the order of the best fitting polynomial has been found using the t-test in combination with finite differences, the coefficients of the best fitting polynomial can be evaluated using the method of successive finite differences detailed in Section 1.

In the noiseless situation described in Section 1 the matter was simple: there was a difference row of all zeros, and therefore the previous difference row was a constant that could be used to determine the coefficient of the highest degree. In the noisy case the issue is complicated, because no row consists of all zeros and therefore the previous difference row is not a constant. However, the intuitive idea behind the method of Finite Difference Regression (see the 2nd paragraph of Section 2) led me to take the sample mean $\bar{x}$ as the stand-in for the previous difference row's constant, since it is the unique number that minimizes the sum of squares of the residuals. From Table A1, the (effectively) all-zero column is the 3rd difference column, and therefore $\bar{x} = 4.29$, the sample mean of the 2nd difference column in Table A1, plays the role of the constant difference. Thus, as described in Section 1, the highest-order term is $\bar{x}\, x^2 / 2! = 2.145 x^2$; the next term, determined from the sample mean of the "constant" column after subtracting the values of the highest-order term, is $-7.464 x$; and finally the constant term is 78.69. Thus $\hat{y} = 2.145 x^2 - 7.464 x + 78.69$ is the estimated polynomial model.
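The back-substitution just described can be written compactly. The sketch below (my rendering of Section 2.6, not the author's code; it assumes unit-spaced samples starting at x = 1) replaces each "constant" difference row of Section 1 by the sample mean of the corresponding difference column.

```python
import math
import numpy as np

def finite_difference_coefficients(y, m):
    """Estimate a_m, ..., a_1, a_0 for noisy samples y(1), ..., y(n), given the order m."""
    x = np.arange(1, len(y) + 1)
    residual = np.asarray(y, dtype=float)
    coeffs = {}
    for d in range(m, -1, -1):
        a_d = np.diff(residual, n=d).mean() / math.factorial(d)  # sample mean stands in for d! * a_d
        coeffs[d] = a_d
        residual = residual - a_d * x**d                         # subtract the fitted term
    return coeffs  # e.g. {2: ..., 1: ..., 0: ...} for a quadratic fit
```

Applied with m = 2 to the noisy data of Table A1, this procedure corresponds to the computation that produced the estimate above.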

Table A2 in Appendix 2 compares the noisy data to the best-fit polynomial estimate obtained by the method of successive finite differences. Note that the individual residuals are not very small; however, their average turns out to be −0.00078, indicating a good fit. The polynomial fit is overlaid on the noisy y data in Figure 6, in which the blue line shows the noisy data and the dashed red line depicts the values of the estimated $\hat{y}$.

Figure 6. Graph of the best fitting polynomial $\hat{y} = 2.145 x^2 - 7.464 x + 78.69$ overlaid on the noisy observations of Table A1.

2.7. Unbiasedness and Consistency of the Coefficient Estimates

In this section I prove that the coefficients estimated by the Finite Difference Regression method, as detailed in Section 2.6, are unbiased and consistent. The properties of unbiasedness and consistency are very important for estimators (see [6] p. 393). The unbiasedness property shows that there is no systematic offset between the estimate and the actual value of a parameter; it is proved by showing that the mean of the difference between the estimate and the actual value is zero. The consistency property shows that the estimates get better with more observations; it is proved by showing that a non-zero difference between the estimate and the actual value becomes highly improbable with a large number of observations. The analysis focuses on the estimate of the leading coefficient, i.e., the coefficient of the highest-degree term, because the asymptotic properties of the polynomial fit are most sensitive to that value.

Consider Table 5, which shows the symbolic formulas for the successive differences of the n observations $y_1, y_2, \ldots, y_n$.

For subsequent ease of notation, I introduce the change of variables $z_i \stackrel{\text{def}}{=} y_{i+1}$ for $0 \le i \le n-1$. With this renaming/re-indexing, Table 5 can be rewritten as Table 6 (note that the row indices have also changed).

For any m, $0 \le m \le n-1$, the mth Diff column consists of expressions in rows indexed from m through $n-1$. Using mathematical induction, the mth Diff column can be shown to be as given in Table 7.

If the underlying polynomial is an mth order polynomial $f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0$, then in the case of noiseless observations every row of Table 7 evaluates to the constant $m!\,a_m$. In particular, for the kth row in the mth Diff column shown in Table 7 one gets:

Table 5. Successive differences for the n observations $y_1, y_2, \ldots, y_n$.

Table 6. Changing $y_{i+1}$ from Table 5 to $z_i$ for $0 \le i \le n-1$.

Table 7. The mth Diff column.

$$\begin{aligned}
&\binom{m}{0} z_k - \binom{m}{1} z_{k-1} + \binom{m}{2} z_{k-2} - \cdots + (-1)^m \binom{m}{m} z_{k-m} \\
&\quad = \binom{m}{0} y_{k+1} - \binom{m}{1} y_k + \binom{m}{2} y_{k-1} - \cdots + (-1)^m \binom{m}{m} y_{k+1-m} \\
&\quad = \binom{m}{0} f(k+1) - \binom{m}{1} f(k) + \binom{m}{2} f(k-1) - \cdots + (-1)^m \binom{m}{m} f(k+1-m) \\
&\quad = m!\, a_m
\end{aligned} \tag{1}$$
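Identity (1) is easy to spot-check numerically. The snippet below (an illustration of mine) evaluates the alternating binomial combination for a hypothetical cubic $f(x) = 5x^3 - 2x^2 + 7x + 1$ and confirms that it equals $m!\,a_m$ regardless of the row index k.

```python
from math import comb, factorial

f = lambda x: 5 * x**3 - 2 * x**2 + 7 * x + 1   # hypothetical example: m = 3, a_m = 5
m = 3
for k in (3, 10, 25):
    lhs = sum((-1)**j * comb(m, j) * f(k + 1 - j) for j in range(m + 1))
    print(k, lhs, factorial(m) * 5)              # lhs is 30 = 3! * 5 for every k
```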

In the case of noisy observations, let each re-indexed observation z now be corrupted by a sequence of additive, zero-mean, independent, identically distributed random variables $N_0, N_1, \ldots, N_{n-1}$, each with variance $\sigma^2$. In other words:

$E[N_i] = 0$ and $E[N_i^2] = \sigma^2$ for $0 \le i < n$,

$E[N_i N_j] = E[N_i]\,E[N_j] = 0$ for $0 \le i < j < n$,

where $E[\cdot]$ denotes the expectation operator (see for example [7] pp. 220-223).

This addition of random variables makes the z's random variables as well, denoted now as Z, so that $Z_i = f(i+1) + N_i$ for $0 \le i < n$. In this case, the kth row in the mth Diff column shown in Table 7 is the random variable $R_k$, where:

$$\begin{aligned}
R_k &= \binom{m}{0} Z_k - \binom{m}{1} Z_{k-1} + \binom{m}{2} Z_{k-2} - \cdots + (-1)^m \binom{m}{m} Z_{k-m} \\
&= \binom{m}{0} N_k - \binom{m}{1} N_{k-1} + \binom{m}{2} N_{k-2} - \cdots + (-1)^m \binom{m}{m} N_{k-m} \\
&\qquad + \binom{m}{0} f(k+1) - \binom{m}{1} f(k) + \binom{m}{2} f(k-1) - \cdots + (-1)^m \binom{m}{m} f(k+1-m) \\
&= \binom{m}{0} N_k - \binom{m}{1} N_{k-1} + \binom{m}{2} N_{k-2} - \cdots + (-1)^m \binom{m}{m} N_{k-m} + m!\, a_m
\end{aligned} \tag{2}$$

where the last step of Equation (2) follows from the identity in Equation (1).

Let the random variable $\hat{A}_m(n)$ denote the estimate of $a_m$ from the n observations. Then, as described in Section 2.6, since there are $(n-m)$ rows in the mth Diff column, the sample mean of the $R_k$'s divided by $m!$ provides the required estimate. In other words,

$$\hat{A}_m(n) = \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} R_k \tag{3}$$

Now define the reduced row random variable $Q_k$ for the mth Diff column shown in Table 7 as:

$$Q_k \stackrel{\text{def}}{=} R_k - m!\, a_m \quad \text{for } m \le k \le n-1 \tag{4}$$

Then, substituting back into Equation (3):

$$\hat{A}_m(n) = \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} Q_k + \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} m!\, a_m = \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} Q_k + \frac{m!\, a_m (n-m)}{m!\,(n-m)} = \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} Q_k + a_m$$

which, upon rearranging, yields:

$$\hat{A}_m(n) - a_m = \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} Q_k \tag{5}$$

Note that it is the statistics of the expression $\hat{A}_m(n) - a_m$ on the left-hand side of Equation (5) that will establish the unbiasedness and consistency of the estimate $\hat{A}_m(n)$. Also note that, by definition (4) and from Equation (2), the reduced row random variable $Q_k$ for the mth Diff column can be written explicitly as:

$$Q_k = \binom{m}{0} N_k - \binom{m}{1} N_{k-1} + \binom{m}{2} N_{k-2} - \cdots + (-1)^m \binom{m}{m} N_{k-m} \tag{6}$$

2.7.1. Result 1: $\hat{A}_m(n)$ Is an Unbiased Estimate of $a_m$

Taking the expectation operator $E[\cdot]$ on both sides of Equation (6):

$$E[Q_k] = E\!\left[\binom{m}{0} N_k - \binom{m}{1} N_{k-1} + \cdots + (-1)^m \binom{m}{m} N_{k-m}\right] = \binom{m}{0} E[N_k] - \binom{m}{1} E[N_{k-1}] + \cdots + (-1)^m \binom{m}{m} E[N_{k-m}] = 0 \tag{7}$$

The last step in Equation (7) follows from the fact that the random variables $N_i$ are all zero-mean. Then, taking the expectation operator on both sides of Equation (5) and using the result from Equation (7):

$$E[\hat{A}_m(n) - a_m] = \frac{1}{m!\,(n-m)} \sum_{k=m}^{n-1} E[Q_k] = 0 \tag{8}$$

Hence $E[\hat{A}_m(n) - a_m] = 0$ for all n. This proves the unbiasedness property of $\hat{A}_m(n)$.
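Result 1 can be checked by simulation. The sketch below (my illustration, reusing the quadratic of Table 1 with $\sigma = 30$ and an assumed n = 200) averages the estimator of Equation (3) over many independent noise realizations; the average error is close to zero, as Equation (8) predicts.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
m, a_m, n, sigma, trials = 2, 2.0, 200, 30.0, 5000
x = np.arange(1, n + 1)
f = 2 * x**2 + x + 3                                   # true leading coefficient a_m = 2

estimates = []
for _ in range(trials):
    z = f + rng.normal(0.0, sigma, n)                  # noisy observations
    estimates.append(np.diff(z, n=m).mean() / math.factorial(m))  # A_hat_m(n), Equation (3)

print(np.mean(estimates) - a_m)                        # empirical bias, close to 0
```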

2.7.2. Result 2: $\hat{A}_m(n)$ Is a Consistent Estimate of $a_m$

Since the random variable $\hat{A}_m(n) - a_m$ has been shown to be zero-mean by virtue of the unbiasedness property, its variance is simply $E[(\hat{A}_m(n) - a_m)^2]$. I show the consistency of the estimate $\hat{A}_m(n)$ by deriving an upper bound for $E[(\hat{A}_m(n) - a_m)^2]$ and showing that this upper bound vanishes in the limit $n \to \infty$.

Writing out the reduced row random variable $Q_k$ for each row k of the mth Diff column from Equation (6), one gets the following series of $(n-m)$ equations:

$$\begin{aligned}
Q_m &= \binom{m}{0} N_m - \binom{m}{1} N_{m-1} + \binom{m}{2} N_{m-2} - \cdots + (-1)^m \binom{m}{m} N_0 \\
Q_{m+1} &= \binom{m}{0} N_{m+1} - \binom{m}{1} N_m + \binom{m}{2} N_{m-1} - \cdots + (-1)^m \binom{m}{m} N_1 \\
&\;\;\vdots \\
Q_k &= \binom{m}{0} N_k - \binom{m}{1} N_{k-1} + \binom{m}{2} N_{k-2} - \cdots + (-1)^m \binom{m}{m} N_{k-m} \\
&\;\;\vdots \\
Q_{n-1} &= \binom{m}{0} N_{n-1} - \binom{m}{1} N_{n-2} + \binom{m}{2} N_{n-3} - \cdots + (-1)^m \binom{m}{m} N_{n-1-m}
\end{aligned}$$

Summing up these $(n-m)$ equations column by column:

$$\sum_{k=m}^{n-1} Q_k = \binom{m}{0} S_m^{\,n-1} - \binom{m}{1} S_{m-1}^{\,n-2} + \binom{m}{2} S_{m-2}^{\,n-3} - \cdots + (-1)^m \binom{m}{m} S_0^{\,n-1-m} \tag{9}$$

where, for $(n-1-m) \le i \le (n-1)$ and $0 \le j \le m$, we define:

$$S_j^{\,i} \stackrel{\text{def}}{=} \sum_{k=j}^{i} N_k \tag{10}$$

Squaring both sides of Equation (10), and then taking expectations:

$$E\!\left[\big(S_j^{\,i}\big)^2\right] = E\!\left[\Big(\sum_{k=j}^{i} N_k\Big)^{\!2}\right] = E\!\left[\sum_{k=j}^{i} N_k^2 + \mathrm{SCT}\right] = \sum_{k=j}^{i} E[N_k^2] + E[\mathrm{SCT}] = \sum_{k=j}^{i} \sigma^2 = (i-j+1)\,\sigma^2 \tag{11}$$

In Equation (11) I have invoked the zero-mean, independent nature of the N's to make the expectation of the sum of cross-terms, SCT, vanish.

Define a new sequence of $(m+1)$ random variables $T_0, T_1, \ldots, T_m$ as follows:

$$T_i \stackrel{\text{def}}{=} (-1)^i \binom{m}{i} S_{m-i}^{\,n-1-i} \quad \text{for } 0 \le i \le m \tag{12}$$

By squaring both sides of definition (12), and then taking expectations:

$$E[T_i^2] = E\!\left[(-1)^{2i} \binom{m}{i}^{\!2} \big(S_{m-i}^{\,n-1-i}\big)^2\right] = \binom{m}{i}^{\!2} E\!\left[\big(S_{m-i}^{\,n-1-i}\big)^2\right] = \binom{m}{i}^{\!2} (n-m)\,\sigma^2 \tag{13}$$

where the final step in Equation (13) follows from Equation (11).

Using definition (12), Equation (9) can be rewritten as:

$$\sum_{k=m}^{n-1} Q_k = T_0 + T_1 + \cdots + T_m \tag{14}$$

Then, since $E[(T_0 + T_1 + \cdots + T_m)^2] \le (m+1)\big(E[T_0^2] + E[T_1^2] + \cdots + E[T_m^2]\big)$ (see Appendix 1), squaring both sides of Equation (14) and taking expectations gives:

$$\begin{aligned}
E\!\left[\Big(\sum_{k=m}^{n-1} Q_k\Big)^{\!2}\right] &= E\!\left[(T_0 + T_1 + \cdots + T_m)^2\right] \le (m+1)\big(E[T_0^2] + E[T_1^2] + \cdots + E[T_m^2]\big) \\
&= (m+1)(n-m)\,\sigma^2 \left(\binom{m}{0}^{\!2} + \binom{m}{1}^{\!2} + \cdots + \binom{m}{m}^{\!2}\right) \qquad [\text{from Equation (13)}] \\
&= (m+1)(n-m)\,\sigma^2 \binom{2m}{m} \qquad [\text{from the binomial identity; see [8] p. 64}]
\end{aligned} \tag{15}$$

Finally, by squaring both sides of Equation (5) and then taking expectations, I obtain the following inequality from inequality (15):

$$E\!\left[\big(\hat{A}_m(n) - a_m\big)^2\right] = \frac{1}{(m!)^2 (n-m)^2}\, E\!\left[\Big(\sum_{k=m}^{n-1} Q_k\Big)^{\!2}\right] \le \frac{(m+1)\,(2m)!\,\sigma^2}{(m!)^4\,(n-m)} \tag{16}$$

where the last step in (16) rewrites the binomial coefficient $\binom{2m}{m}$ in terms of factorials and simplifies.

It can be seen that the factor $(m+1)(2m)!/(m!)^4$ on the right-hand side of inequality (16) is a super-exponentially decreasing function of m (beyond a small constant), owing to the presence of the large $(m!)^4$ factor in its denominator. This factor is evaluated for a few values of m in Table 8.

Table 8 shows that:

$$E\!\left[\big(\hat{A}_m(n) - a_m\big)^2\right] \le \frac{4.5\,\sigma^2}{(n-m)} \tag{17}$$

For the Finite Difference Regression method, $m \le \log_2 n$ (see Section 2.5). Thus the denominator on the right-hand side of inequality (17) satisfies $n - m \ge n - \log_2 n$, and therefore

$$E\!\left[\big(\hat{A}_m(n) - a_m\big)^2\right] \le \frac{4.5\,\sigma^2}{(n - \log_2 n)} \tag{18}$$

Taking limits on both sides of inequality (18),

$$\lim_{n \to \infty} E\!\left[\big(\hat{A}_m(n) - a_m\big)^2\right] \le \lim_{n \to \infty} \frac{4.5\,\sigma^2}{(n - \log_2 n)} = 0 \tag{19}$$

Table 8. The factor $(m+1)(2m)!/(m!)^4$ evaluated for a few values of m.

However, since $E[(\hat{A}_m(n) - a_m)^2] \ge 0$, inequality (19) implies that:

$$\lim_{n \to \infty} E\!\left[\big(\hat{A}_m(n) - a_m\big)^2\right] = 0 \tag{20}$$

Then invoking Chebyshev's inequality (see [7] p. 233, [9] p. 151) yields:

$$\lim_{n \to \infty} \mathrm{Prob}\!\left[\,\big|\hat{A}_m(n) - a_m\big| > \varepsilon\,\right] \le \lim_{n \to \infty} \frac{E\!\left[\big(\hat{A}_m(n) - a_m\big)^2\right]}{\varepsilon^2} = 0, \quad \text{for any } \varepsilon > 0$$

This proves the consistency property of $\hat{A}_m(n)$.
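Both the bound (16) and the vanishing mean-squared error can likewise be checked by simulation. The sketch below (mine, with the same assumed quadratic of Table 1 and $\sigma = 30$) compares the empirical mean-squared error of $\hat{A}_m(n)$ against the right-hand side of inequality (16) for a few sample sizes; the empirical values sit below the bound and shrink as n grows.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
m, a_m, sigma, trials = 2, 2.0, 30.0, 2000

for n in (100, 400, 1600):
    x = np.arange(1, n + 1)
    f = 2 * x**2 + x + 3
    errs = []
    for _ in range(trials):
        z = f + rng.normal(0.0, sigma, n)
        errs.append(np.diff(z, n=m).mean() / math.factorial(m) - a_m)
    mse = float(np.mean(np.square(errs)))
    bound = (m + 1) * math.factorial(2 * m) * sigma**2 / (math.factorial(m)**4 * (n - m))
    print(n, mse, bound)   # mse stays below the bound and decreases with n
```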

2.7.3. A Note on the Asymptotic Properties of the Fit

Inequality (16) and the values in Table 8 show that the higher the degree m of the fitting polynomial, the smaller the bound on the variance of the estimate of its leading coefficient. This excellent property implies that the asymptotic properties of the fit (which are most sensitive to this leading coefficient) are stable, in the sense that the value of the asymptote varies less with increasing degrees of the fitting polynomial.

3. Conclusions and Further Work

In this paper I have presented a novel polynomial regression method for uniformly sampled data points called the method of Finite Difference Regression. Unlike classical regression methods, in which the order of the best fitting polynomial model is unknown and is estimated from the $R^2$ values, I have shown how the t-test can be combined with the method of finite differences to provide a more sensitive and objective measure of the order of the best fitting polynomial. Furthermore, I have carried the method of Finite Difference Regression forward, reemploying the finite differences obtained during order determination to provide estimates of the coefficients of the best fitting polynomial. I have shown that not only are these coefficients unbiased and consistent, but also that the asymptotic properties of the fit get better with increasing degrees of the fitting polynomial.

At least three other avenues of further research remain to be explored:

1) The polynomial coefficients obtained by this method do not minimize the sum of the squared residuals. Is there an objective function that they do minimize?

2) The classical regression methods work on non-uniformly sampled data sets. Extending this method to non-uniformly sampled data should be possible.

3) Automatically finding a good t-test significance level, i.e., the threshold for the precipitously low P-value that sets the order of the best fitting polynomial (as described in Section 2.4), remains an open problem. For this, heuristics in the spirit of the AIC or BIC methods (see [3] ) used in the classical case may be required.

Acknowledgements

I thank Prof. Brad Lucier for his interest and for the helpful discussions and comments that made this a better paper. I am also grateful to Dr. Jan Frydendahl for encouraging me to pursue this research activity.

Cite this paper

Banerjee, A. (2018) The Method of Finite Difference Regression. Open Journal of Statistics, 8, 49-68. https://doi.org/10.4236/ojs.2018.81005

References

1. Gauss, C.F. (1857) Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections. Davis, C.H., Trans., Little, Brown & Co., Boston, MA.

2. Peck, R., Olsen, C. and Devore, J. (2005) Introduction to Statistics and Data Analysis. 2nd Edition, Thomson Brooks/Cole, Belmont, CA.

3. Burnham, K.P. and Anderson, D.R. (2004) Multimodel Inference: Understanding AIC and BIC in Model Selection. Sociological Methods and Research, 33, 261-304. https://doi.org/10.1177/0049124104268644

4. Newton, I. (1848) Newton's Principia: The Mathematical Principles of Natural Philosophy. Motte, A., Trans., Daniel Adee, New York.

5. Hammerlin, G. and Hoffmann, K.-H. (1991) Numerical Mathematics. Schumaker, L.L., Trans., Springer, New York. https://doi.org/10.1007/978-1-4612-4442-4

6. Cover, T.M. and Thomas, J.A. (2006) Elements of Information Theory. 2nd Edition, John Wiley & Sons, Inc., New York.

7. Feller, W. (1968) An Introduction to Probability Theory and Its Applications. 3rd Edition, Vol. I, John Wiley & Sons, Inc., New York.

8. Tucker, A. (1980) Applied Combinatorics. John Wiley & Sons, Inc., New York.

9. Feller, W. (1971) An Introduction to Probability Theory and Its Applications. 2nd Edition, Vol. II, John Wiley & Sons, Inc., New York.

10. Student (1908) The Probable Error of a Mean. Biometrika, 6, 1-25. https://doi.org/10.1093/biomet/6.1.1

Appendix 1

To show that for any sequence of $(m+1)$ random variables $T_0, T_1, \ldots, T_m$:

$$E\!\left[(T_0 + T_1 + \cdots + T_m)^2\right] \le (m+1)\big(E[T_0^2] + E[T_1^2] + \cdots + E[T_m^2]\big)$$

The proof follows from a variance-like consideration. Given the random variables $T_0, T_1, \ldots, T_m$ and any random variable $\bar{T}$, the sum-of-squares random variable $\sum_{i=0}^{m} (T_i - \bar{T})^2$ is always non-negative, and therefore so is its expected value $E\big[\sum_{i=0}^{m} (T_i - \bar{T})^2\big]$. Choosing the random variable $\bar{T} \stackrel{\text{def}}{=} \sum_{i=0}^{m} T_i / (m+1)$, i.e., the "mean" of $T_0, T_1, \ldots, T_m$, proves the inequality as follows:

$$\begin{aligned}
0 &\le E\!\left[\sum_{i=0}^{m} (T_i - \bar{T})^2\right] = E\!\left[\sum_{i=0}^{m} \big(T_i^2 - 2\bar{T}T_i + \bar{T}^2\big)\right] \\
&= \sum_{i=0}^{m} E[T_i^2] - 2E\!\left[\bar{T}\sum_{i=0}^{m} T_i\right] + E\!\left[\bar{T}^2 \sum_{i=0}^{m} 1\right] \\
&= \sum_{i=0}^{m} E[T_i^2] - 2E\!\left[\bar{T} \times (m+1)\bar{T}\right] + E\!\left[\bar{T}^2 \times (m+1)\right] \qquad (\text{from the definition of } \bar{T}) \\
&= \sum_{i=0}^{m} E[T_i^2] - 2(m+1)E[\bar{T}^2] + (m+1)E[\bar{T}^2] = \sum_{i=0}^{m} E[T_i^2] - (m+1)E[\bar{T}^2] \\
&= \sum_{i=0}^{m} E[T_i^2] - (m+1)\,E\!\left[\left(\frac{\sum_{i=0}^{m} T_i}{m+1}\right)^{\!2}\right] \qquad (\text{from the definition of } \bar{T}) \\
&= \sum_{i=0}^{m} E[T_i^2] - \frac{m+1}{(m+1)^2}\,E\!\left[(T_0 + T_1 + \cdots + T_m)^2\right]
\end{aligned}$$

Multiplying both sides of the above inequality by $(m+1)$ and rearranging yields the required result.
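As a quick numeric sanity check of this inequality, the snippet below (an illustration of mine) draws correlated random variables $T_0, \ldots, T_m$ and compares the two sides empirically; the empirical left side stays below the right side.

```python
import numpy as np

rng = np.random.default_rng(3)
m, trials = 4, 100_000
# Correlated T_0, ..., T_m: a random linear mix of independent normals.
T = rng.normal(size=(trials, m + 1)) @ rng.normal(size=(m + 1, m + 1))

lhs = np.mean(T.sum(axis=1) ** 2)                 # E[(T_0 + ... + T_m)^2]
rhs = (m + 1) * np.sum(np.mean(T ** 2, axis=0))   # (m + 1) * sum of E[T_i^2]
print(lhs <= rhs, lhs, rhs)                       # the inequality holds
```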

Appendix 2

Table A1. Example showing how the finite difference regression method works.

Table A2. The noisy data compared to the best-fit polynomial estimate $\hat{y} = 2.145x^2 - 7.464x + 78.69$ determined from the successive finite differences method.

NOTES

1Portions of this independent research were submitted to the 2013 Siemens Math, Science, and Technology Competition, and to the 2014 Intel Science Talent Search Competition for scholarship purposes. It received the Scientific Research Report Badge from the latter competition.

2The t-test often called Student’s t-test was introduced in statistics by William S. Gosset in 1908 and published under the author’s pseudonym “Student” (see [10] ).

3Assume that K and k are powers of 2 for the purposes of this discussion.