The Method of Finite Difference Regression

In this paper I present a novel polynomial regression method called Finite Difference Regression for a uniformly sampled sequence of noisy data points that determines the order of the best fitting polynomial and provides estimates of its coefficients. Unlike classical least-squares polynomial regression methods, in which an unknown polynomial order must be determined from the R² value of the fit, I show how the t-test from statistics can be combined with the method of finite differences to yield a more sensitive and objective measure of the order of the best fitting polynomial. Furthermore, it is shown how the finite differences used in the order determination can be reemployed to produce excellent estimates of the coefficients of the best fitting polynomial. I show that not only are these coefficients unbiased and consistent, but also that the asymptotic properties of the fit improve with increasing degree of the fitting polynomial.


Introduction
In this paper I present a novel polynomial regression method for a uniformly sampled sequence of noisy data points that I call the method of Finite Difference Regression. I show how the statistical t-test [2] can be combined with the method of finite differences to provide a more sensitive and objective measure of the order of the best fitting polynomial and to produce unbiased and consistent estimates of its coefficients. Moreover, the asymptotic properties of the fit get better with increasing degree of the fitting polynomial.
Before describing the Finite Difference Regression method in Section 2, I briefly recapitulate the general method of finite differences, whose formulation can be traced all the way back to Sir Isaac Newton (see [4]). The finite differences method (see [5]) is used for curve fitting with polynomial models. On its 2nd row, Table 1 shows the y-value outputs for the sequence of regularly spaced x-value inputs at the points 1, 2, …, 10 shown on the 1st row. A plot of this data is shown in Figure 1. The problem is to determine the order of the polynomial that can generate these outputs and to find its coefficients.
The remaining rows in Table 1 show the successive differences of the y-values.
The fact that the 3rd difference row in Table 1 is composed of all zeros implies that a quadratic polynomial best describes this data; in fact, this data was generated by a quadratic polynomial. I assume throughout this paper that the data are sampled at regular intervals.
Without loss of generality these intervals can be assumed to be of unit length.
Then the following general observations can be made: 1) If the (m + 1)th difference row is all zero, then the data is best described by an mth order polynomial. 2) If the (m + 1)th difference row is all zero, then m!a_m is the constant in the mth difference row. From this observation the coefficient of the highest term can be found. For example, in Table 1, since the 3rd difference row is all zero, 2!a_2 is the constant in the 2nd difference row. Since this constant is 4, it immediately follows that a_2 = 4/2! = 2, so the highest term should be 2x^2. 3) The lower order terms can be found successively by back-substituting and subtracting the highest term. In this method, the data y in Table 1 is replaced by y′ where y′ = y − a_m x^m, and the method of finite differences is then reapplied to the y′ data. Using this approach for the data in Table 1 produces Table 2. Again, the final term can be found by replacing y′ in Table 2 by y″, obtained by subtracting the newly found linear term from y′. The computation in Table 3 yields 3 as the constant term for the polynomial fit.
Hence, by successive applications of the finite difference method in this manner, the best fitting polynomial for the data in Table 1 is found.
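The subtract-and-difference procedure above can be sketched in code. This is a minimal illustration of my own, not from the paper; the function names and the example polynomial are illustrative assumptions.

```python
from math import factorial

def difference_row(values, m):
    """Return the m-th finite difference row of a sequence."""
    row = list(values)
    for _ in range(m):
        row = [b - a for a, b in zip(row, row[1:])]
    return row

def fit_by_finite_differences(y, order):
    """Recover the coefficients (highest degree first) of the polynomial
    of the given order that generated y at x = 1, 2, ..., len(y)."""
    xs = range(1, len(y) + 1)
    residual = list(y)
    coeffs = []
    for m in range(order, -1, -1):
        # The m-th difference row of an m-th order polynomial is the
        # constant m! * a_m; read off a_m, then subtract the term.
        a_m = difference_row(residual, m)[0] / factorial(m)
        coeffs.append(a_m)
        residual = [r - a_m * x ** m for r, x in zip(residual, xs)]
    return coeffs
```

For data generated by 2x^2 + 3x + 5 at x = 1, …, 10, `fit_by_finite_differences(y, 2)` recovers the coefficients [2.0, 3.0, 5.0], mirroring the three iterations of Tables 1-3.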

The Method of Finite Difference Regression
In this section I describe the Finite Difference Regression method, compare the results obtained from this method to those from classical regression, and show unbiasedness and consistency properties of the Finite Difference Regression coefficients.
The intuitive idea behind the method of Finite Difference Regression is simple.
Suppose one were to add normally distributed noise with mean 0 independently to each of the y-values in Table 1, for example, to make them noisy. Then, unlike the non-noisy situation, one cannot expect all entries in a difference row to be identically equal to zero. However, since differences between normally distributed random variables are also normally distributed (albeit with higher variance), each of the samples in a difference row (as in the rows of Table 1) would also be normally distributed with different means, until one hits a difference row whose mean is zero with high probability. It is this highly probable zero-mean row that would be the all zero difference row in the absence or elimination of noise. The condition that the mean is zero with high probability can be objectively tested on each difference row using a t-test on the samples in that row, where the null hypothesis H_0 is that the population mean is zero and the alternative hypothesis H_a is that the population mean is non-zero. This is the main idea behind the method of Finite Difference Regression: the replacement of the notion of an all zero difference row in the non-noisy case by the notion of a highly probable zero-mean row in the noisy case, together with the use of the statistical t-test to test the zero population mean hypothesis.
Table 1. Finite differences on y-value outputs for the sequence of regularly spaced x-value inputs.
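The idea can be sketched as follows. This is my own simplified illustration, not the paper's implementation: it uses a plain t-statistic threshold on each difference row, ignoring the independence and degrees-of-freedom subtleties discussed in the next subsection, and the function names are hypothetical.

```python
import statistics

def difference_once(row):
    """One finite-difference pass over a row."""
    return [b - a for a, b in zip(row, row[1:])]

def estimate_order(y, t_threshold=2.0, max_order=5):
    """Difference the noisy data until a row's one-sample t statistic
    (sample mean over standard error) is too small to reject a zero
    population mean; the polynomial order is one less than that level."""
    row = list(y)
    for m in range(1, max_order + 1):
        row = difference_once(row)
        se = statistics.stdev(row) / len(row) ** 0.5
        t = statistics.fmean(row) / se
        if abs(t) < t_threshold:  # cannot reject H0: zero population mean
            return m - 1
    return max_order
```

On data from a quadratic plus a small perturbation, the 1st and 2nd difference rows have clearly non-zero means while the 3rd does not, so the estimated order is 2.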

Statistical Independence and Its Relation to the Degrees of Freedom
While it is true that successive differences will produce normally distributed samples, not all the samples so produced are statistically independent of one another. For example, as shown in Figure 2, the noisy data samples y_1, y_2, y_3, y_4 generate the next difference sequence y′_1, y′_2, y′_3, and not all of these values are independent. Since y′_1 and y′_2 both depend upon the value of y_2, they are not independent. Similarly, y′_2 and y′_3 are not independent because both depend upon y_3. However, y′_1 and y′_3, having no point in common, are independent! This observation can be generalized by the following process. Suppose I had K observations to begin with. If I were to produce successive differences by skipping over every other term, in the manner of a binary tree as shown in Figure 3, I would obtain independent normally distributed samples. Each level of the tree would have independent normally distributed finite differences, denoted in Figure 3 (shown with K = 8) by y′, y″, and so on. This method would retain all the properties of finite differences, in particular the highly probable zero-mean row; plus it would have the added benefit of not having any dependent samples. If this method were implemented, and a t-test invoked at every level of the binary tree, then the number of samples in the test would be halved at each level, and consequently the degrees of freedom would be affected. In other words, if k denotes the number of samples at a particular level of the tree, then at the next level one would get k/2 samples. Then, according to the theory of t-tests, the degrees of freedom at the first level would be k − 1, at the next level k/2 − 1, and so on. However, discarding good data will have negative consequences for coefficient estimation (see Section 2.6). While some of the data at every level of a successive finite difference table (Table 1 for example) are not independent, they can still be used for t-tests. The only thing to remember is that at every stage the number of independent samples is halved, and therefore the degrees of freedom is one less than this number.
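The binary-tree scheme can be sketched as below; this is my own illustration with a hypothetical function name. Each level differences non-overlapping pairs (y_2 − y_1, y_4 − y_3, …), so no observation feeds two samples at the same level:

```python
def binary_tree_differences(y):
    """Levels of pairwise, non-overlapping differences: level 1 holds
    y2-y1, y4-y3, ...; each later level differences the previous
    level's non-overlapping pairs, so samples within a level are
    mutually independent. Sample counts halve at every level."""
    levels = []
    row = list(y)
    while len(row) >= 2:
        row = [row[i + 1] - row[i] for i in range(0, len(row) - 1, 2)]
        levels.append(row)
    return levels
```

For K = 8 noiseless linear data points y = 1, …, 8, the levels come out as [1, 1, 1, 1], [0, 0], [0]: the zero row still appears, with 4, 2, and 1 samples per level as described above.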

An Example of Order Finding
Table A1 in Appendix 2 gives an example of how the Finite Difference Regression method works to find the order of the best fitting polynomial. The first two columns of Table A1 are the pivoted rows of Table 1. Independent zero-mean normally distributed random noise with standard deviation 30.0 is generated in the 3rd column and added to the values of y in the 4th column. Thus, the 4th column represents noisy values of the data y. A graph of this noisy data is shown in Figure 4. Finally, columns 5 through 9 represent the successive finite differences, from 1st to 5th, computed in the same way as in Table 1. The last 6 rows of Table A1, highlighted in yellow, show the statistical computations associated with the method. These are described below in Section 2.3.

The Statistical Computations
The last 6 rows of Table A1 use the one-sample t-test for a population mean, as described in [2], pp. 547-548. Note that the null hypothesis is H_0: μ = 0. The final row gives the two-tailed P-value, computed as twice the area to the left of −|t| under the t distribution with the given degrees of freedom.
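For large samples, this two-tailed P-value can be approximated with the standard normal distribution, which the t distribution approaches as the degrees of freedom grow. The sketch below is my own, not the paper's computation; an exact version would use the t distribution with the row's degrees of freedom.

```python
from statistics import NormalDist

def two_tailed_p(t_stat):
    """Large-sample approximation of the two-tailed P-value:
    twice the area to the left of -|t| under the standard normal."""
    return 2.0 * NormalDist().cdf(-abs(t_stat))
```

A t statistic near 0 gives a P-value near 1 (no evidence against a zero mean), while a large |t| gives a small P-value, signalling a difference row whose mean is not zero.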

Conclusions from the t-Test
One of the assumptions of the t-test is that the sample size n is large; this assumption is violated in some of the columns of Table A1. However, the table is merely for illustrative purposes. Even so, it shows the power of the method to correctly ascertain the order of the best fitting polynomial: one can hardly fail to notice the precipitous drop in P-value in the 3rd difference column (titled Diff 3). The plot in Figure 5, showing the P-values of the 5 difference columns, graphically displays this drop.
The vanishingly small P-value in the 3rd difference column implies that with high probability the order of the best fitting polynomial is m = 2, i.e. a quadratic polynomial is the best fit to the data in Table A1.

Comparison to Classical Regression
In classical polynomial regression, Table 4 shows that the R² values do not have the sensitivity that the P-values from the t-tests possess. The clear drop that occurs in the P-values, as shown in Figure 5, has no corresponding sharp rise in the R² values of Table 4. Also, unlike classical regression, which requires a separate fit (and R² value) for every candidate polynomial, Finite Difference Regression is a one-shot test that gives a clear estimate of the order of the best fitting polynomial by testing the significance of the probabilities (P-values): the first successive difference column whose P-value is less than the significance level (a belief threshold for a viable model) identifies the best fitting polynomial model. Unlike classical regression, however, this method requires a large sample size n, preferably one that is a power of 2. Also, because the number of independent samples is halved at each successive difference and the degrees of freedom must remain positive, the highest order polynomial that can be fit by this method is of order log₂ n. This is generally acceptable because the order of a best fitting polynomial should by definition be a small constant. Other classical model order selection heuristics, like AIC or BIC (see [3]), would also heavily penalize high order models.

Determining the Coefficients
Once the order of the best fitting polynomial has been found using the t-test in combination with finite differences, I will evaluate the coefficients of the best fitting polynomial using the method of successive finite differences as detailed in Section 1.
In the non-noisy situation described in Section 1, the matter was simple: there was a difference row of all zeros, and the previous difference row was therefore a constant that could be used to determine the coefficient of the highest degree. In the noisy case, the issue is complicated because no row consists of all zeros, and the previous difference row is therefore not constant. However, the intuitive idea behind the method of Finite Difference Regression (see the 2nd paragraph of Section 2) led me to take the sample mean x̄ as an excellent choice for the previous difference row's constant, since it is the unique number that minimizes the sum of squares of the residuals. From Table A1, the highly probable zero column is the 3rd difference column, and therefore x̄ = 4.29 (the sample mean of the 2nd difference column in Table A1) corresponds to the constant difference. Thus, as described in Section 1, the highest term is (4.29/2!)x^2 = 2.145x^2.
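This step can be sketched as follows; a minimal illustration of my own with hypothetical names, using the sample mean of the m-th difference row divided by m!:

```python
from math import factorial
from statistics import fmean

def leading_coefficient(y, m):
    """Estimate a_m for (possibly noisy) data from an order-m
    polynomial: the m-th difference row has population mean m! * a_m,
    so a_m is estimated by the row's sample mean divided by m!."""
    row = list(y)
    for _ in range(m):
        row = [b - a for a, b in zip(row, row[1:])]
    return fmean(row) / factorial(m)
```

On noiseless data the row is exactly constant and the estimate is exact; for example, for y generated by 2x^2 + 3x + 5 the function returns 2.0.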

Unbiasedness and Consistency of the Coefficient Estimates
In this section I prove that the coefficients estimated by the Finite Difference Regression method as detailed in Section 2.6 are unbiased and consistent. The properties of unbiasedness and consistency are very important for estimators (see [6], p. 393). The unbiasedness property shows that there is no systematic offset between the estimate and the actual value of a parameter; it is proved by showing that the mean of the difference between the estimate and the actual value is zero. The consistency property shows that the estimates get better with more observations; it is proved by showing that a non-zero difference between the estimate and the actual value becomes highly improbable with a large number of observations. The analysis focuses on the estimate of the leading coefficient, i.e. the one associated with the highest degree, because the asymptotic properties of the polynomial fit are most sensitive to that value.
Consider Table 5, which shows the symbolic formulas for the successive differences of the n observations y_1, y_2, …, y_n. For subsequent ease of notation, I introduce a change of variables; then, in the case of noiseless observations, every row of Table 7 evaluates to the constant m!a_m, and in particular so does the kth row in the mth Diff column shown in Table 7.
Table 7. The mth Diff column.
In the case of noisy observations, let each re-indexed observation z now be corrupted by additive zero-mean independent identically distributed random variables N_i, with E[N_i] = 0, where E[·] denotes the expectation operator (see for example [7], pp. 220-223). This addition of random variables makes the z's random variables as well, denoted now as Z. In this case, the kth row in the mth Diff column shown in Table 7 is the random variable R_k, where the last step of Equation (2) follows from the identity in Equation (1). Now, define the reduced row random variable Q_k for the mth Diff column shown in Table 7; then, substituting back into Equation (3), it is the statistics of the resulting expression that matter.

Â_n Is an Unbiased Estimate of a_m
Taking the expectation operator E[·] on both sides of Equation (6) yields Equation (7); the last step in Equation (7) follows from the fact that the random variables N_i are all zero-mean. Then, taking the expectation operator on both sides of Equation (5) and using the result from Equation (7) shows that the estimate is unbiased. Consistency is proved by upper-bounding the variance of the estimate and showing that this upper bound vanishes to 0 in the limit n → ∞.
Writing out the reduced row random variables Q_k for each row k of the mth Diff column from Equation (6), one obtains a series of (n − m) equations. Summing up these (n − m) equations column by column yields Equation (10). Squaring both sides of Equation (10) and then taking expectations gives Equation (11); here I have invoked the zero-mean, independent nature of the N's to cause the "sum of cross-terms" term SCT to vanish.
Define a new sequence of (n − m) random variables as in definition (12). By squaring both sides of definition (12) and then taking expectations, one obtains Equation (13), where the final step follows from Equation (11).
Using definition (12), Equation (9) can be rewritten as Equation (14). Then, applying the result of Appendix 1, squaring both sides of Equation (14), and taking expectations yields inequality (15), using Equation (13) and a binomial identity. Finally, by squaring both sides of Equation (5) and then taking expectations, I obtain inequality (16) from inequality (15); the last step in (16) is a rewriting of the binomial coefficient C(2m, m) in terms of factorials and a subsequent simplification.
It can be seen that the factor on the right hand side of inequality (16) is a super-exponentially decreasing function of m (above a constant) due to the presence of a large factorial factor in its denominator. This factor is evaluated for a few values of m in Table 8.


This proves the consistency property of the estimate Â_n.
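The unbiasedness claim can also be checked empirically. The following Monte Carlo sketch is my own (not part of the paper's proofs): it averages the leading-coefficient estimate over many independent noisy realizations of a known quadratic, and unbiasedness predicts an average close to the true coefficient.

```python
import random
from math import factorial
from statistics import fmean

def estimate_a2(y):
    """Leading-coefficient estimate for a quadratic: the sample
    mean of the 2nd difference row divided by 2!."""
    row = list(y)
    for _ in range(2):
        row = [b - a for a, b in zip(row, row[1:])]
    return fmean(row) / factorial(2)

def mean_estimate(trials=200, n=50, a2=2.0, noise_sd=1.0, seed=0):
    """Average estimate_a2 over many independent noisy samples of
    a2*x^2 + 3x + 5; unbiasedness predicts a result close to a2."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        y = [a2 * x * x + 3 * x + 5 + rng.gauss(0, noise_sd)
             for x in range(1, n + 1)]
        estimates.append(estimate_a2(y))
    return fmean(estimates)
```

With 200 trials of 50 points each, the averaged estimate lands very close to the true value 2.0, consistent with the unbiasedness result above.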

A Note on the Asymptotic Properties of the Fit
Inequality (16) and the values in Table 8 show that the higher the degree m of the polynomial, the lower the variance of the estimate of its leading coefficient.
This excellent property implies that the asymptotic behavior of the fit (which is most sensitive to this leading coefficient) is stable, in the sense that the value of the asymptote varies less and less with increasing degree of the fitting polynomial.

Conclusions and Further Work
In this paper I have presented a novel polynomial regression method for uniformly sampled data points called the method of Finite Difference Regression.
Unlike classical regression methods, in which the order of the best fitting polynomial model is unknown and is estimated from R² values, I have shown how the t-test can be combined with the method of finite differences to provide a more sensitive and objective measure of the order of the best fitting polynomial. Furthermore, I have carried the method of Finite Difference Regression forward, reemploying the finite differences obtained during order determination to provide estimates of the coefficients of the best fitting polynomial. I have shown that not only are these coefficients unbiased and consistent, but also that the asymptotic properties of the fit improve with increasing degree of the fitting polynomial.
At least three other avenues of further research remain to be explored: 1) The polynomial coefficients obtained by this method do not minimize the sum of the squared residuals.Is there an objective function that they do minimize?

Appendix 1
To show that the inequality holds for any sequence of random variables, the proof follows from a variance-like consideration. Given the random variables T_0, T_1, …, T_m and any random variable T, consider the sum of squares random variable.

Portions of this independent research were submitted to the 2013 Siemens Math, Science, and Technology Competition, and to the 2014 Intel Science Talent Search Competition for scholarship purposes.It received the Scientific Research Report Badge from the latter competition.

Figure 3. A binary tree of successive differences.

Figure 4. Graph of the noisy data in Table 5, with inset showing a magnified view for 10 ≤ x ≤ 20.
Table A2 compares the noisy data to the best fit polynomial estimate using the method of successive finite differences. Note that the residuals are not very small; however, their average turns out to be −0.00078, indicating a good fit. The polynomial fit is overlaid on the noisy y data in Figure 6, in which the blue line shows the noisy data and the dashed red line depicts the values of the estimated ŷ.

Figure 6. Graph of the best fitting polynomial.
With this renaming/re-indexing, Table 5 can be rewritten (note that the row indices have also changed). For any m, 0 ≤ m ≤ n − 1, the mth Diff column consists of expressions in rows indexed from m through n − 1, and using mathematical induction the form of the mth Diff column can be established. If the underlying polynomial is an mth order polynomial, it is Equation (5) that will prove the properties of unbiasedness and consistency of the estimate Â_n. Also note that, by definition (4) and from Equation (2), the reduced row random variable Q_k for the mth Diff column can be written out explicitly.

In Table 2 the 2nd difference row is all zero. Hence, one obtains the next term.
Figure 1. Graph of data in Table 1.
A. Banerjee, DOI: 10.4236/ojs.2018.81005, Open Journal of Statistics

Table 2. Second iteration of finding terms to fit the model.

Table 3. Final iteration of finding terms to fit the model.

Table 4. R² values in classical regression as a function of degree m.