Onto Orthogonal Projections in the Space of Polynomials Pn[x]

Abstract

In this article, I consider groups of projections on function spaces, more specifically the space of polynomials Pn[x]. I show that a familiar construction of projection operators allows us to project onto the subspaces of Pn[x], where the function h in Pn[x] represents the closest function to f in Pn[x] in the least-squares sense. I also demonstrate that we can generalise projections by constructing operators in Rn+1 using the metric tensor on Pn[x]. This allows one to project a polynomial function onto another by mapping it to its coefficient vector in Rn+1. The same result can also be achieved with the Kronecker Product, as detailed in this paper.

Niglio, J. (2023) Onto Orthogonal Projections in the Space of Polynomials Pn[x]. Journal of Applied Mathematics and Physics, 11, 22-45. doi: 10.4236/jamp.2023.111003.

1. Introduction

This paper is a continuation of the first two published papers [1] [2], which focus on projections in polynomial spaces and construct an operator, expressed in terms of the Kronecker Product, that projects from the subspace ${ℙ}_{k}\left[x\right]$ onto the subspace ${ℙ}_{j}\left[x\right]$ where $j\le k$ . It is also motivated by the calculations performed in [3] . Below, we first start with a motivating example from [3] and then go on to develop a more general theory.

2. Projections in a Polynomial Space ${P}_{n}\left[x\right]$ : A Motivating Example [3]

Let ${P}_{n}\left[x\right]$ be the vector space of polynomials of degree at most $n$ over some arbitrary closed interval $\left[a,b\right]$ . We choose $\mathbb{K}=ℝ$ and equip ${P}_{n}\left[x\right]$ with its standard ordered basis $\mathcal{B}\left(x\right)$ , that is

$\mathcal{B}\left(x\right):=\left\{1,x,{x}^{2},{x}^{3},\cdots ,{x}^{n}\right\}$

Traditionally, we can define the projection of a function in the following way.

Let $f\left(x\right),g\left(x\right)\in {P}_{n}\left[x\right]$ and $h\left(x\right)$ be the projection of $f\left(x\right)$ onto $g\left(x\right)$ .

Then we can define the function $h\left(x\right)$ as follows

$h\left(x\right):=\frac{{\int }_{a}^{b}\text{\hspace{0.17em}}f\left(x\right)g\left(x\right)\text{d}x}{{\int }_{a}^{b}{\left[g\left(x\right)\right]}^{2}\text{d}x}g\left(x\right)$ (2.1)

Let us consider an example.

Example 2.1 (Motivating Example [3] ). Let $f\left(x\right),g\left(x\right)\in {P}_{2}\left[x\right]$ such that $f\left(x\right)={x}^{2}$ and $g\left(x\right)=x$ , we calculate the function $h\left( x \right)$

$h\left(x\right)=\frac{{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{3}\text{d}x}{{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{2}\text{d}x}x=\frac{\frac{1}{4}{\left[{x}^{4}\right]}_{a}^{b}}{\frac{1}{3}{\left[{x}^{3}\right]}_{a}^{b}}x$ (2.2)

Suppose $\left[a,b\right]=\left[0,1\right]$ then we have

$\frac{\frac{1}{4}{\left[{x}^{4}\right]}_{0}^{1}}{\frac{1}{3}{\left[{x}^{3}\right]}_{0}^{1}}x=\frac{3}{4}x=h\left(x\right)$ (2.3)

We know that $\mathcal{B}\left(x\right)=\left\{1,x,{x}^{2}\right\}$ , clearly $h\left(x\right)\in Span\left\{x\right\}$ .
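Formula (2.1) is easy to check numerically. The following is a minimal sketch using exact rational arithmetic from Python's standard library; the ascending coefficient-list representation and the helper names are our own illustrative choices, not the paper's:

```python
from fractions import Fraction

def inner(p, q, a=0, b=1):
    """<p, q> = integral_a^b p(x) q(x) dx for ascending coefficient lists."""
    total = Fraction(0)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            m = i + j + 1  # integral of x^(i+j) is x^m / m
            total += Fraction(pi) * Fraction(qj) * (Fraction(b)**m - Fraction(a)**m) / m
    return total

def project(f, g, a=0, b=1):
    """h = (<f, g> / <g, g>) * g, as in (2.1)."""
    c = inner(f, g, a, b) / inner(g, g, a, b)
    return [c * Fraction(gi) for gi in g]

f = [0, 0, 1]        # f(x) = x^2
g = [0, 1]           # g(x) = x
h = project(f, g)    # coefficients of h(x) = (3/4) x on [0, 1]
```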

However, we note that the basis $\mathcal{B}\left(x\right)$ is not orthogonal, hence we now apply the Gram-Schmidt procedure.

Let ${u}_{1}\left(x\right)=1$ , ${u}_{2}\left(x\right)=x-pro{j}_{{u}_{1}}\left(x\right)$ , ${u}_{3}\left(x\right)={x}^{2}-pro{j}_{{u}_{1}}\left({x}^{2}\right)-pro{j}_{{u}_{2}}\left({x}^{2}\right)$ .

${u}_{2}\left(x\right)=x-\frac{{\int }_{0}^{1}\text{\hspace{0.17em}}x\text{d}x}{{\int }_{0}^{1}\text{\hspace{0.17em}}\text{d}x}=x-\frac{1}{2}$ (2.4)

${u}_{3}={x}^{2}-\frac{1}{3}-pro{j}_{{u}_{2}}\left({x}^{2}\right)$ (2.5)

therefore, we need to calculate the latter

$pro{j}_{{u}_{2}}\left({x}^{2}\right)=\frac{{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{2}\left(x-\frac{1}{2}\right)\text{d}x}{{\int }_{0}^{1}{\left(x-\frac{1}{2}\right)}^{2}\text{d}x}\left(x-\frac{1}{2}\right)$ (2.6)

$=\frac{\frac{1}{12}}{\frac{1}{12}}\left(x-\frac{1}{2}\right)$ (2.7)

$=x-\frac{1}{2}$ (2.8)

Therefore, we have

${u}_{3}\left(x\right)={x}^{2}-\frac{1}{3}-\left(x-\frac{1}{2}\right)={x}^{2}-\frac{1}{3}-x+\frac{1}{2}={x}^{2}-x+\frac{1}{6}$ (2.9)

It follows that the ordered basis ${\mathcal{B}}^{*}$ defined as

${\mathcal{B}}^{*}=\left\{1,x-\frac{1}{2},{x}^{2}-x+\frac{1}{6}\right\}$

is an orthogonal basis of ${P}_{2}\left[x\right]$ . This means that $f\left(x\right)$ can be expressed as a linear combination using the orthogonal basis ${\mathcal{B}}^{*}$ .

Let ${e}_{1}\left(x\right)=1,{e}_{2}\left(x\right)=x-\frac{1}{2},{e}_{3}\left(x\right)={x}^{2}-x+\frac{1}{6}$ . We wish to project $f\left(x\right)$ in the direction of ${e}_{i},i=1,2,3$ .

1) We project $pro{j}_{\left(1\right)}\left({x}^{2}\right)$

$pro{j}_{\left(1\right)}\left({x}^{2}\right)=\frac{{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{2}\text{d}x}{{\int }_{0}^{1}\text{\hspace{0.17em}}\text{d}x}\cdot 1$ (2.10)

$=\frac{1}{3}$ (2.11)

2) We project $pro{j}_{{e}_{2}\left(x\right)}\left({x}^{2}\right)$

$pro{j}_{{e}_{2}\left(x\right)}\left({x}^{2}\right)=\frac{{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{2}\left(x-\frac{1}{2}\right)\text{d}x}{{\int }_{0}^{1}{\left(x-\frac{1}{2}\right)}^{2}\text{d}x}\left(x-\frac{1}{2}\right)$ (2.12)

$=x-\frac{1}{2}$ (2.13)

3) We now project $pro{j}_{{e}_{3}}\left({x}^{2}\right)$

$pro{j}_{{e}_{3}\left(x\right)}\left({x}^{2}\right)=\frac{{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{2}\left({x}^{2}-x+\frac{1}{6}\right)\text{d}x}{{\int }_{0}^{1}{\left({x}^{2}-x+\frac{1}{6}\right)}^{2}\text{d}x}\left({x}^{2}-x+\frac{1}{6}\right)$ (2.14)

$=\frac{\frac{1}{180}}{\frac{1}{180}}\left({x}^{2}-x+\frac{1}{6}\right)$ (2.15)

$={x}^{2}-x+\frac{1}{6}$ (2.16)

Hence, it should be true that

${x}^{2}=\frac{1}{3}+\left(x-\frac{1}{2}\right)+\left({x}^{2}-x+\frac{1}{6}\right)$

Hence, the coefficient vector of $f\left(x\right)$ relative to ${\mathcal{B}}^{*}$ is $\left(\frac{1}{3},1,1\right)$ .
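The orthogonality of ${\mathcal{B}}^{*}$ and the coefficients just obtained can be verified mechanically. A short sketch with exact rationals (the coefficient-list representation and helper are ours):

```python
from fractions import Fraction

def inner(p, q):
    # <p,q> = integral_0^1 p(x) q(x) dx, p and q as ascending coefficient lists
    return sum(Fraction(pi) * Fraction(qj) / (i + j + 1)
               for i, pi in enumerate(p) for j, qj in enumerate(q))

e1 = [1]                       # 1
e2 = [Fraction(-1, 2), 1]      # x - 1/2
e3 = [Fraction(1, 6), -1, 1]   # x^2 - x + 1/6
f  = [0, 0, 1]                 # x^2

# pairwise orthogonality of B*
assert inner(e1, e2) == 0 and inner(e1, e3) == 0 and inner(e2, e3) == 0

# projection coefficients <f, e_i> / <e_i, e_i>
coeffs = [inner(f, e) / inner(e, e) for e in (e1, e2, e3)]
assert coeffs == [Fraction(1, 3), 1, 1]   # matches (1/3, 1, 1)
```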

This concludes our motivating example. We now want to find an operator which achieves the same result.

We can now consider a different way of getting to the result using an optimization technique in the following way.

We know that the projection of the function $f\left(x\right)={x}^{2}$ along $g\left(x\right)=x$ must lie in $Span\left\{x\right\}$ . Hence, we are looking for an optimal solution (in the least-squares sense) of the form $\stackrel{˜}{y}\left(x\right)=\alpha x$ .

To derive the constant $\alpha$ we consider variations of the optimal function by a parameter $\epsilon$ as follows

$\stackrel{˜}{y}\left(x\right)+\epsilon \alpha x,\stackrel{˜}{y}\left(x\right)={\alpha }^{*}x$

where ${\alpha }^{*}$ is the optimum choice for $\alpha$ in the least-squares sense.

Let $I\left(x,\epsilon \right)$ be the least-squares error integral

$I\left(x,\epsilon \right)={\int }_{0}^{1}{\left({x}^{2}-\left(\stackrel{˜}{y}\left(x\right)+\epsilon \alpha x\right)\right)}^{2}\text{d}x={\int }_{0}^{1}{\left({x}^{2}-{\alpha }^{*}x-\epsilon \alpha x\right)}^{2}\text{d}x$

This integral represents the squared error. We want to find the value ${\alpha }^{*}$ which minimizes the error, so we differentiate $I\left(x,\epsilon \right)$ with respect to $\epsilon$ . That is, we want to calculate

${\frac{\text{d}I}{\text{d}\epsilon }|}_{\epsilon =0}={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{0}^{1}{\left({x}^{2}-{\alpha }^{*}x-\epsilon \alpha x\right)}^{2}\text{d}x$

and set it equal to 0 to derive the optimal coefficients.

${\frac{\text{d}I}{\text{d}\epsilon }|}_{\epsilon =0}={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{0}^{1}{\left({x}^{2}-{\alpha }^{*}x-\epsilon \alpha x\right)}^{2}\text{d}x$ (2.17)

$={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{0}^{1}{\left({x}^{2}-x\left({\alpha }^{*}+\epsilon \alpha \right)\right)}^{2}\text{d}x$ (2.18)

$={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{4}-2{x}^{3}\left({\alpha }^{*}+\epsilon \alpha \right)+{x}^{2}{\left({\alpha }^{*}+\epsilon \alpha \right)}^{2}\text{d}x$ (2.19)

$={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\frac{{x}^{5}}{5}-\frac{{x}^{4}}{2}\left({\alpha }^{*}+\epsilon \alpha \right)+\frac{{x}^{3}}{3}{\left({\alpha }^{*}+\epsilon \alpha \right)}^{2}|}_{0}^{1}$ (2.20)

$={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}\frac{1}{5}-\frac{1}{2}\left({\alpha }^{*}+\epsilon \alpha \right)+\frac{1}{3}{\left({\alpha }^{*}+\epsilon \alpha \right)}^{2}$ (2.21)

$=-\frac{\alpha }{2}+\frac{2{\alpha }^{*}\alpha }{3}$ (2.22)

Setting and solving

${\frac{\text{d}I}{\text{d}\epsilon }|}_{\epsilon =0}=0$

We get

$-\frac{\alpha }{2}+\frac{2{\alpha }^{*}\alpha }{3}=0⇒\frac{2{\alpha }^{*}\alpha }{3}=\frac{\alpha }{2}$

We can now algebraically solve for ${\alpha }^{*}$

${\alpha }^{*}=\frac{3}{4}$

We therefore conclude that the projection of $f\left(x\right)$ along $g\left(x\right)$ is given by

$\stackrel{˜}{y}\left(x\right)=\frac{3}{4}x$

which is clearly in the span of $g\left(x\right)$ .
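The stationarity computation above can be replayed by expanding the error integral in closed form. A small sketch over $[0,1]$ with exact rationals (the function names are ours):

```python
from fractions import Fraction

# Expanding with integral_0^1 x^m dx = 1/(m+1):
#   J(alpha) = integral_0^1 (x^2 - alpha*x)^2 dx = 1/5 - alpha/2 + alpha^2/3
def J(alpha):
    a = Fraction(alpha)
    return Fraction(1, 5) - a / 2 + a * a / 3

# Stationarity J'(alpha) = -1/2 + (2/3) alpha = 0, matching (2.22):
alpha_star = Fraction(1, 2) / Fraction(2, 3)   # = 3/4

# The stationary point is indeed a minimum of the squared error:
assert J(alpha_star) < J(alpha_star + Fraction(1, 100))
assert J(alpha_star) < J(alpha_star - Fraction(1, 100))
```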

Thinking of $I\left(x,\epsilon \right)$ as an operator, we postulate that it is idempotent, i.e. ${I}^{2}=I$ . What do we mean by that? It means that if we project in the direction of some polynomial, repeating the projection one more time will leave the result invariant. That is

${\frac{\text{d}{I}^{2}\left(x,\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}={\frac{\text{d}}{\text{d}\epsilon }\left({\frac{\text{d}I\left(x,\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}\right)|}_{\epsilon =0}={\frac{\text{d}I\left(x,\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}$ (2.23)

where $I\left(x,\epsilon \right)={\int }_{a}^{b}{\left({x}^{2}-{\alpha }^{*}x-\epsilon \alpha x\right)}^{2}\text{d}x$ .

Applying this to our example, we should find that this operation is idempotent. We already know that

${\frac{\text{d}I\left(x,\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}=-\frac{\alpha }{2}+\frac{2{\alpha }^{*}\alpha }{3}$

By setting this to 0 we find $\stackrel{˜}{y}\left(x\right)=\frac{3}{4}x\in Span\left\{x\right\}$ .

All we need to do now is project this function again, i.e. compute ${\frac{\text{d}I\left(\stackrel{˜}{y}\left(x\right),\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}$ . We get

${\frac{\text{d}I\left(\stackrel{˜}{y}\left(x\right),\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}=\frac{\text{d}}{\text{d}\epsilon }{\int }_{0}^{1}{\left(\frac{3x}{4}-{\alpha }^{*}x-\epsilon \alpha x\right)}^{2}\text{d}x$ (2.24)

$={\int }_{0}^{1}\frac{\text{d}}{\text{d}\epsilon }{\left(\frac{3x}{4}-{\alpha }^{*}x-\epsilon \alpha x\right)}^{2}\text{d}x$ (2.25)

$={\int }_{0}^{1}2\left(\frac{3x}{4}-{\alpha }^{*}x-\epsilon \alpha x\right)\left(-\alpha x\right)\text{d}x$ (2.26)

$=-2{\int }_{0}^{1}\frac{3\alpha {x}^{2}}{4}-{\alpha }^{*}\alpha {x}^{2}\text{d}x$ (2.27)

$=-2\left[{\frac{3\alpha {x}^{3}}{12}|}_{0}^{1}-{{\alpha }^{*}\alpha \frac{{x}^{3}}{3}|}_{0}^{1}\right]$ (2.28)

$=-2\left[\frac{3\alpha }{12}-\frac{{\alpha }^{*}\alpha }{3}\right]$ (2.29)

$=-\frac{\alpha }{2}+\frac{2{\alpha }^{*}\alpha }{3}$ (2.30)

Setting our result to 0, we get the following

${\frac{\text{d}I\left(\stackrel{˜}{y}\left(x\right),\epsilon \right)}{\text{d}\epsilon }|}_{\epsilon =0}=0⇒-\frac{\alpha }{2}+\frac{2{\alpha }^{*}\alpha }{3}=0⇒{\alpha }^{*}=\frac{3}{4}$

Hence, we conclude that $\stackrel{˜}{\stackrel{˜}{y}}=\stackrel{˜}{y}=\frac{3}{4}x\in Span\left\{x\right\}$ .
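The idempotence just verified by hand can be checked mechanically: projecting the result a second time returns the same coefficient. A minimal sketch (helper names are ours) over $[0,1]$:

```python
from fractions import Fraction

def proj_coeff(f, g):
    # <f,g>/<g,g> over [0,1]; f, g are ascending coefficient lists
    inner = lambda p, q: sum(Fraction(pi) * Fraction(qj) / (i + j + 1)
                             for i, pi in enumerate(p) for j, qj in enumerate(q))
    return inner(f, g) / inner(g, g)

g = [0, 1]                      # g(x) = x
c1 = proj_coeff([0, 0, 1], g)   # first projection of x^2:  3/4
c2 = proj_coeff([0, c1], g)     # project (3/4)x again:     3/4, unchanged
assert c1 == c2
```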

3. The General Theory

In this section, we develop a more general theory of projection operators over polynomial rings of arbitrary degree. The main idea is to investigate the properties of the operator defined as follows

$I\left(f\left(x\right)\stackrel{proj}{\to }g\left(x\right),\epsilon \right)\triangleq {\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{a}^{b}{\left(f\left(x\right)-\stackrel{˜}{y}\left(x\right)-\epsilon g\left(x,\alpha \right)\right)}^{2}\text{d}x,\left[a,b\right]\subset ℝ$ (3.1)

where $\stackrel{˜}{y}\left(x\right)$ is the best function which represents the projection of $f\left(x\right)$ onto $g\left(x\right)$ and $\alpha ,{\alpha }^{*}\in {ℝ}^{\mathrm{deg}\left(g\left(x\right)\right)+1}$ . We can, of course, see that $\stackrel{˜}{y}\left(x\right)\in Spn\left\{g\left(x\right)\right\}$ .

Example 3.1. Suppose $f\left(x\right)=a{x}^{2}+bx+c\in {ℙ}_{2}\left[x\right]$ and $g\left(x\right)=mx+d\in {ℙ}_{1}\left[x\right]$ . We wish to project $f\left(x\right)$ onto $g\left(x\right)$ , i.e. $f\left(x\right)\stackrel{proj}{\to }g\left(x\right)$ . Hence, we need to solve

${\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{0}^{1}{\left(a{x}^{2}+bx+c-\left({\alpha }_{1}^{*}x+{\alpha }_{0}^{*}+\epsilon \left({\alpha }_{1}x+{\alpha }_{0}\right)\right)\right)}^{2}\text{d}x$

We can proceed in the following way

$\begin{array}{l}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{0}^{1}{\left(a{x}^{2}+bx+c-\left({\alpha }_{1}^{*}x+{\alpha }_{0}^{*}+\epsilon \left({\alpha }_{1}x+{\alpha }_{0}\right)\right)\right)}^{2}\text{d}x\\ ={\int }_{0}^{1}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(a{x}^{2}+bx+c-\left({\alpha }_{1}^{*}x+{\alpha }_{0}^{*}+\epsilon \left({\alpha }_{1}x+{\alpha }_{0}\right)\right)\right)}^{2}\text{d}x\end{array}$ (3.2)

Differentiating the integrand we get

$\begin{array}{l}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(a{x}^{2}+bx+c-\left({\alpha }_{1}^{*}x+{\alpha }_{0}^{*}+\epsilon \left({\alpha }_{1}x+{\alpha }_{0}\right)\right)\right)}^{2}\\ =-2\left(a{x}^{2}+bx+c-{\alpha }_{1}^{*}x-{\alpha }_{0}^{*}\right)\left({\alpha }_{1}x+{\alpha }_{0}\right)\end{array}$ (3.3)

therefore,

$\begin{array}{l}-2{\int }_{0}^{1}\left(a{x}^{2}+bx+c-{\alpha }_{1}^{*}x-{\alpha }_{0}^{*}\right)\left({\alpha }_{1}x+{\alpha }_{0}\right)\text{d}x\\ =-2\left[\frac{a{\alpha }_{1}}{4}+\frac{b{\alpha }_{1}}{3}+\frac{c{\alpha }_{1}}{2}-\frac{{\alpha }_{1}^{*}{\alpha }_{1}}{3}-\frac{{\alpha }_{1}{\alpha }_{0}^{*}}{2}+\frac{a{\alpha }_{0}}{3}+\frac{b{\alpha }_{0}}{2}+c{\alpha }_{0}-\frac{{\alpha }_{1}^{*}{\alpha }_{0}}{2}-{\alpha }_{0}^{*}{\alpha }_{0}\right]\end{array}$ (3.4)

Setting (3.4) to zero we get a square system of the form

$\frac{{\alpha }_{1}^{*}}{3}+\frac{{\alpha }_{0}^{*}}{2}=\frac{a}{4}+\frac{b}{3}+\frac{c}{2}$ (3.5)

$\frac{{\alpha }_{1}^{*}}{2}+{\alpha }_{0}^{*}=\frac{a}{3}+\frac{b}{2}+c$ (3.6)

Equations (3.5) and (3.6) can be written in matrix form as follows

$\left[\begin{array}{cc}\frac{1}{3}& \frac{1}{2}\\ \frac{1}{2}& 1\end{array}\right]\left[\begin{array}{c}{\alpha }_{1}^{*}\\ {\alpha }_{0}^{*}\end{array}\right]=\left[\begin{array}{c}\frac{a}{4}+\frac{b}{3}+\frac{c}{2}\\ \frac{a}{3}+\frac{b}{2}+c\end{array}\right]$ (3.7)

The above system has a unique solution since the matrix determinant is non-zero.
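As a concrete check of this example, we can rebuild the normal equations directly from the orthogonality conditions ${\int }_{0}^{1}\left(f\left(x\right)-{\alpha }_{1}^{*}x-{\alpha }_{0}^{*}\right){x}^{k}\text{d}x=0$ , $k=1,0$ , and solve them with exact rational arithmetic. A sketch for $f\left(x\right)={x}^{2}$ (variable names are ours):

```python
from fractions import Fraction as F

# Concrete instance: f(x) = x^2, i.e. a = 1, b = 0, c = 0.
a, b, c = F(1), F(0), F(0)

# Normal equations integral_0^1 (f(x) - alpha1*x - alpha0) x^k dx = 0:
#   k=1:  (1/3) alpha1 + (1/2) alpha0 = a/4 + b/3 + c/2
#   k=0:  (1/2) alpha1 +       alpha0 = a/3 + b/2 + c
A = [[F(1, 3), F(1, 2)],
     [F(1, 2), F(1)]]
r = [a / 4 + b / 3 + c / 2,
     a / 3 + b / 2 + c]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # 1/12, non-zero
alpha1 = (r[0] * A[1][1] - A[0][1] * r[1]) / det  # Cramer's rule
alpha0 = (A[0][0] * r[1] - r[0] * A[1][0]) / det
# alpha1 = 1, alpha0 = -1/6: the best linear fit of x^2 on [0,1] is x - 1/6
```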

Theorem 1. Let $I\left(f\left(x\right)\stackrel{proj}{\to }g\left(x\right),\epsilon \right)$ be the projection of $f\left(x\right)\in {ℙ}_{n}\left[x\right]$ onto $g\left(x\right)\in {ℙ}_{j}\left[x\right]$ , $0\le j\le n$ . Then

${I}^{2}\left(f\left(x\right)\stackrel{proj}{\to }g\left(x\right),\epsilon \right)=I\left(I\left(f\left(x\right)\stackrel{proj}{\to }g\left(x\right),\epsilon \right),\epsilon \right)=I\left(f\left(x\right)\stackrel{proj}{\to }g\left(x\right),\epsilon \right)$

This means that the operator is idempotent.

Proof. We first show that

$\begin{array}{l}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}\left({\int }_{a}^{b}{\left(f\left(x\right)-\stackrel{˜}{y}\left(x\right)-\epsilon g\left(x,\alpha \right)\right)}^{2}\right)\text{d}x\\ ={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(f\left(x\right)-\stackrel{˜}{y}\left(x\right)-\epsilon g\left(x,\alpha \right)\right)}^{2}\text{d}x\end{array}$ (3.8)

$=-2{\int }_{a}^{b}\left(f\left(x\right)-\stackrel{˜}{y}\left(x\right)\right)g\left(x,\alpha \right)\text{d}x$ (3.9)

$=-2{\int }_{a}^{b}\text{\hspace{0.17em}}f\left(x\right)g\left(x,\alpha \right)-\stackrel{˜}{y}\left(x\right)g\left(x,\alpha \right)\text{d}x$ (3.10)

$=-2{\int }_{a}^{b}\text{\hspace{0.17em}}f\left(x\right)g\left(x,\alpha \right)-\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}\right)g\left(x,\alpha \right)\text{d}x$ (3.11)

$=-2{\int }_{a}^{b}\text{\hspace{0.17em}}f\left(x\right)g\left(x,\alpha \right)\text{d}x+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)$ (3.12)

Suppose that $f\left(x\right)={\sum }_{i=0}^{n}\text{\hspace{0.17em}}{\beta }_{i}{x}^{i}$ then we get

$\begin{array}{l}-2{\int }_{a}^{b}\text{\hspace{0.17em}}f\left(x\right)g\left(x,\alpha \right)\text{d}x+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)\\ =-2{\int }_{a}^{b}\left(\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\beta }_{i}{x}^{i}\right)g\left(x,\alpha \right)\text{d}x+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)\end{array}$ (3.13)

$=-2\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)$ (3.14)

$=-2\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}{\left[\frac{{x}^{i+k+1}}{i+k+1}\right]}_{a}^{b}\right)+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\left[\frac{{x}^{i+k+1}}{i+k+1}\right]}_{a}^{b}\right)$ (3.15)

Evaluating the limits, we get the following result

$-2\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]\right)+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]\right)$ (3.16)

Setting Equation (3.16) = 0 we get

$\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]}_{k,i}\right)=\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}{\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]}_{k,i}\right)$

We now show that this reduces to a square system with $j+1$ rows:

$\begin{array}{l}\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{\gamma }_{\left(0,i\right)}=\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\beta }_{i}{k}_{\left(0,i\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}⋮\\ \underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{\gamma }_{\left(j,i\right)}=\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\beta }_{i}{k}_{\left(j,i\right)}\end{array}$

therefore,

$\left[\begin{array}{cccc}{\gamma }_{\left(0,0\right)}& {\gamma }_{\left(0,1\right)}& \cdots & {\gamma }_{\left(0,j\right)}\\ {\gamma }_{\left(1,0\right)}& {\gamma }_{\left(1,1\right)}& \cdots & {\gamma }_{\left(1,j\right)}\\ ⋮& ⋮& \ddots & ⋮\\ {\gamma }_{\left(j,0\right)}& {\gamma }_{\left(j,1\right)}& \cdots & {\gamma }_{\left(j,j\right)}\end{array}\right]\left[\begin{array}{c}{\alpha }_{0}^{*}\\ {\alpha }_{1}^{*}\\ ⋮\\ {\alpha }_{j}^{*}\end{array}\right]=\left[\begin{array}{c}\underset{i=0}{\overset{n}{\sum }}{\beta }_{i}{k}_{\left(0,i\right)}\\ \underset{i=0}{\overset{n}{\sum }}{\beta }_{i}{k}_{\left(1,i\right)}\\ ⋮\\ \underset{i=0}{\overset{n}{\sum }}{\beta }_{i}{k}_{\left(j,i\right)}\end{array}\right]$

Writing this in matrix form we have $\Gamma {\alpha }^{*}=K$ . This will always have a unique solution provided that $det\left(\Gamma \right)\ne 0$ . Hence, the optimum vector can be computed, and the projection polynomial can be written as

$\begin{array}{c}\stackrel{˜}{y}\left(x\right)=\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}\end{array}$
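The square system $\Gamma {\alpha }^{*}=K$ can be assembled and solved exactly for any degrees $n$ and $j$ . A sketch using Gaussian elimination over the rationals (function names and representation are ours):

```python
from fractions import Fraction as F

def mono_int(m, a=0, b=1):
    # integral_a^b x^m dx
    return (F(b) ** (m + 1) - F(a) ** (m + 1)) / (m + 1)

def project_poly(beta, j, a=0, b=1):
    """Solve Gamma alpha* = K for the degree-j projection of f = sum beta_i x^i on [a,b]."""
    n = j + 1
    G = [[mono_int(i + k, a, b) for i in range(n)] for k in range(n)]
    K = [sum(F(bi) * mono_int(i + k, a, b) for i, bi in enumerate(beta))
         for k in range(n)]
    # Gauss-Jordan elimination with exact rationals
    for col in range(n):
        piv = next(r for r in range(col, n) if G[r][col] != 0)
        G[col], G[piv] = G[piv], G[col]
        K[col], K[piv] = K[piv], K[col]
        for r in range(n):
            if r != col and G[r][col] != 0:
                m = G[r][col] / G[col][col]
                G[r] = [x - m * y for x, y in zip(G[r], G[col])]
                K[r] = K[r] - m * K[col]
    return [K[r] / G[r][r] for r in range(n)]

coeffs = project_poly([0, 0, 1], 1)   # project x^2 onto P_1[x] over [0,1]
# coeffs[0] = -1/6, coeffs[1] = 1, i.e. the optimum is x - 1/6
```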

To show that the operator is idempotent, we assume that $\stackrel{˜}{y}$ is not optimal and that there exists a polynomial ${\stackrel{˜}{y}}^{\prime }\left(x\right)={\sum }_{i=0}^{j}\text{\hspace{0.17em}}{{\alpha }^{\prime }}_{i}^{*}{x}^{i}$ which represents a better projection. Hence, we apply the operator again, noting that $deg\left({\stackrel{˜}{y}}^{\prime }\left(x\right)\right)=deg\left(\stackrel{˜}{y}\left(x\right)\right)=deg\left(g\left(x,\alpha \right)\right)$ .

$\begin{array}{l}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}\left({\int }_{a}^{b}{\left({\stackrel{˜}{y}}^{\prime }\left(x\right)-\stackrel{˜}{y}\left(x\right)-\epsilon g\left(x,\alpha \right)\right)}^{2}\right)\text{d}x\\ ={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left({\stackrel{˜}{y}}^{\prime }\left(x\right)-\stackrel{˜}{y}\left(x\right)-\epsilon g\left(x,\alpha \right)\right)}^{2}\text{d}x\end{array}$ (3.17)

$={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{{\alpha }^{\prime }}_{i}^{*}{x}^{i}-\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}-\epsilon \underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)}^{2}\text{d}x$ (3.18)

$={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(\underset{i=0}{\overset{j}{\sum }}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}-\epsilon {\alpha }_{i}\right){x}^{i}\right)}^{2}\text{d}x$ (3.19)

$=-2{\int }_{a}^{b}\left(\underset{i=0}{\overset{j}{\sum }}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}\right){x}^{i}\right)\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)\text{d}x$ (3.20)

$=-2{\int }_{a}^{b}\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}\right){\alpha }_{i}{x}^{i-1}\text{d}x$ (3.21)

$=-2\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}{\int }_{a}^{b}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}\right){\alpha }_{i}{x}^{i-1}\text{d}x$ (3.22)

$=-2\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}\right){\alpha }_{i}{\frac{{x}^{i}}{i}|}_{a}^{b}$ (3.23)

$=-2\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}\right){\alpha }_{i}\left[\frac{{b}^{i}}{i}-\frac{{a}^{i}}{i}\right]$ (3.24)

Setting Equation (3.24) = 0 gives us

$\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\left({{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}\right){\alpha }_{i}\left[\frac{{b}^{i}}{i}-\frac{{a}^{i}}{i}\right]=0$

Since the variation coefficients ${\alpha }_{i}$ are arbitrary, this leads to the conclusion that ${{\alpha }^{\prime }}_{i}^{*}-{\alpha }_{i}^{*}=0⇒{{\alpha }^{\prime }}_{i}^{*}={\alpha }_{i}^{*}$ . Hence, our optimum vector is unique and the operator is idempotent.

Lemma 2. In the polynomial ring ${ℙ}_{n}\left[x\right]$ , let $f\left(x\right)=0$ be the zero polynomial. Its projection onto $g\left(x\right)\in {ℙ}_{n}\left[x\right]$ over some interval $\left[a,b\right]$ is the optimum function $\stackrel{˜}{y}\left(x\right)=0$ . This implies the vector ${\alpha }^{*}=0\in {ℝ}^{j+1}$ , $0\le j\le n$ .

Proof. $f\left(x\right)\stackrel{proj}{\to }g\left(x\right)$ where $g\left(x\right)\in {ℙ}_{n}\left[x\right]$ over $\left[a,b\right]$ . Therefore, we compute

$I\left(0\stackrel{proj}{\to }g\left(x\right)\right)={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(0-\left(\stackrel{˜}{y}\left(x\right)+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x$ (3.25)

$={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(-\stackrel{˜}{y}\left(x\right)-\epsilon g\left(x,\alpha \right)\right)}^{2}\text{d}x$ (3.26)

$=-{\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}+\epsilon \underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)}^{2}\text{d}x$ (3.27)

$=-2{\int }_{a}^{b}\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}\right)\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)\text{d}x$ (3.28)

$=-2{\int }_{a}^{b}\text{\hspace{0.17em}}\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{\alpha }_{i}{x}^{i-1}\text{d}x$ (3.29)

$=-2\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\text{\hspace{0.17em}}{\int }_{a}^{b}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{\alpha }_{i}{x}^{i-1}\text{d}x$ (3.30)

$=-2\underset{i=1}{\overset{{\left(j+1\right)}^{2}}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{\alpha }_{i}\left[\frac{{b}^{i}-{a}^{i}}{i}\right]$ (3.31)

Since $b\ne a$ and the ${\alpha }_{i}$ are arbitrary, it follows that ${\alpha }_{i}^{*}=0$ for all $i$ .

Lemma 3. Let $-f\left(x\right)\in {ℙ}_{n}\left[x\right]$ be some polynomial with $deg\left(-f\left(x\right)\right)\le n$ . Then

$\begin{array}{l}I\left(-f\left(x\right)\stackrel{proj}{\to }g\left(x\right)\right)=-I\left(f\left(x\right)\stackrel{proj}{\to }g\left(x\right)\right),\\ g\left(x\right)\in {ℙ}_{n}\left[x\right],\text{\hspace{0.17em}}deg\left(g\left(x\right)\right)\le deg\left(-f\left(x\right)\right)\le n\end{array}$

Proof. We start with some polynomial $-f\left(x\right)\in {ℙ}_{n}\left[x\right]$ and some $g\left(x\right)$ , and compute $I\left(-f\left(x\right)\stackrel{proj}{\to }g\left(x\right)\right)$ over some interval $\left[a,b\right]$ . We shall write $I\left(-f,g\right)$ for short.

$I\left(-f,g\right)={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(-f\left(x\right)-\left(\stackrel{˜}{y}\left(x\right)+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x$ (3.32)

$={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(\underset{i=0}{\overset{n}{\sum }}-{\beta }_{i}{x}^{i}-\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}+\epsilon \underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)\right)}^{2}\text{d}x$ (3.33)

$=-2{\int }_{a}^{b}\left(\underset{i=0}{\overset{n}{\sum }}-{\beta }_{i}{x}^{i}-\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}\right)\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)\text{d}x$ (3.34)

$=2{\int }_{a}^{b}\left(\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\beta }_{i}{x}^{i}+\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}^{*}{x}^{i}\right)\left(\underset{i=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{i}{x}^{i}\right)\text{d}x$ (3.35)

$=2\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{i+k}\text{d}x\right)$ (3.36)

$=2\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}{\left[\frac{{x}^{i+k+1}}{i+k+1}\right]}_{a}^{b}\right)+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\left[\frac{{x}^{i+k+1}}{i+k+1}\right]}_{a}^{b}\right)$ (3.37)

$=2\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]\right)+2\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]\right)$ (3.38)

Setting the above equation to 0 as before we get

$\underset{k,i=0}{\overset{j}{\sum }}\left({\alpha }_{i}^{*}{\alpha }_{k}{\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]}_{k,i}\right)=-\underset{k=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}\underset{i=0}{\overset{n}{\sum }}\left({\beta }_{i}{\alpha }_{k}{\left[\frac{{b}^{i+k+1}}{i+k+1}-\frac{{a}^{i+k+1}}{i+k+1}\right]}_{k,i}\right)$

This leads to the same linear system except that the right-hand side changes sign, hence the solution is $-{\alpha }^{*}$ and we obtain $-\stackrel{˜}{y}$ . This implies that $I\left(-f,g\right)=-I\left(f,g\right)$ , where ${\alpha }^{*},-{\alpha }^{*}\in {ℝ}^{j+1}$ .
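The oddness of the projection can also be spot-checked numerically. A minimal sketch over $[0,1]$ (the test polynomial and helper are our own choices):

```python
from fractions import Fraction

inner = lambda p, q: sum(Fraction(pi) * Fraction(qj) / (i + j + 1)
                         for i, pi in enumerate(p) for j, qj in enumerate(q))

f = [0, 1, 2]              # f(x) = 2x^2 + x, an arbitrary test polynomial
neg_f = [-v for v in f]    # -f(x)
g = [0, 1]                 # g(x) = x

c_pos = inner(f, g) / inner(g, g)      # projection coefficient of f
c_neg = inner(neg_f, g) / inner(g, g)  # projection coefficient of -f
assert c_neg == -c_pos                 # I(-f, g) = -I(f, g)
```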

Theorem 4. For some fixed $g\in {ℙ}_{n}\left[x\right]$ with $deg\left(g\right)\le n$ , let $f,{f}^{\prime }$ be distinct polynomials in ${ℙ}_{n}\left[x\right]$ such that $deg\left(f\right),deg\left({f}^{\prime }\right)\le n$ . Then we have

$I\left(f,g\right)+I\left({f}^{\prime },g\right)=I\left(f+{f}^{\prime },g\right)$

Proof.

$\begin{array}{c}I\left(f,g\right)+I\left({f}^{\prime },g\right)={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(f\left(x\right)-\left(\stackrel{˜}{y}\left(x\right)+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left({f}^{\prime }\left(x\right)-\left(\stackrel{˜}{{y}^{\prime }}\left(x\right)+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x\end{array}$ (3.39)

$={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(f\left(x\right)-\left(\stackrel{˜}{y}\left(x\right)+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}+{\left({f}^{\prime }\left(x\right)-\left(\stackrel{˜}{{y}^{\prime }}\left(x\right)+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x$ (3.40)

$=-2{\int }_{a}^{b}\left(f\left(x\right)-\stackrel{˜}{y}\left(x\right)\right)g+\left({f}^{\prime }\left(x\right)-\stackrel{˜}{{y}^{\prime }}\left(x\right)\right)g\text{d}x$ (3.41)

$=-2{\int }_{a}^{b}\left(f\left(x\right)-\stackrel{˜}{y}\left(x\right)+{f}^{\prime }\left(x\right)-\stackrel{˜}{{y}^{\prime }}\left(x\right)\right)g\text{d}x$ (3.42)

$=-2{\int }_{a}^{b}\left(\left(f\left(x\right)+{f}^{\prime }\left(x\right)\right)-\left(\stackrel{˜}{y}\left(x\right)+\stackrel{˜}{{y}^{\prime }}\left(x\right)\right)\right)g\text{d}x$ (3.43)

Let $\stackrel{˜}{\psi }=\stackrel{˜}{y}\left(x\right)+\stackrel{˜}{{y}^{\prime }}\left(x\right)$ such that $deg\left(\stackrel{˜}{\psi }\right)=deg\left(\stackrel{˜}{y}\left(x\right)\right)=deg\left(\stackrel{˜}{{y}^{\prime }}\left(x\right)\right)$ . Hence, we have

$-2{\int }_{a}^{b}\left(\left(f\left(x\right)+{f}^{\prime }\left(x\right)\right)-\left(\stackrel{˜}{y}\left(x\right)+\stackrel{˜}{{y}^{\prime }}\left(x\right)\right)\right)g\text{d}x=-2{\int }_{a}^{b}\left(\left(f\left(x\right)+{f}^{\prime }\left(x\right)\right)-\stackrel{˜}{\psi }\right)g\text{d}x$ (3.44)

$={\int }_{a}^{b}{\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\left(\left(f\left(x\right)+{f}^{\prime }\left(x\right)\right)-\left(\stackrel{˜}{\psi }+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x$ (3.45)

$={\frac{\text{d}}{\text{d}\epsilon }|}_{\epsilon =0}{\int }_{a}^{b}{\left(\left(f\left(x\right)+{f}^{\prime }\left(x\right)\right)-\left(\stackrel{˜}{\psi }+\epsilon g\left(x,\alpha \right)\right)\right)}^{2}\text{d}x$ (3.46)

$=I\left(f+{f}^{\prime },g\right)$ (3.47)

We have that $\stackrel{˜}{\psi }\in {ℙ}_{n}\left[x\right]$ with optimum vector $\left({\psi }_{0}^{*},\cdots ,{\psi }_{m}^{*}\right)\in {ℝ}^{m+1}$ where $m=deg\left(\stackrel{˜}{\psi }\right)$ .

By the above lemmas and theorems, we clearly have the following results

• Commutativity clearly it is commutative $I\left(f+{f}^{\prime },g\right)=I\left({f}^{\prime }+f,g\right)$ .

• Associativity It should also clear the sum is associative since the sum of functions in ${ℙ}_{n}\left[x\right]$ is associative.

• Identity As demonstrated before we have shown that choosing ${f}^{\prime }=0$ implies that ${\stackrel{˜}{y}}^{\prime }\left(x\right)=0$ therefore we can conclude that $I\left(f+{f}^{\prime },g\right)=I\left(f+0,g\right)=I\left(f,g\right)$ .

• Inverse: we have also shown that choosing ${f}^{\prime }=-f$ gives $I\left({f}^{\prime },g\right)=I\left(-f,g\right)=-I\left(f,g\right)$ , therefore $I\left(f,g\right)+I\left(-f,g\right)=I\left(f,g\right)-I\left(f,g\right)=I\left(f-f,g\right)=I\left(0,g\right)$ .

Hence, we are now in a position to speak of a group structure for these projectors on polynomial rings.

Question 2: What about projections onto orthogonal subspaces?

To answer this question we think of ${ℙ}_{n}\left[x\right]$ as a vector space with the standard basis $\mathcal{B}=\left\{1,x,\cdots ,{x}^{n}\right\}$ as before. We know that for any $f,{f}^{\prime }\in {ℙ}_{n}\left[x\right]$ we have $f+{f}^{\prime }\in {ℙ}_{n}\left[x\right]$ , since $deg\left(f+{f}^{\prime }\right)\le max\left\{deg\left(f\right),deg\left({f}^{\prime }\right)\right\}$ , and $cf\in {ℙ}_{n}\left[x\right]$ $\forall c\in ℝ$ . Next, we define the following map

$\phi :{ℙ}_{n}\left[x\right]\to {ℝ}^{n+1}\text{\hspace{0.17em}};\text{\hspace{0.17em}}\phi \left(f\right)=\phi \left(\underset{k=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\alpha }_{k}{x}^{k}\right)=\left({\alpha }_{0},\cdots ,{\alpha }_{n}\right)\in {ℝ}^{n+1}$

Clearly, $\phi$ is a bijection. Now, given some element of $\mathcal{B}$ , we wish to construct its orthogonal subspace, i.e. given some ${x}^{k}\in \mathcal{B},0\le k\le n$ , we construct the subspace ${x}_{\perp }^{k}$ , which we define as follows

${x}_{\perp }^{k}:=\left\{g\in {ℙ}_{n}\left[x\right]:{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}g\text{d}x=0\right\}$

Working with the integral, we get the following result

${\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}g\text{d}x={\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}\right)\text{d}x\text{\hspace{0.17em}},\text{\hspace{0.17em}}1\le j\le n$ (3.48)

$={\int }_{a}^{b}\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{k+q}\right)\text{d}x$ (3.49)

$=\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k+q}\text{d}x$ (3.50)

$=\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{\left[\frac{{x}^{k+q+1}}{k+q+1}\right]}_{a}^{b}$ (3.51)

$=\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}\left[\frac{{b}^{k+q+1}}{k+q+1}-\frac{{a}^{k+q+1}}{k+q+1}\right]$ (3.52)

$=\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}\left[\frac{{b}^{k+q+1}-{a}^{k+q+1}}{k+q+1}\right]$ (3.53)

Hence, we seek to solve the equation

$\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}\left[\frac{{b}^{k+q+1}-{a}^{k+q+1}}{k+q+1}\right]=0\text{\hspace{0.17em}},\text{\hspace{0.17em}}\forall 1\le j\le n$

To make the notation a bit lighter, we set ${\gamma }_{\left(k,q\right)}=\frac{{b}^{k+q+1}-{a}^{k+q+1}}{k+q+1}$ , so we solve the more concise equation

$\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{\gamma }_{\left(k,q\right)}=0\text{\hspace{0.17em}},\text{\hspace{0.17em}}\forall 0\le j\le n$
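The solution set of this single linear equation can be sketched computationally. The following illustrative Python snippet (the helper names `gamma` and `orthogonal_basis` are mine, not the paper's; exact rational arithmetic is used and integer endpoints $a,b$ are assumed, with ${\gamma }_{\left(k,0\right)}\ne 0$ ) constructs a spanning set of coefficient vectors $\beta$ satisfying the equation above:

```python
from fractions import Fraction

def gamma(k, q, a, b):
    # gamma_(k,q) = (b^(k+q+1) - a^(k+q+1)) / (k+q+1), for integer a, b
    return Fraction(b**(k + q + 1) - a**(k + q + 1), k + q + 1)

def orthogonal_basis(k, j, a=0, b=1):
    """Coefficient vectors (beta_0, ..., beta_j) spanning the solutions of
    sum_q beta_q * gamma_(k,q) = 0, i.e. polynomials of degree <= j whose
    integral against x^k over [a, b] vanishes.  Assumes gamma_(k,0) != 0."""
    g = [gamma(k, q, a, b) for q in range(j + 1)]
    basis = []
    for q in range(1, j + 1):
        beta = [Fraction(0)] * (j + 1)
        # gamma_(k,0) * beta_q + gamma_(k,q) * beta_0 = 0 by construction
        beta[0], beta[q] = -g[q], g[0]
        basis.append(beta)
    return basis

# each basis vector satisfies the orthogonality equation exactly
for beta in orthogonal_basis(k=2, j=3):
    assert sum(bq * gamma(2, q, 0, 1) for q, bq in enumerate(beta)) == 0
```

Since the equation is one linear constraint on $j+1$ coefficients, the snippet returns $j$ independent solutions, matching the dimension of the solution subspace.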

Hence, we derive the required coefficient vector $\beta =\left({\beta }_{0},\cdots ,{\beta }_{j}\right)$ for a given j, as shown in Table 1. Therefore, given the basis $\mathcal{B}=\left\{1,x,{x}^{2},\cdots ,{x}^{k},\cdots ,{x}^{n}\right\}$ , we can assign to each element of $\mathcal{B}$ a set ${M}_{k}$ defined as follows

${M}_{k}:=\left\{h\in {ℙ}_{n}\left[x\right]:0\le \mathrm{deg}\left(h\right)\le n\text{\hspace{0.17em}};\text{\hspace{0.17em}}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x=0\right\}$

The polynomials in ${M}_{k}$ are all orthogonal to the basis element ${x}^{k}\in \mathcal{B}$ . Each $h\in {M}_{k}$ is mapped via $\phi$ to its coefficient vector, which lies in the corresponding solution subspace in the table above.

Theorem 5. Each set ${M}_{k}$ forms a free module [4] over $ℝ$ in ${ℙ}_{n}\left[x\right]$ .

Proof. We know that each h in ${M}_{k}$ for some $0\le k\le n$ has the form ${\sum }_{q=0}^{j}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}=h$ such that ${\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x=0,\text{\hspace{0.17em}}\forall h\in {M}_{k}$ . Let $h,{h}^{\prime }\in {M}_{k}$ then we have

$\begin{array}{c}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}\left(h+{h}^{\prime }\right)\text{d}x={\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h+{x}^{k}{h}^{\prime }\text{d}x\\ ={\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x+{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}{h}^{\prime }\text{d}x\\ =\text{\hspace{0.17em}}0\end{array}$

This implies that $h+{h}^{\prime }\in {M}_{k}$ $\forall h,{h}^{\prime }\in {M}_{k}$ . It is easy to verify the other properties, hence we can conclude that $\left({M}_{k},+\right)$ is abelian. Defining $ℝ×{M}_{k}\to {M}_{k}$ such that $\left(r,h\right)\to rh$ implies that

${\int }_{a}^{b}\text{\hspace{0.17em}}r{x}^{k}h\text{d}x=r{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall h\in {M}_{k}$

• It is true that $r\left(h+{h}^{\prime }\right)=rh+r{h}^{\prime }\in {M}_{k}$ since

$r{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}\left(h+{h}^{\prime }\right)\text{d}x=r{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x+r{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}{h}^{\prime }\text{d}x=0$

• $\left(r+s\right)h=rh+sh\in {M}_{k}$ since

$\left(r+s\right){\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x=r{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x+s{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x=0$

• $\left(rs\right)h=r\left(sh\right)\in {M}_{k}$ since $s{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{k}h\text{d}x=0$

• ${1}_{ℝ}h=h\in {M}_{k}$

The generating set for ${M}_{k}$ is contained in $\mathcal{B}$ and is linearly independent, therefore ${M}_{k}$ is a free module; since its base ring is a field, ${M}_{k}$ is in fact a vector space.

Table 1. j-degree equations and their solution subspaces in ${ℝ}^{n+1}$ .

Now, let $M$ be the set

$M:=\underset{k=0}{\overset{n}{\cup }}\text{\hspace{0.17em}}{M}_{k}$

Let $f\in {P}_{n}\left[x\right]$ . We wish to project f into each ${M}_{k}\text{\hspace{0.17em}},\text{\hspace{0.17em}}k=0,\cdots ,n$ , such that we minimise the squared error for each k. We then take the infimum of these errors. Hence, we can define the vector rejection

$Rej\left(f\right):=\left\{{\stackrel{˜}{y}}_{\perp }\in M:Se\left(f,{\stackrel{˜}{y}}_{\perp }\right)\text{\hspace{0.17em}}\text{is minimum}\right\}$

where

$Se\left(f,{\stackrel{˜}{y}}_{\perp }\right):=inf\left\{S{e}_{k}\left(f,h\right):S{e}_{k}\left(f,h\right)=〈f-h,f-h〉,\forall h\in {M}_{k},k=0,\cdots ,n\right\}$

Suppose we have some f such that

$f=\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{x}^{p}\in {ℙ}_{n}\left[x\right]$

then we seek

${\int }_{a}^{b}{\left(\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{x}^{p}\right)}_{k}{\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}\right)}_{k}\text{d}x=0\text{\hspace{0.17em}}\forall j,k=0,\cdots ,n\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{s}\text{.t}\text{.}\text{\hspace{0.17em}}{\int }_{a}^{b}{\left[f-h\right]}^{2}\text{d}x\text{\hspace{0.17em}}\text{is min for each}\text{\hspace{0.17em}}{M}_{k}$

${\int }_{a}^{b}{\left(\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{x}^{p}\right)}_{k}{\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}\right)}_{k}\text{d}x={\int }_{a}^{b}{\left(\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{x}^{p+q}\right)}_{k}\text{d}x$ (3.54)

$=\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{p+q}\text{d}x$ (3.55)

$=\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{\frac{{x}^{p+q+1}}{p+q+1}|}_{a}^{b}$ (3.56)

$=\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{\gamma }_{\left(p,q\right)}$ (3.57)

Setting ${\sum }_{p=0}^{s}\text{\hspace{0.17em}}{\sum }_{q=0}^{j}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{\gamma }_{\left(p,q\right)}=0$ implies that the solution set forms a subspace

$\varphi \left({\beta }_{0},\cdots ,{\beta }_{j}\right)\subset {ℝ}^{j+1}\text{\hspace{0.17em}},\text{\hspace{0.17em}}0\le j\le n$

$\begin{array}{l}{\int }_{a}^{b}{\left[{\left(\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{x}^{p}\right)}_{k}-{\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}\right)}_{k}\right]}^{2}\text{d}x\\ ={\int }_{a}^{b}{\left(\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{x}^{p}\right)}^{2}\text{d}x-2{\int }_{a}^{b}\left(\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{x}^{p}\right)\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}\right)\text{d}x\end{array}$ (3.58)

$+{\int }_{a}^{b}{\left(\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{q}{x}^{q}\right)}^{2}\text{d}x$ (3.59)

$={\int }_{a}^{b}\text{\hspace{0.17em}}\underset{p,d=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\alpha }_{d}{x}^{p+d}\text{d}x-2{\int }_{a}^{b}\text{\hspace{0.17em}}\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{x}^{p+q}\text{d}x$ (3.60)

$+{\int }_{a}^{b}\text{\hspace{0.17em}}\underset{m,q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{m}{\beta }_{q}{x}^{m+q}\text{d}x$ (3.61)

$=\underset{p,d=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\alpha }_{d}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{p+d}\text{d}x-2\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{p+q}\text{d}x$ (3.62)

$+\underset{m,q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{m}{\beta }_{q}{\int }_{a}^{b}\text{\hspace{0.17em}}{x}^{m+q}\text{d}x$ (3.63)

$=\underset{p,d=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\alpha }_{d}{\gamma }_{\left(p,d\right)}-2\underset{p=0}{\overset{s}{\sum }}\text{\hspace{0.17em}}\underset{q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\alpha }_{p}{\beta }_{q}{\gamma }_{\left(p,q\right)}+\underset{m,q=0}{\overset{j}{\sum }}\text{\hspace{0.17em}}{\beta }_{m}{\beta }_{q}{\gamma }_{\left(m,q\right)}$ (3.64)

$=\text{\hspace{0.17em}}\psi \left({\beta }_{q},{\beta }_{m}\right)$ (3.65)

Using the $\left(j+1\right)$ -dimensional gradient operator $\nabla ={\left({\partial }_{{\beta }_{0}},{\partial }_{{\beta }_{1}},\cdots ,{\partial }_{{\beta }_{j}}\right)}^{\text{T}}$ and setting $\nabla \psi \left({\beta }_{q},{\beta }_{m}\right)=0$ leads to a homogeneous system of $j+1$ equations of the form

$\nabla \psi \left({\beta }_{q},{\beta }_{m}\right)={\left[\begin{array}{c}{\sum }_{m=0}^{j}{\beta }_{m}{\gamma }_{\left(m,0\right)}-{\sum }_{p=0}^{s}{\alpha }_{p}{\gamma }_{\left(p,0\right)}\\ {\sum }_{m=0}^{j}{\beta }_{m}{\gamma }_{\left(m,1\right)}-{\sum }_{p=0}^{s}{\alpha }_{p}{\gamma }_{\left(p,1\right)}\\ ⋮\\ {\sum }_{m=0}^{j}{\beta }_{m}{\gamma }_{\left(m,j\right)}-{\sum }_{p=0}^{s}{\alpha }_{p}{\gamma }_{\left(p,j\right)}\end{array}\right]}_{\left[j+1,1\right]}=\left[\begin{array}{c}0\\ 0\\ ⋮\\ 0\end{array}\right]$

It is clear that putting together $\nabla \psi \left({\beta }_{q},{\beta }_{m}\right)=0$ and the linear equation $\varphi \left({\beta }_{0},\cdots ,{\beta }_{j}\right)$ leads to a $\left(j+2\right)×\left(j+1\right)$ linear system. Given the linear system generated by $\nabla \psi \left({\beta }_{q},{\beta }_{m}\right)$ , it is feasible to reduce this to a square system by combining any one pair of the $j+1$ equations, thereby reducing the overall system to a $\left(j+1\right)×\left(j+1\right)$ square system.

$\left[\begin{array}{c}{\sum }_{p=0}{\alpha }_{p}{\gamma }_{\left(p,0\right)},\cdots ,{\sum }_{p=0}{\alpha }_{p}{\gamma }_{\left(p,j\right)}\\ {\gamma }_{0,1},{\gamma }_{1,1},{\gamma }_{2,1},\cdots ,{\gamma }_{j,1}\\ {\gamma }_{0,2},{\gamma }_{1,2},{\gamma }_{2,2},\cdots ,{\gamma }_{j,2}\\ ⋮,\cdots ,\cdots ,\cdots ,⋮\\ {\gamma }_{0,j},{\gamma }_{1,j},{\gamma }_{2,j},\cdots ,{\gamma }_{j,j}\end{array}\right]\left[\begin{array}{c}{\beta }_{0}\\ {\beta }_{1}\\ ⋮\\ {\beta }_{j}\end{array}\right]=\left[\begin{array}{c}0\\ {\sum }_{p=0}^{s}{\alpha }_{p}{\gamma }_{\left(p,0\right)}\\ ⋮\\ {\sum }_{p=0}^{s}{\alpha }_{p}{\gamma }_{\left(p,j\right)}\end{array}\right]$

Since the system is square, i.e. $\left(j+1\right)×\left(j+1\right)$ , a unique solution exists provided the coefficient determinant is non-zero.
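The normal equations $\nabla \psi =0$ can be solved directly. As a sketch (assuming the interval $\left[0,1\right]$ so that ${\gamma }_{\left(p,q\right)}=1/\left(p+q+1\right)$ ; the function names are mine, and the extra orthogonality constraint $\varphi$ is omitted), the following Python code solves ${\sum }_{m}{\beta }_{m}{\gamma }_{\left(m,i\right)}={\sum }_{p}{\alpha }_{p}{\gamma }_{\left(p,i\right)}$ , $i=0,\cdots ,j$ , exactly with rational arithmetic:

```python
from fractions import Fraction

def gamma(p, q):
    # gamma_(p,q) = 1/(p+q+1) over [a, b] = [0, 1]
    return Fraction(1, p + q + 1)

def project(alpha, j):
    """Least-squares projection of f = sum_p alpha_p x^p onto P_j[x] over
    [0, 1]: solve the normal equations (grad psi = 0)
        sum_m beta_m gamma_(m,i) = sum_p alpha_p gamma_(p,i), i = 0..j."""
    n = j + 1
    A = [[gamma(m, i) for m in range(n)] for i in range(n)]
    rhs = [sum(Fraction(ap) * gamma(p, i) for p, ap in enumerate(alpha))
           for i in range(n)]
    # exact Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
            rhs[r] -= factor * rhs[col]
    beta = [Fraction(0)] * n
    for r in reversed(range(n)):
        tail = sum(A[r][c] * beta[c] for c in range(r + 1, n))
        beta[r] = (rhs[r] - tail) / A[r][r]
    return beta

# a polynomial already in P_j[x] is its own projection
assert project([1, 2, 3], j=2) == [1, 2, 3]
# the best line approximating x^2 on [0, 1] is -1/6 + x
assert project([0, 0, 1], j=1) == [Fraction(-1, 6), 1]
```

The coefficient matrix here is exactly the matrix of inner products ${\gamma }_{\left(m,i\right)}$ , so its determinant is non-zero and the solution is unique, as claimed above.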

4. A Different Approach

In this section, we show that performing projections over polynomial spaces ${ℙ}_{n}\left[x\right]$ can also be done via a bijective map into ${ℝ}^{n+1}$ . We start by defining the following mapping

$\xi :{ℙ}_{n}\left[ℝ\right]\to {ℝ}^{n+1}$

such that

$\xi \left({p}_{n}\left(x\right)\right)=\xi \left(\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{a}_{i}{x}^{i}\right)={\left({a}_{0},{a}_{1},\cdots ,{a}_{n}\right)}^{\text{T}}\in {ℝ}^{n+1}$

Lemma 6. The mapping $\xi$ is bijective.

Proof. $\xi$ is injective since

$\xi \left(\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{a}_{i}{x}^{i}\right)=\xi \left(\underset{i=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{{a}^{\prime }}_{i}{x}^{i}\right)⇒{\left({{a}^{\prime }}_{0},{{a}^{\prime }}_{1},\cdots ,{{a}^{\prime }}_{n}\right)}^{\text{T}}={\left({a}_{0},{a}_{1},\cdots ,{a}_{n}\right)}^{\text{T}}$

Also, given any vector $\left({a}_{0},\cdots ,{a}_{n}\right)$ in ${ℝ}^{n+1}$ , $\exists \text{\hspace{0.17em}}{p}_{n}\left(x\right)\in {ℙ}_{n}\left[ℝ\right]$ such that ${p}_{n}\left(x\right)={\sum }_{i=0}^{n}\text{\hspace{0.17em}}{a}_{i}{x}^{i}$ , so $\xi$ is also surjective. Hence, $\xi$ is bijective.

Theorem 7. First Theorem

Given ${\left[{p}_{n}\left(x\right)\right]}_{\mathcal{B}}$ and ${\left[{p}_{k}\left(x\right)\right]}_{\mathcal{B}}$ , where $k<n$ , w.r.t. basis $\mathcal{B}$ , the projection of ${\left[{p}_{n}\left(x\right)\right]}_{\mathcal{B}}$ onto ${\left[{p}_{k}\left(x\right)\right]}_{\mathcal{B}}$ relative to basis $\mathcal{B}$ is given by

${\xi }^{-1}\left(\frac{{g}_{ij}{\xi }^{i}\left({p}_{n}\left(x\right)\right){\xi }^{j}\left({p}_{k}\left(x\right)\right)}{{g}_{ij}{\xi }^{i}\left({p}_{k}\left(x\right)\right){\xi }^{j}\left({p}_{k}\left(x\right)\right)}\xi \left({p}_{k}\left(x\right)\right)\right)$

where we define

${g}_{ij}\triangleq {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{i}\left(x\right){f}_{j}\left(x\right)\text{d}x\text{\hspace{0.17em}};\text{\hspace{0.17em}}i,j=0,\cdots ,n$

where ${f}_{i}\left(x\right),{f}_{j}\left(x\right)\in \mathcal{B}=\left\{1,x,\cdots ,{x}^{n}\right\}$ .

Proof. Let ${p}_{n}\left(x\right),{p}_{k}\left(x\right)\in {ℙ}_{n}\left[ℝ\right]$ such that ${p}_{n}\left(x\right)={\sum }_{r=0}^{n}\text{\hspace{0.17em}}{\alpha }_{r}{x}^{r}$ and ${p}_{k}\left(x\right)={\sum }_{h=0}^{k}\text{\hspace{0.17em}}{\beta }_{h}{x}^{h}$ . Therefore we have

$\xi \left(\underset{r=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\alpha }_{r}{x}^{r}\right)={\left({\alpha }_{0},\cdots ,{\alpha }_{n}\right)}^{\text{T}}\in {ℝ}^{n+1}$

$\xi \left(\underset{h=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\beta }_{h}{x}^{h}\right)={\left({\beta }_{0},\cdots ,{\beta }_{k},{\beta }_{k+1}=0,\cdots ,{\beta }_{n}=0\right)}^{\text{T}}\in {ℝ}^{n+1}\text{\hspace{0.17em}},\text{\hspace{0.17em}}k<n$

First, let’s use the standard basis on ${ℙ}_{n}\left[ℝ\right]$ to be $\mathcal{B}:=\left\{1,x,\cdots ,{x}^{n}\right\}$ . This implies that

1) ${g}_{00}={\int }_{0}^{1}\text{\hspace{0.17em}}1\cdot 1\text{d}x=1$ 2) ${g}_{01}={\int }_{0}^{1}\text{\hspace{0.17em}}1\cdot x\text{d}x=\frac{1}{2}={g}_{10}$ 3) ${g}_{02}={\int }_{0}^{1}\text{\hspace{0.17em}}1\cdot {x}^{2}\text{d}x=\frac{1}{3}={g}_{20}$

4) ${g}_{11}={\int }_{0}^{1}\text{\hspace{0.17em}}x\cdot x\text{d}x=\frac{1}{3}$ 5) ${g}_{12}={\int }_{0}^{1}\text{\hspace{0.17em}}x\cdot {x}^{2}\text{d}x=\frac{1}{4}={g}_{21}$ 6) ${g}_{22}={\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{2}\cdot {x}^{2}\text{d}x=\frac{1}{5}$

7) $\cdots$ 8) $\cdots$ 9) $\cdots$

This gives a closed-form expression for the entries of the metric tensor

${g}_{ij}\triangleq \frac{1}{i+j+1}\text{\hspace{0.17em}},\text{\hspace{0.17em}}i,j=0,\cdots ,n$
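For the interval $\left[0,1\right]$ this metric tensor is, entrywise, the $\left(n+1\right)×\left(n+1\right)$ Hilbert matrix. A minimal Python sketch verifying the values computed above (the variable names are mine):

```python
from fractions import Fraction

n = 3
# metric tensor on the monomial basis over [0, 1]: g_ij = 1/(i+j+1)
G = [[Fraction(1, i + j + 1) for j in range(n + 1)] for i in range(n + 1)]

assert G[0][0] == 1                  # integral of 1 * 1
assert G[0][1] == Fraction(1, 2)     # integral of 1 * x
assert G[1][1] == Fraction(1, 3)     # integral of x * x
assert G[2][2] == Fraction(1, 5)     # integral of x^2 * x^2
# symmetry: g_ij = g_ji
assert all(G[i][j] == G[j][i]
           for i in range(n + 1) for j in range(n + 1))
```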

where we have used the fact that ${g}_{ij}={g}_{ji}$ for $i\ne j$ i.e. the symmetric property of the metric tensor. Hence, we can write

$\begin{array}{l}\frac{{g}_{ij}{\xi }^{i}\left({p}_{n}\left(x\right)\right){\xi }^{j}\left({p}_{k}\left(x\right)\right)}{{g}_{ij}{\xi }^{i}\left({p}_{k}\left(x\right)\right){\xi }^{j}\left({p}_{k}\left(x\right)\right)}\xi \left({p}_{k}\left(x\right)\right)\\ =\frac{{g}_{00}{\xi }_{n}^{0}{\xi }_{k}^{0}+{g}_{01}{\xi }_{n}^{0}{\xi }_{k}^{1}+\cdots +{g}_{21}{\xi }_{n}^{2}{\xi }_{k}^{1}+\cdots +{g}_{n,n}{\xi }_{n}^{n}{\xi }_{k}^{n}}{{g}_{00}{\xi }_{k}^{0}{\xi }_{k}^{0}+{g}_{01}{\xi }_{k}^{0}{\xi }_{k}^{1}+\cdots +{g}_{10}{\xi }_{k}^{1}{\xi }_{k}^{0}+\cdots +{g}_{n,n}{\xi }_{k}^{n}{\xi }_{k}^{n}}\xi \left({p}_{k}\left(x\right)\right)\end{array}$ (4.1)

$=\frac{{g}_{00}{\alpha }_{0}{\beta }_{0}+{g}_{01}{\alpha }_{0}{\beta }_{1}+\cdots +{g}_{10}{\alpha }_{1}{\beta }_{0}+\cdots +{g}_{n,n}{\alpha }_{n}{\beta }_{n}}{{g}_{00}{\beta }_{0}{\beta }_{0}+{g}_{01}{\beta }_{0}{\beta }_{1}+\cdots +{g}_{10}{\beta }_{1}{\beta }_{0}+\cdots +{g}_{n,n}{\beta }_{n}{\beta }_{n}}\xi \left({p}_{k}\left(x\right)\right)$ (4.2)

$=\frac{{g}_{00}{\alpha }_{0}{\beta }_{0}+{g}_{01}{\alpha }_{0}{\beta }_{1}+\cdots +{g}_{10}{\alpha }_{1}{\beta }_{0}+\cdots +{g}_{n,k}{\alpha }_{n}{\beta }_{k}}{{g}_{00}{\beta }_{0}{\beta }_{0}+{g}_{01}{\beta }_{0}{\beta }_{1}+\cdots +{g}_{10}{\beta }_{1}{\beta }_{0}+\cdots +{g}_{k,k}{\beta }_{k}{\beta }_{k}}\xi \left({p}_{k}\left(x\right)\right)$ (4.3)

$=\frac{{\alpha }_{0}{\beta }_{0}+1/2{\alpha }_{0}{\beta }_{1}+\cdots +1/2{\alpha }_{1}{\beta }_{0}+\cdots +1/\left(n+k+1\right){\alpha }_{n}{\beta }_{k}}{{\beta }_{0}{\beta }_{0}+1/2{\beta }_{0}{\beta }_{1}+\cdots +1/2{\beta }_{1}{\beta }_{0}+\cdots +1/\left(2k+1\right){\beta }_{k}{\beta }_{k}}\xi \left({p}_{k}\left(x\right)\right)$ (4.4)

$=\frac{{\alpha }_{0}{\beta }_{0}+1/2{\alpha }_{0}{\beta }_{1}+\cdots +1/2{\alpha }_{1}{\beta }_{0}+\cdots +1/\left(n+k+1\right){\alpha }_{n}{\beta }_{k}}{{\beta }_{0}{\beta }_{0}+1/2{\beta }_{0}{\beta }_{1}+\cdots +1/2{\beta }_{1}{\beta }_{0}+\cdots +1/\left(2k+1\right){\beta }_{k}{\beta }_{k}}\left\{\begin{array}{c}{\beta }_{0}\\ {\beta }_{1}\\ ⋮\\ {\beta }_{k}\\ 0\\ ⋮\\ 0\end{array}\right\}$ (4.5)

$=\left\{\begin{array}{c}\frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{j}{\beta }_{0}}{i+j+1}}{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ \frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{j}{\beta }_{1}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ ⋮\\ \frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{j}{\beta }_{k}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ 0\\ ⋮\\ 0\end{array}\right\}$ (4.6)

Now, we can compute the integral version of this projection as follows

$\frac{{\int }_{0}^{1}\left({\sum }_{r=0}^{n}{\alpha }_{r}{x}^{r}\right)\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)\text{d}x}{{\int }_{0}^{1}{\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)}^{2}\text{d}x}\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)$

$\begin{array}{l}\frac{{\int }_{0}^{1}\left({\sum }_{r=0}^{n}{\alpha }_{r}{x}^{r}\right)\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)\text{d}x}{{\int }_{0}^{1}{\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)}^{2}\text{d}x}\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)\\ =\frac{{\int }_{0}^{1}\text{\hspace{0.17em}}{\sum }_{r=0}^{n}{\sum }_{h=0}^{k}{\alpha }_{r}{\beta }_{h}{x}^{r+h}\text{d}x}{{\int }_{0}^{1}\text{\hspace{0.17em}}{\sum }_{h,s=0}^{k}{\beta }_{h}{\beta }_{s}{x}^{h+s}\text{d}x}\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)\end{array}$ (4.7)

$=\frac{{\sum }_{r=0}^{n}{\sum }_{h=0}^{k}{\alpha }_{r}{\beta }_{h}{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{r+h}\text{d}x}{{\sum }_{h,s=0}^{k}{\beta }_{h}{\beta }_{s}{\int }_{0}^{1}\text{\hspace{0.17em}}{x}^{h+s}\text{d}x}\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)$ (4.8)

$=\frac{{\sum }_{r=0}^{n}{\sum }_{h=0}^{k}{\alpha }_{r}{\beta }_{h}\cdot \frac{1}{r+h+1}}{{\sum }_{h,s=0}^{k}{\beta }_{h}{\beta }_{s}\cdot \frac{1}{h+s+1}}\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)$ (4.9)

$=\frac{{\sum }_{r=0}^{n}{\sum }_{h=0}^{k}\frac{{\alpha }_{r}{\beta }_{h}}{r+h+1}}{{\sum }_{h,s=0}^{k}\frac{{\beta }_{h}{\beta }_{s}}{h+s+1}}\left({\sum }_{h=0}^{k}{\beta }_{h}{x}^{h}\right)$ (4.10)

Using the bijective mapping $\xi :{ℙ}_{n}\left[ℝ\right]\to {ℝ}^{n+1}$ we get the desired result by comparison.

$\begin{array}{l}\xi \left(\frac{{\sum }_{r=0}^{n}{\sum }_{h=0}^{k}\frac{{\alpha }_{r}{\beta }_{h}}{r+h+1}}{{\sum }_{h,s=0}^{k}\frac{{\beta }_{h}{\beta }_{s}}{h+s+1}}\left(\underset{h=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\beta }_{h}{x}^{h}\right)\right)\\ =\left(\frac{{\sum }_{r=0}^{n}{\sum }_{h=0}^{k}\frac{{\alpha }_{r}{\beta }_{h}}{r+h+1}}{{\sum }_{h,s=0}^{k}\frac{{\beta }_{h}{\beta }_{s}}{h+s+1}}\right)\left[\begin{array}{c}{\beta }_{0}\\ {\beta }_{1}\\ ⋮\\ {\beta }_{k}\\ 0\\ ⋮\\ 0\end{array}\right]=\left\{\begin{array}{c}\frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{j}{\beta }_{0}}{i+j+1}}{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ \frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{j}{\beta }_{1}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ ⋮\\ \frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{j}{\beta }_{k}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ 0\\ ⋮\\ 0\end{array}\right\}\end{array}$
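The formula in Equation (4.10) is easy to check numerically. A minimal Python sketch (the function name `project_onto` is mine; exact rationals over $\left[0,1\right]$ ):

```python
from fractions import Fraction

def project_onto(alpha, beta):
    """Projection of p_n (coeffs alpha) onto p_k (coeffs beta) over [0, 1]
    via Eq. (4.10): the scalar <p_n, p_k> / <p_k, p_k> times p_k."""
    num = sum(Fraction(a * b, r + h + 1)
              for r, a in enumerate(alpha) for h, b in enumerate(beta))
    den = sum(Fraction(b1 * b2, h + s + 1)
              for h, b1 in enumerate(beta) for s, b2 in enumerate(beta))
    c = num / den
    return [c * b for b in beta]  # coefficients of the projected polynomial

# projecting x^2 onto x over [0, 1] gives (3/4) x
assert project_onto([0, 0, 1], [0, 1]) == [0, Fraction(3, 4)]
# projecting a polynomial onto itself returns it unchanged
assert project_onto([0, 1], [0, 1]) == [0, 1]
```

The first check reproduces the classical inner-product computation $〈{x}^{2},x〉/〈x,x〉=\left(1/4\right)/\left(1/3\right)=3/4$ .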

Theorem 8. Second Theorem

The first theorem can be written with the Kronecker Product as follows

${\xi }^{-1}\left(\frac{\left({\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right)G}{{g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}}{\stackrel{¯}{\xi }}_{n}\right)$

where ${\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}$ is the Kronecker Product and $G=\left({g}_{ij}\right)$ with ${g}_{ij}={g}_{ji}$ due to symmetry.

${g}_{ij}\triangleq \left\{\begin{array}{cccc}{\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{0}\left(x\right){f}_{0}\left(x\right)\text{d}x& {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{0}\left(x\right){f}_{1}\left(x\right)\text{d}x& \cdots & {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{0}\left(x\right){f}_{n}\left(x\right)\text{d}x\\ {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{1}\left(x\right){f}_{0}\left(x\right)\text{d}x& {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{1}\left(x\right){f}_{1}\left(x\right)\text{d}x& \cdots & {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{1}\left(x\right){f}_{n}\left(x\right)\text{d}x\\ ⋮& ⋮& \ddots & ⋮\\ {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{n}\left(x\right){f}_{0}\left(x\right)\text{d}x& {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{n}\left(x\right){f}_{1}\left(x\right)\text{d}x& \cdots & {\int }_{0}^{1}\text{\hspace{0.17em}}{f}_{n}\left(x\right){f}_{n}\left(x\right)\text{d}x\end{array}\right\}$

where ${f}_{i}\left(x\right),{f}_{j}\left(x\right)\in \mathcal{B}$ .

Proof. Let ${p}_{n}\left(x\right),{p}_{k}\left(x\right)\in {ℙ}_{n}\left[ℝ\right]$ such that ${p}_{n}\left(x\right)={\sum }_{r=0}^{n}\text{\hspace{0.17em}}{\alpha }_{r}{x}^{r}$ and ${p}_{k}\left(x\right)={\sum }_{h=0}^{k}\text{\hspace{0.17em}}{\beta }_{h}{x}^{h}$ . Therefore we have

$\xi \left(\underset{r=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}{\alpha }_{r}{x}^{r}\right)={\left({\alpha }_{0},\cdots ,{\alpha }_{n}\right)}^{\text{T}}\in {ℝ}^{n+1}$

$\xi \left(\underset{h=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\beta }_{h}{x}^{h}\right)={\left({\beta }_{0},\cdots ,{\beta }_{k},{\beta }_{k+1}=0,\cdots ,{\beta }_{n}=0\right)}^{\text{T}}\in {ℝ}^{n+1}\text{\hspace{0.17em}},\text{\hspace{0.17em}}k<n$

Then we have

$\begin{array}{c}{\left[{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right]}_{n×n}=\left[\begin{array}{ccccccc}{\beta }_{0}{\beta }_{0}& {\beta }_{0}{\beta }_{1}& \cdots & {\beta }_{0}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ {\beta }_{1}{\beta }_{0}& {\beta }_{1}{\beta }_{1}& \cdots & {\beta }_{1}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ ⋮& ⋮& \ddots & ⋮& ⋮& \ddots & ⋮\\ {\beta }_{k}{\beta }_{0}& {\beta }_{k}{\beta }_{1}& \cdots & {\beta }_{k}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ 0& 0& \cdots & 0& {0}_{k+1}& \cdots & {0}_{n}\\ ⋮& ⋮& \ddots & ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 0& {0}_{k+1}& \cdots & {0}_{n}\end{array}\right]\\ \equiv \left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}& {M}_{k+1,n-k}=0\\ {M}_{n-k,k+1}=0& {M}_{n-k,n-k}=0\end{array}\right)\end{array}$

${\left[{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right]}_{n×n}^{\text{T}}=\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}& {M}_{k,n-k}=0\\ {M}_{n-k,k}=0& {M}_{n-k,n-k}=0\end{array}\right)={\left[{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right]}_{n×n}$

We know that G is an $\left(n+1\right)×\left(n+1\right)$ matrix given by

$\begin{array}{c}G=\left[\begin{array}{ccccc}{g}_{00}& {g}_{01}& {g}_{02}& \cdots & {g}_{0n}\\ {g}_{10}& {g}_{11}& {g}_{12}& \cdots & {g}_{1n}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {g}_{n0}& {g}_{n1}& {g}_{n2}& \cdots & {g}_{nn}\end{array}\right]\\ =\left[\begin{array}{ccccc}1& 1/2& 1/3& \cdots & 1/\left(n+1\right)\\ 1/2& 1/3& 1/4& \cdots & 1/\left(n+2\right)\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 1/\left(n+1\right)& 1/\left(n+2\right)& 1/\left(n+3\right)& \cdots & 1/\left(2n+1\right)\end{array}\right]\end{array}$

Therefore we have

$\begin{array}{c}{\left[{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right]}_{n×n}G={\left[\begin{array}{ccccccc}{\beta }_{0}{\beta }_{0}& {\beta }_{0}{\beta }_{1}& \cdots & {\beta }_{0}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ {\beta }_{1}{\beta }_{0}& {\beta }_{1}{\beta }_{1}& \cdots & {\beta }_{1}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ ⋮& ⋮& \ddots & ⋮& ⋮& \ddots & ⋮\\ {\beta }_{k}{\beta }_{0}& {\beta }_{k}{\beta }_{1}& \cdots & {\beta }_{k}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ 0& 0& \cdots & 0& {0}_{k+1}& \cdots & {0}_{n}\\ ⋮& ⋮& \ddots & ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 0& {0}_{k+1}& \cdots & {0}_{n}\end{array}\right]}_{n×n}\\ \text{\hspace{0.17em}}×{\left[\begin{array}{cccc}1& 1/2& \cdots & 1/\left(n+1\right)\\ 1/2& 1/3& \cdots & 1/\left(n+2\right)\\ ⋮& ⋮& \ddots & ⋮\\ 1/\left(n+1\right)& 1/\left(n+2\right)& \cdots & 1/\left(2n+1\right)\end{array}\right]}_{n×n}\end{array}$ (4.11)

$={\left[\begin{array}{cccc}{\sum }_{r=0}^{k}{\beta }_{0}{\beta }_{r}{g}_{r0}& {\sum }_{r=0}^{k}{\beta }_{0}{\beta }_{r}{g}_{r1}& \cdots & {\sum }_{r=0}^{k}{\beta }_{0}{\beta }_{r}{g}_{rn}\\ {\sum }_{r=0}^{k}{\beta }_{1}{\beta }_{r}{g}_{r0}& {\sum }_{r=0}^{k}{\beta }_{1}{\beta }_{r}{g}_{r1}& \cdots & {\sum }_{r=0}^{k}{\beta }_{1}{\beta }_{r}{g}_{rn}\\ ⋮& ⋮& \ddots & ⋮\\ {\sum }_{r=0}^{k}{\beta }_{k}{\beta }_{r}{g}_{r0}& {\sum }_{r=0}^{k}{\beta }_{k}{\beta }_{r}{g}_{r1}& \cdots & {\sum }_{r=0}^{k}{\beta }_{k}{\beta }_{r}{g}_{rn}\\ 0& 0& 0& 0\\ ⋮& ⋮& ⋮& ⋮\\ 0& 0& 0& 0\end{array}\right]}_{n×n}$ (4.12)

$={\left[\begin{array}{cccc}{\sum }_{r=0}^{k}\frac{{\beta }_{0}{\beta }_{r}}{r+1}& {\sum }_{r=0}^{k}\frac{{\beta }_{0}{\beta }_{r}}{r+2}& \cdots & {\sum }_{r=0}^{k}\frac{{\beta }_{0}{\beta }_{r}}{k+r+1}\\ {\sum }_{r=0}^{k}\frac{{\beta }_{1}{\beta }_{r}}{r+1}& {\sum }_{r=0}^{k}\frac{{\beta }_{1}{\beta }_{r}}{r+2}& \cdots & {\sum }_{r=0}^{k}\frac{{\beta }_{1}{\beta }_{r}}{k+r+1}\\ ⋮& ⋮& \ddots & ⋮\\ {\sum }_{r=0}^{k}\frac{{\beta }_{k}{\beta }_{r}}{r+1}& {\sum }_{r=0}^{k}\frac{{\beta }_{k}{\beta }_{r}}{r+2}& \cdots & {\sum }_{r=0}^{k}\frac{{\beta }_{k}{\beta }_{r}}{k+r+1}\\ 0& 0& 0& 0\\ ⋮& ⋮& ⋮& ⋮\\ 0& 0& 0& 0\end{array}\right]}_{n×n}$ (4.13)

therefore, we have

$\frac{1}{{g}_{ij}{\xi }^{i}{\xi }^{j}}\left[\begin{array}{cccc}{\sum }_{r=0}^{k}\frac{{\beta }_{0}{\beta }_{r}}{r+1}& {\sum }_{r=0}^{k}\frac{{\beta }_{0}{\beta }_{r}}{r+2}& \cdots & {\sum }_{r=0}^{k}\frac{{\beta }_{0}{\beta }_{r}}{k+r+1}\\ {\sum }_{r=0}^{k}\frac{{\beta }_{1}{\beta }_{r}}{r+1}& {\sum }_{r=0}^{k}\frac{{\beta }_{1}{\beta }_{r}}{r+2}& \cdots & {\sum }_{r=0}^{k}\frac{{\beta }_{1}{\beta }_{r}}{k+r+1}\\ ⋮& ⋮& \ddots & ⋮\\ {\sum }_{r=0}^{k}\frac{{\beta }_{k}{\beta }_{r}}{r+1}& {\sum }_{r=0}^{k}\frac{{\beta }_{k}{\beta }_{r}}{r+2}& \cdots & {\sum }_{r=0}^{k}\frac{{\beta }_{k}{\beta }_{r}}{k+r+1}\\ 0& 0& 0& 0\\ ⋮& ⋮& ⋮& ⋮\\ 0& 0& 0& 0\end{array}\right]\left[\begin{array}{c}{\alpha }_{0}\\ {\alpha }_{1}\\ ⋮\\ {\alpha }_{k}\\ ⋮\\ {\alpha }_{n}\end{array}\right]=\left\{\begin{array}{c}\frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{0}{\beta }_{j}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ \frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{1}{\beta }_{j}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ ⋮\\ \frac{{\sum }_{i=0}^{n}{\sum }_{j=0}^{k}\frac{{\alpha }_{i}{\beta }_{k}{\beta }_{j}}{i+j+1}}{{\sum }_{i=0}^{k}{\sum }_{j=0}^{k}\frac{{\beta }_{i}{\beta }_{j}}{i+j+1}}\\ 0\\ ⋮\\ 0\end{array}\right\}$
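The matrix form above can likewise be sketched in code. The following illustrative snippet (the name `kron_project` is mine; $\beta$ is zero-padded as in the proof) applies the outer-product matrix scaled by $1/\left({g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}\right)$ to the coefficient vector of ${p}_{n}$ and recovers the same projection as the direct integral formula:

```python
from fractions import Fraction

def kron_project(alpha, beta):
    """Sketch of the Theorem 8 form: apply the matrix
    (xi_k (x) xi_k) G / (g_ij xi_k^i xi_k^j) to the coefficient vector
    of p_n, with beta zero-padded to length n+1."""
    n = len(alpha)
    b = list(beta) + [0] * (n - len(beta))            # zero-padding
    G = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    den = sum(G[i][j] * b[i] * b[j] for i in range(n) for j in range(n))
    out = []
    for i in range(n):
        # row i of the outer-product matrix (b b^T) times G
        row = [sum(b[i] * b[r] * G[r][c] for r in range(n))
               for c in range(n)]
        out.append(sum(row[c] * alpha[c] for c in range(n)) / den)
    return out

# agrees with the direct integral formula: x^2 onto x gives (3/4) x
assert kron_project([0, 0, 1], [0, 1]) == [0, Fraction(3, 4), 0]
# projecting x onto x (zero-padded) is the identity on its span
assert kron_project([0, 1, 0], [0, 1]) == [0, 1, 0]
```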

Lemma 9. The matrix

$\frac{\left({\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right)G}{{g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}}$

in Theorem 8 is normalised.


Suppose that we have two polynomials $g,{g}^{\prime }$ of order $k\le n$ such that $g=\gamma {g}^{\prime }$ for some $\gamma \in ℝ\\left\{0\right\}$ . Then, we propose that

$\frac{\left({\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right)G}{{g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}}=\frac{\left({\stackrel{¯}{{\xi }^{\prime }}}_{k}\otimes {\stackrel{¯}{{\xi }^{\prime }}}_{k}\right)G}{{g}_{ij}{{\xi }^{\prime }}_{k}^{i}{{\xi }^{\prime }}_{k}^{j}}$

and further that projectors can be constructed by normalising the coefficient vector ${\stackrel{¯}{\xi }}_{k}$ , which I will denote by ${\stackrel{^}{\xi }}_{k}$ , with ${\stackrel{^}{\xi }}_{k}\in {S}^{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}$ , where ${S}^{k}$ is the hypersphere in ${ℝ}^{k+1}$ .

${S}^{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}=\left\{{\stackrel{^}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\in {ℝ}^{n+1}\text{\hspace{0.17em}}|\text{\hspace{0.17em}}\underset{r=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\stackrel{^}{\beta }}_{r}^{2}=1\right\}$

where ${\stackrel{^}{\beta }}_{r}^{2}$ are the squared components of the normalised coefficient vector.

Proof. Let $f,g\in {ℙ}_{n}\left[ℝ\right]$ such that $deg\left(g\right)\le deg\left(f\right)$ , with $f\left(x\right)={\sum }_{k=0}^{n}\text{\hspace{0.17em}}{\alpha }_{k}{x}^{k}$ and $g\left(x\right)={\sum }_{r=0}^{k}\text{\hspace{0.17em}}{\beta }_{r}{x}^{r}$ . Using the mapping $\xi$ , we find that $\xi \left({\sum }_{r=0}^{k}\text{\hspace{0.17em}}{\beta }_{r}{x}^{r}\right)=\left({\beta }_{0},{\beta }_{1},\cdots ,{\beta }_{k}\right)$ , therefore ${\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}=\left({\beta }_{0},{\beta }_{1},\cdots ,{\beta }_{k},0,\cdots ,0\right)\in {ℝ}^{n+1}$ . We can now calculate its Kronecker product.

${\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}=\left[\begin{array}{ccccccc}{\beta }_{0}{\beta }_{0}& {\beta }_{0}{\beta }_{1}& \cdots & {\beta }_{0}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ {\beta }_{1}{\beta }_{0}& {\beta }_{1}{\beta }_{1}& \cdots & {\beta }_{1}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ ⋮& ⋮& \ddots & ⋮& ⋮& \ddots & ⋮\\ {\beta }_{k}{\beta }_{0}& {\beta }_{k}{\beta }_{1}& \cdots & {\beta }_{k}{\beta }_{k}& {0}_{k+1}& \cdots & {0}_{n}\\ 0& 0& \cdots & 0& {0}_{k+1}& \cdots & {0}_{n}\\ ⋮& ⋮& \ddots & ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 0& {0}_{k+1}& \cdots & {0}_{n}\end{array}\right]$

Therefore, we have

${‖{\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}‖}_{F}^{2}=\underset{r=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}\underset{p=0}{\overset{n}{\sum }}{|{\beta }_{r}{\beta }_{p}|}^{2}={‖{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}‖}^{2}$

Performing the calculation we get

${‖{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}‖}^{2}=\underset{r=0}{\overset{n}{\sum }}\text{\hspace{0.17em}}\underset{p=0}{\overset{n}{\sum }}{|{\beta }_{r}{\beta }_{p}|}^{2}$ (4.14)

$=\underset{r=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\beta }_{r}^{4}+2\underset{r=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}\underset{s=r+1}{\overset{k}{\sum }}{|{\beta }_{r}{\beta }_{s}|}^{2}$ (4.15)

$={\left(\underset{r=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\beta }_{r}{\beta }_{r}\right)}^{2}$ (4.16)

This implies that $‖{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}‖={\sum }_{r=0}^{k}\text{\hspace{0.17em}}{\beta }_{r}{\beta }_{r}=Tr\left({\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right)=Tr\left({\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\right)$ .
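This identity can be checked numerically. The sketch below (hypothetical coefficients, with the vector Kronecker product realised as the outer-product matrix displayed earlier) verifies that the Frobenius norm of ${\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}$ equals both ${\sum }_{r}{\beta }_{r}{\beta }_{r}$ and the trace.

```python
import numpy as np

beta = np.array([0.5, -1.0, 2.0])    # hypothetical coefficients of g
P = np.outer(beta, beta)             # xi_k (x) xi_k as an outer-product matrix

lhs = np.linalg.norm(P)              # Frobenius norm of the product
rhs = np.sum(beta * beta)            # sum_r beta_r beta_r
print(np.isclose(lhs, rhs))          # True
print(np.isclose(np.trace(P), rhs))  # True
```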

It is clear that the matrix

$\frac{{\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}}{Tr\left({\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\right)}=\frac{{\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}}{{\delta }_{ij}{\beta }_{i}{\beta }_{j}}$

is normalised. Hence, we may conclude that

$\frac{\left({\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right)G}{{g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}}$

is also normalised. It is also clear that if $g,{g}^{\prime }$ are such that $g=\gamma {g}^{\prime }$ with $\gamma \in ℝ\\left\{0\right\}$ , then both give $±{\stackrel{^}{\xi }}_{k}{\oplus }_{s=k+1}^{n}\left\{0\right\}$ , which implies that $±{\stackrel{^}{\xi }}_{k}$ are antipodal points on the hypersphere in ${ℝ}^{k+1}$ . We can therefore see that

$\frac{\left({\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}\right)G}{{g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}}=\frac{\left({\stackrel{¯}{{\xi }^{\prime }}}_{k}\otimes {\stackrel{¯}{{\xi }^{\prime }}}_{k}\right)G}{{g}_{ij}{{\xi }^{\prime }}_{k}^{i}{{\xi }^{\prime }}_{k}^{j}}$
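This scale invariance admits a simple numerical check (Euclidean metric ${g}_{ij}={\delta }_{ij}$ , hypothetical coefficients): rescaling $g$ by any nonzero $\gamma$ leaves the trace-normalised projector unchanged, even though the normalised vectors themselves are antipodal.

```python
import numpy as np

def normalised_projector(b):
    # Outer product divided by its trace; the Euclidean case g_ij = delta_ij.
    P = np.outer(b, b)
    return P / np.trace(P)

beta = np.array([1.0, 2.0, 2.0])   # hypothetical coefficients of g
gamma = -3.5                       # any nonzero scalar
print(np.allclose(normalised_projector(beta),
                  normalised_projector(gamma * beta)))  # True
```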

Theorem 10. Given two polynomials $g,{g}^{\prime }\in {ℙ}_{n}\left[ℝ\right]$ such that $deg\left(g\right)=k\wedge deg\left({g}^{\prime }\right)={k}^{\prime }\le n$ then, given some $f\in {ℙ}_{n}\left[ℝ\right]$ , the projection of f onto $g+{g}^{\prime }$ is given by the following expression

${\xi }^{-1}\left(\left(\frac{{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}}{{g}_{ij}{\xi }_{k}^{i}{\xi }_{k}^{j}}+\frac{{\stackrel{¯}{\xi }}_{k}\otimes {{\stackrel{¯}{\xi }}^{\prime }}_{k}}{{g}_{ij}{\xi }_{k}^{i}{{\xi }^{\prime }}_{k}^{j}}+\frac{{{\stackrel{¯}{\xi }}^{\prime }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}}{{g}_{ij}{{\xi }^{\prime }}_{k}^{i}{\xi }_{k}^{j}}+\frac{{{\stackrel{¯}{\xi }}^{\prime }}_{k}\otimes {{\stackrel{¯}{\xi }}^{\prime }}_{k}}{{g}_{ij}{{\xi }^{\prime }}_{k}^{i}{{\xi }^{\prime }}_{k}^{j}}\right)G\right)$

Proof. Let $g,{g}^{\prime }\in {ℙ}_{n}\left[ℝ\right]$ with $deg\left(g\right)=k\wedge deg\left({g}^{\prime }\right)={k}^{\prime }\le n$ , where $g={\sum }_{r=0}^{k}\text{\hspace{0.17em}}{\beta }_{r}{x}^{r}$ and ${g}^{\prime }={\sum }_{r=0}^{{k}^{\prime }}\text{\hspace{0.17em}}{{\beta }^{\prime }}_{r}{x}^{r}$ . We know that $deg\left(g+{g}^{\prime }\right)=max\left(deg\left(g\right),deg\left({g}^{\prime }\right)\right)$ , assuming the leading coefficients do not cancel. We also define $f={\sum }_{q=0}^{n}\text{\hspace{0.17em}}{\alpha }_{q}{x}^{q}$ . We, therefore, have

$g+{g}^{\prime }=\underset{r=0}{\overset{k}{\sum }}\text{\hspace{0.17em}}{\beta }_{r}{x}^{r}+\underset{r=0}{\overset{{k}^{\prime }}{\sum }}\text{\hspace{0.17em}}{{\beta }^{\prime }}_{r}{x}^{r}=\underset{r=0}{\overset{\mathrm{min}\left\{k,{k}^{\prime }\right\}}{\sum }}\left({\beta }_{r}+{{\beta }^{\prime }}_{r}\right){x}^{r}+\underset{s=\mathrm{min}\left\{k,{k}^{\prime }\right\}+1}{\overset{\mathrm{max}\left\{k,{k}^{\prime }\right\}}{\sum }}\text{\hspace{0.17em}}{\beta }_{s}{x}^{s}$

Therefore, we have

$\xi \left(g+{g}^{\prime }\right)=\xi \left(\underset{r=0}{\overset{\mathrm{min}\left\{k,{k}^{\prime }\right\}}{\sum }}\left({\beta }_{r}+{{\beta }^{\prime }}_{r}\right){x}^{r}+\underset{s=\mathrm{min}\left\{k,{k}^{\prime }\right\}+1}{\overset{\mathrm{max}\left\{k,{k}^{\prime }\right\}}{\sum }}\text{\hspace{0.17em}}{\beta }_{s}{x}^{s}\right)$ (4.17)

$={\left(\left({\beta }_{0}+{{\beta }^{\prime }}_{0}\right),\cdots ,\left({\beta }_{\mathrm{min}\left\{k,{k}^{\prime }\right\}}+{{\beta }^{\prime }}_{\mathrm{min}\left\{k,{k}^{\prime }\right\}}\right),{\beta }_{\mathrm{min}\left\{k,{k}^{\prime }\right\}+1},\cdots ,{\beta }_{\mathrm{max}\left\{k,{k}^{\prime }\right\}}\right)}^{\text{T}}\in {ℝ}^{\mathrm{max}\left\{k,{k}^{\prime }\right\}+1}$ (4.18)

I will denote this vector ${\stackrel{¯}{\xi }}_{k,{k}^{\prime }}$ , and its zero-padded form by ${\stackrel{¯}{\xi }}_{k,{k}^{\prime }}{\oplus }_{s=\mathrm{max}\left\{k,{k}^{\prime }\right\}+1}^{n}\left\{0\right\}$ . We now consider its Kronecker product with itself,

${\stackrel{¯}{\xi }}_{k,{k}^{\prime }}{\oplus }_{s=\mathrm{max}\left\{k,{k}^{\prime }\right\}+1}^{n}\left\{0\right\}\otimes {\stackrel{¯}{\xi }}_{k,{k}^{\prime }}{\oplus }_{s=\mathrm{max}\left\{k,{k}^{\prime }\right\}+1}^{n}\left\{0\right\}$

The vector ${\stackrel{¯}{\xi }}_{k,{k}^{\prime }}{\oplus }_{s=\mathrm{max}\left\{k,{k}^{\prime }\right\}+1}^{n}\left\{0\right\}$ can be written as

${\stackrel{¯}{\xi }}_{k,{k}^{\prime }}{\oplus }_{s=\mathrm{max}\left\{k,{k}^{\prime }\right\}+1}^{n}\left\{0\right\}=\xi \left(g\right)+\xi \left({g}^{\prime }\right)={\stackrel{¯}{\xi }}_{k}\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}+{{\stackrel{¯}{\xi }}^{\prime }}_{{k}^{\prime }}\left({g}^{\prime }\right){\oplus }_{s={k}^{\prime }+1}^{n}\left\{0\right\}$

For clearer notation, we write $A={\stackrel{¯}{\xi }}_{k}\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}$ and $B={{\stackrel{¯}{\xi }}^{\prime }}_{{k}^{\prime }}\left({g}^{\prime }\right){\oplus }_{s={k}^{\prime }+1}^{n}\left\{0\right\}$ . Therefore, by the distributive law of the Kronecker Product, we get the following matrix.

$\left(A+B\right)\otimes \left(A+B\right)=A\otimes A+A\otimes B+B\otimes A+B\otimes B$

which in matrix form gives us

$\begin{array}{l}\left(A+B\right)\otimes \left(A+B\right)\\ =\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}& {M}_{k+1,n-k}=0\\ {M}_{n-k,k+1}=0& {M}_{n-k,n-k}=0\end{array}\right)+\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{{k}^{\prime }}\otimes {\stackrel{¯}{\xi }}_{k}& {M}_{{k}^{\prime }+1,n-k}=0\\ {M}_{n-{k}^{\prime },k+1}=0& {M}_{n-{k}^{\prime },n-k}=0\end{array}\right)\end{array}$ (4.19)

$+\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{{k}^{\prime }}& {M}_{k+1,n-{k}^{\prime }}=0\\ {M}_{n-k,{k}^{\prime }+1}=0& {M}_{n-k,n-{k}^{\prime }}=0\end{array}\right)+\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{{k}^{\prime }}\otimes {\stackrel{¯}{\xi }}_{{k}^{\prime }}& {M}_{{k}^{\prime }+1,n-{k}^{\prime }}=0\\ {M}_{n-{k}^{\prime },{k}^{\prime }+1}=0& {M}_{n-{k}^{\prime },n-{k}^{\prime }}=0\end{array}\right)$ (4.20)

Clearly, ${\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}$ is of order $\left(k+1,k+1\right)$ , ${\stackrel{¯}{\xi }}_{{k}^{\prime }}\otimes {\stackrel{¯}{\xi }}_{k}$ is of order $\left({k}^{\prime }+1,k+1\right)$ , ${\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{{k}^{\prime }}$ is of order $\left(k+1,{k}^{\prime }+1\right)$ , and ${\stackrel{¯}{\xi }}_{{k}^{\prime }}\otimes {\stackrel{¯}{\xi }}_{{k}^{\prime }}$ is of order $\left({k}^{\prime }+1,{k}^{\prime }+1\right)$ .
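The expansion in (4.19)-(4.20) rests on the bilinearity of the product. With padded vectors and the outer-product realisation (degrees and coefficients below are hypothetical), it can be checked directly:

```python
import numpy as np

n, k, kp = 5, 3, 1
A = np.concatenate([np.array([1.0, -2.0, 0.5, 3.0]), np.zeros(n - k)])  # padded xi(g)
B = np.concatenate([np.array([4.0, 1.0]), np.zeros(n - kp)])            # padded xi(g')

# (A+B)(x)(A+B) expands into the four cross terms of (4.19)-(4.20).
lhs = np.outer(A + B, A + B)
rhs = np.outer(A, A) + np.outer(A, B) + np.outer(B, A) + np.outer(B, B)
print(np.allclose(lhs, rhs))  # True
```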

Normalising $A\otimes A+A\otimes B+B\otimes A+B\otimes B$ term by term and multiplying by the metric tensor, we get

$\left(\frac{A\otimes A}{{g}_{ij}{\xi }^{i}{\xi }^{j}}+\frac{A\otimes B}{{g}_{ij}{\xi }^{i}{{\xi }^{\prime }}^{j}}+\frac{B\otimes A}{{g}_{ij}{{\xi }^{\prime }}^{i}{\xi }^{j}}+\frac{B\otimes B}{{g}_{ij}{{\xi }^{\prime }}^{i}{{\xi }^{\prime }}^{j}}\right)G$

It can be verified that

$\begin{array}{l}\left(\frac{A\otimes A}{{g}_{ij}{\xi }^{i}{\xi }^{j}}+\frac{A\otimes B}{{g}_{ij}{\xi }^{i}{{\xi }^{\prime }}^{j}}+\frac{B\otimes A}{{g}_{ij}{{\xi }^{\prime }}^{i}{\xi }^{j}}+\frac{B\otimes B}{{g}_{ij}{{\xi }^{\prime }}^{i}{{\xi }^{\prime }}^{j}}\right)G\\ =\left(\frac{A\otimes A}{Tr\left(\left(A\otimes A\right)G\right)}+\frac{A\otimes B}{Tr\left(\left(A\otimes B\right)G\right)}+\frac{B\otimes A}{Tr\left(\left(B\otimes A\right)G\right)}+\frac{B\otimes B}{Tr\left(\left(B\otimes B\right)G\right)}\right)G\end{array}$

This is equal to

$\frac{1}{{g}_{ij}\left({\sum }_{{m}^{\prime }=1}^{2}{\sum }_{m=1}^{2}{\xi }_{m}^{i}{\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes {{\xi }^{\prime }}_{{m}^{\prime }}^{j}{\oplus }_{s=k+1}^{n}\left\{0\right\}\right)}\left(A\otimes A+B\otimes A+A\otimes B+B\otimes B\right)G$
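In the Euclidean case ( $G=I$ , so ${g}_{ij}{\xi }^{i}{{\xi }^{\prime }}^{j}$ is the ordinary dot product) the equality between the two normalisations used above reduces to $Tr\left(\left(A\otimes B\right)G\right)={g}_{ij}{\xi }^{i}{{\xi }^{\prime }}^{j}$ . A sketch with hypothetical padded vectors:

```python
import numpy as np

n = 4
A = np.array([1.0, 2.0, 0.0, 0.0, 0.0])   # padded xi(g), hypothetical
B = np.array([3.0, -1.0, 4.0, 0.0, 0.0])  # padded xi(g'), hypothetical
G = np.eye(n + 1)                          # Euclidean metric tensor

# g_ij xi^i xi'^j is the dot product A.B, and it equals Tr((A (x) B) G).
print(np.isclose(A @ G @ B, np.trace(np.outer(A, B) @ G)))  # True
```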

5. The Group

To formulate the group structure, we focus our attention on the subspaces of ${ℙ}_{n}\left[ℝ\right]$ , defined as follows

${\Omega }_{k}:=\left\{g\in {ℙ}_{n}\left[ℝ\right]\text{\hspace{0.17em}}:\text{\hspace{0.17em}}deg\left(g\right)\le k\right\}$

It is clear that ${\Omega }_{k}\subset {ℙ}_{n}\left[ℝ\right]$ . Then, we also know that

$\stackrel{¯}{\xi }\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}=\left({\beta }_{0},{\beta }_{1},\cdots ,{\beta }_{k},0,\cdots ,0\right)\in {ℝ}^{n+1}\text{\hspace{0.17em}}\forall g\in {\Omega }_{k}$

Projectors on ${\Omega }_{k}$ can then be constructed, and we write ${G}_{{\Omega }_{k}}$ for the set of all projectors associated with the subspace ${\Omega }_{k}$ .

Theorem 11. The set ${G}_{{\Omega }_{k}}$ is a group under the mapping

$\psi :\left({G}_{{\Omega }_{k}}×{G}_{{\Omega }_{k}}\right)\to {G}_{{\Omega }_{k}}$

where the mapping $\varphi$ acts as

$\varphi \left(\stackrel{¯}{\xi }\left(g\right)\right)=\stackrel{¯}{\xi }\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes \stackrel{¯}{\xi }\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\},\text{\hspace{1em}}\varphi \left(-\stackrel{¯}{\xi }\left(g\right)\right)=-\left(\stackrel{¯}{\xi }\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}\otimes \stackrel{¯}{\xi }\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}\right)$

and

$\psi \left({P}_{g},{P}_{{g}^{\prime }}\right)={P}_{g}+{P}_{g{g}^{\prime }}+{P}_{{g}^{\prime }g}+{P}_{{g}^{\prime }}={P}_{g+{g}^{\prime }}$

where the set ${G}_{{\Omega }_{k}}$ is defined as ${G}_{{\Omega }_{k}}:=\left\{P:P=\varphi \circ \stackrel{¯}{\xi }\left(g\right),\forall g\in {\Omega }_{k}\right\}$ .

Proof. We write $A=\stackrel{¯}{\xi }\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}$ and ${A}^{\prime }=\stackrel{¯}{\xi }\left({g}^{\prime }\right){\oplus }_{s=k+1}^{n}\left\{0\right\}$ .

Given some $g,{g}^{\prime }\in {\Omega }_{k}$ , it is clear that $g+{g}^{\prime }\in {\Omega }_{k}$ . We also know that $\xi \left(g\right)=A$ and $\xi \left({g}^{\prime }\right)={A}^{\prime }$ ; hence projecting in the direction of $g+{g}^{\prime }$ corresponds to the projection built from $A+{A}^{\prime }$ . From the above section we have

$\begin{array}{l}\left(A+{A}^{\prime }\right)\otimes \left(A+{A}^{\prime }\right)\\ =\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{\xi }}_{k}& {M}_{k+1,n-k}=0\\ {M}_{n-k,k+1}=0& {M}_{n-k,n-k}=0\end{array}\right)+\left(\begin{array}{cc}{\stackrel{¯}{{\xi }^{\prime }}}_{k}\otimes {\stackrel{¯}{\xi }}_{k}& {M}_{k+1,n-k}=0\\ {M}_{n-k,k+1}=0& {M}_{n-k,n-k}=0\end{array}\right)\end{array}$ (5.1)

$+\left(\begin{array}{cc}{\stackrel{¯}{\xi }}_{k}\otimes {\stackrel{¯}{{\xi }^{\prime }}}_{k}& {M}_{k+1,n-k}=0\\ {M}_{n-k,k+1}=0& {M}_{n-k,n-k}=0\end{array}\right)+\left(\begin{array}{cc}{\stackrel{¯}{{\xi }^{\prime }}}_{k}\otimes {\stackrel{¯}{{\xi }^{\prime }}}_{k}& {M}_{k+1,n-k}=0\\ {M}_{n-k,k+1}=0& {M}_{n-k,n-k}=0\end{array}\right)$ (5.2)

In the same way that $g+{g}^{\prime }={g}^{\prime }+g$ is commutative in ${\Omega }_{k}$ , we can see that $\left(A+{A}^{\prime }\right)\otimes \left(A+{A}^{\prime }\right)=\left({A}^{\prime }+A\right)\otimes \left({A}^{\prime }+A\right)$ ; hence the operation is also commutative. Associativity follows by a similar argument. It is also clear that the zero polynomial lies in each ${\Omega }_{k},\forall k=0,\cdots ,n$ , and it gives the identity element, since $\left(A+\stackrel{¯}{0}\right)\otimes \left(A+\stackrel{¯}{0}\right)=A\otimes A$ . The inverse element is simply $-A=-{\stackrel{¯}{\xi }}_{k}\left(g\right){\oplus }_{s=k+1}^{n}\left\{0\right\}=\xi \left(-g\right)=-\xi \left(g\right)$ . Hence, we see that $\left(A+\left(-A\right)\right)\otimes \left(A+\left(-A\right)\right)={P}_{0}$ .

We conclude that ${G}_{{\Omega }_{k}}$ is a group.
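The group axioms verified in the proof can be illustrated numerically (outer-product realisation, hypothetical coefficients): the zero polynomial acts as the identity, $-A$ as the inverse, and the underlying addition commutes.

```python
import numpy as np

n, k = 4, 2
A = np.concatenate([np.array([2.0, 0.0, 1.0]), np.zeros(n - k)])    # xi(g), padded
Ap = np.concatenate([np.array([1.0, 3.0, -1.0]), np.zeros(n - k)])  # xi(g'), padded
proj = lambda u: np.outer(u, u)
zero = np.zeros(n + 1)  # image of the zero polynomial

print(np.allclose(proj(A + zero), proj(A)))                   # identity: True
print(np.allclose(proj(A + (-A)), np.zeros((n + 1, n + 1))))  # inverse gives P_0: True
print(np.allclose(proj(A + Ap), proj(Ap + A)))                # commutativity: True
```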

For each $k=0,\cdots ,n$ we have ${\Omega }_{k}\subseteq {ℙ}_{n}\left[ℝ\right]$ , and $k<{k}^{\prime }⇒{\Omega }_{k}\subset {\Omega }_{{k}^{\prime }}$ . This tells us that ${G}_{{\Omega }_{k}}<{G}_{{\Omega }_{{k}^{\prime }}}$ whenever $k<{k}^{\prime }$ . From group theory, we know that the union of a chain of subgroups, each contained in the next, is again a group. Hence, we have the following result

$G=\underset{k=0}{\overset{n}{\cup }}\text{\hspace{0.17em}}{G}_{{\Omega }_{k}}$

is also a group. Indeed, we have

${G}_{{\Omega }_{0}}\subset {G}_{{\Omega }_{1}}\subset {G}_{{\Omega }_{2}}\subset \cdots \subset {G}_{{\Omega }_{n}}$

6. Conclusion

In conclusion, given the results above, we find that, with the right construction, projections in polynomial spaces are very similar to traditional projections in Euclidean spaces. The operation can be achieved via an integral operator or a Kronecker Product. We have also seen that, analogously, hyperspheres in ${ℝ}^{k+1}$ can be used to construct such operators. A paper previously published in ALAMAT discusses the differential-geometric aspect of projections and the manifold structure; a similar link can be established for polynomial spaces.

Acknowledgements

Dedicated to both my grandmother and grandfather who left us too early. I know Papi would be happy to see this paper. You are very much missed.

I also would like to dedicate this paper to my girlfriend, Miss Yang Xiaoying, and would like to thank her for the love, support and happiness she brings to my life.

Notation

The notation system is as follows:

1) ${P}_{n}\left[x\right],{P}_{n}\left[ℝ\right]$ : The space of polynomials of degree at most n over the real numbers.

2) $\mathcal{B}\left(x\right)$ : The standard basis in ${P}_{n}\left[x\right]$ .

3) $f\left(x\right),g\left(x\right),h\left(x\right)$ : arbitrary elements of ${P}_{n}\left[ℝ\right]$ .

4) $I\left(x,\epsilon \right)$ : Operator on ${P}_{n}\left[ℝ\right]$ on interval $\left[a,b\right]$ with parameter $\epsilon$ .

5) $deg\left(f\right)$ : Degree of the polynomial $f\left(x\right)$ in ${P}_{n}\left[x\right]$ .

6) $\phi ,\xi$ : Mappings between ${P}_{n}\left[x\right]$ and ${ℝ}^{n+1}$ .

7) ${g}_{ij}$ : Metric Tensor on ${P}_{n}\left[x\right]$ .

8) $\stackrel{¯}{\xi }\otimes \stackrel{¯}{\xi }$ : The Kronecker Product of the vector $\stackrel{¯}{\xi }$ with itself.

9) ${\Omega }_{k}$ : The subspace of polynomials of degree k.

10) ${G}_{{\Omega }_{k}}$ : The set of projectors onto ${\Omega }_{k}$ .

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

[1] Niglio, J.-F. (2018) The Projective Group as a Projective Manifold. Advances in Linear Algebra & Matrix Theory, 8, 134-142. https://doi.org/10.4236/alamt.2018.84012

[2] Niglio, J.-F. (2019) A Follow-Up on Projection Theory: Theorems and Group Action. Advances in Linear Algebra & Matrix Theory, 9, 1-19. https://doi.org/10.4236/alamt.2019.91001

[3] Hartig, D. Orthogonal Projections in Function Spaces. http://wk.ixueshu.com/file/062e60b8cfa8b283318947a18e7f9386.html

[4] Liesen, J. and Mehrmann, V. (2015) Linear Algebra. Springer.