
In this paper, we discuss a number of fitting methods for predicting soybean crop yield from the nature of the environment, and compare them on the basis of an available data set. We then suggest a suitable method for predicting the crop yield on the basis of the residual (error) terms. Statistical analysis is also used to obtain the relationships between the different components (variables) of the data set. Finally, we discuss chaos, which can distort the whole mathematical analysis, and a computational approach to it.

Climate describes the ensemble of typical conditions of temperature, relative humidity, cloudiness, precipitation, wind speed and direction, and innumerable other meteorological factors that prevail regionally for extended periods.

Many different approaches are used for constraining climate-based crop yield predictions based on observations of past empirical change in the yield.

Since the sensors for the parameters mentioned above are located in one region of Central India, we consider the crop that this region produces abundantly: soybean. Soybean is one of the important crops of the world.

Soybean is a crop that grows in a warm and moist climate. An optimum yield requires a temperature ranging between 26.5˚C and 30˚C. For rapid germination and vigorous seedling growth, soil temperatures of 15.5˚C or above are most suitable; a lower temperature delays flowering. Although moisture enhances the yield of the crop, excess moisture can make it prone to foliar diseases such as frogeye leaf spot and Septoria brown spot. Therefore, an optimum amount of humidity is required for the crop.

Wind direction and velocity also have a significant influence on crop growth.

As far as the prediction of the yield on a larger perspective is concerned, the simulations carried out by supercomputers are based on curve fitting methods. Curve fitting is the process of constructing a curve that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve interpolation, where an exact fit to the data is required, or smoothing, in which a smooth function is constructed that approximately fits the data.

| Month | Temperature (X) | Humidity (Y) | Wind speed (Z) | Wind direction (W) |
| --- | --- | --- | --- | --- |
| January | 142.188 | 14.992 | 50.708 | 4.510 |
| February | 159.590 | 19.230 | 34.500 | 4.610 |
| March | 191.820 | 25.080 | 13.190 | 4.486 |
| April | 202.441 | 30.287 | 17.602 | 5.180 |
| May | 252.712 | 33.287 | 17.541 | 5.503 |
| June | 255.880 | 32.440 | 56.000 | 9.268 |
| July | 238.640 | 28.380 | 64.650 | 6.770 |
| August | 245.000 | 24.100 | 81.910 | 4.708 |
| September | 203.300 | 25.210 | 71.035 | 4.000 |
| October | 143.916 | 25.330 | 32.250 | 5.267 |
| November | 148.460 | 20.600 | 31.234 | 3.065 |
| December | 159.660 | 17.970 | 40.830 | 2.972 |

Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a greater degree of uncertainty, since it may reflect the method used to construct the curve as much as it reflects the observed data. A polynomial of degree three can exactly fit four constraints; each constraint can be a point, an angle, or a curvature (the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added at the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. If we have more than n + 1 constraints (where n is the degree of the polynomial), we can still run the polynomial curve through those constraints, but an exact fit to all of them is no longer certain (though it might happen, for example, when a first-degree polynomial exactly fits three collinear points). In general, then, some method is needed to evaluate each approximation; the least squares method is one way to compare the deviations.

Low-order polynomial curves tend to be smooth, and high-order polynomial curves tend to be lumpy. To define this more precisely: the maximum number of inflection points possible in a polynomial curve of degree n is n − 2, since inflection points can occur only at real roots of the second derivative.
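This bound can be checked numerically; a minimal sketch using NumPy with a hypothetical degree-4 polynomial (the coefficients below are illustrative, not from the data set):

```python
import numpy as np

# A degree-n polynomial has at most n - 2 inflection points: inflection
# points are real roots of the second derivative, which has degree n - 2.
coeffs = [1.0, 0.0, -6.0, 0.0, 2.0]              # x^4 - 6x^2 + 2 (highest power first)
second_deriv = np.polyder(np.poly1d(coeffs), 2)  # 12x^2 - 12
candidates = second_deriv.roots                  # roots of the second derivative
real_roots = candidates[np.isreal(candidates)].real
print(len(real_roots))                           # -> 2, which is 4 - 2
```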

When a given set of data does not appear to satisfy a linear equation, we can try a suitable polynomial as a regression curve to fit the data. The least squares technique can readily be used to fit such a polynomial.

Consider a polynomial with m coefficients (degree m − 1):

$$y = a_1 + a_2 x + a_3 x^2 + \cdots + a_m x^{m-1}$$

If the data contain n sets of x and y values, then the sum of squares of the errors is given by

$$Q = \sum_{i=1}^{n} \left[ y_i - \left( a_1 + a_2 x_i + \cdots + a_m x_i^{m-1} \right) \right]^2$$

Since Q depends on a_1, a_2, …, a_m, we have to estimate all m coefficients. As before, we obtain m equations that can be solved for these coefficients by setting each partial derivative of Q to zero.

Consider a general term, for j = 1, 2, …, m:

$$\frac{\partial Q}{\partial a_j} = -2 \sum_{i=1}^{n} x_i^{\,j-1} \left[ y_i - \left( a_1 + a_2 x_i + \cdots + a_m x_i^{m-1} \right) \right] = 0$$

Thus we have, after substituting and rearranging,

$$\sum_{i=1}^{n} x_i^{\,j-1} y_i = a_1 \sum_{i=1}^{n} x_i^{\,j-1} + a_2 \sum_{i=1}^{n} x_i^{\,j} + \cdots + a_m \sum_{i=1}^{n} x_i^{\,j+m-2}$$

These are m equations, one for each j = 1, 2, …, m, in the m unknown coefficients.

The set of m equations can be represented in matrix notation as

$$C\,a = b$$

where $a = (a_1, a_2, \ldots, a_m)^{T}$ and $b_j = \sum_{i=1}^{n} x_i^{\,j-1} y_i$. The element of matrix C is

$$C_{jk} = \sum_{i=1}^{n} x_i^{\,j+k-2}$$
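The normal equations above translate directly into code; a minimal sketch assuming NumPy, checked against hypothetical data generated from a known quadratic:

```python
import numpy as np

def polyfit_normal_equations(x, y, degree):
    """Fit a polynomial of the given degree by solving the normal
    equations C a = b, where (0-based indices j, k correspond to the
    1-based j, k in the derivation) C[j, k] = sum_i x_i^(j+k) and
    b[j] = sum_i y_i * x_i^j."""
    m = degree + 1                      # number of coefficients a_1 .. a_m
    x, y = np.asarray(x, float), np.asarray(y, float)
    C = np.array([[np.sum(x ** (j + k)) for k in range(m)] for j in range(m)])
    b = np.array([np.sum(y * x ** j) for j in range(m)])
    return np.linalg.solve(C, b)        # coefficients, lowest power first

# Hypothetical data generated exactly from y = 2 + 3x + 0.5x^2:
x = np.arange(6, dtype=float)
y = 2 + 3 * x + 0.5 * x ** 2
print(polyfit_normal_equations(x, y, 2))   # -> approximately [2.0, 3.0, 0.5]
```

Solving the normal equations directly mirrors the derivation; for high degrees or many points, an orthogonal-decomposition solver such as `np.linalg.lstsq` is numerically safer.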

The first model which we fit to the yearly soybean yield is the linear model

$$Y = a_0 + a_1 w + a_2 x + a_3 y + a_4 z$$

where a_0 is a constant term, w is the wind direction in degrees, x the temperature in degrees Celsius, y the percentage humidity, and z the wind speed in km/hr.

The error in the generalisation is

$$e = Y_{\text{actual}} - \left( a_0 + a_1 w + a_2 x + a_3 y + a_4 z \right)$$

and, for the minimum squared error, we minimise

$$E = \sum e^2 = \sum \left[ Y_{\text{actual}} - \left( a_0 + a_1 w + a_2 x + a_3 y + a_4 z \right) \right]^2$$

Differentiating E with respect to each coefficient and setting the derivatives to zero gives, as in the polynomial case, a system of normal equations for the weighted coefficients a_0, …, a_4 that determine the yield. Solving these equations gives the values of the weighted coefficients, and, generalizing the model, the yield can then be predicted from the parameters w, x, y, z discussed above.
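A minimal sketch of this least-squares fit, assuming NumPy; the yield values and coefficients below are hypothetical placeholders, not the paper's data, and the parameter ranges loosely mirror the monthly table:

```python
import numpy as np

# Fit the linear yield model Y = a0 + a1*w + a2*x + a3*y + a4*z by
# ordinary least squares (w = wind direction, x = temperature,
# y = humidity, z = wind speed).
rng = np.random.default_rng(0)
n = 12                                             # one row per month
w = rng.uniform(3, 9, n)
x = rng.uniform(140, 260, n)
y = rng.uniform(14, 34, n)
z = rng.uniform(13, 82, n)

true_a = np.array([5.0, 0.4, 0.02, 0.1, -0.03])    # assumed coefficients
A = np.column_stack([np.ones(n), w, x, y, z])      # design matrix
Y = A @ true_a                                     # noise-free hypothetical yield

a_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(np.round(a_hat, 3))                          # recovers the assumed coefficients
```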

The second model which we fit to the yearly soybean yield is a quadratic model in the same parameters, where a_0 is a constant term, w is the wind direction in degrees, x the temperature in degrees Celsius, y the percentage humidity, and z the wind speed in km/hr. As for the linear model, the generalisation error is formed, squared for the minimum squared error, and differentiated with respect to each weighted coefficient; solving the resulting equations gives the values of the weighted coefficients, and, generalizing the model, the yield can then be predicted from the parameters w, x, y, z.

The third model which we fit to the yearly soybean yield is a variable power model in the same parameters, where a_0 is a constant term, w is the wind direction in degrees, x the temperature in degrees Celsius, y the percentage humidity, and z the wind speed in km/hr. Again, the generalisation error is formed, squared for the minimum squared error, and differentiated with respect to each weighted coefficient; solving the resulting equations gives the values of the weighted coefficients, and, generalizing the model, the yield can then be predicted from the parameters w, x, y, z.

Chaos is associated with complex and unpredictable behavior of phenomena over time.

| Model | Wind direction & temperature | Wind direction & humidity | Wind direction & wind speed | Temperature & humidity | Temperature & wind speed | Humidity & wind speed |
| --- | --- | --- | --- | --- | --- | --- |
| Linear model | 0.7919 | 0.2105 | 0.7554 | 0.6390 | −0.2465 | 0.0168 |
| Quadratic model | 0.7930 | 0.2086 | 0.7420 | −0.2465 | 0.6390 | 0.0168 |
| Variable power model | 0.7831 | 0.4155 | 0.5424 | −0.1147 | 0.6038 | 0.1638 |
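Pairwise correlation coefficients of this kind can be computed directly from the monthly data table; a minimal sketch assuming NumPy, using the X, Y, Z, W columns as labelled there:

```python
import numpy as np

# Monthly parameter values from the data table (January .. December).
X = [142.188, 159.590, 191.820, 202.441, 252.712, 255.880,
     238.640, 245.000, 203.300, 143.916, 148.460, 159.660]
Y = [14.992, 19.230, 25.080, 30.287, 33.287, 32.440,
     28.380, 24.100, 25.210, 25.330, 20.600, 17.970]
Z = [50.708, 34.500, 13.190, 17.602, 17.541, 56.000,
     64.650, 81.910, 71.035, 32.250, 31.234, 40.830]
W = [4.510, 4.610, 4.486, 5.180, 5.503, 9.268,
     6.770, 4.708, 4.000, 5.267, 3.065, 2.972]

corr = np.corrcoef([X, Y, Z, W])   # 4x4 symmetric matrix of Pearson coefficients
print(np.round(corr, 4))
```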

| Model | Chaos |
| --- | --- |
| Linear model | 0.2622 |
| Quadratic model | 0.2634 |
| Variable power model | 0.3554 |

the 2012 estimate, thereby confirming the accuracy of the computational calculation of yield from the hidden environmental parameters.