
The present study was conducted to compare the modeling, predictive and generalization abilities of response surface methodology (RSM) and artificial neural network (ANN) for the thermal structure of stabilized confined jet diffusion flames in the presence of different geometries of bluff-body burners. Two stabilizer disc burners tapered at 30° and 60° and a frustum cone of 60°/30° inclination angle were employed, all having the same diameter of 80 mm and acting as flame holders. The measured radial mean temperature profiles of the developed stabilized flames at different normalized axial distances (x/d<sub>j</sub>) were considered as the model example of the physical process. The RSM and ANN methods analyze the effect of the two operating parameters, namely (r), the radial distance from the center line of the flame, and (x/d<sub>j</sub>), on the measured temperature of the flames, to find the predicted maximum temperature and the corresponding process variables. A three-layered feed-forward neural network with the hyperbolic tangent sigmoid (tansig) transfer function and the optimized topology of 2:10:1 (input neurons: hidden neurons: output neurons) was developed. The ANN method has also been employed to illustrate these effects in three and two dimensions and to show the location of the predicted maximum temperature. The results indicated the superiority of ANN in prediction capability, as the ranges of R^{2} and F ratio are 0.868 - 0.947 and 231.7 - 864.1 for the RSM method compared to 0.964 - 0.987 and 2878.8 - 7580.7 for the ANN method, besides lower values of the error analysis terms.

Bluff-body stabilized turbulent gaseous jet diffusion flames have received renewed attention in recent years due to their practical applications such as gas burners of industrial furnaces, gas turbine combustion chambers, ramjets and flaring in the petroleum industry.

The practical importance of the bluff-body stabilization process has motivated a large number of theoretical and experimental studies over the years to identify the physical mechanisms governing the stability limits of different bluff-body geometries acting as flame holders. Several pioneering works have proposed overall flow-field classifications based on the observed flame structure, as follows:

The effects of bluff-body lip thickness on physical parameters like flame length, radiant fraction, gas temperature and NOx emissions in (LPG-H_{2}) jet diffusion flame were investigated experimentally [

The effect of flame holder geometry on flame structure in non-premixed combustion was studied [

Furthermore, the turbulent non-premixed flames of natural gas/air stabilized by a semi-infinite bluff-body burner were assessed in different situations corresponding either to jet-dominated or to wake-dominated base flow-field structures. The aim was to identify the influence of the fuel jet and air co-flow velocities on the measured results in the flame stabilization region in situations where intermittent flame lift-off and partial extinction may occur [

Recently, Yiheng Tong, et al. [

In another work, the non-reacting flow field and the mixing characteristics of an axisymmetric bluff-body disc burner were investigated under inlet mixture stratification and preheat [

More recently, the experimental work investigated by [

Modeling is a scientific approach and an essential part of many scientific disciplines, used to represent ideas about the nature of the phenomenon under investigation from the viewpoint of science and to present an alternative to the real phenomenon, in order to quantify, define, visualize, or simulate it by reference to existing knowledge. There are several types of modeling approaches, among which the most widely used are the mathematical and intelligent modeling approaches [

In industry, the most advanced processes require accurate models if high performance is to be attained, yet they are nonlinear in nature, which makes developing precise models challenging. The precision of a modeling technique is noticeably affected by various factors, ranging from the nonlinearity of the model behavior to the dimensionality and data sampling technique and the internal parameters. The need for a model that can accurately predict experimental behavior has been the utmost challenge for researchers over the years; such models can dramatically reduce the time and operational cost of many engineering tasks. From this emerged the need to model processes [

Artificial neural networks (ANNs) and response surface methodology (RSM) are important approaches in the field of processes modeling and optimization. These methods of modeling estimate the relations between the output (response or target variable) and input variables (experimental operating factors) of the process by means of experimentally derived data. Subsequently, derived models are used to approximate the optimum situations of independent variables to minimize or maximize the target variable (dependent variable) [

RSM is an effective technique which enables the estimation of the desired response from a number of independent variables as well as the interactions between them. The key advantage of RSM is that fewer experimental runs are sufficient to provide a statistically significant result. Besides analyzing individual variables, it can also generate a mathematical model of the process to determine the optimum condition and to investigate the influencing factors. Despite its simplicity, RSM provides efficient and accurate solutions. It has therefore been applied successfully to many engineering problems [

ANN modeling is a relatively new nonlinear statistical technique developed to solve problems that are not amenable to conventional statistical methods. It is a computing technique based on the behavior of the biological nervous system. It can handle obscure, complex and incomplete problems and perform modeling to produce predictions and generalizations at high speed. Neither RSM nor ANN requires the precise expressions or the physical meaning of the system under investigation [

Ahmadpour et al. [

Awolusi et al. [

Karkalos et al. [

Manda et al. [

RSM and ANN were studied and compared for modeling highly nonlinear responses found in impact-related problems. Despite the computation cost of ANN, these studies concluded the supremacy of ANN over RSM in such optimization problems [

The present study focuses on the evaluation of the predictive capabilities of the two methodologies, RSM and ANN, for the previously reported experimental data on the thermal structure of the stabilized flames in the presence of different geometries of bluff-body burners, in terms of the coefficient of determination (R^{2}), F ratio and various error analysis parameters. Moreover, the ANN method has been exploited to illustrate the effect of the input flame parameters on the response in three and two dimensions and to display the location of the optimum.

The Response Surface Methodology (RSM) was introduced, developed and used in many studies based on polynomial functions in the 1980s. In the last decade, RSM has been extensively utilized for modeling and optimization of several engineering processes and studies. This methodology is an assortment of statistical techniques for the experimental design, the building of the models, evaluating the consequences of factors, and searching for the optimum conditions [

RSM can be divided into the following steps: 1) selection of the independent variables and responses, 2) selection of the experimental design, 3) execution of experiments and collection of results, 4) mathematical modeling of experimental data by polynomial equations, with the best fitting response, 5) checking of models through analysis of variance, 6) drawing of response surfaces, 7) evaluating main and interactional effect of variables using 2D or 3D plots, and, finally, 8) identification of optimal conditions [

The units of the natural independent variables differ from one another, and even parameters sharing the same units are not all tested over the same range. Before performing the regression analysis, the variables should therefore be codified to eliminate the effect of the different units and ranges in the experimental domain and to allow parameters of different magnitudes to be investigated more evenly in a range between −1 and +1 [

The frequently used equation for coding is seen below:
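The coding equation itself did not survive extraction; the form commonly used for mapping each natural variable into [−1, +1], given here as a standard reconstruction rather than verbatim from the source, is:

```latex
x_i = \frac{X_i - \left(X_{i,\max} + X_{i,\min}\right)/2}{\left(X_{i,\max} - X_{i,\min}\right)/2}
```

where x_i is the coded value and X_i the natural value of the i^{th} variable, with X_{i,\max} and X_{i,\min} the limits of its experimental range.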

Functional relationships between the coded independent variables and the dependent variables have been established using the multiple regression technique by fitting a second-order equation of the following form [
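The fitted model referenced here is the standard second-order polynomial, reconstructed in the usual RSM notation since the equation image was lost:

```latex
Y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^{2} + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon
```

where Y is the response, the x_i are the coded factors, the β terms are the regression coefficients, and ε the residual error.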

Details of this method have been dealt with in our previous papers [

ANNs were introduced as universal function approximators by McCulloch and Pitts in 1943 [

ANN is a colossal structure of interconnected networks based on a simplified analogy to the behavior of the human brain consisting of numerous individual elements called neurons, which are mathematically represented by relatively simple yet flexible functions, such as linear or sigmoid functions capable of performing parallel computations for data processing. These processing units communicate with each other by means of weighted connections, corresponding to the synapses of the brain [

The advantages of ANN are as follows: distributed information processing and the inherent potential for parallel computation. In many cases, when sufficiently rich data are available, they can provide fairly accurate models for nonlinear controls when model equations are not known or only partial state information is available. Due to their parallel processing capability, nonlinearity in nature and their ability to model without a priori knowledge, ANN can be used successfully to capture the dynamics of multivariable nonlinear systems [

An elementary neuron with R inputs is shown below.

The multi-layer perceptron (MLP), also called feed-forward back propagation network is the most widely used network type for approximation problems. Feed forward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons besides an input layer.

The number of input neurons corresponds to the independent variables of the system, and the number of output neurons corresponds to the responses of the system. Each input unit is connected to all hidden units, each hidden unit is connected to the output layer, and there is no communication between neurons in the same layer [

The neurons of one layer are connected with each neuron of the previous and next layer, but information only flows in the forward direction, from the input towards the output layer [

The input variables should also be codified in the case of ANN, as normalization avoids numerical overflows due to very large or very small weights and prevents a mismatch between the influence of some input values on the network weights and biases [

In training by the back-propagation method, the error is determined by comparing the output with the desired output, and this error is returned to the hidden and input layers in the next training passes. The network training ends when the error falls below a value specified by the user [

The input-output data are separated into three groups: training, validation and test data. The first group is the only one used to generate the model structure and adjust the parameters; the validation data are used between epochs to select model parameters optimally and to halt the training process if the network error starts to increase due to overfitting; and the test data are used to verify the network's predictive capacity at the end of training [

The MATLAB neural network toolbox has been employed for generating, training and using the ANNs. The training is performed employing the Levenberg-Marquardt (LM) algorithm due to its fast convergence and reliability in locating the global minimum of the mean-squared error (MSE) [

A mathematical equation relating the input/output variables is given by the following equation [
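The equation itself did not survive extraction; the standard form for a single-hidden-layer tansig network with a linear output neuron, consistent with the variable definitions that follow, is:

```latex
y = b_0 + \sum_{k=1}^{h} w_k \,\tanh\!\left( b_k + \sum_{i=1}^{m} w_{ik} \, x_i \right)
```

where m denotes the number of input variables (two in this study) and b_0 the output-neuron bias; this is a reconstruction in conventional notation, not the article's original rendering.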

where: w_k is the connection weight between the k^{th} neuron of the hidden layer and the single output neuron; b_k is the bias of the k^{th} neuron of the hidden layer; h is the number of neurons in the hidden layer; w_{ik} is the connection weight between the i^{th} input variable and the k^{th} neuron of the hidden layer;

The Levenberg-Marquardt algorithm uses the following updating function
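The update rule did not survive extraction; the standard Levenberg-Marquardt step, consistent with the definitions of J, I, E, W, μ and t_k given below, is:

```latex
W_{t_k + 1} = W_{t_k} - \left( J^{T} J + \mu I \right)^{-1} J^{T} E
```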

where:

J is the Jacobian matrix, which contains the first derivatives of the network errors with respect to the weight and bias parameters of the ANN; I is the identity matrix; E is a vector of network errors; W contains both the weights and biases of the ANN; μ is a scalar parameter of the algorithm; and t_k represents the current training epoch [

To obtain an ANN the following steps must be completed: a) selection of data for learning, b) network architecture selection, c) determination of weight and threshold values, d) verification and validation of the prediction model on the basis of error function, and, optionally, e) optimization of the function learned by ANN [

Experimental

Details of the experimental setup and the data employed in this study have been given in the previous work of [

The coded factors of (r) and (X) have been employed,

where:

r: radial distance from the center line of the flame (mm);

X: axial distance along the flame over the disc (mm);

The development of our MLP network was performed using the Artificial Neural Network Toolbox in MATLAB (R2016a, MathWorks, Natick, MA, USA).

We built a three-layer feed-forward ANN with two input neurons representing the coded influencing factors and one output neuron representing the dependent response variable. The back-propagation algorithm was applied to obtain the best fit to the training data because of its capacity to represent non-linear functional relationships between inputs and targets. As activation functions, we used the hyperbolic tangent sigmoid (tansig) for the hidden layer and the linear function (purelin) for the output layer. The Levenberg-Marquardt back-propagation training algorithm was used to minimize the error function of the ANN, with the mean square error (MSE) as the performance function.
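As a rough cross-language sketch of this 2:10:1 architecture (scikit-learn stands in for the MATLAB toolbox here; it offers no Levenberg-Marquardt solver, so L-BFGS is substituted, and the data are synthetic stand-ins for the measured temperatures, not the study's dataset):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the coded (r, x/dj) inputs and a smooth
# bell-shaped "temperature" surface on the coded [-1, 1] domain.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.exp(-2.0 * (X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2))  # target scaled to [0, 1]

# 2:10:1 topology: tanh (tansig-like) hidden layer, linear output layer.
# scikit-learn has no Levenberg-Marquardt solver, so L-BFGS stands in.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)

print(f"R^2 on training data: {net.score(X, y):.3f}")
```

Scaling the target to order one mirrors the codification note above: it keeps the output-layer weights small and the optimization well conditioned.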

In the first step, the processing data matrix imported from the laboratory experiment results included the coded X and R as input variables and the radial mean temperature as the output variable. In the second step, the imported data were randomly divided by the network into three categories: training data (70%), test data (15%) and validation data (15%). To identify the optimum network architecture, it is essential to determine the number of neurons in the hidden layer. Therefore, the number of hidden neurons was varied from 3 to 14, and the performance parameter (MSE) of each run was calculated with respect to the target values. The network with 10 neurons in the hidden layer showed the best result, i.e., the minimum MSE. For a given number of hidden neurons, different results may be obtained in each training process, since in each run the weights and biases are corrected, reducing the gradient of the performance function and changing the output of the network. Therefore, the training process for each number of hidden neurons was executed in five repetitions, the performance function was calculated for each repetition, and the average value over the five repetitions was taken. Averaging eliminates the effect of the output differences [
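The neuron-sweep procedure described above (3 to 14 hidden neurons, five repetitions each, averaging the MSE) can be sketched as follows; the data are synthetic and L-BFGS again substitutes for Levenberg-Marquardt, so this is an illustration of the selection loop, not a reproduction of the study's runs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.exp(-2.0 * (X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2))  # scaled stand-in target

avg_mse = {}
for h in range(3, 15):                       # 3 to 14 hidden neurons
    runs = []
    for rep in range(5):                     # five repetitions per topology
        net = MLPRegressor(hidden_layer_sizes=(h,), activation="tanh",
                           solver="lbfgs", max_iter=800, random_state=rep)
        net.fit(X, y)
        runs.append(mean_squared_error(y, net.predict(X)))
    avg_mse[h] = float(np.mean(runs))        # averaging damps run-to-run scatter

best_h = min(avg_mse, key=avg_mse.get)
print(f"best hidden-layer size: {best_h}")
```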

In order to evaluate the goodness of fit and the prediction accuracy of the constructed models, R^{2}, F ratio and error analyses were performed between the experimental and predicted data of the RSM and ANN models. Many approaches to error analysis are stated in the literature, some of which are listed in a previous study [

Function name | Equation | Reference |
---|---|---|
Relative error percent | [ | |
Average relative error percent | [ | |
Average absolute relative error percent | [ | |
Minimum absolute relative error percent | [ | |
Maximum absolute relative error percent | [ | |
Average Relative Error (ARE) | [ | |
Average Relative Deviation (ARD) | [ | |
Absolute Average Deviation (AAD) | [ |

Function name | Equation | Reference |
---|---|---|
Averages | ||
Correlation Coefficient (CC) | [ | |
Coefficient of Efficiency (CE1) | [ | |
Coefficient of Efficiency (CE2) | [ | |
Root Mean Square Error (RMSE) | [ | |
Mean Absolute Error (MAE) | [ | |
Standard Error of Prediction (SEP%) | [ | |
Model Predictive Error (MPE%) | [ |

Function name | Equation | Reference |
---|---|---|
Chi square statistic ( | [ | |
Standard Deviation (SD) | [ | |
Nash Coefficient of Efficiency (NSE) | [ | |
Accuracy (A_{f}) | [ | |
Bias (B_{f}) | [ | |
Relevance Factor (RF) | [ |

where: n: is the number of experimental data points;
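Since the tabulated equations did not survive extraction, the sketch below gives the usual textbook definitions of a few of the listed indicators; these are standard forms, not necessarily the exact equations the article used:

```python
import numpy as np

def error_metrics(y_exp, y_pred):
    """Common textbook forms of a few of the tabulated indicators
    (RMSE, MAE, AAD%, SEP%, R^2); the article's exact equations were
    lost in extraction, so these definitions are assumptions."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_exp - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))            # Root Mean Square Error
    mae = np.mean(np.abs(resid))                   # Mean Absolute Error
    aad = 100.0 * np.mean(np.abs(resid / y_exp))   # Absolute Average Deviation, %
    sep = 100.0 * rmse / np.mean(y_exp)            # Standard Error of Prediction, %
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_exp - y_exp.mean()) ** 2)
    return {"RMSE": rmse, "MAE": mae, "AAD%": aad, "SEP%": sep, "R2": r2}

m = error_metrics([1000.0, 1100.0, 1200.0], [990.0, 1120.0, 1190.0])
print({k: round(v, 4) for k, v in m.items()})
```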

For the RSM, several mathematical models have been suggested to establish the relationship between the dependent and independent variables. The Box-Cox method has been utilized to identify a suitable power transformation of the response data for normalizing the data or equalizing its variance. The following transformations of the mean temperature T (the dependent variable) resulted and were employed to represent the response Y in Equation (2): sqrt(T), Ln(T) and (T) for D30, D60 and DFC respectively [
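A sketch of how such a Box-Cox screening might look with SciPy; the data here are synthetic, and reading the fitted exponent λ as suggesting sqrt(T) (λ near 0.5), Ln(T) (λ near 0) or no transform (λ near 1) mirrors the three choices made in the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic positive "temperature" response with some right skew.
T = rng.gamma(shape=2.0, scale=300.0, size=500) + 300.0

_, lam = stats.boxcox(T)   # maximum-likelihood Box-Cox exponent

# Rough reading of lambda, mirroring the transformations chosen in the study:
if abs(lam - 0.5) < 0.25:
    choice = "sqrt(T)"
elif abs(lam) < 0.25:
    choice = "ln(T)"
else:
    choice = "T (no transform)"
print(f"lambda = {lam:.2f} -> suggested transform: {choice}")
```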

In the present study the following cases have been considered for D30, D60 & DFC:

Case a—For all discs, the response temperature has been employed as it is for Y in Equation (2) and for training in case of ANN and the predicted temperatures were compared with the corresponding experimental temperature ones.

For D30 the following trials have been performed:

Case b—The sqrt(T) has been employed in Equation (2) as Y and for training in case of ANN and the predicted results in both cases have been converted to the equivalent predicted temperature to be compared with the corresponding experimental temperature ones.

Case c—The sqrt(T) has been employed in Equation (2) as Y and for training in case of ANN and predicted results in both cases have been compared with the corresponding sqrt(experimental temperature) ones.

For D60 the following trials have been performed:

Case b—The ln(T) has been employed in Equation (2) as Y and for training in case of ANN and the predicted results in both cases have been converted to the analogous predicted temperature to be compared with the corresponding experimental temperature ones.

Case c—The Ln(T) has been employed in Equation (2) as Y and for training in case of ANN and the predicted results in both cases have been compared with the corresponding Ln(experimental temperature) ones.

The results of comparison are presented in Tables 2(a)-(c). These results revealed that the properly trained ANN model consistently performed more accurate predictions, closer to the experimentally measured values, than the RSM model in all aspects, indicating that the ANN model was quite successful for both simulation and prediction. Similar observations were obtained by many research groups studying various engineering problems. This is proved by the higher values of R^{2} and F ratio and the extremely low values of the error indicators for the ANN results compared to the RSM ones. This is more pronounced by comparing the results of the studied case a for the values of R^{2} (0.98312, 0.9683, 0.98745 in case of ANN compared to 0.9231, 0.86814, 0.93575 in case of RSM) and of F ratio (5658.57, 2922.47, 7580.67 in case of ANN compared to 540.154, 231.747, 864.077 in case of RSM) for D30, D60 & DFC respectively. Moreover, the ranges of R^{2} and F ratio in all the studied cases are 0.868 -

Disc_Method_Case | ave_Y_ output | max_Y_ output | min_Y_ output | R_square | F_ratio | time elapsed | r_pred | RF_R | RF_X | ||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
D30_ANN_a− | 908.4 | 1394.0 | 333.7 | 0.9831 | 5658.6 | 4.230 | 1397.8 | 0.155 | 183.1 | −1.50 | 6.62 | 5.70E−04 | 0.614 |
D30_RSM_a− | 909.3 | 1275.7 | 128.8 | 0.9231 | 540.2 | 0.357 | 1278.5 | 8.676 | 188.8 | −3.11 | 10.72 | −4.90E−03 | 0.639 |
D30_ANN_b− | 909.1 | 1393.9 | 348.8 | 0.9850 | 6432.5 | 2.881 | 1398.0 | 0.144 | 186.6 | −1.50 | 9.02 | −5.00E−03 | 0.621 |
D30_RSM_b− | 908.0 | 1308.7 | 260.6 | 0.9468 | 760.1 | 0.011 | 1311.1 | 6.352 | 186.3 | −3.12 | 9.23 | −6.00E−03 | 0.646 |
D30_ANN_c− | 29.8 | 37.4 | 18.4 | 0.9844 | 6330.3 | 2.975 | 1420.1 | 1.928 | 179.7 | −1.50 | 5.36 | −8.20E−03 | 0.620 |
D30_RSM_c− | 29.8 | 36.2 | 16.1 | 0.9436 | 753.1 | 0.009 | 1311.1 | 6.352 | 186.3 | −3.12 | 6.43 | −5.40E−03 | 0.630 |
D60_ANN_a− | 768.5 | 1239.5 | 284.1 | 0.9683 | 2922.5 | 6.152 | 1256.6 | 3.001 | 210.9 | −2.00 | 10.51 | 3.18E−02 | 0.564 |
D60_RSM_a− | 770.2 | 1129.8 | 51.1 | 0.8681 | 231.7 | 0.032 | 1130.7 | 11.318 | 234.8 | −2.78 | 15.43 | 3.37E−02 | 0.592 |
D60_ANN_b | 768.7 | 1247.4 | 322.8 | 0.9680 | 3156.6 | 4.284 | 1278.3 | 1.991 | 214.3 | −0.50 | 13.51 | 3.38E−02 | 0.566 |
D60_RSM_b− | 764.5 | 1182.6 | 250.1 | 0.9106 | 336.4 | 0.014 | 1190.5 | 6.626 | 218 | −2.45 | 12.20 | 3.64E−02 | 0.611 |
D60_ANN_c− | 6.6 | 7.1 | 5.5 | 0.9638 | 2878.8 | 3.095 | 1306.2 | 6.078 | 213.7 | −4.00 | 2.81 | 4.89E−02 | 0.541 |
D60_RSM_c− | 6.6 | 7.1 | 5.5 | 0.8974 | 308.0 | 0.129 | 1190.5 | 6.626 | 218 | −2.45 | 3.35 | 5.37E−02 | 0.567 |
DFC_ANN_a− | 1062.6 | 1551.0 | 357.1 | 0.9875 | 7580.7 | 4.624 | 1617.9 | 0.479 | 148.5 | −7.50 | 6.18 | 4.46E−02 | 0.528 |
DFC_RSM_a− | 1063.1 | 1462.3 | 257.5 | 0.9358 | 864.1 | 0.033 | 1463.0 | 5.614 | 172.8 | 0.00 | 10.95 | 5.50E−02 | 0.540 |

Disc_Method_Case | ARE | ARD | AAD | CC | CE1 | CE2 | ||||
---|---|---|---|---|---|---|---|---|---|---|
D30_ANN_a− | 0.0663 | 3.238 | 0.0269 | 19.54 | 0.0257 | 0.0020 | 0.0324 | 0.9915 | 0.9831 | 0.9828 |
D30_RSM_a− | −0.1834 | 8.590 | 0.0884 | 63.21 | 0.0621 | 0.0146 | 0.0859 | 0.9608 | 0.9231 | 0.9167 |
D30_ANN_b− | 0.1497 | 2.940 | 0.0356 | 18.45 | 0.0238 | 0.0017 | 0.0294 | 0.9925 | 0.9850 | 0.9847 |
D30_RSM_b− | −0.4121 | 6.765 | 0.0714 | 28.74 | 0.0517 | 0.0073 | 0.0677 | 0.9730 | 0.9468 | 0.9408 |
D30_ANN_c− | −0.0547 | 1.474 | 0.0241 | 9.43 | 0.0225 | 0.0005 | 0.0147 | 0.9922 | 0.9844 | 0.9842 |
D30_RSM_c− | −0.1141 | 3.384 | 0.0357 | 15.59 | 0.0504 | 0.0018 | 0.0338 | 0.9714 | 0.9436 | 0.9403 |
D60_ANN_a− | 0.2069 | 5.009 | 0.0145 | 25.87 | 0.0355 | 0.0052 | 0.0501 | 0.9840 | 0.9683 | 0.9670 |
D60_RSM_a− | −1.0971 | 12.729 | 0.0245 | 84.51 | 0.0840 | 0.0301 | 0.1273 | 0.9317 | 0.8681 | 0.8481 |
D60_ANN_b | 0.1719 | 4.580 | 0.0269 | 23.10 | 0.0361 | 0.0040 | 0.0458 | 0.9838 | 0.9679 | 0.9668 |
D60_RSM_b− | −0.7025 | 9.505 | 0.0520 | 27.05 | 0.0676 | 0.0142 | 0.0950 | 0.9539 | 0.9099 | 0.8953 |
D60_ANN_c− | 0.0009 | 0.761 | 0.0050 | 4.51 | 0.0366 | 0.0001 | 0.0076 | 0.9817 | 0.9638 | 0.9630 |
D60_RSM_c− | −0.0327 | 1.473 | 0.0074 | 4.78 | 0.0702 | 0.0003 | 0.0147 | 0.9473 | 0.8974 | 0.8857 |
DFC_ANN_a− | 0.0120 | 21.425 | 116.8000 | 34.07 | 0.0215 | 0.0017 | 0.0272 | 0.9937 | 0.9875 | 0.9874 |
DFC_RSM_a− | 0.0191 | 39.401 | 197.0000 | 50.00 | 0.0515 | 0.0103 | 0.0687 | 0.9673 | 0.9358 | 0.9313 |

Disc_Method_Case | MSE | RMSE_N | MAPE | SEP% | MPE% | Chi2 | SD | NSE | Af | Bf |
---|---|---|---|---|---|---|---|---|---|---|
D30_ANN_a− | 1292.0 | 35.68 | 3.238 | 3.924 | 3.254 | 288.7 | 0.0446 | 0.9831 | 0.3243 | −3.50E−03 |
D30_RSM_a− | 5886.5 | 76.72 | 8.590 | 8.437 | 9.922 | 2100.2 | 0.1212 | 0.9231 | 0.9039 | −7.48E−02 |
D30_ANN_b− | 1150.2 | 33.59 | 2.940 | 3.694 | 2.948 | 250.3 | 0.0407 | 0.9850 | 0.2941 | 6.54E−03 |
D30_RSM_b− | 4072.3 | 63.81 | 6.765 | 7.018 | 6.832 | 957.1 | 0.0856 | 0.9468 | 0.6779 | 4.24E−03 |
D30_ANN_c− | 0.3530 | 0.5863 | 1.474 | 1.969 | 1.483 | 2.351 | 0.0211 | 0.9844 | 0.1478 | −7.80E−03 |
D30_RSM_c− | 1.2783 | 1.1306 | 3.384 | 3.797 | 3.400 | 8.923 | 0.0430 | 0.9436 | 0.3390 | 2.12E−03 |
D60_ANN_a− | 2186.4 | 46.36 | 5.009 | 6.019 | 5.046 | 599.4 | 0.0714 | 0.9683 | 0.5014 | −5.29E−03 |
D60_RSM_a− | 9093.0 | 95.36 | 12.73 | 12.38 | 16.15 | 4434.8 | 0.1739 | 0.8681 | 1.3487 | −9.02E−02 |
D60_ANN_b | 2211.9 | 46.16 | 4.580 | 5.993 | 4.590 | 525.7 | 0.0622 | 0.9679 | 0.4576 | −2.78E−03 |
D60_RSM_b− | 6213.1 | 78.82 | 9.505 | 10.23 | 9.548 | 1651.3 | 0.1195 | 0.9099 | 0.9486 | −6.60E−16 |
D60_ANN_c− | 4.96E−03 | 6.88E−02 | 0.761 | 1.045 | 0.7616 | 0.1406 | 0.0107 | 0.9638 | 0.0761 | −5.10E−04 |
D60_RSM_c− | 1.41E−02 | 1.19E−01 | 1.473 | 1.801 | 1.4738 | 0.4008 | 0.0186 | 0.8974 | 0.1473 | 1.54E−03 |
DFC_ANN_a− | 1107.0 | 33.01 | 2.720 | 3.105 | 2.744 | 239.9 | 0.0409 | 0.9875 | 0.2729 | −5.80E−03 |
DFC_RSM_a− | 5665.9 | 75.27 | 6.871 | 7.081 | 7.084 | 1344.7 | 0.1019 | 0.9358 | 0.6931 | −7.90E−03 |

0.947 and 231.7 - 864.1 for the RSM method compared to 0.964 - 0.987 and 2878.8 - 7580.7 for the ANN method. The better modeling ability of ANN can be attributed to its universal approximation ability for nonlinearity, whereas RSM is limited to a second-order polynomial regression [

Also, the predicted temperatures in all the studied cases were compared with the corresponding experimental ones, with the error referred to the maximum experimental temperature; this comparison is displayed in

These results indicate that the ANN model shows a significantly better generalization capacity than the RSM models. This can be attributed to the universal ability of ANN to approximate the nonlinearity of the system, whereas RSM is restricted to a second-order polynomial.

Table 2(a) presents the RF values for each parameter. This table indicates the positive relevancy factor of

Also, this table discloses that the ANN method is more computationally expensive than RSM, as indicated by the larger elapsed time for the ANN compared to that of RSM, because it uses a series of computationally expensive functions for a single model.

The three-dimensional concave curved response surfaces in Figures 3(a)-(c) indicate the possibility of obtaining a maximum value of the measured temperature within the chosen factors levels and the interaction between the factors [

The contour plots of Figures 4(a)-(c) assess the individual and cumulative influence of the variables and the mutual interaction between the variables and the dependent variable [

Modeling using RSM is easier than with ANN, as ANN needs a larger number of inputs than RSM for better predictions. ANN has excellent prediction and optimization abilities, while sensitivity analysis is more precise in RSM. RSM is recommended for modeling a new process, while ANN is best suited for nonlinear systems that include interactions of higher than quadratic order. Moreover, ANN does not require any prior specification of a suitable fitting function [

The structured nature of RSM allows the predicted quadratic equation to exhibit the factor contributions through the regression coefficients of the model.

This ability is robust in identifying the significant and insignificant terms in the model and hence can reduce the complexity of the models. However, the ANN presents a better alternative in modeling and prediction [

The higher predictive accuracy of the ANN is attributed to its ability to process multi-dimensional, non-linear and clustered information whereas RSM is limited to use of a second order polynomial. The generation of an optimum ANN is a multi-step calculation process, that is repeated until a desirable error is achieved whereas a response surface model is based on a single step calculation [

Since it had been established that the neural network was able to efficiently predict the temperature for the various experimental conditions, the final network with the optimum architecture was utilized for the optimization of the above three mentioned cases for the three discs. The optimization was executed by a grid search algorithm, exploring the region defined by the design limits of the two coded experimental input variables and dividing each factor range into 20 intervals. Therefore, a total of 20^{2} situations were evaluated, simulating the corresponding response factor of the neural network [
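The grid search described above can be sketched as follows; an analytic surface (`predict_temperature`, an illustrative assumption) stands in for the trained 2:10:1 network, and the evaluation simply scans all grid nodes of the coded design region for the maximum:

```python
import numpy as np

# Hypothetical stand-in for the trained network's prediction; in the study
# this would be the final ANN evaluated on coded (r, x/dj) inputs.
def predict_temperature(r_coded, x_coded):
    return 1400.0 * np.exp(-2.0 * (r_coded ** 2 + 0.5 * (x_coded - 0.2) ** 2))

# Divide each coded factor range [-1, 1] into 20 intervals (21 grid lines).
grid = np.linspace(-1.0, 1.0, 21)
R, Xg = np.meshgrid(grid, grid)
T = predict_temperature(R, Xg)   # simulate the response at every grid node

k = np.argmax(T)                 # locate the predicted maximum temperature
print(f"T_max = {T.flat[k]:.1f} at r = {R.flat[k]:+.2f}, x = {Xg.flat[k]:+.2f}")
```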

An artificial neural network model was successfully developed and compared with RSM to predict the temperature profiles of the three discs (D30, D60 & DFC) for three cases. A back-propagation ANN with the Levenberg-Marquardt training algorithm was used to train on the data from the experimental laboratory testing. The study outcome demonstrated that both statistical and computational intelligence modeling can offer a potential alternative to time-consuming experimental studies, in addition to minimizing costly test trials. The main conclusions obtained in this study are as follows:

1) The prediction results of the neural network model, which has 10 neurons in its hidden layer, were found to be in good agreement with the experimental data.

2) The systematic comparative study revealed that the properly trained ANN model consistently performed more accurate predictions than RSM in all aspects. The distribution of data points for the neural network model is similar and close to the actual experimental data, with a correlation coefficient (R) in the range of 0.9 - 1.0. This indicates that the developed neural network model is capable of making predictions with good accuracy, expressed in the very high values of R^{2} and F ratios and the very low values of the error indicators for the ANN results compared to the RSM ones.

3) The neural network is a powerful tool and is easy to use for complex or non-linear problems. This confirms that the ANN model displays a significantly higher generalization capacity than the RSM models. The reason can be credited to the universal ability of ANN to approximate the nonlinearity of the system. The ANN predictive ability proved better than that of RSM, and it can be concluded that ANN provides a more accurate replacement for RSM.

The authors declare no conflicts of interest regarding the publication of this paper.

Gendy, T.S., Ghoneim, S.A. and Zakhary, A.S. (2020) Comparative Appraisal of Response Surface Methodology and Artificial Neural Network Method for Stabilized Turbulent Confined Jet Diffusion Flames Using Bluff-Body Burners. World Journal of Engineering and Technology, 8, 121-143. https://doi.org/10.4236/wjet.2020.81011