Short-Term Financial Time Series Forecasting Integrating Principal Component Analysis and Independent Component Analysis with Support Vector Regression


*Journal of Computer and Communications*, **6**, 51-67. doi: 10.4236/jcc.2018.63004.

1. Introduction

Financial time series forecasting has attracted considerable attention from both individual and institutional investors, because accurate forecasts directly inform investment decisions. The field is characterized by data intensity, noise, non-stationarity, an unstructured nature, a high degree of uncertainty, and hidden relationships [1]. Capital market trends depend on many factors, including political events, general economic conditions, news related to the stocks and traders' expectations. Moreover, according to academic investigations, movements in market prices are not random; rather, they behave in a highly non-linear, dynamic manner [2]. Predicting stock market prices is therefore a challenging task.

Technical analysis is a popular approach to studying capital market patterns and movements. The results of technical analysis may be a short- or long-term forecast based on recurring patterns; this approach assumes that stock prices move in trends, and that the information which affects prices enters the market over a finite period of time rather than instantaneously [3]. The technical indicators used in this analysis are calculated from historical trading data. Researchers use various machine learning and artificial intelligence approaches to analyze these technical indicators and predict future trends or prices. Traditional statistical models include the Box-Jenkins ARIMA [4]. Continuing research has introduced plentiful approaches including artificial neural networks (ANN), genetic algorithms, rough set (RS) theory, fuzzy logic and others [5] [6]. Most of these approaches suffer from problems such as over-fitting or under-fitting, the need to initialize a large number of control parameters, and difficulty in finding optimal solutions. To resolve most of these shortcomings, support vector regression (SVR) has been widely used in various nonlinear regression tasks, largely because SVR applies the structural risk minimization principle for function estimation, whereas traditional methods implement the empirical risk minimization principle. The successful application of SVR to various time series problems [7] [8] [9] has encouraged its adoption in financial time series forecasting [10] [11] [12] [13].

The first important step in developing an SVR-based forecasting model is feature extraction (transforming the original features into new ones) and feature selection (choosing the most influential set of features). Principal component analysis (PCA) is a widely applied feature extraction method in the framework of SVR [14] [15]. PCA transforms high-dimensional input vectors into uncorrelated principal components (PCs) by calculating the eigenvectors of the covariance matrix of the original inputs. Moreover, the latent noise residing in financial time series data often leads to over-fitting or under-fitting and hence impairs the performance of a forecasting system. The use of independent component analysis (ICA), both linear and non-linear, with SVR has been proposed to negate the influence of such noise and improve forecasting accuracy [16] [17]. In both approaches, ICA was first used to extract the most influential components from the technical indicators, which were then fed to SVR for prediction. ICA is a signal processing technique originally developed for blind source separation; it attempts to obtain statistically independent components (ICs) from the transformed vectors. Cao et al. have shown that both PCA and ICA can improve the performance of SVR in time series forecasting [18], which motivated this research work to adopt PCA and ICA with SVR for predicting future stock prices.

In this paper, an SVR-based forecasting model integrating both PCA and ICA is developed to improve the prediction accuracy for stock prices, because even a small improvement in this performance can have a significant influence on investment decisions. Since technical analysis plays a vital role in forecasting, it is conducted to calculate technical indicators as the input features. PCA is then used to extract the influential components from the input features, which are filtered to transform the high-dimensional input into low-dimensional features. After that, ICA is applied to convert the reduced features into independent components. The SVR finally uses the filtered and transformed low-dimensional input variables to construct the forecasting model and predict stock prices 1 to 4 days in advance. The predictive performance of the proposed approach is compared with three traditional approaches: the integration of PCA with SVR (PCA-SVR), the integration of ICA with SVR (ICA-SVR) and single SVR.

The remainder of this paper is organized into six sections. Section 2 provides a brief overview of the methodologies used in this study, which include PCA, ICA and SVR. Section 3 introduces the proposed method. Section 4 describes the research data. Section 5 reports the experimental results obtained from the study. Finally, Section 6 contains the concluding remarks.

2. Methodology

2.1. Principal Component Analysis (PCA)

Principal component analysis (PCA), invented by Karl Pearson [19], is a well-known statistical procedure for feature extraction. It finds a smaller number of uncorrelated components from high-dimensional original inputs by calculating the eigenvectors of the covariance matrix. Given a set of m-dimensional input vectors
${x}_{i}={\left({x}_{i}\left(1\right),{x}_{i}\left(2\right),\cdots ,{x}_{i}\left(m\right)\right)}^{\text{T}}$ where
$i=1,2,\cdots ,n$ , PCA transforms x_{i} into a new vector y_{i} by:

${y}_{i}={U}^{\text{T}}{x}_{i}$ (1)

where U is the m × m orthogonal matrix whose jth column u_{j} is the jth eigenvector of the sample covariance matrix
$C=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{x}_{i}{x}_{i}^{\text{T}}}$ . In other words, PCA solves the eigenvalue problem of Equation (2).

${\lambda}_{j}{u}_{j}=C{u}_{j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,m$ (2)

where
${\lambda}_{j}$ is one of the eigenvalues of C. u_{j} is the corresponding eigenvector. Based on the estimated u_{j}, the components of y_{i} are calculated as the orthogonal transformation of x_{i}:

${y}_{i}\left(j\right)={u}_{j}^{\text{T}}{x}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,m$ (3)

The new components are called principal components. By retaining only the first several eigenvectors, sorted in descending order of their eigenvalues, the number of principal components in y_{i} can be reduced [20]. Thus, PCA can be used to reduce dimensionality, where the principal components are uncorrelated and have sequentially maximal variances.
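The transformation in Equations (1)-(3) can be sketched in a few lines of NumPy. This is an illustrative fragment, not the authors' code; it centers the data so that the matrix C below matches the sample covariance used in Equation (2):

```python
import numpy as np

def pca(X, k):
    """Project n samples of dimension m (the rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)            # center so that C is the sample covariance matrix
    C = (Xc.T @ Xc) / Xc.shape[0]      # m x m covariance matrix, as in Eq. (2)
    eigvals, U = np.linalg.eigh(C)     # eigh: C is symmetric; eigenvalues come back ascending
    order = np.argsort(eigvals)[::-1]  # re-sort in descending order of eigenvalue
    U = U[:, order[:k]]                # keep the k leading eigenvectors u_j
    return Xc @ U                      # y_i = U^T x_i for every sample, Eq. (1)
```

Applied to data, the resulting columns are uncorrelated and their variances (the leading eigenvalues) decrease from left to right, which is what makes truncating to the first k components a principled dimensionality reduction.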

2.2. Independent Component Analysis (ICA)

ICA is a signal processing technique that recovers mutually independent but unknown source signals from their mixture, without any prior knowledge of the mixing mechanism. Let
$X={\left[{x}_{1},{x}_{2},\cdots ,{x}_{m}\right]}^{\text{T}}$ be an m × n matrix whose rows x_{i} of size 1 × n,
$i=1,2,\cdots ,m$ , are the observed mixed signals. In basic ICA, the mixing model can be written as [21]:

$X=AS={\displaystyle {\sum}_{i=1}^{m}{a}_{i}{s}_{i}}$ (4)

where a_{i} is the ith column of the m × m unknown mixing matrix A and s_{i} is the ith row of the m × n source matrix S. The goal of ICA is to estimate an un-mixing matrix W of size m × m that transforms the observed mixed signals X to yield the independent signals Y such that

$Y=\left[{y}_{i}\right]=WX$ (5)

where y_{i} is the ith row of the matrix Y,
$i=1,2,\cdots ,m$ . The rows of Y are called the independent components (ICs) and are required to be statistically as independent as possible. Here, statistical independence means that the joint probability density of the components of Y equals the product of the marginal densities of the individual components. If the un-mixing matrix W is the inverse of the original mixing matrix A, i.e. W = A^{−1}, the latent source signals (s_{i}) can be estimated from the ICs (y_{i}). For the identifiability of Equation (5), one fundamental requirement is that all the ICs, with the possible exception of one component, must be non-Gaussian.

Several algorithms have been developed to perform ICA modeling [22] [23] [24] [25] [26]. The FastICA algorithm proposed in [27] is adopted in this research work, with mutual information used as the criterion to estimate Y. Minimizing the mutual information between components is equivalent to maximizing their negentropy. Negentropy is always non-negative and is zero if and only if y has a Gaussian distribution. In the FastICA algorithm, the negentropy is approximated using the following contrast function:

${J}_{G}\left(y\right)\approx {\left[E\left\{G\left(y\right)\right\}-E\left\{G\left(v\right)\right\}\right]}^{2}$ (6)

where v is a standardized Gaussian variable and G is a non-quadratic function. G is given by $G\left(y\right)=\mathrm{exp}\left(-{y}^{2}/2\right)$ in this study.

Two preprocessing steps, centering and whitening, are applied to the input matrix X to simplify the FastICA algorithm [27]. First, X is made zero-mean by subtracting its mean, i.e.
${x}_{i}\leftarrow \left({x}_{i}-E\left({x}_{i}\right)\right)$ . The second step whitens X by passing it through a whitening matrix V, i.e. Z = VX. The rows of the whitened variable Z, denoted by z, are uncorrelated and have unit variance, i.e. E{zz^{T}} = I.
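These two preprocessing steps can be sketched in NumPy. The fragment below is illustrative only; it builds V from the eigendecomposition of the data covariance (V = C^{−1/2}, one common choice of whitening matrix, not necessarily the exact one used in [27]):

```python
import numpy as np

def center_and_whiten(X):
    """X: m x n matrix whose rows are the observed mixed signals.
    Returns Z = V(X - mean) with E{zz^T} = I (uncorrelated, unit-variance rows)."""
    Xc = X - X.mean(axis=1, keepdims=True)   # centering: subtract each row's mean
    C = (Xc @ Xc.T) / Xc.shape[1]            # m x m covariance of the rows
    d, E = np.linalg.eigh(C)                 # C = E diag(d) E^T (C symmetric positive definite)
    V = E @ np.diag(d ** -0.5) @ E.T         # whitening matrix V = C^{-1/2}
    return V @ Xc                            # Z = V X
```

After this step the mixing problem reduces to finding an orthogonal rotation, which is what makes the subsequent FastICA iterations simpler and faster.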

2.3. Support Vector Regression (SVR)

SVR extends the basic principles of Vapnik's support vector machine (SVM) for classification [28] by introducing a margin of tolerance in the approximation: deviations up to the threshold ε incur zero error. Given a training set $\left({x}_{i},{y}_{i}\right),i=1,2,\cdots ,n$ , where ${x}_{i}\in {R}^{m}$ is the m-dimensional input vector and ${y}_{i}\in R$ is the response variable, SVR generates a linear regression function of the form:

$f\left(x,w\right)={w}^{\text{T}}x+b$ (7)

Vapnik’s linear ε-Insensitivity loss (error) function is:

${\left|y-f\left(x,w\right)\right|}_{\epsilon}=\begin{cases}0, & \text{if}\ \left|y-f\left(x,w\right)\right|\le \epsilon \\ \left|y-f\left(x,w\right)\right|-\epsilon , & \text{otherwise}\end{cases}$ (8)
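Equation (8) translates directly into code; a minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def eps_insensitive_loss(y, f, eps):
    """Vapnik's linear eps-insensitive loss of Eq. (8): zero inside the
    eps-tube around the target, linear outside it. Accepts scalars or arrays."""
    return np.maximum(np.abs(y - f) - eps, 0.0)
```

For example, with ε = 0.1 a prediction of 1.05 against a target of 1.0 costs nothing, while a prediction of 1.3 costs 0.2.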

Based on this, the linear regression $f\left(x,w\right)$ is estimated by simultaneously minimizing ${\Vert w\Vert}^{2}$ and the sum of the linear ε-insensitive losses, as shown in Equation (9). The constant c, which controls the trade-off between the approximation error and the weight vector norm $\Vert w\Vert $ , is a design parameter chosen by the user.

$R=\frac{1}{2}{\Vert w\Vert}^{2}+c\left({\displaystyle {\sum}_{i=1}^{n}{\left|y-f\left(x,w\right)\right|}_{\epsilon}}\right)$ (9)

Minimizing the risk R is equivalent to minimizing the following risk under the constraints mentioned in Equations (11)-(13).

$R=\frac{1}{2}{\Vert w\Vert}^{2}+c{\displaystyle {\sum}_{i=1}^{n}\left({\xi}_{i}+{\xi}_{i}^{*}\right)}$ (10)

$\left({w}^{\text{T}}{x}_{i}+b\right)-{y}_{i}\le \epsilon +{\xi}_{i}$ (11)

${y}_{i}-\left({w}^{\text{T}}{x}_{i}+b\right)\le \epsilon +{\xi}_{i}^{*}$ (12)

${\xi}_{i},{\xi}_{i}^{*}\ge 0,\text{\hspace{0.17em}}i=1,2,\cdots ,n$ (13)

Here, ξ_{i} and
${\xi}_{i}^{*}$ are slack variables, one for exceeding the target value by more than ε and the other for falling more than ε below the target. As in SVM, the above constrained optimization problem is solved using Lagrangian theory and the Karush-Kuhn-Tucker conditions to obtain the desired weight vector of the regression function.

For nonlinear regression, SVR maps the input vectors ${x}_{i}\in {R}^{m}$ into a high-dimensional feature space $\varphi \left({x}_{i}\right)\in H$ . A kernel function $K\left({x}_{i},{x}_{j}\right)$ computes the inner products in that space without evaluating the mapping ϕ(.) explicitly. The kernel used in this study is the popular radial basis function (RBF), shown in Equation (14).

$K\left({x}_{i},{x}_{j}\right)=\mathrm{exp}\left(-\gamma {\Vert {x}_{i}-{x}_{j}\Vert}^{2}\right)$ (14)

where γ is the parameter of the kernel function. The RBF kernel parameter γ and the regularization constant C are the design parameters of SVR.
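In practice, an SVR with RBF kernel can be fitted with an off-the-shelf library. The sketch below uses scikit-learn's `SVR` (assumed available); the toy sine data and the parameter values C, γ and ε are illustrative, not those used in the study:

```python
import numpy as np
from sklearn.svm import SVR

# Toy regression problem: recover a noisy sine curve from its inputs.
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 2 * np.pi, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# C: regularization constant, gamma: RBF parameter of Eq. (14),
# epsilon: half-width of the insensitive tube of Eq. (8).
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.05)
model.fit(X, y)
pred = model.predict(X)
```

The same three design parameters (C, γ, ε) are what the grid search in Section 3 tunes.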

3. Proposed PCA-ICA-SVR Forecasting Model

The three-stage methodology, named PCA-ICA-SVR, proposed in this research is depicted in Figure 1. In the first stage, PCA is applied to the input data to extract features, which are then reduced to a low-dimensional feature space. ICA is then applied to this reduced feature space to extract independent components. Finally, these independent components are used in the SVR to construct the forecasting model.

First of all, technical analysis is conducted on the dataset and 29 technical indicators (TIs) are calculated that are being used by financial experts [3] . Some important technical indicators and their formulas are shown in Table 1. All values of these constructed features are scaled into the range of [0, 1] to eliminate the biasness towards larger value attributes. Then PCA is applied to the normalized data to extract the PCs containing the most influential information. These PCs are filtered according to the corresponding variance and thus the irrelevant features are discarded to construct a reduced feature space. The ICA model is then used in the low-dimensional data to estimate ICs containing the hidden and effective information of the prediction variables. Finally, the ICs are used as input variables to construct the SVR stock price forecasting model.
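The three stages can be chained with scikit-learn, which provides PCA, FastICA and SVR implementations. The following schematic sketch uses synthetic stand-in data and illustrative parameter choices (the 29 input columns mirror the number of technical indicators, and 10 retained components mirror the choice reported in Section 5); it is not the authors' code:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 29))                               # stand-in for 29 technical indicators
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=500)   # synthetic target prices

X = MinMaxScaler().fit_transform(X)                          # scale features into [0, 1]
pcs = PCA(n_components=10).fit_transform(X)                  # stage 1: filtered principal components
ics = FastICA(n_components=10, random_state=1).fit_transform(pcs)  # stage 2: independent components
model = SVR(kernel="rbf", C=100.0, gamma=0.1).fit(ics, y)    # stage 3: final forecaster
pred = model.predict(ics)
```

In a real deployment the scaler, PCA, ICA and SVR would be fitted on the training window only and then applied unchanged to the held-out test window.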

Figure 1. Proposed prediction framework.

Table 1. Important technical indicators and their formulas.

As mentioned in Section 2.3, the RBF kernel function is adopted in this study because it is the most widely used and best performing kernel function for forecasting. However, the performance of SVR is highly sensitive to the selection of the parameters γ and C. A popular method to select the best values of these parameters is the grid search approach with cross-validation [29]. This is a straightforward method of trying geometric sequences to find the best (C, γ) value pair. The (C, γ) pair generating the minimum mean absolute percentage error (MAPE) is considered the best parameter setting. A complete grid search is time-consuming; therefore, a coarse grid is used first to identify a promising region, and a finer grid search is then conducted on that region.
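The coarse-then-fine search can be sketched with scikit-learn's `GridSearchCV` (assumed available). The toy data, the grid spacings and the scoring choice below are illustrative; the grids use powers of e, matching the geometric search space described in Section 5:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(300, 4))
y = 5.0 + X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.05, size=300)

# Coarse grid over widely spaced exponents, scored by negative MAPE with 5-fold CV.
exps = np.exp(np.arange(-5, 11, 3.0))
coarse = GridSearchCV(
    SVR(kernel="rbf"), {"C": exps, "gamma": exps},
    scoring="neg_mean_absolute_percentage_error", cv=5,
).fit(X, y)

# Finer grid of exponents around the best coarse (C, gamma) pair.
c0 = np.log(coarse.best_params_["C"])
g0 = np.log(coarse.best_params_["gamma"])
fine = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": np.exp(np.arange(c0 - 1, c0 + 1.01, 0.25)),
     "gamma": np.exp(np.arange(g0 - 1, g0 + 1.01, 0.25))},
    scoring="neg_mean_absolute_percentage_error", cv=5,
).fit(X, y)
```

Because the fine grid contains the coarse optimum, its best cross-validation score can only match or improve on the coarse result.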

Mean absolute percentage error (MAPE), mean absolute error (MAE), relative root mean squared error (rRMSE) and mean squared error (MSE) are used to evaluate the performance of the proposed model. The formulas of these evaluation measures are shown in Equations (15)-(18) [30]. They measure the deviation between actual and predicted prices: the smaller their values, the closer the predicted prices are to the actual prices.

$\text{MAPE}=\frac{1}{n}{\displaystyle {\sum}_{t=1}^{n}\frac{\left|{A}_{t}-{F}_{t}\right|}{\left|{A}_{t}\right|}\times 100}$ (15)

$\text{MAE}=\frac{1}{n}{\displaystyle {\sum}_{t=1}^{n}\left|{A}_{t}-{F}_{t}\right|}$ (16)

$\text{rRMSE}=\sqrt{\frac{1}{n}{\displaystyle {\sum}_{t=1}^{n}{\left(\frac{{A}_{t}-{F}_{t}}{{A}_{t}}\right)}^{2}}}\times 100$ (17)

$\text{MSE}=\frac{1}{n}{\displaystyle {\sum}_{t=1}^{n}{\left({A}_{t}-{F}_{t}\right)}^{2}}$ (18)

where A_{t} is the actual value, F_{t} is the predicted value and n is the total number of data points.
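These four measures translate directly into NumPy. The sketch below is illustrative; it takes MAE in its standard absolute form and applies the ×100 factor of rRMSE outside the square root:

```python
import numpy as np

def mape(a, f):
    """Mean absolute percentage error, Eq. (15)."""
    return np.mean(np.abs(a - f) / np.abs(a)) * 100

def mae(a, f):
    """Mean absolute error, Eq. (16)."""
    return np.mean(np.abs(a - f))

def rrmse(a, f):
    """Relative root mean squared error, Eq. (17)."""
    return np.sqrt(np.mean(((a - f) / a) ** 2)) * 100

def mse(a, f):
    """Mean squared error, Eq. (18)."""
    return np.mean((a - f) ** 2)
```

For example, actual prices [100, 200] predicted as [110, 190] give a MAPE of 7.5% and an MAE of 10.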

4. Research Data

To conduct the study and evaluate the performance of the proposed approach, 16 years of historical daily transaction data for the period from January 2000 to December 2015 are collected from the Dhaka Stock Exchange, Bangladesh (http://www.dsebd.org/). The data cover 3600 trading days, and each record comprises five attributes: open price, high price, low price, close price and trade volume. We consider three companies from three different sectors, Square Pharmaceuticals Limited, AB Bank Limited and Bangladesh Lamps Limited, as these are among the most prominent stocks on the DSE. The daily closing prices of these companies are shown in Figures 2-4, respectively. The first is a leading company in the pharmaceuticals sector, the second leads the banking sector and the last belongs to the engineering sector. 70% of the total sample points (around 2520 trading days) are used as the training sample, and the remaining 30% (around 1080 trading days) are held out as the testing sample.
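Because the data are a time series, the 70/30 split is chronological rather than shuffled; a small illustrative helper:

```python
import numpy as np

def chronological_split(data, train_frac=0.7):
    """Split a time-ordered array into train/test sets without shuffling,
    so the test period strictly follows the training period."""
    cut = round(len(data) * train_frac)
    return data[:cut], data[cut:]
```

With 3600 trading days and a 0.7 fraction, this yields 2520 training days followed by 1080 held-out days.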

5. Experimental Results

Figure 2. Closing prices of Square Pharmaceuticals Limited.

Figure 3. Closing prices of AB Bank Limited.

Figure 4. Closing prices of Bangladesh Lamps Limited.

The principal component analysis of the original data shows that the first 10 components contribute over 98% of the cumulative variance and thus provide most of the information. Figure 5 shows the cumulative variance contribution of the principal components for AB Bank Limited; similar results are obtained for the other cases (not shown here). Hence, the first 10 PCs are selected to form the low-dimensional input variables for all three companies. ICA is then applied to these 10 PCs, and the estimated ICs are taken as the final input variables for SVR.
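Choosing the number of retained components by cumulative variance contribution can be sketched as follows; this is an illustrative helper, with the 98% threshold mirroring the criterion described above:

```python
import numpy as np

def n_components_for(explained_variance, threshold=0.98):
    """Smallest k such that the k largest variance contributions
    (e.g. PCA eigenvalues) together explain at least `threshold`
    of the total variance."""
    ratios = np.asarray(explained_variance) / np.sum(explained_variance)
    cumulative = np.cumsum(np.sort(ratios)[::-1])  # descending, then accumulate
    return int(np.searchsorted(cumulative, threshold) + 1)
```

Fed with the eigenvalue spectrum of the indicator covariance matrix, this returns the smallest component count crossing the chosen variance threshold.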

In this study, the radial basis function (RBF) is used as the kernel function of SVR. To find the best (C, γ) value pair, we consider e^{−5} to e^{10} for both parameters as the search space. For the data of Square Pharmaceuticals Limited, the coarse grid identified the best (C, γ) as (e^{9}, e^{3}) with a 5-fold cross-validation MAPE of 2.23%. A finer grid search in the neighborhood of (e^{9}, e^{3}) then produced a better cross-validation MAPE of 1.66% at (e^{9}, e^{2.8}). After the best (C, γ) is found, the SVR is retrained on the whole training set to generate the final model. The best (C, γ) pairs found by the grid search, i.e. those exhibiting the minimum prediction error for each prediction task, are shown in Table 2.

The forecasting results of the proposed PCA-ICA-SVR model are compared with the single SVR model, which uses all the original input variables; the PCA-SVR model, in which the filtered PCs are used as input variables by the SVR; and the ICA-SVR model, in which non-filtered ICs calculated from the original variables are used by the SVR.

In all the cases, closing prices of the target stock are predicted for 1, 2, 3, and 4 days in advance. Prediction results for Square Pharmaceuticals Limited are listed in Tables 3-6. Tables 7-10 illustrate the comparative results of price forecasting for AB Bank Limited. Tables 11-14 compare the performance of price forecasting for Bangladesh Lamps Limited.

It is evident from all the results that the proposed PCA-ICA-SVR model produces lower MAPE (%), MAE, MSE and rRMSE (%) values for all three target stocks. The integration of PCA and ICA improves the performance of SVR in most of the cases, and the proposed PCA-ICA-SVR model outperforms the other three compared methods. This corroborates that the proposed PCA-ICA-SVR approach generates lower prediction errors than the other three approaches. It can also be noticed that the forecasting performance of all approaches decreases as predictions are made further in advance, as may be expected of any prediction system.

Figure 5. Cumulative variance of PCs for AB Bank Limited.

Table 2. Grid search results for RBF kernel parameters.

Table 3. Prediction performance for 1 day ahead of Square Pharmaceuticals Limited.

Table 4. Prediction performance for 2 days ahead of Square Pharmaceuticals Limited.

Table 5. Prediction performance for 3 days ahead of Square Pharmaceuticals Limited.

Table 6. Prediction performance for 4 days ahead of Square Pharmaceuticals Limited.

Table 7. Prediction performance for 1 day ahead of AB Bank Limited.

Table 8. Prediction performance for 2 days ahead of AB Bank Limited.

Table 9. Prediction performance for 3 days ahead of AB Bank Limited.

Table 10. Prediction performance for 4 days ahead of AB Bank Limited.

Table 11. Prediction performance for 1 day ahead of Bangladesh Lamps Limited.

Table 12. Prediction performance for 2 days ahead of Bangladesh Lamps Limited.

Table 13. Prediction performance for 3 days ahead of Bangladesh Lamps Limited.

Table 14. Prediction performance for 4 days ahead of Bangladesh Lamps Limited.

The robustness of the proposed PCA-ICA-SVR method is evaluated by comparing its performance with the PCA-SVR, ICA-SVR and single SVR methods using different ratios of training and testing sample sizes. The performance is compared in terms of MAPE (%) and rRMSE (%) with four relative ratios: 60%, 70%, 80% and 90% of the complete dataset used as the training sample. Predictions are made for the closing price of the target stock for the next trading day. Table 15 summarizes the prediction results for Square Pharmaceuticals Limited, AB Bank Limited and Bangladesh Lamps Limited. Based on the findings in Table 15, the proposed PCA-ICA-SVR method outperforms the other three methods under all four relative ratios for all three target stocks. We therefore conclude that the PCA-ICA-SVR approach clearly produces less forecasting error than the other three approaches, which demonstrates the effectiveness of our proposal.

6. Conclusions

This paper proposes a price forecasting model integrating PCA and ICA with SVR for financial time series. The PCA-ICA-SVR model first uses PCA to extract the most influential components from the input features, in order to mitigate the over-fitting or under-fitting caused by the noisy nature of financial time series data. The filtered PCs are then processed by ICA to estimate ICs, which are finally used as input variables for an SVR with an RBF kernel. A grid search for the best kernel parameters is conducted to improve SVR's performance. The experiments evaluated 16 years of data for three prominent stocks from the Dhaka Stock Exchange, Bangladesh. The performance of the proposed model is compared with PCA-SVR, ICA-SVR and single SVR over short horizons (1 to 4 days) in terms of prediction error. Experimental results show that the proposed PCA-ICA-SVR model outperforms the three other methods by generating smaller prediction errors. The empirical results indicate that PCA and ICA, working together, can successfully unfold the influential information in the original data and improve the performance of SVR in stock price forecasting. As the proposed model predicts stock prices with less error, investors can use it to gain more profit or incur less loss in the stock market. The proposed approach can also be applied in other domains such as weather forecasting, energy consumption forecasting or GDP forecasting.

Table 15. Robustness evaluation of PCA-ICA-SVR, ICA-SVR, PCA-SVR and single SVR with different relative ratios for Square Pharmaceuticals Limited, AB Bank Limited and Bangladesh Lamps Limited.

Future research may integrate kernel PCA, non-linear ICA and other signal processing techniques such as wavelet transformation with SVR to further enhance forecasting performance. This study focuses mainly on short-term price prediction; its applicability to long-term forecasting could be investigated in future work, with appropriate methods integrated to enhance performance. Moreover, only price-related historical data are used here to predict future prices, yet it is well known that various other aspects, such as general economic conditions, government policies, company performance and investors' interest, also play vital roles in the stock market. In future work, these aspects could also be incorporated as input features, which may further improve prediction accuracy.

Nomenclature

ANN Artificial neural networks

ARIMA Autoregressive integrated moving average

CCI Commodity channel index

DSE Dhaka stock exchange

EMA Exponential moving average

GDP Gross domestic product

ICA Independent component analysis

IC Independent component

MACD Moving average convergence/divergence

MAE Mean absolute error

MAPE Mean absolute percentage error

MSE Mean squared error

PCA Principal component analysis

PC Principal component

RBF Radial basis function

ROC Rate-of-change

rRMSE Relative root mean squared error

RS Rough set

RSI Relative strength index

SMA Simple moving average

SVM Support vector machine

SVR Support vector regression

WMA Weighted moving average

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Abu-Mostafa, Y.S. and Atiya, A.F. (1996) Introduction to Financial Forecasting. Applied Intelligence, 6, 205-213. https://doi.org/10.1007/BF00126626

[2] Huang, W., Nakamori, Y. and Wang, S.-Y. (2005) Forecasting Stock Market Movement Direction with Support Vector Machine. Computers & Operations Research, 32, 2513-2522. https://doi.org/10.1016/j.cor.2004.03.016

[3] Kaufman, P.J. (1998) Trading Systems and Methods. John Wiley & Sons, Hoboken.

[4] Box, G.E.P., et al. (2015) Time Series Analysis: Forecasting and Control. John Wiley & Sons, Hoboken.

[5] Kim, K.-J. and Ingoo, H. (2000) Genetic Algorithms Approach to Feature Discretization in Artificial Neural Networks for the Prediction of Stock Price Index. Expert Systems with Applications, 19, 125-132. https://doi.org/10.1016/S0957-4174(00)00027-0

[6] Yao, J.T. and Herbert, J.P. (2009) Financial Time-Series Analysis with Rough Sets. Applied Soft Computing, 9, 1000-1007. https://doi.org/10.1016/j.asoc.2009.01.003

[7] Elattar, E.E., Goulermas, J. and Wu, Q.H. (2010) Electric Load Forecasting Based on Locally Weighted Support Vector Regression. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40, 438-447. https://doi.org/10.1109/TSMCC.2010.2040176

[8] Hong, W.-C., et al. (2011) Hybrid Evolutionary Algorithms in a SVR Traffic Flow Forecasting Model. Applied Mathematics and Computation, 217, 6733-6747. https://doi.org/10.1016/j.amc.2011.01.073

[9] Hong, W.-C., et al. (2011) Forecasting Urban Traffic Flow by SVR with Continuous ACO. Applied Mathematical Modelling, 35, 1282-1291. https://doi.org/10.1016/j.apm.2010.09.005

[10] Trafalis, T.B. and Huseyin, I. (2000) Support Vector Machine for Regression and Applications to Financial Forecasting. Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN '00), Washington DC, 24-27 July 2000, 6348. https://doi.org/10.1109/IJCNN.2000.859420

[11] Tay, F.E.H. and Cao, L.J. (2001) Application of Support Vector Machines in Financial Time Series Forecasting. Omega, 29, 309-317. https://doi.org/10.1016/S0305-0483(01)00026-3

[12] Cao, L.-J. and Tay, F.E.H. (2003) Support Vector Machine with Adaptive Parameters in Financial Time Series Forecasting. IEEE Transactions on Neural Networks, 14, 1506-1518. https://doi.org/10.1109/TNN.2003.820556

[13] Cao, L.J. (2003) Support Vector Machines Experts for Time Series Forecasting. Neurocomputing, 51, 321-339. https://doi.org/10.1016/S0925-2312(02)00577-5

[14] Yu, H.H., Chen, R.D. and Zhang, G.P. (2014) A SVM Stock Selection Model within PCA. Procedia Computer Science, 31, 406-412. https://doi.org/10.1016/j.procs.2014.05.284

[15] Chowdhury, U.N., et al. (2017) Integration of Principal Component Analysis and Support Vector Regression for Financial Time Series Forecasting. International Journal of Computer Science and Information Security (IJCSIS), 15, No. 8.

[16] Lu, C.-J., Lee, T.-S. and Chiu, C.-C. (2009) Financial Time Series Forecasting Using Independent Component Analysis and Support Vector Regression. Decision Support Systems, 47, 115-125. https://doi.org/10.1016/j.dss.2009.02.001

[17] Kao, L.-J., et al. (2013) Integration of Nonlinear Independent Component Analysis and Support Vector Regression for Stock Price Forecasting. Neurocomputing, 99, 534-542. https://doi.org/10.1016/j.neucom.2012.06.037

[18] Cao, L.J., et al. (2003) A Comparison of PCA, KPCA and ICA for Dimensionality Reduction in Support Vector Machine. Neurocomputing, 55, 321-336. https://doi.org/10.1016/S0925-2312(03)00433-8

[19] Pearson, K. (1901) LIII. On Lines and Planes of Closest Fit to Systems of Points in Space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2, 559-572.

[20] Jolliffe, I. (2002) Principal Component Analysis. John Wiley & Sons, Ltd., Hoboken.

[21] Hyvarinen, A. and Oja, E. (2000) Independent Component Analysis: Algorithms and Applications. Neural Networks, 13, 411-430. https://doi.org/10.1016/S0893-6080(00)00026-5

[22] Hyvarinen, A., Karhunen, J. and Oja, E. (2004) Independent Component Analysis. Vol. 46, John Wiley & Sons, Hoboken.

[23] Bell, A.J. and Sejnowski, T.J. (1995) An Information-Maximization Approach to Blind Separation and Blind Deconvolution. Neural Computation, 7, 1129-1159. https://doi.org/10.1162/neco.1995.7.6.1129

[24] Girolami, M. and Fyfe, C. (1997) Generalised Independent Component Analysis through Unsupervised Learning with Emergent Bussgang Properties. International Conference on Neural Networks, Houston, 12 June 1997, Vol. 3, 1788-1891. https://doi.org/10.1109/ICNN.1997.614168

[25] Karhunen, J., et al. (1997) A Class of Neural Networks for Independent Component Analysis. IEEE Transactions on Neural Networks, 8, 486-504. https://doi.org/10.1109/72.572090

[26] Giannakopoulos, X., Karhunen, J. and Oja, E. (1999) An Experimental Comparison of Neural Algorithms for Independent Component Analysis and Blind Separation. International Journal of Neural Systems, 9, 99-114. https://doi.org/10.1142/S0129065799000101

[27] Hyvarinen, A. (1999) Fast and Robust Fixed-Point Algorithms for Independent Component Analysis. IEEE Transactions on Neural Networks, 10, 626-634. https://doi.org/10.1109/72.761722

[28] Cortes, C. and Vapnik, V. (1995) Support-Vector Networks. Machine Learning, 20, 273-297. https://doi.org/10.1007/BF00994018

[29] Hsu, C.-W., Chang, C.-C. and Lin, C.-J. (2003) A Practical Guide to Support Vector Classification. National Taiwan University, Taipei, 106.

[30] Patel, J., et al. (2015) Predicting Stock Market Index Using Fusion of Machine Learning Techniques. Expert Systems with Applications, 42, 2162-2172. https://doi.org/10.1016/j.eswa.2014.10.031

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.