Prediction of the Price of Advanced Global Stock Markets Using Machine Learning: Comparative Analysis

Abstract

This paper seeks to forecast the daily closing prices of advanced global stock markets by employing machine learning techniques. It includes a comparative analysis of four major indices: TASI, the S&P 500, the FTSE 100, and the DAX Price Index. Historical data from these indices were used to train several machine learning models: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and a hybrid LSTM-GRU model. The models were assessed on their predictive accuracy, mean squared error (MSE), and computational efficiency. The paper emphasizes the power of machine learning in forecasting daily closing prices and highlights the importance of choosing suitable models for varying market conditions. The analysis offers valuable insights into the development of more reliable forecasting tools, which can assist in risk management and strategic investment decisions.

Share and Cite:

Alharbi, M. (2024) Prediction of the Price of Advanced Global Stock Markets Using Machine Learning: Comparative Analysis. Journal of Financial Risk Management, 13, 689-702. doi: 10.4236/jfrm.2024.134033.

1. Introduction

Global financial markets exhibit inherent volatility in their daily closing prices, which are shaped by a multitude of factors such as economic data, geopolitical events, interest rates, and investor sentiment. This volatility presents considerable difficulties for investors and financial experts who aim to forecast future price fluctuations and reduce risk. Recently, machine learning (ML) methods have garnered interest because of their capacity to enhance the precision of stock market trend prediction. Machine learning algorithms go beyond conventional statistical approaches by capturing intricate, nonlinear relationships within extensive data sets, thereby enabling more advanced analyses and predictions. The objective of this work is to provide a comparative examination of machine learning models for predicting prices in global stock markets, and to determine the most efficient strategies for handling the unpredictability and variation in those markets by assessing the performance of LSTM and GRU models. The article is organized into five sections: introduction, literature review, core principles of Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), results and discussion, and conclusion.

2. Literature Review

The prediction of stock returns has been a subject of longstanding interest among researchers. Investors seek to predict stock trends to ascertain the optimal moments for purchasing, selling, or retaining shares. Stock trend forecasting integrates qualitative and quantitative analysis techniques. There are several statistical models used for this objective, and the selection of the appropriate model relies on the characteristics of the data (Shah et al., 2019). Contemporary progress in stock market prediction utilizes machine learning methods to predict stock prices by training models on historical data of stock market indices. The present work assesses several machine learning algorithms in order to ascertain the optimal approaches for forecasting stock prices. More precisely, it evaluates the statistical effectiveness of four different algorithms: Linear Regression, Long Short-Term Memory (LSTM), k-nearest Neighbours (k-NN), and FB-Prophet. The primary variables taken into account in the analysis are the opening price, closing price, highest price, date, and previous price. Furthermore, this work investigates the pragmatic implementation of these prediction algorithms in actual situations and tackles obstacles associated with precision in predicting stock prices. Furthermore, it presents a machine learning algorithm for forecasting the enduring viability of equities in a public market. Ensuring precise stock forecasts can yield substantial benefits for stock market institutions and provide pragmatic resolutions to the obstacles encountered by investors, thereby augmenting decision-making and strategy formulation in the financial industry (Sravani et al., 2021). A novel hybrid time series model is presented in this paper, which combines an Adaptive Network-Based Fuzzy Inference System (ANFIS) with Empirical Mode Decomposition (EMD) for the purpose of forecasting stock prices in the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the Hang Seng Stock Index (HSI).

Comparative analysis is conducted to assess the predictive performance of the model in relation to Chen’s model, Yu’s model, the autoregressive (AR) model, the conventional ANFIS model, and the support vector regression (SVR) model.

Evidenced by lower root mean squared error (RMSE) values, the results demonstrate that the suggested model surpasses the others (Wei, 2016). Precise prediction of hospitalisation patient volumes is essential for efficient hospital administration. These figures are contingent upon three categories of patient flows: admissions, discharges, and newly admitted patients. Prior investigations have generally concentrated on predicting a single form of patient flow alone, rather than taking into account all three concurrently. This work introduces an innovative multi-task forecasting algorithm that provides simultaneous predictions for all three patient types. The proposed methodology incorporates three neural network modules into a cohesive architecture in order to augment the forecasting capabilities. The authors also extract temporal characteristics associated with the admission and discharge flows in order to enhance the accuracy of the model. The model is trained and evaluated using an authentic dataset consisting of daily quantities of patients recorded over a period of one year in a hospital environment. The study assesses the efficacy of the model in comparison to eight baseline algorithms on two concrete datasets. The experimental findings clearly show that the model surpasses existing baselines by a substantial margin, providing a more complete and precise tool for hospital patient prediction (Zhou et al., 2023). This work presents innovative forecasting models for fractional-order chaotic oscillators, including Duffing, Van der Pol, Tamaševičius, and Chua oscillators, as implemented through feedforward neural networks. The algorithms forecast variations in state values by using a weighted correlation with the oscillator states, therefore proving to be efficient for out-of-sample prediction. The neural network-based weighted models presented for these oscillators provide superior performance in terms of overall mean square error (MSE) compared to current models in the majority of instances. The investigation indicates that the suggested models offer the most effective weights for minimizing the MSE and exhibit exceptional tracking capability in forecasting future state values (Bingi & Prusty, 2021). Considering the great volatility of daily closing prices in the stock market, artificial intelligence can be efficiently employed to generate well-informed forecasts prior to making investment choices. This work presents a comprehensive examination of artificial intelligence and machine learning as instruments for forecasting stock market trends, investigating their advantages, constraints, possibilities, and difficulties. Furthermore, it analyses the pragmatic uses of three machine learning methodologies, namely artificial neural networks, support vector machines, and long short-term memory (LSTM), in predicting stock market patterns (Chhajer et al., 2022). Stock price forecasting is a challenging endeavour due to the multitude of variables involved, but machine learning and other techniques provide promising solutions. The present work provides a concise summary of stock markets and categorises several prediction approaches. This paper examines notable research progress in stock analysis and prediction, investigates technical and fundamental approaches for forecasting under both short-term and long-term time frames, and emphasizes the obstacles and prospects for further study in this field (Lin et al., 2022).
The study introduces effective methods for predicting stock market trends, detailing the systematic approach employed in collecting the findings. This paper presents a comprehensive evaluation of several algorithms used for predicting stock prices. The findings are presented in tables and graphs to enhance clarity. The research findings indicate that deep learning (DL) algorithms exhibit superior performance compared to other approaches, providing very precise forecasts for stock prices and time series data (Bansal et al., 2022).

Forecasting stock market movements is difficult because of the presence of chaotic and unpredictable data, as well as intricate sources of influence such as earnings releases and market news. Despite the challenges posed by traditional approaches, machine learning (ML) and deep learning (DL) algorithms have demonstrated the potential to enhance the accuracy of predictions. The present study conducts a comprehensive literature analysis spanning the last twenty years, classifies research according to machine learning (ML) and deep learning (DL) methodologies, and assesses the efficacy of different predictive algorithms (Rath et al., 2024). The importance of stock market forecasts lies in their significant influence on the worldwide economy, particularly for prominent corporations such as Amazon, Apple, Microsoft, and Google. This paper presents the application of the LSTM network to predict stock prices for the next two months by using past stock data. Analyzing historical price trends and patterns, the LSTM model is specifically developed to capture intricate relationships in the time series data, hence enhancing its effectiveness in forecasting future stock movements. This methodology enables a more precise prediction of prices within a limited time frame, particularly focusing on a two-month timeframe. The predictive accuracy of the model in forecasting stock prices using past values provides significant information for traders and investors aiming to make well-informed selections.

The enhanced prediction accuracy of LSTM is attributed to its capacity to preserve pertinent information and eliminate noise. The findings suggest that Microsoft’s projected closing price attains its peak value at $374.736 (Maharani et al., 2024). The dynamic nature of stock market trends makes forecasting them a tough task. Conventional models such as AR, ARMA, and ARIMA face difficulties in handling intricate and uncertain data, while deep neural networks provide a more adaptable method for capturing non-linear patterns. Nevertheless, these models necessitate substantial computer resources and are susceptible to overfitting. This multidisciplinary study emphasizes the requirement of ongoing progress in deep learning and a more profound comprehension of market dynamics to improve the degree of forecast accuracy (Chowdhury et al., 2024). The objective of this work is to forecast share price patterns in a developing economy by employing the LSTM algorithm together with technical indicators such as SMA, MACD, and RSI. Utilising data from VN-Index and VN-30 stocks, the model attained a remarkable accuracy rate of 93%, therefore demonstrating the efficacy of LSTM in predicting fluctuations in stock prices (Phuoc et al., 2024). This paper provides an overview of prior studies on the application of Machine Learning (ML) and Deep Learning (DL) algorithms in predicting stock market movements. The research specifically examines market indexes, individual companies, the oil and gas industry, information technology (IT), and trading sectors. In the analysis of eleven publications, it is shown that CNN and LSTM are the most efficient models, with CNN exhibiting somewhat superior performance in capturing sudden changes. One important constraint observed is the restricted quantity of papers examined for each section (Al-Ali & Al-Alawi, 2024). The objective of this study is to establish precise predictions for the upcoming 25 months and offer valuable understanding of possible price fluctuations. Overall, this study illuminates the dynamics of the Islamic market in Indonesia, Thailand, Malaysia, Pakistan, UAE, and Qatar, which conform to the efficient market hypothesis (EMH) because of the reliability of previous data on index prices (Khairunnisa et al., 2024). Utilising the Artificial Organic Network (AON) framework, the Artificial Halocarbon Compounds (AHC) algorithm is a novel supervised machine learning approach inspired by nature, specifically developed for forecasting stock market dynamics. The present algorithm has been precisely developed to address the Index Tracking Problem (ITP), which encompasses the task of monitoring and forecasting the performance of stock market indices. Derived from natural processes, the AHC algorithm emulates organic systems to simulate intricate stock market dynamics. The main objective of this study is to improve the precision of stock index forecasts, providing a robust instrument for investors and analysts seeking to monitor and predict index fluctuations. The application of the AHC algorithm is more effective in addressing the issues of index tracking, such as managing volatility and market swings. This novel approach bears considerable promise for enhancing financial forecasting and offering more accurate market insights.

The work critically evaluates the performance of AHC in comparison to genetic algorithms and other machine learning approaches, showcasing its exceptional accuracy and versatility, particularly in the context of financial forecasting applications (González-Núñez et al., 2024). Another study assesses the utility of four deep learning models, namely Multilayer Perceptron (MLP), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN), in forecasting stock values. Each of these models is trained on historical stock data obtained from two prominent financial markets, the National Stock Exchange (NSE) of India and the New York Stock Exchange (NYSE). The objective is to evaluate the ability of these sophisticated models to represent intricate patterns and trends in the data by examining historical stock price fluctuations from these major exchanges. The performance of each model is evaluated in detail, providing insights into which methodologies are better suited to precise stock market prediction. This assessment emphasises the capacity of deep learning techniques to improve prediction accuracy in highly unpredictable and ever-changing financial settings, adding significant value to them as tools for traders and investors.

The study trains models using NSE data and evaluates their performance on both NSE and NYSE, revealing that CNN achieved the highest performance, even for NYSE equities. The results demonstrate the advantage of deep learning over ARIMA models, emphasising the common market dynamics and the efficacy of neural networks in predicting stock prices (Ghosh et al., 2024). This work presents a novel machine learning approach to forecast the prices of Ethereum and Solana by utilising previous price data and social media sentiment. A daily sentiment score is generated by analysing the social sentiment extracted from tweets using the FinBERT sentiment model. The aforementioned score is combined with historical pricing data to serve as inputs for a hybrid LSTM-GRU model. The model’s predictive accuracy for Ethereum and Solana prices is demonstrated to exceed that of conventional benchmark models, therefore offering more precise projections of price movements (Girsang & Stanley, 2023).

3. Utilised Models

3.1. Long Short-Term Memory (LSTM)

LSTM networks are a sophisticated variant of Recurrent Neural Networks (RNNs) purposefully developed to address the vanishing gradient problem, which frequently impedes the efficiency of conventional RNNs. In conventional RNNs, propagating gradients back through many time steps during training can cause the gradients used to update the model's weights to shrink exponentially, which makes it difficult for the network to learn long-term dependencies. LSTM models tackle this issue by integrating specialised memory cells and gating mechanisms that control the transmission of information, allowing the network to retain and retrieve pertinent information over long durations (Egan et al., 2017).

Proposed by Hochreiter and Schmidhuber in 1997, Long Short-Term Memory (LSTM) models are highly suitable for analysing sequential data and forecasting time series because they can effectively preserve long-term relationships (Hochreiter, 1997).

As illustrated in Figure 1, the LSTM design incorporates memory cells equipped with gates, namely the input gate, forget gate, and output gate, that regulate the information flow by determining whether information is to be retained, updated, or discarded as time progresses (Landi et al., 2021). These gating mechanisms enable LSTM models to efficiently capture both short-term and long-term relationships in sequential data (Moradi et al., 2021).

Figure 1. The architecture of an LSTM.
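To make the gating description concrete, the following is a minimal Keras sketch of an LSTM forecaster for a univariate series of daily closing prices. The window length, layer sizes, and training settings are illustrative assumptions, not the configuration used in this study.

```python
# Minimal LSTM forecaster sketch (hyperparameters are illustrative assumptions).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

def build_lstm(timesteps: int, n_features: int = 1) -> Sequential:
    """One-step-ahead forecaster: stacked LSTM layers over a window of past closing prices."""
    model = Sequential([
        Input(shape=(timesteps, n_features)),
        LSTM(64, return_sequences=True),  # first layer passes the full hidden sequence on
        LSTM(32),                         # second layer returns only its final hidden state
        Dense(1),                         # next-day (scaled) closing price
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Quick smoke test on dummy data shaped like 60-day windows of one scaled price series.
X_demo = np.random.rand(128, 60, 1)
y_demo = np.random.rand(128, 1)
build_lstm(timesteps=60).fit(X_demo, y_demo, epochs=1, batch_size=32, verbose=0)
```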

3.2. The Gated Recurrent Unit (GRU)

The Gated Recurrent Unit (GRU), proposed by Cho et al. in 2014 (Britz, 2015b), is a streamlined iteration of the Long Short-Term Memory (LSTM) model. Its primary objective is to tackle the issue of vanishing gradients in Recurrent Neural Networks (RNNs) while using fewer parameters and less computational complexity (Britz, 2015a). GRUs merge the forget and input gates into a unified update gate, as shown in Figure 2, and they also include a reset gate that regulates the extent to which the prior state should be disregarded (Bittar & Garner, 2021). This streamlined gating mechanism enables GRUs to achieve greater computational efficiency while still preserving long-term dependencies (Gruber & Jockisch, 2020).

Gated Recurrent Units (GRUs) have demonstrated strong performance in situations with limited training data or where computational efficiency is of utmost importance. In many instances, GRUs exhibit performance that is equivalent to or even exceeds that of Long Short-Term Memory (LSTM) models.

Figure 2. The architecture of a GRU: an illustrative representation of the data routing inside a GRU.
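For comparison, a GRU-based forecaster with the same layout as the LSTM sketch above can be written as follows; as before, the layer sizes are illustrative assumptions. Swapping the cell type is essentially a one-line change, which reflects the GRU's role as a lighter drop-in alternative to the LSTM.

```python
# Minimal GRU forecaster sketch mirroring the LSTM example (sizes are assumptions).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, GRU, Dense

def build_gru(timesteps: int, n_features: int = 1) -> Sequential:
    """Same layout as the LSTM sketch, but with GRU cells (fewer parameters per unit)."""
    model = Sequential([
        Input(shape=(timesteps, n_features)),
        GRU(64, return_sequences=True),  # update/reset gates replace the LSTM's three gates
        GRU(32),
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```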

3.3. A Hybrid LSTM-GRU

A hybrid LSTM-GRU model seeks to integrate the advantages of both LSTM and GRU to develop a more adaptable network. Numerous methods exist for combining the two cell types (one possible combination is sketched after this list), and the rationale for employing a hybrid model rests on three points:

Flexibility: Hybrid models can reconcile the trade-off between the elevated computational expense of LSTMs and the efficiency of GRUs.

Robustness: Utilising both cell types may enhance the model's generalisation across diverse data types, particularly in tasks involving mixed short- and long-term dependencies.

Performance: Integrating LSTM's advanced memory management with GRU's simplicity can enhance both the speed and accuracy of the model for certain tasks.
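The paper does not specify exactly how the two cell types are combined, so the sketch below shows one common arrangement, an LSTM layer feeding a GRU layer, purely as an assumed illustration.

```python
# Assumed hybrid layout: LSTM for richer memory, followed by a lighter GRU summariser.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, GRU, Dense

def build_hybrid(timesteps: int, n_features: int = 1) -> Sequential:
    model = Sequential([
        Input(shape=(timesteps, n_features)),
        LSTM(64, return_sequences=True),  # LSTM handles longer-range dependencies
        GRU(32),                          # GRU condenses the sequence at lower cost
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```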

4. Results and Discussion

The TASI, S&P 500, FTSE 100, and DAX Performance Price Index series are constructed from daily closing prices, in US dollars, from 2 January 2010 to 30 January 2023, as summarised in Table 1. The data source is the DATASTREAM database.

Table 1. The TASI, the S&P 500, FTSE 100, and DAX performance price index.

| Statistic | TASI | S&P 500 | FTSE 100 | DAX |
| --- | --- | --- | --- | --- |
| N | 3267 | 3267 | 3267 | 3267 |
| Mean | 2166.670327 | 2546.812828 | 3336.324805 | 3703.320534 |
| Std. Deviation | 471.889341 | 984.6314211 | 1289.867162 | 1431.752549 |
| Variance | 222679.55 | 969499.035 | 1663757.295 | 2049915.363 |
| Range | 2265.3515 | 3697.33 | 4843.5023 | 5376.28755 |
| Minimum | 1419.3873 | 1099.23 | 1439.9913 | 1598.39034 |
| Maximum | 3684.7388 | 4796.56 | 6283.4936 | 6974.6779 |
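The summary statistics in Table 1 can be reproduced with a few lines of pandas; the file name and column layout below are hypothetical placeholders for the DATASTREAM export.

```python
# Sketch of computing Table 1-style summary statistics with pandas (file name is hypothetical).
import pandas as pd

# Hypothetical export: one date column plus one closing-price column per index.
prices = pd.read_csv("indices_daily_close.csv", parse_dates=["Date"], index_col="Date")

summary = prices.agg(["count", "mean", "std", "var", "min", "max"]).T
summary["range"] = summary["max"] - summary["min"]
print(summary.round(2))
```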

Each of the four indices has 3267 observations, indicating that the data were gathered over the same time frame with an equivalent sample size. The mean is the arithmetic average of the index values over the period. The DAX index has the highest mean value of 3703.32, suggesting a generally higher index level, while TASI exhibits the lowest mean value of 2166.67, indicating a comparatively lower index level. Furthermore, as illustrated in Figure 3, the series display an overall upward trend over time.

Figure 3. Time series data for the TASI, S&P 500, FTSE 100, DAX from January 2010 to January 2023.

From Figure 3 above, we observe that all indices experienced price fluctuations while showing an overall upward trend. However, there was a noticeable sharp decline during the COVID-19 pandemic period, followed by a gradual recovery over time.
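Before the recurrent models can be fitted, each closing-price series has to be scaled and cut into fixed-length windows. A minimal sketch is shown below; the 60-day window and the 80/20 chronological split are assumptions, since the paper does not report these preprocessing settings.

```python
# Sketch of preparing one index's closing prices for the recurrent models.
# The window length and split ratio are assumptions, not the study's reported settings.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_windows(series: np.ndarray, window: int = 60):
    """Each sample holds `window` past prices; the target is the following day's price."""
    X, y = [], []
    for i in range(window, len(series)):
        X.append(series[i - window:i])
        y.append(series[i])
    return np.array(X)[..., np.newaxis], np.array(y)

closing_prices = np.random.rand(3267)            # stand-in for one index's 3267 daily closes
scaler = MinMaxScaler()
scaled = scaler.fit_transform(closing_prices.reshape(-1, 1)).ravel()

X, y = make_windows(scaled)
split = int(0.8 * len(X))                        # keep the split chronological: no shuffling
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```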

Figure 4. Comparison of Real TASI Data with LSTM, the hybrid and GRU Model Predictions (January 2010-January 2023).

Figure 4 compares the real data with the predictions of the LSTM, hybrid, and GRU models for the TASI index from January 2010 to January 2023. The hybrid model's predictions lie closest to the real data line, indicating slightly better predictive performance.

Figure 5. Comparison of Real S&P 500 Data with LSTM, the hybrid and GRU Model Predictions (January 2010-January 2023).

Figure 5 presents a detailed comparison of the prediction accuracy of the LSTM, hybrid, and GRU models for the S&P 500 index from January 2010 to January 2023. The diagram plots the actual S&P 500 index data against the values projected by each modelling technique. The visual comparison shows that the hybrid model's projected values align most closely with the observed S&P 500 data, implying that the hybrid model captures the trends and patterns in the index more proficiently than the LSTM model. This closer alignment is apparent in that the line representing the hybrid model's predictions stays consistently nearer to the line of the actual data than that of the LSTM model.

Figure 6. Comparison of Real FTSE 100 Data with LSTM, the hybrid and GRU Model Predictions (January 2010-January 2023).

An analysis comparing the predictive capacities of the Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and hybrid models in forecasting the FTSE 100 index from January 2010 to January 2023 is presented in Figure 6. By comparing the actual FTSE 100 index values with the forecasts produced by the models, it provides a graphical depiction of each model's performance.

The investigation indicates that the hybrid model's predictions align more closely with the actual FTSE 100 data than those of the LSTM model, suggesting that the hybrid model captures the fundamental patterns and variations in the index data more proficiently. This closer alignment is evidenced by the persistent proximity of the hybrid prediction line to the actual data line, surpassing that of the LSTM model. The comparatively superior predictive performance of the hybrid model can be attributed to its simpler gating structure, which reduces computational complexity and mitigates the vanishing gradient problem, allowing it to acquire and preserve long-term dependencies in the time-series data more efficiently than the LSTM model. Therefore, the findings indicate that hybrid models may offer a more favourable balance between predictive precision and computational efficiency in financial time-series forecasting tasks, especially when dealing with extensive datasets or extended projection periods.

Figure 7. Comparison of Real DAX 30 Data with LSTM, the hybrid, and GRU Model Predictions (January 2010-January 2023).

Figure 7 presents a comparative assessment of the prediction capabilities of the LSTM, hybrid, and GRU models in forecasting the DAX 30 index from January 2010 to January 2023. The figure plots the reported DAX 30 index values against the predictions produced by the models, giving a visual depiction of their respective performances. The investigation shows that the hybrid model's predictions agree more closely with the real DAX 30 data than those of the LSTM model, implying greater efficacy in capturing the underlying patterns and fluctuations in the index data; the hybrid prediction line consistently tracks the actual data line more closely than the LSTM line does. The hybrid model's higher predictive performance can be ascribed to its comparatively simple architecture, which decreases computing cost while mitigating the vanishing gradient problem, allowing it to learn and retain long-term dependencies in the time-series data more effectively than the more intricate LSTM model. Hence, the results suggest that hybrid models may provide a more advantageous balance between predictive precision and computational efficiency in financial time-series forecasting, especially with extensive datasets or prolonged projection periods.
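The curves compared in Figures 4-7 correspond to fitting each model on the training windows and forecasting the held-out portion of each series. A sketch is given below, reusing the model builders and windowed arrays from the earlier sketches; the epoch count and batch size are assumptions.

```python
# Fit all three models on the same windows and collect test-set forecasts
# (builds on build_lstm / build_gru / build_hybrid and the windowed arrays above).
predictions = {}
for name, builder in {"LSTM": build_lstm,
                      "GRU": build_gru,
                      "HYB-GRU-LSTM": build_hybrid}.items():
    model = builder(timesteps=X_train.shape[1])
    model.fit(X_train, y_train, epochs=20, batch_size=32,
              validation_split=0.1, verbose=0)
    # invert the Min-Max scaling so forecasts are back in index points
    predictions[name] = scaler.inverse_transform(model.predict(X_test, verbose=0))
```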

Table 2 compares the predictive capabilities of the Long Short-Term Memory (LSTM), hybrid, and Gated Recurrent Unit (GRU) models in forecasting four prominent stock indices (TASI, S&P 500, FTSE 100, and DAX 30) over a defined time frame.

Table 2. The performance of the LSTM, hybrid, and GRU models in predicting four prominent stock indices: TASI, S&P 500, FTSE 100, and DAX 30.

| Index | Model | SMAPE | MAPE | MASE |
| --- | --- | --- | --- | --- |
| TASI | HYB-GRU-LSTM | 0.001700656 | 0.001908409 | 0.001202926 |
| TASI | GRU | 0.001700691 | 0.001908444 | 0.00120795 |
| TASI | LSTM | 0.001845044 | 0.002123375 | 0.00134777 |
| S&P 500 | HYB-GRU-LSTM | 0.001844961 | 0.002123287 | 0.000866939 |
| S&P 500 | GRU | 0.001844999 | 0.002123326 | 0.00087056 |
| S&P 500 | LSTM | 0.002178709 | 0.00252502 | 0.00101529 |
| FTSE 100 | HYB-GRU-LSTM | 0.001845022 | 0.002123352 | 0.000843368 |
| FTSE 100 | GRU | 0.00184506 | 0.002123391 | 0.00084689 |
| FTSE 100 | LSTM | 0.002178434 | 0.002524724 | 0.0009915 |
| DAX 30 | HYB-GRU-LSTM | 0.001845014 | 0.002123343 | 0.000835819 |
| DAX 30 | GRU | 0.001845052 | 0.002123382 | 0.00083931 |
| DAX 30 | LSTM | 0.002178429 | 0.00252472 | 0.00098392 |

The chosen performance measures are Symmetric Mean Absolute Percentage Error (SMAPE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Scaled Error (MASE). These measures provide a thorough evaluation of the models' precision in predicting time-series data. With consistently lower SMAPE, MAPE, and MASE values, the hybrid model surpasses the LSTM model across all four indices: TASI, S&P 500, FTSE 100, and DAX 30. The findings indicate that the hybrid model's less complex structure and effective handling of long-term relationships allow it to produce more precise predictions at lower overall computational cost than the LSTM model. Thus, the hybrid model provides a better-balanced approach to achieving both predictive accuracy and computational efficiency in financial time-series forecasting problems, especially with large datasets.
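For reference, the three error measures reported in Table 2 can be computed as follows; the MASE here scales by the in-sample error of a one-step naive forecast, which is a common convention that the paper does not state explicitly.

```python
# Sketch of the error measures in Table 2, implemented with NumPy.
import numpy as np

def smape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(2.0 * np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast)))

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual))

def mase(actual, forecast, train):
    actual, forecast, train = (np.asarray(a, float) for a in (actual, forecast, train))
    naive_mae = np.mean(np.abs(np.diff(train)))   # mean absolute error of the naive forecast
    return np.mean(np.abs(actual - forecast)) / naive_mae
```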

5. Conclusion

Deep learning methods have become increasingly popular among researchers in recent years, especially for applications in time series forecasting. The objective of this work is to assess the efficacy of Long Short-Term Memory (LSTM), hybrid, and Gated Recurrent Unit (GRU) models in forecasting stock market fluctuations. An extensive dataset comprising the TASI, the S&P 500 Composite Price Index, the FTSE 100 Price Index, and the DAX Performance Price Index was gathered and converted into a sequential structure appropriate for time-series analysis. The LSTM, hybrid, and GRU models were employed to unveil intricate patterns and fundamental dynamics in the historical data. The results indicate that all of the models yield accurate predictions of stock market trends, with the hybrid model achieving the lowest error values (SMAPE, MAPE, and MASE), indicating a superior ability to capture the fundamental patterns and dynamics of the time-series data. The high predictive accuracy of these models provides useful insights for investors, researchers, and market specialists, facilitating well-informed decision-making about future patterns in the stock market.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Al-Ali, A. M., & Al-Alawi, A. I. (2024). Stock Market Forecasting Using Machine Learning Techniques: A Literature Review. In 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS) (pp. 466-471). IEEE.
https://doi.org/10.1109/icetsis61505.2024.10459681
[2] Bansal, M., Goyal, A., & Choudhary, A. (2022). Stock Market Prediction with High Accuracy Using Machine Learning Techniques. Procedia Computer Science, 215, 247-265.
https://doi.org/10.1016/j.procs.2022.12.028
[3] Bingi, K., & Prusty, B. R. (2021). Forecasting Models for Chaotic Fractional-Order Oscillators Using Neural Networks. International Journal of Applied Mathematics and Computer Science, 31, 387-398.
https://doi.org/10.34768/amcs-2021-0026
[4] Bittar, A., & Garner, P. N. (2021). A Bayesian Interpretation of the Light Gated Recurrent Unit. In ICASSP 2021—2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2965-2969). IEEE.
https://doi.org/10.1109/icassp39728.2021.9414259
[5] Britz, D. (2015a). Recurrent Neural Networks Tutorial, Part 1: Introduction to RNNs.
https://dennybritz.com/posts/wildml/recurrent-neural-networks-tutorial-part-1/
[6] Britz, D. (2015b). Recurrent Neural Network Tutorial, Part 4: Implementing a GRU and LSTM RNN with Python and Theano.
https://dennybritz.com/posts/wildml/recurrent-neural-networks-tutorial-part-4/
[7] Chhajer, P., Shah, M., & Kshirsagar, A. (2022). The Applications of Artificial Neural Networks, Support Vector Machines, and Long-Short Term Memory for Stock Market Prediction. Decision Analytics Journal, 2, Article ID: 100015.
https://doi.org/10.1016/j.dajour.2021.100015
[8] Chowdhury, M. S., Nabi, N., Rana, M. N. U., Shaima, M., Esa, H., Mitra, A. et al. (2024). Deep Learning Models for Stock Market Forecasting: A Comprehensive Comparative Analysis. Journal of Business and Management Studies, 6, 95-99.
https://doi.org/10.32996/jbms.2024.6.2.9
[9] Egan, S., Fedorko, W., Lister, A., Pearkes, J., & Gay, C. (2017). Long Short-Term Memory (LSTM) Networks with Jet Constituents for Boosted Top Tagging at the LHC. arXiv: 1711.09059.
[10] Ghosh, B. P., Bhuiyan, M. S., Das, D., Nguyen, T. N., Jewel, R. M., Mia, M. T. et al. (2024). Deep Learning in Stock Market Forecasting: Comparative Analysis of Neural Network Architectures across NSE and NYSE. Journal of Computer Science and Technology Studies, 6, 68-75.
https://doi.org/10.32996/jcsts.2024.6.1.8
[11] Girsang, A. S., & Stanley (2023). Hybrid LSTM and GRU for Cryptocurrency Price Forecasting Based on Social Network Sentiment Analysis Using FinBERT. IEEE Access, 11, 120530-120540.
https://doi.org/10.1109/access.2023.3324535
[12] González-Núñez, E., Trejo, L. A., & Kampouridis, M. (2024). A Comparative Study for Stock Market Forecast Based on a New Machine Learning Model. Big Data and Cognitive Computing, 8, Article 34.
https://doi.org/10.3390/bdcc8040034
[13] Gruber, N., & Jockisch, A. (2020). Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text? Frontiers in Artificial Intelligence, 3, Article 40.
https://doi.org/10.3389/frai.2020.00040
[14] Hochreiter, S. (1997). Long Short-Term Memory. Neural Computation. MIT Press.
[15] Khairunnisa, D. A., Rodoni, A., & Rama, A. (2024). Predicting Shariah Stock Market Indices with Machine Learning: A Cross-Country Case Study. Jurnal Ekonomi Syariah Teori dan Terapan, 11, 86-104.
https://doi.org/10.20473/vol11iss20241pp86-104
[16] Landi, F., Baraldi, L., Cornia, M., & Cucchiara, R. (2021). Working Memory Connections for LSTM. Neural Networks, 144, 334-341.
https://doi.org/10.1016/j.neunet.2021.08.030
[17] Lin, Y., Lai, C., & Pai, P. (2022). Using Deep Learning Techniques in Forecasting Stock Markets by Hybrid Data with Multilingual Sentiment Analysis. Electronics, 11, Article 3513.
https://doi.org/10.3390/electronics11213513
[18] Maharani, F. A., Ivana, S., Fithriyah, B., Zakiyyah, A. Y., & Sihotang, E. F. A. (2024). Time Series Forecasting Using LSTM to Predict Stock Market Price in the First Quarter of 2024. In 2024 International Conference on Smart Computing, IoT and Machine Learning (SIML) (pp. 74-80). IEEE.
https://doi.org/10.1109/siml61815.2024.10578097
[19] Moradi A., M., Sadrossadat, S. A., & Derhami, V. (2021). Long Short-Term Memory Neural Networks for Modeling Nonlinear Electronic Components. IEEE Transactions on Components, Packaging and Manufacturing Technology, 11, 840-847.
https://doi.org/10.1109/tcpmt.2021.3071351
[20] Phuoc, T., Anh, P. T. K., Tam, P. H., & Nguyen, C. V. (2024). Applying Machine Learning Algorithms to Predict the Stock Price Trend in the Stock Market—The Case of Vietnam. Humanities and Social Sciences Communications, 11, Article No. 393.
https://doi.org/10.1057/s41599-024-02807-x
[21] Rath, S., Das, N. R., & Pattanayak, B. K. (2024). An Analytic Review on Stock Market Price Prediction Using Machine Learning and Deep Learning Techniques. Recent Patents on Engineering, 18, 88-104.
https://doi.org/10.2174/1872212118666230303154251
[22] Shah, D., Isah, H., & Zulkernine, F. (2019). Stock Market Analysis: A Review and Taxonomy of Prediction Techniques. International Journal of Financial Studies, 7, Article 26.
https://doi.org/10.3390/ijfs7020026
[23] Sravani, A., Anusha, C., & Shankar, N. V. S. (2021). A Comparative Analysis of Machine Learning Algorithms in Stock Prediction. In Proceedings of the International Conference on Industrial Engineering and Operations Management (pp. 2619-2623). IEOM Society International.
[24] Wei, L. (2016). A Hybrid ANFIS Model Based on Empirical Mode Decomposition for Stock Time Series Forecasting. Applied Soft Computing, 42, 368-376.
https://doi.org/10.1016/j.asoc.2016.01.027
[25] Zhou, M., Huang, X., Liu, H., & Zheng, D. (2023). Hospitalization Patient Forecasting Based on Multi-Task Deep Learning. International Journal of Applied Mathematics and Computer Science, 33, 151-162.
https://doi.org/10.34768/amcs-2023-0012
