Fractional Rider Deep Long Short Term Memory Network for Workload Prediction-Based Distributed Resource Allocation Using Spark in Cloud Gaming

Recent advances in cloud technologies have turned the idea of cloud gaming into a practical reality. Cloud gaming provides interactive gaming applications that are processed remotely in a cloud system and streamed as a video sequence to be played over the network. Cloud gaming is therefore a promising approach that is rapidly expanding the cloud computing platform. Obtaining an enhanced user experience in a cloud gaming structure is not a trivial task, because users expect low response delay and high-quality video. To achieve this, cloud providers need to be able to accurately predict irregular player workloads in order to schedule the necessary resources. In this paper, an effective technique, named the Fractional Rider Deep Long Short-Term Memory (LSTM) network, is developed for workload prediction in cloud gaming. The workload of each resource is computed using the developed Fractional Rider Deep LSTM network. Moreover, resource allocation is performed by the Fractional Rider-based Harmony Search Algorithm (Rider-based HSA), which is developed by combining Fractional Calculus (FC), the Rider Optimization Algorithm (ROA) and the Harmony Search Algorithm (HSA). Similarly, the developed Fractional Rider Deep LSTM integrates FC and the Rider Deep LSTM. In addition, multi-objective parameters, namely gaming experience loss (QE), Mean Opinion Score (MOS), fairness, energy, network parameters and predictive load, are considered for efficient resource allocation and workload prediction. The developed workload prediction model achieved better performance with respect to various parameters, such as fairness, MOS, QE, energy and delay.

How to cite this paper: Désiré, K.K., Francis, K.A., Kouassi, K.H., Dhib, E., Tabbane, N. and Asseu, O. (2021) Fractional Rider Deep Long Short Term Memory Network for Workload Prediction-Based Distributed Resource Allocation Using Spark in Cloud Gaming. Engineering, 13, 135-157.
https://doi.org/10.4236/eng.2021.133011
Received: January 26, 2021; Accepted: March 15, 2021; Published: March 18, 2021
Copyright © 2021 by author(s) and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/ Open Access


Introduction
Cloud computing is an evolving computing architecture that provides various computing resources as general utilities to end users through the Internet. Cloud computing allows on-demand access to a shared pool of resources, such as services, servers, storage space and networks [1]. Today, cloud technology is expanding its services toward Everything-as-a-Service (XaaS). Cloud gaming brings game content to non-specialized devices, such as mobile phones, tablets, smart televisions and so on. Cloud gaming provides on-demand interactive gaming applications, which are processed remotely in the cloud while the rendered scenes are streamed as a video sequence and played over the Internet [2] [3]. In cloud gaming, all processing operations associated with game scene frames are performed on server Virtual Machines (VMs).
Virtualisation is the main feature of cloud computing technology, allowing the physical data center to be distributed as dynamic virtual resources. Resource allocation is an important part of the cloud data center: it can save energy, reduce computing costs and enhance resource utilization efficiency. In resource allocation, the game is executed at the cloud server or on the client side, depending on the resources currently available in the network and at the client. The cloud computing system handles the computational operations of games using its cognitive capacities. A major development in cloud computing is resource allocation, which reduces operating costs and improves resource utilization. Generally, the virtualization approach attains flexibility and includes hardware virtualization of the Central Processing Unit (CPU), storage, network and memory [4] [5]. The response delay comprises playout delay, processing delay and network delay. Playout delay is typically considered insignificant and is not an important factor in the player's game experience [6] [7]. A huge number of games are delivered by service providers and simulated in numerous instances at various data centers in the cloud, geographically distributed to reduce network delay. Processing delay depends on the processing power available in the cloud server; this power is determined by server resources, such as the CPU, Graphics Processing Unit (GPU) and storage, and by the workload of the game sessions running on the cloud server. Meanwhile, the virtualization system is employed to decrease the cost for cloud service providers [8] [9].
Dynamic allocation of resources can be done in two ways: a reactive approach and a proactive approach [10]. In a reactive approach, cloud users set thresholds for resource under-utilization and over-utilization. When the workload reaches a threshold value, the automatic resizing process takes action based on the current state of the resources, such as removing virtual machines from cloud services in an under-utilized state or adding virtual machines to cloud services in an over-utilized state. The main disadvantage of this process is that the automatic resizing process has difficulty performing resizing operations in the event of sudden workload fluctuations. A proactive approach allows resizing operations to be carried out in advance [11]. Cloud resource management forecasts the future workload of each cloud service and allocates resources to the cloud services based on the expected value. Currently, many techniques and methods are applied to the prediction of the workload of computer systems, such as the ensemble learning approach [12], ARMA (Auto Regressive Moving Average) models [13], Recurrent Neural Networks (RNN) [14] and Long Short-Term Memory (LSTM) networks [15]. However, deciding on the exact amount of resources with proactive approaches during the execution time of cloud services is a difficult, non-trivial task. Due to irregular access, cloud services are subject to workload fluctuations, which can lead to over- or under-provisioning of resources. In a state of over-provisioning, more resources are allocated to applications in the cloud than necessary. Under service level agreements (SLAs), this benefits cloud users, but for providers it is an unnecessary cost that results in high energy consumption.
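The threshold-driven reactive approach described above can be sketched as a simple resizing rule. The threshold values and the one-VM-at-a-time step are assumptions for illustration only; real auto-scalers tune both.

```python
def reactive_resize(utilization, vm_count, low=0.2, high=0.8):
    """Return a new VM count from simple utilization thresholds.

    utilization: current average resource utilization in [0, 1]
    vm_count:    number of VMs currently serving the game sessions
    low, high:   under- and over-utilization thresholds (assumed values)
    """
    if utilization > high:
        return vm_count + 1        # over-utilized: add a VM
    if utilization < low and vm_count > 1:
        return vm_count - 1        # under-utilized: remove a VM
    return vm_count                # within bounds: no action
```

The weakness noted in the text is visible here: the rule reacts only after the threshold is crossed, so a sudden workload spike is served under-provisioned until the next resize.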
In a state of under-provisioning, fewer resources are allocated to cloud applications than needed, leading to SLA violations, lower Quality of Experience (QoE) and, ultimately, loss of consumers and revenue. An effective proactive approach must therefore accurately predict the future resources needed to achieve the target QoE. The most important measure of a system workload prediction model is accuracy, which is measured by the difference between the predicted and actual results [16]. In general, the closer the predicted value is to the actual value, the better the model.
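The accuracy notion above — the gap between predicted and actual workload — is commonly quantified with the mean absolute error and root mean squared error; a minimal sketch (the paper does not fix a particular error metric here):

```python
import math

def prediction_errors(actual, predicted):
    """Mean absolute error (MAE) and root mean squared error (RMSE)
    between an actual and a predicted workload series."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return mae, rmse
```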
In recent years, Artificial Intelligence (AI) techniques have been introduced for computing the amount of resources consumed by clients. One such technique, termed GAugur, identifies whether a co-located game meets quality-of-service needs within a pre-determined error rate [2]. In particular, Evolutionary Computation (EC) approaches, such as Genetic Algorithms (GA), have been devised to enhance resource utilization and reduce energy consumption. A customized GA with fuzzy multi-objective computation is developed for VM placement in [17]. Meta-heuristic approaches have also been introduced, offering near-optimal solutions in reasonable time, using Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), GA, etc. [18] [19]. Moreover, a meta-heuristic method termed the Grey Wolf Optimizer (GWO), inspired by grey wolves, has been developed [20]. However, the integration performance in cloud computing mainly depends on the matching rank between the system representation and the meta-heuristic model [21].
To deal with the issues mentioned above, this paper develops a new workload prediction method, using the developed algorithm with the Spark architecture, in cloud gaming.

Motivation
Cloud gaming provides interactive gaming applications that are processed remotely in the cloud, with the rendered scenes streamed as a video sequence and played over the Internet. The main challenge is that the interactive, real-time behaviour of multiplayer cloud gaming introduces a response delay that degrades the end user's quality of experience [7] [22] [23]. Adaptive optimization for cognitive resource allocation is challenging because of the load of requested resources and the real-time nature of the gaming service [7]. Energy consumption is another important challenge in cloud gaming. These challenges in existing workload prediction and resource allocation motivate this work, in which a new method named the Fractional Rider-based HSA is developed. In the proposed method, workload prediction and resource allocation are carried out using the developed Fractional Rider Deep LSTM. Multi-objective parameters, such as energy, gaming experience loss, fairness, MOS, network parameters and predictive load, are considered for computing the optimal solution in cloud gaming.
The major purpose of this research is to develop a new workload prediction method, using the developed algorithm with the Spark architecture, in cloud gaming. In this research, the workload of each resource is computed by the developed Fractional Rider Deep LSTM architecture, which is the integration of FC [24] and the Rider Deep LSTM [25]. The resource allocation process is performed based on the Fractional Rider-based HSA, which is the combination of the HSA [26], the ROA [27] and fractional calculus. Furthermore, fairness, MOS, network parameters, gaming experience loss, energy and predictive error are considered in the multi-objective method for efficient resource allocation.
The major contribution of this research is listed below:

• Developed Fractional Rider Deep LSTM for effective workload prediction: An efficient technique is devised using the Fractional Rider Deep LSTM for workload prediction in cloud gaming. The developed Fractional Rider Deep LSTM is the combination of FC and the Rider Deep LSTM network. Moreover, multi-objective parameters, namely energy, gaming experience loss, fairness, MOS, network parameters and predictive load, are considered for computing the optimal solution in cloud gaming.

The remaining parts of the paper are organized as follows: Section 2 reviews existing approaches to resource allocation and workload prediction in cloud gaming, and Section 3 presents the system model of the cloud computing system. Section 4 explains the developed Fractional Rider Deep LSTM for workload prediction and the resource allocation system. The results of the developed Fractional Rider Deep LSTM are portrayed in Section 5, and Section 6 concludes the paper.

Literature Survey
In this section, existing workload prediction and resource allocation approaches are reviewed together with their disadvantages. Yiwen Han et al. [9] developed a distributed technique for optimizing VM placement in mobile cloud gaming. Here, the mobile cloud gaming system was employed with resource optimization; since the placement problem is NP-hard, a distributed approach was used to identify near-optimal solutions. Moreover, potential game theory was introduced for determining the Nash Equilibrium in the multi-player competition game. This technique obtained better performance than other scales and policies as the number of players increased. However, this approach did not consider multidimensional parameters in cloud gaming for better optimization; to overcome this problem, the proposed method estimates a multi-objective function to find the optimal solution. Hossein Ebrahimi Dinaki et al. [7] introduced two effective techniques for GPU-based server selection in cloud gaming. These are boosted versions of PSO and GA, named boosted PSO and boosted GA. Additionally, the profits of service providers and the players' experience were considered for enhancing the quality of experience. This approach obtained better effectiveness in terms of the players' quality of experience, energy and capacity wastage. However, this model did not consider additional network parameters and quality metrics for obtaining a more inclusive solution; this problem is overcome in the proposed method by using several parameters, such as the network definition factor, energy, gaming experience loss, fairness, predictive load, load and MOS. Damian Fernández-Cerero et al. [28] devised the GAME-SCORE simulation model for the cloud gaming platform. This model performed various scheduling methods based on a Stackelberg game, in which two major players, an energy-efficiency agent and a scheduler, were included to analyse the effectiveness.
This model achieved a good balance between makespan and low energy consumption, but it did not explore more sophisticated energy rules for better performance. In the proposed method, the total energy relating to the execution of an application includes the energy dissipated on both the mobile server and the device. Seyed Javad Seyed Aboutorabi and Mohammad Hossein Rezvani [2] modelled a Bees technique to tackle the players' frame rate allocation problem in cloud gaming. This approach effectively benefited cloud providers and reduced server-side expenditure. Moreover, the technique is robust in terms of frame quality, run time, bandwidth loss and acceptance ratio, although its processing power requirement was the main challenge. In the proposed method, the processing power is identified by server resources, such as the CPU, GPU and RAM.
Mohammad Sadegh Aslanpour et al. [29] presented a learning automata-based resource provisioning technique for massively multiplayer online games in the cloud system. Here, an autonomic system was introduced for dynamic provisioning of VMs in the cloud-based gaming system. Moreover, the Auto Regressive Integrated Moving Average (ARIMA) prediction technique was employed with workload fluctuations for obtaining enhanced prediction accuracy. Furthermore, the learning automata-based technique was employed as a decision maker for identifying the suitable auto-scaling decision in the planning stage. This system readily reduces response time and cost, even though it failed to examine the effect of resource auto-scaling and optimization. To overcome this problem, the developed Fractional Rider Deep LSTM predicts the load for distributed resource allocation in cloud gaming using the Spark architecture. Yusen Li et al. [30] developed a machine learning-based performance technique for resource allocation in cloud gaming. In this method, a machine learning approach was devised for capturing the complex relationships underlying performance interference. In addition, efficient techniques were devised for resource allocation scenarios in cloud gaming. This approach enhanced resource utilization, even though the algorithm's performance was not analysed over more servers. Mostafa Ghobaei-Arani et al. [31] introduced an autonomous resource provisioning approach for multiplayer online games in the cloud structure. At first, the load prediction service predicts the game entity distribution based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) from historical trace data. Moreover, a fuzzy decision tree technique was employed for estimating the appropriate amount of resources based on the predicted workload and the users' Service Level Agreements (SLA).
However, this technique greatly increases the delay; in the proposed method, delay is one of the parameters used in the fitness calculation for the optimal solution. Anand Bhojan et al. [32] devised a new software architecture, termed CloudyGame, in the cloud game system. A popular game engine was considered with respect to resource usage in the game cloud. After that, synergy and dynamic asset streaming among shared game instances were combined for high resolution. This model achieved high resolution with a minimal content set, but it failed to reduce the computational complexity. In the proposed method, FC is used to reduce the overall computation time.

System Model
Cloud gaming allows a user with limited processing capacity to play high-quality games over a high-quality link. Games can be played without installing or downloading other game software. The game service provider uses a distributed data center to present its services to users. After the cloud gaming architecture receives a request, the user request is transmitted to particular storage and a VM is allocated to execute each user request. The VM then streams the encoded game to the user. In addition, the cloud model allocates resources to user tasks for a specific period so that the tasks finish before the deadline. The resource allocator provides synchronization between the cloud service provider and the user. Furthermore, VM resources use various configurations of storage, memory and power. Even a minute degradation renders the cloud infrastructure ineffective, because the resource allocation element holds total control over cloud functions. Thus, the resource allocation representation is very important for the cloud gaming infrastructure. Tasks are processed and the system senses an overloaded condition while the VM load position is in common circumstances. Additionally, resource allocation is developed for migrating tasks from overloaded VMs to under-loaded VMs. The network model for allocating resources in the cloud is displayed in Figure 1.

Proposed Method for Workload Prediction
This section presents the developed Fractional Rider Deep LSTM for workload prediction in cloud gaming. The developed Fractional Rider Deep LSTM performs workload prediction-based distributed resource allocation in cloud gaming on the Spark architecture. In this method, resource allocation is performed using the Fractional Rider-HSA model, while the workload of each resource is predicted by the developed Fractional Rider Deep LSTM, which combines the FC model and the Rider Deep LSTM network. Moreover, gaming experience loss, MOS, fairness, energy, network parameters and predictive load are included in the multi-objective model for effective resource allocation. The schematic diagram of the developed Fractional Rider Deep LSTM for workload prediction is portrayed in Figure 2.
The major intention of the developed approach is to find the optimal resources for workload prediction and for allocating resources to all games demanded by users. Let us assume a cloud structure with g PMs, expressed as P = {P_1, P_2, ..., P_g}. Every VM selected to distribute the resource is configured with various parameters, such as memory, bandwidth, Million Instructions Per Second (MIPS) and processors. The user preference level lies between 0 and 1, where 1 indicates the highest preference and 0 denotes no preference.

Multi Objective Model
The multi-objective method is used to estimate the optimal solution, where E_y denotes energy, N is the network definition factor, MOS indicates the Mean Opinion Score, L_m specifies the gaming experience loss, E_f refers to fairness, L signifies the load and L_p is the predictive load. The load of a VM is computed from its resource components, where n specifies the number of games, N is a normalizing factor, V_p denotes the number of processing elements in the i-th VM, V_x indicates the memory units in the i-th VM, V_a is the bandwidth component in the i-th VM, V_g represents the MIPS element in the i-th VM and V_f specifies the frequency component in the i-th VM.
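The multi-objective terms above can be scalarized into a single fitness value. The exact combination used in the paper is not recoverable from this extraction, so the sketch below assumes a weighted sum in which quantities to be minimized (energy, gaming experience loss, load, predictive load) enter positively and quantities to be maximized (MOS, fairness, network definition factor) enter negatively; the unit weights are placeholders.

```python
def fitness(energy, net_factor, mos, qe_loss, fairness, load, pred_load,
            weights=(1, 1, 1, 1, 1, 1, 1)):
    """Hedged sketch of a scalarized multi-objective fitness
    (lower is better under this sign convention)."""
    w = weights
    return (w[0] * energy + w[1] * qe_loss + w[2] * load + w[3] * pred_load
            - w[4] * mos - w[5] * fairness - w[6] * net_factor)
```

Under this convention, reducing energy or increasing MOS both lower the fitness, matching the "maximum fairness, MOS, QE and minimum energy and delay" criterion used later in the evaluation.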

Here, an indicator variable is used, taking the value 1 if the game is run by the VM and 0 otherwise.
The load of a PM is formulated similarly. The MOS equation [33] is expressed in terms of E_i, the bit rate of the game run in the i-th VM, K_i, the video frame rate of the game running in the i-th VM, and B_i, a resource parameter; the resource parameter arises from the QoS. By integrating frames per second and resolution, the gaming experience E [34] of the player is formulated; it is the target of each player, where D denotes delay, Q signifies the experienced Frames Per Second (FPS), P indicates the gaming video quality and α_1, α_2, α_3 represent constant parameters.
A clone delay may be experienced by user e because VMs with games are created and destroyed dynamically; this corresponds to the delay in initializing the service.
The writing speed to the hard disk when storing games in the repository is represented as K_w. If a player selects a game with file size t_i, then the delay [34] is formulated using the initialization period of the VM, where t_i indicates the file size of the game in the i-th VM and H_i is the initialization period of the VM. Moreover, the Frames Per Second (FPS) experienced by gaming users is a key experience measure: users judge a game by key experience metrics, such as FPS, when dealing with the cloud environment. FPS depends on the Random Access Memory (RAM), GPU and CPU of the physical server. In cloud gaming, FPS [34] is formulated with approximation parameters σ_1, σ_2 and σ_3. Additionally, the game video quality [34] is expressed with constants σ_d, σ_c and σ_0, where P_i represents the video resolution of the game in the i-th VM.
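The startup-delay terms above (file size t_i, disk write speed K_w, initialization period H_i) admit a natural reading, assumed here since the original equation was lost in extraction: the time to write the game image to disk plus the VM initialization period.

```python
def startup_delay(file_size_mb, write_speed_mbps, init_period_s):
    """Assumed form of the clone/startup delay D_i = t_i / K_w + H_i:
    time to write the game of size t_i at disk speed K_w, plus the
    VM initialization period H_i."""
    return file_size_mb / write_speed_mbps + init_period_s
```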
The fairness is formulated in terms of Z, the user preference level. The network definition factor is expressed in terms of L, the bandwidth, and O, the delay; the bandwidth and the delay parameter are estimated accordingly, where α_1 defines a constant parameter.
The total energy [35] relating to the execution of application I_y includes the energy dissipated on both the mobile server and the device, where z indicates the index of the server to which the application is mapped. Let us assume the server follows a time-out power management policy, such that a low-power mode is entered after a particular idle time. The static energy consumption of the y-th server while it is idle is then formulated with X as the time-out threshold and X_static,z as the static power of server z in idle time. Based on the Rider Deep LSTM, the update rule of the overtaker is then expressed accordingly.
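The time-out power policy above can be sketched directly. This is an assumed reading of the lost equation: the server draws its static power only until the time-out threshold X, after which the low-power mode is approximated here as consuming nothing.

```python
def static_idle_energy(idle_time_s, timeout_s, static_power_w):
    """Assumed static idle energy under a time-out power policy:
    full static draw until the time-out threshold, then a low-power
    mode approximated as zero draw."""
    return static_power_w * min(idle_time_s, timeout_s)
```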

Proposed Fractional Rider Deep LSTM for Workload Prediction
Here, λ is a random number in the range [0, 1] and φ indicates the leading rider.
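Fractional calculus is typically injected into such update rules through the Grünwald-Letnikov discretization, which replaces a first-order position difference by a fractional one whose memory terms weight past positions with binomial-style coefficients. A sketch under that assumption (the exact truncation depth and coefficients used by the paper are not recoverable here):

```python
def fractional_update(history, alpha=0.5, terms=4):
    """Grunwald-Letnikov-style fractional contribution (assumed form).

    history: list of past positions, most recent last
    alpha:   fractional order in (0, 1)
    Computes alpha*x[t] + (alpha/2)*x[t-1]
             + alpha*(1-alpha)/6 * x[t-2]
             + alpha*(1-alpha)*(2-alpha)/24 * x[t-3],
    the first four memory terms commonly used in fractional
    meta-heuristics."""
    coeffs = [alpha,
              alpha / 2.0,
              alpha * (1 - alpha) / 6.0,
              alpha * (1 - alpha) * (2 - alpha) / 24.0]
    out = 0.0
    for k in range(min(terms, len(history))):
        out += coeffs[k] * history[-1 - k]   # walk backwards in time
    return out
```

The memory terms give the rider update an inertia over several past positions rather than only the latest one, which is the usual motivation for the fractional variant.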

Spark-Based Work Load Prediction and Resource Allocation Using Fractional Rider-Harmony Search Algorithm
This section presents the Fractional Rider-HSA for allocating the resources. The Fractional Rider-HSA is the combination of the ROA [27], FC [24] and the HSA [26]; combining ROA, FC and HSA adjusts the correlated parameters to obtain the global optimal solution.

Solution Encoding
Solution encoding is a representation of solutions used to identify the optimal solution and control the optimization problem. The solution encoding is portrayed in Figure 3. The steps of the Fractional Rider-HSA are explained below:

Step 1: Initialization: The Fractional Rider-HSA population is initialized, where I indicates the total number of riders, C_p(u, v) denotes the position of the u-th rider in the v-th dimension and J is the total number of dimensions.
Step 2: Fitness function identification: Estimating the fitness function is essential for obtaining the best solution. The fitness function for every solution is described by the multi-objective model, given in Equation (3) of the multi-objective model section, and the optimal solution is determined at the final iteration.
Step 3: Determine the updated location: The optimal solutions are identified using the Fractional Rider-HSA, whose update equation involves S, an arbitrary distance bandwidth, and a random number drawn from [0, 1].
Step 4: Evaluating feasibility: Feasibility is estimated using the fitness value; if the new value is better than the previous one, the previous solution is replaced with the new solution.
Step 5: Termination: The above steps are repeated until the best solution for the resource allocation process is achieved.
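The five steps above can be sketched as a generic search loop. The candidate-generation line below uses only a harmony-style pitch adjustment with the distance bandwidth S; the actual Fractional Rider-HSA update also mixes in ROA and FC terms that are not reproduced here, so this is a structural sketch rather than the paper's algorithm.

```python
import random

def fractional_rider_hsa(fitness, dim, n_solutions=10, iters=100,
                         bandwidth=0.1, seed=0):
    """Structural sketch of the five steps: initialize, evaluate,
    update a location, keep it if feasible (improving), terminate."""
    rng = random.Random(seed)
    # Step 1: initialization of I = n_solutions riders in [0, 1]^dim
    pop = [[rng.random() for _ in range(dim)] for _ in range(n_solutions)]
    # Step 2: fitness of each solution (lower is better, assumed)
    scores = [fitness(s) for s in pop]
    for _ in range(iters):
        # Step 3: updated location via pitch adjustment with bandwidth S
        i = rng.randrange(n_solutions)
        cand = [x + bandwidth * (2 * rng.random() - 1) for x in pop[i]]
        # Step 4: feasibility - keep the candidate only if it improves
        c_score = fitness(cand)
        if c_score < scores[i]:
            pop[i], scores[i] = cand, c_score
    # Step 5: termination - return the best solution found
    best = min(range(n_solutions), key=scores.__getitem__)
    return pop[best], scores[best]
```

For example, minimizing the sphere function `lambda s: sum(x * x for x in s)` over two dimensions steadily drives the best score toward zero as the improving updates accumulate.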

Results and Discussion
The results of the proposed Fractional Rider Deep LSTM approach are illustrated in this section. The performance of the developed technique is evaluated using various parameters, namely energy, MOS, delay, fairness, QE and error.

Experimental Setup
The developed Fractional Rider Deep LSTM technique is implemented in Python on a PC with a Windows 10 OS, 4 GB RAM and an Intel Core i3 processor.

Performance Metrics
The performance metrics considered for the analysis of the existing cloud gaming techniques are fairness, MOS, QE, energy and delay. The detailed explanation of these metrics is given in Section 5.

Performance Analysis
This section illustrates the performance analysis of the developed Fractional Rider Deep LSTM technique based on the predictive error with different numbers of hidden layers. The performance analysis of the developed Fractional Rider Deep LSTM with respect to predictive error is depicted in Figure 4.

Comparative Methods
The developed Fractional Rider Deep LSTM technique is compared with existing approaches, namely the potential game-based optimization algorithm, the proactive resource allocation algorithm [36], the QoE-aware resource allocation algorithm [33], the Rider-HSA method [34] and the Fractional Rider-HSA technique.

Comparative Analysis
The comparative analysis of the developed Fractional Rider Deep LSTM technique against the existing systems is performed based on several parameters, such as fairness, MOS, QE, energy and delay, with game sizes 200 and 300. For game size 300, the comparative analysis using fairness, MOS, QE, energy and delay is represented in Figure 6. Table 1 presents the analysis of the workload prediction approaches using the fairness, MOS, QE, energy and delay parameters with game sizes 200 and 300. Here, maximum fairness, MOS and QE together with minimum energy and delay are considered the best performance; from the table, the maximum fairness, MOS and QE are achieved by the developed method.

Conclusion
This paper presents an effective workload prediction method based on the developed Fractional Rider Deep LSTM.