Latency-Aware Task Scheduling and Service Delay in Mobile Edge Computing

In traditional Mobile Cloud Computing (MCC), the stream of data produced by mobile users (MUs) is uploaded over the Internet to the remote cloud for further processing. However, the long WAN distance causes high end-to-end latency. To minimize the average response time and the key constrained Service Delay (network and cloudlet delay) for MUs that offload their workloads to a geographically distributed cloudlet network, we propose the Multi-layer Latency-Aware Workload Assignment Strategy (MLAWAS), which allocates MU workloads to optimal cloudlets. Simulation results demonstrate that MLAWAS achieves the minimum average response time compared with two other existing strategies.


Introduction
The design and portability of mobile devices make them compact, which is mandatory for them to be movable and accessible anywhere [1]. Mobile devices are now capable of sustaining a wide range of applications, with growing demand raising the requirements in key areas such as computation and communication [2]. This poses a challenge because the mobile phone is a resource-constrained device with limited processing power, memory, storage, and battery energy. Cloud computing technology offers virtually unlimited dynamic resources for computation, storage, and service provision. Therefore, researchers envision extending cloud computing services to mobile devices to overcome these constraints. An attractive approach to improving the performance of mobile applications, where an application consists of multiple tasks, is to offload some of those tasks to the remote cloud. Existing research on mobile task offloading has mostly considered the cloud as the offloading destination, due to its abundance of computational resources. However, the cloud is usually located far away from its mobile users (MUs), and the Service Delay (network and processing delay) incurred by transferring data between MUs and the cloud can be very costly. This is especially unsatisfactory in augmented reality applications, mobile multiplayer gaming systems, social media, and real-time data processing, where a low response time is crucial to the MU experience [3]. Mobile users are thus becoming more demanding, anticipating that highly computation-intensive applications can be executed from their mobile devices. To meet these requirements, it is essential to integrate mobile computing and cloud computing, extending cloud capabilities to the proximity of the network edge, an approach called Edge Cloud Computing (ECC) [2]-[7].
In recent works [3]-[10], the small size of the network means that the Service Delay between the cloudlet and MUs is negligible. We believe that the effective deployment of cloudlets in geo-distributed networks (GDN) is becoming more significant. First, GDN areas have a high population density, which means that cloudlets will be accessible to a large number of users. This improves the cost-effectiveness of cloudlets, as they are less likely to sit idle. Second, service providers can take advantage of economies of scale when cloudlet services are deployed throughout the GDN, making cloudlet services more affordable to the general public. However, due to the size of a GDN, a given mobile user (MU) could be a significant number of network edges away from the nearest cloudlet, so the Service Delay of applications becomes much more significant [11].
Since all these problems cannot be formulated at the same time, in this paper we investigate how to reach the optimal workload allocation in a geo-distributed cloudlet network with lower Service Delay; this problem is difficult yet highly relevant for mobile applications in practice. The contributions of this paper are as follows:
• We propose a cloudlet architecture with lower Service Delay.
• We formulate an optimal workload allocation problem that determines the optimal workload allocation in the geo-distributed cloudlet network toward minimal response time under the constrained Service Delay.
• We propose the MLAWAS method to tackle the optimal workload allocation with lower Service Delay.
• Simulation results demonstrate that our proposed method outperforms existing methods.
Application service can be seen in Figure 2.
The rest of the paper is organized as follows. Section 2 discusses existing approaches related to the Service Delay issue in ECC. Section 3 presents the mathematical model for selecting the optimal cloudlet for workload assignment with respect to Service Delay in ECC. Section 4 clarifies how the chosen techniques lower the Service Delay in conjunction with the mathematical model. Section 5 presents the proposed method for minimizing overall delay. Section 6 reports the simulation results, and Section 7 concludes the paper.

Related Work
In the past few decades, the domain of offloading mobile tasks to clusters of computers (cloudlets) has gained much attention due to its vital applications [12].
A cloudlet brings remote cloud capabilities to the edge of the network and acts as the offloading destination for MU tasks. Detailed studies of this area can be found in [13] [14]. We organize the work related to our research into the following subsections.

Offloading to Remote Cloud
The Cuckoo framework, which performs offloading via application partitioning at runtime, was proposed in [11] [15]. Its algorithm decides whether a part of an application will be executed locally or remotely. The algorithms cited in [5] [6] for task offloading to clouds can minimize the makespan but do not consider service latency; they also suffer from time and efficiency limitations. An efficient code partition algorithm that finds the offloading and integration points on a sequence of calls, by depth-first search and a linear-time searching scheme, was proposed in [16]; its drawback is that it is time consuming. The multi-resource allocation strategy proposed in [17] improves the value of mobile cloud service in terms of system throughput (the number of admitted mobile user applications) and service latency [18]. However, its time efficiency has not been evaluated on a large physically distributed system, and the strategy is not suitable for latency-sensitive mobile cloud applications. The paper [19] proposed an offline heuristic algorithm for multi-user computation partitioning of latency-sensitive mobile cloud applications, which reaches the least average completion time for all mobile users based on the amount of provisioned resources on the cloud [20].

Offloading to Cloudlet
Cloudlets have proven to be an effective technique for power- and energy-aware task offloading [7]-[11]. Odessa [10] is an example that can offload tasks to either the cloud or a dedicated cloudlet. The authors of [12] proposed a strategy that considers both the remote cloud and the cloudlet at the same time for mobile task offloading, using game theory. Other work related to mobile cloud gaming can be found in [13], and a cloudlet-assisted cloud gaming mechanism has been proposed for access point scheduling [14]. Detailed studies of this line of work can be found in [16] [17].

Service Delay
The Service Delay is the combination of Network Delay and Cloudlet Delay; its geometric representation is shown in Figure 2. We define the Network Delay as the time required for the mobile user to send the application workload to the cloudlet plus the time it takes for the cloudlet to send the results back to the mobile user. These transfers traverse the wireless medium, from the mobile user to the base station and from the base station to the cloudlet; we therefore consider only those parameters of the wireless environment that affect this delay. We define the Cloudlet Delay as the time required at the cloudlet for the mobile user's task to be executed and its results produced. This comprises the time a task spends in the cloudlet queue waiting to be processed as well as the time the cloudlet processor takes to execute it. This kind of delay is essentially tied to the efficient utilization of cloudlet processors.
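The decomposition above can be sketched in a few lines of code. This is a minimal illustration, not the paper's model: the function names and the example values (link times, queueing time, execution time) are assumptions chosen only to show how the two delay components add up.

```python
# Illustrative sketch: Service Delay = Network Delay + Cloudlet Delay,
# following the decomposition in the text. All names/values are assumptions.

def network_delay(uplink_s, downlink_s, rtt_bs_cloudlet_s):
    """Time to send the workload up, receive results back, plus the
    base-station-to-cloudlet round trip."""
    return uplink_s + downlink_s + rtt_bs_cloudlet_s

def cloudlet_delay(queue_wait_s, execution_s):
    """Time the task waits in the cloudlet queue plus its execution time."""
    return queue_wait_s + execution_s

def service_delay(uplink_s, downlink_s, rtt_s, queue_wait_s, execution_s):
    return (network_delay(uplink_s, downlink_s, rtt_s)
            + cloudlet_delay(queue_wait_s, execution_s))

# Example: 20 ms up, 5 ms down, 8 ms RTT, 12 ms queueing, 30 ms execution
total = service_delay(0.020, 0.005, 0.008, 0.012, 0.030)
print(round(total, 3))  # 0.075 s
```

The separation into two functions mirrors the text: the first term depends only on the wireless environment, the second only on cloudlet utilization.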
Sun and Ansari [9] presented a computing-resource assignment algorithm in the cloudlet network to reduce the network delay. Their work is closely related to ours, but they did not address cloudlet delay. In [6], Jia et al. proposed a load-balancing algorithm for workload assignment among geo-distributed cloudlets, but did not address network delay. Rodrigues et al. [1] proposed methods to minimize both the computation and communication elements of service delay, controlling processing delay through virtual machine migration and improving transmission delay through transmission power control.

System Model
Moreover, t_jk is the Round Trip Time (RTT) between base station j and cloudlet k. Note that the value of t_jk (j ≠ k) can be measured by the SDN periodically [23]. Denote by Y_ij the binary indicator of mobile user i being in the coverage of base station j; finally, the total average Network Delay follows.

Average Cloudlet Delay
The cloudlet is a collection of computer clusters; in simple words, it is a micro data center. The proposed architecture is shown in Figure 1. All cloudlets are fixed and connected to the cloudlet controller, and they have the same capacity and the same service rate [24]. When the application workload of a mobile user arrives at a cloudlet, the cloudlet assigns an amount of computing resources to that workload. The mechanism works as a queuing model: suppose the application workload requests of mobile user i, where i ∈ I, are generated at an average rate λ_i, and let μ_k be the overall average service rate of an individual cloudlet k [22]. Here we consider the cloudlet as one entity that holds the mobile user's application request. Since we focus on a coarse-grained workload offloading scheme in this paper [24], we try to allocate each mobile user's workload to the optimal cloudlet. It is then suitable to model the processing of application requests from mobile users at a cloudlet as an M/M/1-PS queuing model, from which we obtain the delay of the application workload of mobile user i offloaded to cloudlet k. Consequently, the average cloudlet delay of MU i rests on the following assumptions:
• The mobile user application workload to be executed is composed of a collection of independent tasks with no dependencies on each other, often called a meta-task; these independent tasks have no priorities and no deadlines.
• Estimates of the execution time of each task (ETC) on each machine in the homogeneous cloudlets (HC) are known, and must be supplied before a task is submitted for execution. Task mapping is done in batch mode.
• The task mapper runs on a separate machine and manages the execution of all tasks on all machines in the computing suite.
• Each machine executes one task at a time, in First Come First Served order.
• The sizes of the independent tasks (meta-tasks) and the number of machines in the HC are known.
In the proposed heuristic, an estimate of each task's execution time on each machine is known a priori and contained in the expected time to compute (ETC) matrix, where ETC(t_i, m_j) is the estimated execution time of task i on machine j. The main purpose of the task scheduling algorithm is to minimize the average Cloudlet Delay by using the ETC matrix [25]. Based on Equation (1) and Equation (3), the overall average response time of mobile user i is denoted as τ_i.
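Batch-mode mapping over an ETC matrix can be illustrated with the classic min-min heuristic, shown here as a sketch under the assumptions listed above (the paper does not name its exact heuristic, so min-min is used purely for illustration): repeatedly pick the task whose earliest possible completion time is smallest and assign it to the machine achieving that completion time.

```python
# Illustrative min-min batch mapping over an ETC matrix.
# etc[t][m] = estimated execution time of task t on machine m, assumed
# known a priori as stated in the text. Machine ready times start at zero.

def min_min_map(etc):
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # ready time of each machine
    unmapped = set(range(n_tasks))
    schedule = {}                        # task -> machine
    while unmapped:
        best = None                      # (completion_time, task, machine)
        for t in unmapped:
            # best (earliest-completion) machine for this task
            m = min(range(n_machines), key=lambda j: ready[j] + etc[t][j])
            ct = ready[m] + etc[t][m]
            if best is None or ct < best[0]:
                best = (ct, t, m)
        ct, t, m = best                  # commit the overall minimum
        schedule[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return schedule, max(ready)          # mapping and makespan

etc = [[4, 6], [3, 5], [2, 8]]           # 3 tasks, 2 machines (toy values)
schedule, makespan = min_min_map(etc)
print(schedule, makespan)
```

With these toy values, tasks 2 and 1 land on machine 0 and task 0 on machine 1, giving a makespan of 6.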

Problem Formulation
The main purpose of the proposed architecture is to minimize Service Delay [26] [27]; the proposed task scheduling algorithm minimizes the average Cloudlet Delay by using the ETC matrix. The scheduling is described as follows. Let the task set I = {t1, t2, t3, ..., tn} be the meta-task submitted to the scheduler, and let the cloudlet resource set K = {m1, m2, m3, ..., mk} be the resources available at task arrival time. Our objective is to minimize the average response time of mobile users in the network, where Equation (7) ensures that the average mobile user arrival rate at each cloudlet is less than that cloudlet's average service rate, so that the system remains stable.
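The stability condition of Equation (7) can be checked mechanically: for each cloudlet, sum the arrival rates of the users assigned to it and compare against its service rate. This is a sketch with illustrative rates, not the paper's code.

```python
# Sketch of the stability constraint (Equation (7)): for each cloudlet k,
# the aggregate arrival rate of assigned workloads must stay strictly below
# the cloudlet's service rate, or the queue grows without bound.
# lam (user arrival rates) and mu (service rates) are illustrative values.

def stable(assignment, lam, mu):
    """assignment[i] = cloudlet serving user i (Equation (8): exactly one)."""
    load = {k: 0.0 for k in mu}
    for user, cloudlet in assignment.items():
        load[cloudlet] += lam[user]
    return all(load[k] < mu[k] for k in mu)

lam = {"u1": 2.0, "u2": 3.0, "u3": 1.5}      # user arrival rates (req/s)
mu = {"c1": 6.0, "c2": 4.0}                   # cloudlet service rates (req/s)
print(stable({"u1": "c1", "u2": "c1", "u3": "c2"}, lam, mu))  # True: 5<6, 1.5<4
print(stable({"u1": "c2", "u2": "c2", "u3": "c1"}, lam, mu))  # False: 5>4 at c2
```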
Equation (8) guarantees that every mobile user is served by exactly one cloudlet, and Equations (9)-(10) assign exactly one resource to each application and vice versa [24]. Theorem 1: minimizing τ_i is NP-hard; its decision version is well known to be NP-complete. The decision problem can be described as follows: given a positive value b, is there a feasible solution whose total response time is at most b while satisfying constraints (7)-(10)? We reduce from the partition problem: distributing the application workload equally across multiple cloudlets so as to minimize the total response time of the application corresponds to a partition instance, and τ_i is satisfiable under all constraints exactly when the total execution time is at most b.

Proposed Algorithm
In Algorithm 1, MLAWAS, we first sort the application workloads in descending order. The proposed algorithm MLAWAS is iterative in nature rather than sequential. Each application workload is allocated to the optimal cloudlet or cloud server, thereby minimizing the Service Delay of the application execution [26]. The time complexity of MLAWAS (Algorithm 1) is O(log(I × K)), where I is the number of iterative workload allocations and K is the number of iterations needed to determine the optimal cloudlet server with the lowest response time.
To cope with the end-to-end latency problem and the optimal assignment of tasks to homogeneous cloudlet servers, we propose MLAWAS, which determines the optimal cloudlet among all candidates and tries to allocate the user's maximum workload to that optimal server. Algorithm 1 starts with the submission of the application workload; the Mid-sort routine returns the optimal path from the initial point to the end with the lowest communication delay.
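One way to read the allocation loop described above is the following sketch: workloads are sorted in descending order and each is placed on the cloudlet whose estimated response time (network RTT plus an M/M/1-style queueing term) is lowest. This is a hedged interpretation with illustrative parameter names, not the paper's exact Algorithm 1.

```python
# Hedged sketch of the MLAWAS allocation loop: descending sort, then greedy
# placement on the cloudlet with the lowest estimated response time.
# cloudlets maps a name to {"mu": service_rate, "rtt": network_delay};
# all structures and values here are illustrative assumptions.

def mlawas_allocate(workloads, cloudlets):
    """workloads: list of (user, arrival_rate) pairs."""
    load = {c: 0.0 for c in cloudlets}
    placement = {}
    for user, rate in sorted(workloads, key=lambda w: w[1], reverse=True):
        def resp(c):
            cl = cloudlets[c]
            if load[c] + rate >= cl["mu"]:        # would violate stability
                return float("inf")
            # RTT + M/M/1 mean sojourn time 1/(mu - aggregate arrival rate)
            return cl["rtt"] + 1.0 / (cl["mu"] - (load[c] + rate))
        best = min(cloudlets, key=resp)
        placement[user] = best
        load[best] += rate
    return placement

cloudlets = {"near": {"mu": 5.0, "rtt": 0.01},
             "far":  {"mu": 10.0, "rtt": 0.05}}
print(mlawas_allocate([("u1", 3.0), ("u2", 3.0), ("u3", 1.0)], cloudlets))
```

Note how the heavy workloads go to the high-capacity cloudlet despite its larger RTT, because queueing delay dominates as a cloudlet approaches saturation.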
Here Q* is the optimal offloading solution with lower communication delay, which improves the offloading performance.
In Algorithm 3 (Process Delay), we again sort the application workload in descending order. Algorithm 3 is also iterative in nature rather than sequential. The application workload is allocated to the optimal cloudlet, based on Equations (12) and (13), or to the cloud server, thereby minimizing the response time of the application [25]. Its time complexity matches that of Algorithm 1, where I is the number of iterative workload allocations and K is the number of iterations to determine the optimal cloudlet server with the lowest response time.

Performance Evaluation
Our proposed algorithm MLAWAS is compared with baseline approaches, namely Full offloading (Full) [22] and Non-offloading (NOF) [26]. Figure 3 and Figure 4 illustrate that the proposed algorithm MLAWAS has lower communication delay compared to the baseline approaches. We further tested heavy benchmark applications [23] [26] [27] [28] with different numbers of tasks, and found that in an adaptive environment MLAWAS performs better than the existing methods.
The service time plays an important role in a high-performance offloading system, because it is the initial time to start the application, and mobile applications require startup with short delay. Our proposed MLAWAS outperforms the existing approach, as shown in Figure 4.
We set the value b as a threshold for process time. This ensures that the process delay of the offloaded workload stays below the anticipated (predefined) value. As can be observed from Figure 6 and Figure 7, the average response time of all applications (differentiated by colors and task counts) is minimized by the proposed algorithm.

Discussion on Future Work
The MLAWAS framework proposed in this paper optimizes the Service Delay, where Service Delay is the combination of cloudlet delay and network delay in a geo-distributed network. The proposed algorithm MLAWAS assigns each mobile user's application workload to the optimal cloudlet or cloud server in order to minimize the Service Delay.
Works related to ECC [4] have proposed the use of clusters of computers called cloudlets as a supplement to the cloud for offloading. Cloudlets are typically collocated with an access point (AP) in a network and are accessible to users via wireless connection, as shown in Figure 1. MUs connect to their individual base station (BS), and each base station connects to cloudlet k through an SDN (Software Defined Network) cellular network; every base station can share cloudlet services across the geographically distributed cloudlet network, and the cloudlets connect to a cloudlet controller. A key advantage of cloudlets over the cloud is that the close physical proximity between cloudlets and MUs enables shorter communication delays, which enhances the MU experience of interactive applications. It is most significant that the service constraint of ECC must meet low Service Delay, which is essential for latency-sensitive applications that need to react quickly to specific events, although there has been little research on the deployment of cloudlets at either LAN (Local Area Network) or Metropolitan Area Network scale [8] [9].
D. K. Sajnani et al., DOI: 10.4236/cn.2018.104011, Communications and Network

Figure 1. The proposed Mobile Cloud Computing architecture.
Service Delay is the amalgam of Network Delay and Cloudlet Delay. In order to minimize the average Service Delay, we propose the Mobile Cloud Computing architecture shown in Figure 1. The proposed architecture combines MU i, base station j, and cloudlet k: MUs connect to their nearest base station with low latency; base stations connect to cloudlets via SDN fiber; the SDN manages the individual base stations and shares cloudlet services among them; cloudlets connect to the cloudlet controller; and the cloudlet controller connects to the remote cloud via the Internet. Furthermore, we design a mathematical model to characterize the lower Service Delay by calculating the Network Delay and the Cloudlet Delay [22].

Network Delay

Let M denote a mobile user, B a base station, and C a cloudlet. The Network Delay, denoted T_net, incurred when a mobile user requests to offload a task to a cloudlet, comprises: 1) T_trans(M→B), the Transmission Delay for uploading the application task offload request to the related BS; 2) T_net(B→C), the Network Delay from the mobile user's base station B to the cloudlet C serving the offloaded task; 3) T_net(C→B), the Network Delay for transmitting the task results from C back to B; and 4) T_trans(B→M), for transmitting the results from B to M. The total Network Delay is therefore T_net = T_trans(M→B) + T_trans(B→M) + T_net(B↔C), where T_net(B↔C) is the Round Trip Time between the related base station and the cloudlet. In this paper, the decision sets I, J, and K denote the sets of mobile users, base stations, and cloudlets, respectively, and X_ik denotes the binary variable designating whether the application workloads generated by mobile user i are handled by the associated cloudlet k, as shown in Equation (1).
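The total Network Delay formula above can be sketched directly in code. The bandwidth model and the example values (payload sizes, link rate, RTT) are illustrative assumptions added for the sketch.

```python
# Sketch of the total Network Delay from the decomposition above:
# T_net = T_trans(M->B) + T_trans(B->M) + T_net(B<->C),
# where the last term is the base-station-to-cloudlet round trip t_jk.
# All values are illustrative assumptions.

def transmission_delay(bits, bandwidth_bps):
    """Time to push `bits` through a wireless link of the given bandwidth."""
    return bits / bandwidth_bps

def total_network_delay(upload_bits, result_bits, bandwidth_bps, t_jk):
    t_up = transmission_delay(upload_bits, bandwidth_bps)    # M -> B
    t_down = transmission_delay(result_bits, bandwidth_bps)  # B -> M
    return t_up + t_down + t_jk                              # + B <-> C RTT

# 1 MB upload, 100 kB results, 20 Mbit/s link, 8 ms RTT to the cloudlet
d = total_network_delay(8e6, 8e5, 20e6, 0.008)
print(round(d, 3))  # 0.448 s
```

The example makes the point of Section 3 concrete: for a sizeable upload, the wireless transmission terms dominate the base-station-to-cloudlet RTT.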
the amount of resources in only one individual cloudlet k, where k ∈ K, in a given time slot; assigning the workload to different cloudlets during the same slot would bring extra overhead for the mobile user. The mobile user's average application arrival rate follows a Poisson process and equals Σ_{i∈I} X_ik λ_i, and the time for executing application requests from a mobile user is exponentially distributed with average service time 1/μ_k. The workload is coarse-grained in nature and follows a Poisson process; we follow the same offloading mechanism as CloneCloud but with a different offloading decision. Before offloading we need to schedule the optimal network based on the given value b; notably, we know in advance the anticipated network and cloud status before offloading. The fundamental objective of Algorithm 2 is to optimize communication delay, whereas the optimal task allocation occurs in Algorithm 3 with minimum process delay. We designed the Communication Delay sub-algorithm (Algorithm 2) based on Markov decision process theory, where the mobile controller always takes an action a, the state is the targeted wireless network, and for each state, via a learning factor and the policy, the algorithm tries to produce the best decision that optimizes the offloading score. Further detail about basic Markov decision theory can be found in [22]; here we focus on our algorithm. Algorithm 2 uses a trained model of all wireless network values. For mobile offloading, the communication time is directly proportional to the process time. Lines 2-6 of Algorithm 2 initialize all available network technologies for application uploading and choose the optimal one among the available networks; lines 7-11 always choose the optimal path to the cloudlet.
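The learned network-selection idea behind Algorithm 2 can be sketched as an epsilon-greedy policy over per-network delay estimates: mostly exploit the network with the lowest estimated delay, occasionally explore, and update the estimate after each observation. The update rule, parameters, and network names below are illustrative assumptions, not the paper's exact Algorithm 2.

```python
import random

# Hedged sketch of the Algorithm 2 idea: the mobile controller keeps learned
# delay estimates for each available wireless network, picks the network with
# the lowest estimate (epsilon-greedy), and updates the estimate with a
# learning factor after observing the actual delay. Illustrative only.

def select_network(estimates, epsilon=0.1, rng=random):
    if rng.random() < epsilon:                    # explore a random network
        return rng.choice(list(estimates))
    return min(estimates, key=estimates.get)      # exploit the best estimate

def update_estimate(estimates, network, observed_delay, alpha=0.2):
    """Exponential moving average toward the observed delay (learning step)."""
    estimates[network] += alpha * (observed_delay - estimates[network])

estimates = {"wifi": 0.030, "lte": 0.060, "5g": 0.045}   # seconds, assumed
chosen = select_network(estimates, epsilon=0.0)          # deterministic here
print(chosen)  # wifi
update_estimate(estimates, "wifi", 0.050)                # observe 50 ms
print(round(estimates["wifi"], 4))  # 0.034
```

With epsilon set to zero the policy is purely greedy, matching the "always choose the optimal path" behavior described for lines 7-11; a small positive epsilon models the learning/exploration aspect.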

Figure 3 and Figure 5 show that MLAWAS has lower Service Delay, in terms of cloudlet delay and average response time, compared to the baseline LEAN and Conventional algorithms.
As noted in [21], real-time database-driven mobile applications have short computation but long transmission, meaning that some mobile applications are affected by Process Delay and others by Network Delay [18]. To the best of our knowledge, no existing work addresses Service Delay in a GDN (i.e., considering both the network delay and the cloudlet delay) for all MUs, which remains a challenging problem. In this paper, we introduced the Multi-layer Latency-Aware Workload Assignment Strategy (MLAWAS) to minimize the Service Delay and optimally allocate mobile workloads to cloudlets with minimum response time [21].