Varying Response Ratio Priority: A Preemptive CPU Scheduling Algorithm (VRRP)

In the present era, the CPU is one of the most important resources of a computer system. With the increasing number of applications, a large number of processes exist in the system at the same time. Managing the CPU among many simultaneous processes so that CPU utilization and process execution achieve optimal performance is a challenging problem, and the most efficient algorithm remains an open question. In this manuscript, we propose Varying Response Ratio Priority (VRRP), a new preemptive CPU scheduling algorithm based on Priority scheduling and Shortest Remaining Time First. In this algorithm, a priority is calculated for each process, and the process with the highest priority gets the CPU first or next. For a new process, the priority equals the inverse of its burst time; for old processes, the priority is the ratio of waiting time to remaining burst time. The objective is to execute all processes with minimum average waiting time and no starvation. Experiments and comparisons show that VRRP outperforms other CPU scheduling algorithms, giving better results on the standard scheduling criteria. We use the deterministic model to compare the different algorithms.


Introduction
In the early days, due to the heavy cost of hardware, full attention was given to better utilization of resources. After vast advancements in technology, the cost of hardware dropped to affordable prices, and interest turned to fast computation and response. Modern systems are driven by the multiprogramming and multitasking approach of the operating system, and the CPU plays an important role in implementing both [1]. Computers now affect most aspects of daily life, and their use in almost every field has produced a large number of applications for every level of user. Since many applications run simultaneously, requiring many processes to be created, the processes must be executed in a manner that provides good response to users. The growing number of processes in the system has increased the responsibility of the CPU scheduler. Users expect fast response and execution, which translates into a requirement for the least possible waiting time. A number of scheduling algorithms exist [2] [3], but Round Robin, Shortest Remaining Time First, and Priority scheduling are among the most widely used; each has its own advantages and disadvantages. The design of scheduling algorithms has a history of roughly thirty years; even so, we are motivated to find room for improvement in the CPU scheduling approach, believing that scope for progress remains where it may seem exhausted. Significant work has been done by many researchers over the last two decades, but we see new scope, and through this paper a new scheduling algorithm, Varying Response Ratio Priority, is introduced. In this paper, we combine the advantages of Shortest Remaining Time First and Priority scheduling by merging them, and we eliminate the starvation condition by using indirect aging.
In this algorithm, for new processes the decision of allocating the CPU is made as in Shortest Remaining Time First, while for old processes waiting in the ready queue the decision is governed by the ratio of waiting time to remaining burst time. The main objective of this paper is to provide scheduling without starvation, with an average waiting time equal to, or at least close to, that of Shortest Remaining Time First. We use the deterministic model to compare VRRP with other scheduling algorithms. The next section contains a brief description of some of the most common scheduling algorithms. In Section 3, we explain the working and calculations of our approach. Section 4 presents comparative results against some of the most popular scheduling algorithms, and Section 5 concludes.

Related Work
As time has passed, the enhancement of technology has had a major effect on the way operating systems work, from single user to multi-user, and from single program/process execution to multiprogramming/multitasking. Throughout, the CPU has remained the central attraction as the most important resource, and managing processes for CPU allocation is one of the most important tasks of the operating system. The short-term scheduler [4] [5] is responsible for allocating the CPU to processes under some methodology; these methodologies are called scheduling algorithms. Different scheduling algorithms are used in different operating systems, each with its merits and demerits. Some are basic algorithms and some are upgraded versions of the basic ones, and their efficiency varies across environments and conditions. A system engineer has to consider many factors before designing any scheduling algorithm. Designers and users expect several properties from a scheduler, such as maximum throughput, no starvation, least overhead, sensible calculation and assignment of priority, a good tradeoff between response and utilization, and desirable behaviour. It is important to understand the factors that decide the grade of a scheduling algorithm. Throughput (TP) [2] [6] gives the number of processes executed per second. Turnaround time (TT) of a process is its total lifetime in the system: the interval from submission of the process to the ready queue until its termination, which includes the total execution time, waiting time, and response time. Response time (RT) is the interval from the submission of a process to the ready queue until it starts executing for the first time. Waiting time (WT) is the time during the life of a process when it sits idle in the ready queue waiting for its turn to get the CPU.
The CPU is the most important resource and should be kept busy all the time; the ratio of the time the CPU is busy executing processes to the total system time is called CPU utilization (CU), and it reflects the efficiency of the system. Switching the CPU from one process to another is known as a context switch (CS). It requires saving the status of the old process into its PCB, so the system stops executing it, and loading the status of the new process from its PCB into the system, so the new process starts executing. Context switches are pure overhead on the system. The ideal conditions for any scheduling algorithm are met when the criteria reach their standard targets: maximum throughput, minimum turnaround time, minimum response time, minimum waiting time, maximum CPU utilization, and minimum context-switch overhead [7] [8]. Some of the scheduling algorithms deployed in operating systems are:
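As a quick illustration of these criteria, the following sketch (our own illustration, not taken from the paper's experiments; all names are illustrative) derives turnaround, waiting, and response time from a process's basic timestamps:

```python
# Illustrative helper (not part of the proposed algorithm): derive the
# scheduling criteria defined above from basic per-process timestamps.
def metrics(arrival, burst, completion, first_run):
    turnaround = completion - arrival      # TT: total lifetime in the system
    waiting = turnaround - burst           # WT: time spent idle in the ready queue
    response = first_run - arrival         # RT: delay before first execution
    return turnaround, waiting, response

# A process arriving at t=0 with burst 5 that first runs at t=2
# and completes at t=9 has TT=9, WT=4, RT=2.
print(metrics(arrival=0, burst=5, completion=9, first_run=2))
```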

First Come First Serve (FCFS)
This is the simplest scheduling algorithm: the processes residing in the ready queue get the CPU in the order of their arrival. The process that enters the ready queue first gets the CPU first, and later processes get it sequentially in arrival order [2]. The algorithm is nonpreemptive: once a process is allocated the CPU, it does not leave until it terminates. The ready queue is implemented as a FIFO list; newly arrived processes are added at the rear, and the CPU is allocated to the process at the front (head). The drawback of this algorithm is its high average waiting time.
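A minimal FCFS sketch (our illustration, not code from the paper) makes the convoy effect behind the high average waiting time concrete:

```python
# FCFS sketch: processes are served strictly in arrival order, nonpreemptively.
def fcfs_waits(procs):
    """procs: list of (pid, arrival, burst); returns per-process waiting times."""
    time, waits = 0, {}
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):  # FIFO order
        time = max(time, arrival)      # CPU idles until the process arrives
        waits[pid] = time - arrival
        time += burst                  # run to completion (nonpreemptive)
    return waits

# Convoy example: one long job ahead of two short ones gives waits
# {P1: 0, P2: 24, P3: 27}, i.e. a high average waiting time of 17.
print(fcfs_waits([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
```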

Shortest Remaining Time First (SRTF)
In this algorithm, the decision of which process gets the CPU first or next is based on the remaining burst time, i.e., the time still required to finish executing the process. The process with the minimum remaining execution time gets the CPU [2] [9]. A ready queue holds the process control blocks of all processes ready for the CPU; the process with the minimum remaining burst time is selected and allocated the CPU. If a new process enters the system, the priorities of all processes in the ready queue are re-evaluated; if the newly arrived process has a smaller burst time than the executing process, the CPU is switched to the new process by saving the system state into the old process's control block and loading the new process's state into the system. If the new process's burst time is greater than that of the executing process or of the processes in the ready queue, its process control block is queued in the ready queue. If no new process arrives before the executing process completes, the next process is again selected by finding the minimum remaining burst time. In this way the scheduler keeps selecting processes for CPU allocation. The algorithm has a drawback: if processes with small burst times keep arriving in the ready queue, a process with a large burst time may suffer starvation. SRTF favours processes with low CPU burst times, and it achieves low average response time and low average waiting time.
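The preemptive selection rule can be sketched as a one-time-unit simulation (our own sketch, not the paper's implementation), which makes the "always run the least remaining burst" behaviour explicit:

```python
def srtf_waits(procs):
    """procs: dict pid -> (arrival, burst); preemptive SRTF simulated in
    one-time-unit steps; returns per-process waiting times."""
    remaining = {pid: b for pid, (a, b) in procs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [pid for pid in remaining if procs[pid][0] <= time]
        if not ready:                  # CPU idles until the next arrival
            time += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])  # least remaining burst wins
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = time
    # waiting time = turnaround - burst
    return {pid: finish[pid] - procs[pid][0] - procs[pid][1] for pid in finish}

# Four staggered arrivals: the long P3 waits longest (15 units).
print(srtf_waits({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
```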

Priority Scheduling (PR)
In this type of scheduling, processes are categorized on the basis of some system factor, such as the type of process. The importance of a process is expressed as a value from a range, and any process entering the ready queue is assigned this importance as its priority. This priority value [9] governs CPU allocation: the process with the highest priority gets the CPU first or next. There are two versions, preemptive and nonpreemptive; the preemptive version is of interest here. In preemptive priority scheduling, each process belongs to a category whose priority is assigned on entry to the ready queue and does not change during the lifetime of the process. As soon as a process enters the ready queue, its process control block carries the priority generated by the system. The scheduler selects the process with the highest priority and allocates the CPU; if a new process arrives, its priority is compared with that of the executing process [2]. If the new process has the higher priority, the scheduler switches the CPU to it, saving the system state into the old process's control block and loading the new process's control block into the system. If the new process's priority is lower, the executing process continues and the new process waits in the ready queue for its turn. The switching overhead of this scheduling is not minimal but is lower than that of some others, and for high-priority processes the waiting time and response time are small. A low-priority process may starve [10] if high-priority processes keep arriving in the ready queue.
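The preemptive variant can be sketched in the same one-time-unit style (our illustration; fixed priorities, higher value means higher priority):

```python
def preemptive_priority(procs):
    """procs: pid -> (arrival, burst, priority); higher value = higher priority.
    One-time-unit preemptive simulation; returns per-process finish times."""
    remaining = {p: b for p, (a, b, _) in procs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= time]
        if not ready:
            time += 1
            continue
        p = max(ready, key=lambda q: procs[q][2])  # highest priority wins
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = time
    return finish

# P2 (priority 3) preempts P1 at t=1; P1, the lowest priority, finishes last.
print(preemptive_priority({"P1": (0, 4, 1), "P2": (1, 3, 3), "P3": (2, 2, 2)}))
```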

Round Robin Scheduling (RR)
This is a specific case of priority scheduling in which all processes take their priority from their arrival order in the ready queue and each process receives a fixed slice of CPU time, called the time quantum [11]. If a process does not finish its execution within this quantum, it has to wait for the next one. The scheduler allocates the time slice/quantum [12] in cyclic order, hence the name round robin. The data structure used is a FIFO queue [2]: the next process is taken from the head of the queue, and a preempted or newly arrived process is added at the rear. The choice of the time quantum plays a major role in this algorithm, and the implementer must choose it carefully; it must be significant with respect to the dispatch latency. Round robin never leads to starvation and has low response time, but at the cost of high waiting time and turnaround time for long processes [13].
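The cyclic FIFO behaviour can be sketched as follows (our illustration; all processes assumed to arrive at time 0, with the quantum of 2 used later in the experiments):

```python
from collections import deque

def round_robin(procs, quantum=2):
    """procs: list of (pid, burst), all arriving at time 0.
    Cyclic time-slicing over a FIFO queue; returns completion times."""
    queue = deque(procs)               # head = next to run, rear = requeued
    remaining = dict(procs)
    time, completion = 0, {}
    while queue:
        pid, _ = queue.popleft()
        run = min(quantum, remaining[pid])   # run one quantum or to completion
        time += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = time
        else:
            queue.append((pid, remaining[pid]))  # unfinished: back to the rear
    return completion

# The short jobs P2 and P3 finish early (t=9, t=10), but the long P1
# is stretched out to t=30 by repeated preemption.
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=2))
```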

Highest Response Ratio Next (HRRN)
This is a nonpreemptive priority scheduling algorithm [8] in which the priority of a process is given by its response ratio, defined as (waiting time + service time) / service time, where the service time is the CPU burst time required to execute the full process. The intention is that the priority of a process increases the longer it has to wait.
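A short sketch (ours) of the HRRN selection rule, picking the highest response ratio whenever the CPU becomes free:

```python
def hrrn_order(procs):
    """procs: pid -> (arrival, burst). Nonpreemptive HRRN:
    response ratio = (waiting + service) / service. Returns execution order."""
    pending, time, order = dict(procs), 0, []
    while pending:
        ready = {p: ab for p, ab in pending.items() if ab[0] <= time}
        if not ready:                  # idle until the next arrival
            time += 1
            continue
        pid = max(ready,
                  key=lambda p: (time - ready[p][0] + ready[p][1]) / ready[p][1])
        order.append(pid)
        time += pending.pop(pid)[1]    # run the winner to completion
    return order

# E overtakes the earlier-arriving D at t=13 because its short service
# time inflates its response ratio faster.
print(hrrn_order({"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}))
```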

Our Approach: Varying Response Ratio Priority (VRRP)
In this algorithm, the CPU allocation decision is based on priority. We use a different approach, called varying response ratio, for calculating the priority of each process. It is a preemptive priority scheduling algorithm.
To calculate the priority, the following propositions are used:
Proposition 1: In the ideal case, the initial priority must be proportional to the inverse of the burst time.
Proposition 2: For old waiting or executing processes, the priority must increase with waiting time, to avoid the starvation condition.
Proposition 3: For waiting or executing processes, the priority must be calculated from the inverse of the remaining burst time rather than the total burst time.
Proposition 4: If two or more processes have the same priority, their waiting times are compared: if the waiting times are equal, the process that arrived first in the ready queue is preferred; if the waiting times differ, the process with the lowest remaining burst time is preferred.
Acronyms and abbreviations used: the priority of process P_i is VRRPP_i, its waiting time is WT_i, its burst time is BT_i, and its remaining burst time for execution is RBTE_i.
As per Proposition 1: VRRPP_i ∝ 1/BT_i (only for new processes).
As per Proposition 2: VRRPP_i ∝ WT_i (only for old or executing processes).
As per Proposition 3: VRRPP_i ∝ 1/RBTE_i (only for old or executing processes).
Combining these propositions, the priority of process i is calculated as VRRPP_i = 1/BT_i for a new process, and VRRPP_i = WT_i / RBTE_i for an old or executing process.
Initially, when a process enters the ready queue, its priority is calculated as the inverse of its burst time, as in Shortest Remaining Time First, and the scheduler compares its priority with those of the other processes, including the executing one; the process with the highest priority gets the CPU next. If a process other than the executing process has a higher priority, the CPU switches to that process. For any process other than a newly arrived one, the priority is calculated as the ratio of waiting time to remaining burst time. The motivating idea is that processes with small burst times must be preferred over processes with long burst times, processes with small remaining burst times must be preferred over processes with long remaining burst times, and, to avoid starvation, the priority of old processes must grow with their waiting time. Indirect aging is applied here through the waiting-time factor. If two processes attain the same priority, their waiting times are compared: if the waiting times are equal, FCFS is applied among them; if the waiting times differ, the process with the lowest remaining burst time gets the CPU next. If some processes have equal and some different waiting times, they are grouped by waiting time and each group is sorted by FCFS; the top elements of the groups are then compared on remaining burst time, and the process with the lowest remaining burst time gets the CPU next.
If two processes P_i and P_j get the same priority, the process with the lower remaining burst time gets preference, because a process with a low remaining burst time has a higher chance of finishing its execution before a new process arrives, avoiding a context switch.
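The priority formula and the Proposition 4 tie-breaking can be sketched as follows (our own sketch of the rules above; function and variable names are illustrative, not from the paper):

```python
def vrrp_priority(arrival, burst, remaining, now):
    """VRRP priority: 1/BT for a new process (Proposition 1),
    WT/RBTE for an old or executing process (Propositions 2 and 3)."""
    if now == arrival:                             # newly arrived process
        return 1.0 / burst
    waiting = now - arrival - (burst - remaining)  # time spent idle so far
    return waiting / remaining

def select_next(ready, now):
    """ready: list of (pid, arrival, burst, remaining); returns the pid to run,
    applying Proposition 4 when priorities tie."""
    pris = {p[0]: vrrp_priority(p[1], p[2], p[3], now) for p in ready}
    top = max(pris.values())
    tied = [p for p in ready if pris[p[0]] == top]
    if len(tied) == 1:
        return tied[0][0]
    waits = {p[0]: now - p[1] - (p[2] - p[3]) for p in tied}
    if len(set(waits.values())) == 1:              # equal waiting: FCFS
        return min(tied, key=lambda p: p[1])[0]
    return min(tied, key=lambda p: p[3])[0]        # else: least remaining burst

# Two fresh arrivals at t=0: the shorter burst has the higher 1/BT priority.
print(select_next([("P1", 0, 5, 5), ("P2", 0, 10, 10)], 0))
```

Note how indirect aging shows up: an old process that has waited 4 units with 6 units left (priority 4/6) outranks a new 2-unit process (priority 1/2), so long waiters are not starved.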

Algorithm
Assumption: There exists a job pool JP = {p_1, p_2, ..., p_n} in which n processes reside, waiting to be brought into main memory to get the CPU for execution, and a long-term scheduler (LTS) that moves a process from JP to the main-memory ready queue (Queue) at its submission time. At the start of CPU scheduling the system time (ST) is zero, and it keeps incrementing. P is the set of processes in Queue. The abbreviations and acronyms used in the manuscript formalize the factors affecting the scheduling algorithms.
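Under these assumptions, the whole VRRP loop can be sketched as a one-time-unit simulation (our illustrative sketch; for brevity, the full Proposition 4 tie-breaking is simplified to FCFS on arrival time):

```python
def vrrp_schedule(procs):
    """procs: pid -> (arrival, burst). One-time-unit preemptive simulation of
    the VRRP rule; returns per-process finish times. Illustrative sketch only:
    ties are broken by FCFS on arrival, a simplification of Proposition 4."""
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= time]
        if not ready:                          # idle until the next arrival
            time += 1
            continue

        def prio(p):
            arrival, burst = procs[p]
            if time == arrival:                # new process: 1/BT
                return 1.0 / burst
            waiting = time - arrival - (burst - remaining[p])
            return waiting / remaining[p]      # old process: WT/RBTE

        p = max(ready, key=lambda q: (prio(q), -procs[q][0]))
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = time
    return finish

# P2 preempts P1 on arrival (1/2 > 0/2), then P1's growing waiting time
# pulls the CPU back; both finish without starvation.
print(vrrp_schedule({"P1": (0, 3), "P2": (1, 2)}))
```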

Experimental Analysis
The processes used are independent and the system is a uniprocessor. Context-switching time is assumed to be negligible. The time slice/quantum used for round robin is fixed at 2 time units. Processes arrive in the ready queue with known burst times. Our experiments take a few inputs, such as burst time and submission/arrival time, and produce output parameters such as response time, waiting time, turnaround time, and their average values. Gantt charts are given to show the calculations and the sequence of CPU allocation. We performed three experiments to compute the evaluation parameters, using the deterministic model as the methodology. The data were taken in three forms: decreasing, increasing, and random order of burst time, with different arrival/submission times. The algorithm is capable of handling large data sets, but we used a small one. For every data set we compared our algorithm with SRTF, FCFS, RR, and HRRN. As per the graph plotted in Figure 14, when the sampled data are in increasing order, our algorithm VRRP provides an average response time similar to FCFS, SRTF, and HRRN but greater than RR. When the sampled data are in decreasing order, VRRP provides an average response time lower than FCFS, SRTF, RR, and HRRN. When the data are random, its average response time is less than FCFS, RR, and HRRN but greater than SRTF.
The graph plotted in Figure 15 shows that the average waiting time of VRRP is similar to FCFS, SRTF, and HRRN but less than RR when the selected data are in increasing order. When the sampled data are in decreasing order, the average waiting time of VRRP is lower than FCFS, RR, and HRRN but greater than SRTF. When the data are selected randomly, the average waiting time is less than FCFS, RR, and HRRN but greater than SRTF.
The graph of Figure 16 covers the three cases of increasing, decreasing, and random order. In the first case, with data in increasing order, our approach VRRP yields an average turnaround time equal to FCFS, SRTF, and HRRN but less than RR. When the data are in decreasing or random order, VRRP yields an average turnaround time less than FCFS, RR, and HRRN but greater than SRTF.
VRRP performs better than FCFS and HRRN in all cases. Although SRTF's average response time, average turnaround time, and average waiting time are better than VRRP's, VRRP is preferable overall because SRTF can lead to starvation [7], while VRRP never leaves any process starving. VRRP also provides better results than RR: although RR's average response time is the lowest in the increasing-order case, its average waiting time and average turnaround time are very high, which makes it less efficient.

Conclusion
The CPU is the most important resource of a computer system, and the way it is used greatly affects system performance. Improving the scheduling of the CPU among processes is therefore a direct route to improving system performance. In SRTF and some other algorithms, low-priority processes may face the starvation problem. Our proposed scheduling algorithm VRRP performs better than other scheduling algorithms in that, unlike SRTF, it never lets any process starve, and it achieves better scheduling criteria than the other algorithms: RR, HRRN, and FCFS. In the future we plan to investigate VRRP further for its usefulness in providing additional task-oriented results within the comprehensive composite functioning of a modern operating system.