
Scheduling is a key concept in computer multitasking, multiprocessing operating system and real-time operating system designs. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software known as a scheduler and a dispatcher.
The scheduler is concerned mainly with:
* CPU utilization - to keep the CPU as busy as possible.
* Throughput - the number of processes that complete their execution per time unit.
* Turnaround time - the total time between submission of a process and its completion.
* Waiting time - the amount of time a process has been waiting in the ready queue.
* Response time - the amount of time from when a request was submitted until the first response is produced.
* Fairness - equal CPU time to each thread.
* 1 Types of operating system schedulers
o 1.1 Long-term Scheduler
o 1.2 Mid-term Scheduler
o 1.3 Short-term Scheduler
o 1.4 Dispatcher
* 2 Scheduling criteria
* 3 Scheduling disciplines
* 4 Fundamental scheduling algorithms
o 4.1 First In First Out
o 4.2 Shortest remaining time
o 4.3 Fixed priority pre-emptive scheduling
o 4.4 Round-robin scheduling
o 4.5 Multilevel Queue Scheduling
o 4.6 Overview
o 4.7 How to choose a scheduling algorithm
* 5 Operating system scheduler implementations
o 5.1 Windows
o 5.2 Mac OS
o 5.3 Linux
o 5.4 FreeBSD
o 5.5 NetBSD
o 5.6 Solaris
o 5.7 Summary
1 Types of operating system schedulers
Long-term Scheduler
The long-term scheduler decides which jobs or processes are admitted to the ready queue; this scheduler dictates what processes are to run on a system, and the degree of concurrency to be supported at any one time. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks; without proper real-time scheduling, modern GUIs would seem sluggish. Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Mid-term Scheduler
The mid-term scheduler temporarily removes processes from main memory and places them on secondary memory (such as a disk drive) or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also incorrectly as "paging out" or "paging in"). The mid-term scheduler may decide to swap out a process which has not been active for some time, or a process which has a low priority, or a process which is page faulting frequently, or a process which is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource. [Stallings, 370]
Short-term Scheduler
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.
Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
* Switching context
* Switching to user mode
* Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
2 Scheduling criteria
Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms; which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:
* CPU utilization. We want to keep the CPU as busy as possible.
* Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.
* Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
* Waiting time. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
* Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
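The relationships among these criteria can be made concrete with a small helper. This is only an illustrative sketch: it assumes each process's arrival, first-run and completion timestamps are known, the time unit and numbers are hypothetical, and a single CPU burst with no separate I/O phases is assumed so that waiting time is simply turnaround time minus the burst.

```python
def metrics(arrival, first_run, completion, burst):
    """Compute per-process scheduling criteria from timestamps.

    turnaround = completion - arrival   (total time in the system)
    waiting    = turnaround - burst     (time spent in the ready queue,
                                         assuming no separate I/O phases)
    response   = first_run - arrival    (time until the first response)
    """
    turnaround = completion - arrival
    waiting = turnaround - burst
    response = first_run - arrival
    return turnaround, waiting, response

# A process arrives at t=0, first runs at t=4, needs 5 units of CPU,
# and completes at t=12:
print(metrics(arrival=0, first_run=4, completion=12, burst=5))  # → (12, 7, 4)
```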
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time. Investigators have suggested that, for interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on the average but is highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.
3 Scheduling disciplines
Main article: Scheduling algorithm
Scheduling disciplines are algorithms used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources.
4 Fundamental scheduling algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU scheduling algorithms. In this section, we describe several of them.
First In First Out
Also known as First Come, First Served (FCFS), this is the simplest scheduling algorithm: FIFO simply queues processes in the order that they arrive in the ready queue.
* Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
* Throughput can be low, since long processes can hog the CPU.
* Turnaround time, waiting time and response time can be high for the same reason.
* No prioritization occurs, thus this system has trouble meeting process deadlines.
* The lack of prioritization does permit every process to eventually complete, hence no starvation.
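The behaviour above can be sketched with a short simulation. This is a minimal single-CPU model with no I/O; the process names and burst times are hypothetical, chosen so that one long job ahead of two short ones makes the high waiting times visible (the so-called convoy effect).

```python
from collections import deque

def fcfs(processes):
    """Simulate First Come, First Served on (name, arrival, burst) tuples.

    Returns a dict of name -> (waiting_time, turnaround_time).
    """
    queue = deque(sorted(processes, key=lambda p: p[1]))  # order of arrival
    clock, results = 0, {}
    while queue:
        name, arrival, burst = queue.popleft()
        clock = max(clock, arrival)        # CPU may sit idle until arrival
        waiting = clock - arrival          # time spent in the ready queue
        clock += burst                     # run to completion, no preemption
        results[name] = (waiting, clock - arrival)
    return results

# One long job arriving first delays both short jobs:
print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
# → {'P1': (0, 24), 'P2': (24, 27), 'P3': (27, 30)}
```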
Shortest remaining time
Also known as Shortest Job First (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge or estimation of the time required for a process to complete.
* If a shorter process arrives during another process' execution, the currently running process may be interrupted, dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
* This algorithm is designed for maximum throughput in most scenarios.
* Waiting time and response time increase as the process' computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than in FIFO, however, since no process has to wait for the termination of the longest process.
* No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
* Starvation is possible, especially in a busy system with many small processes being run.
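A preemptive shortest-remaining-time simulation can be sketched as follows. This is illustrative only: it assumes burst times are known exactly (real schedulers can only estimate them), a single CPU, and zero context-switch cost; the process names and times are hypothetical.

```python
import heapq

def srt(processes):
    """Shortest Remaining Time (preemptive SJF) on (name, arrival, burst).

    Returns a dict of name -> completion time. The ready queue is a heap
    keyed on remaining time, so the shortest job always runs next; a new
    arrival with a shorter remaining time preempts the running process.
    """
    events = sorted(processes, key=lambda p: p[1])  # by arrival time
    ready, clock, i, done = [], 0, 0, {}
    while i < len(events) or ready:
        if not ready:                       # CPU idle: jump to next arrival
            clock = max(clock, events[i][1])
        while i < len(events) and events[i][1] <= clock:
            name, arrival, burst = events[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        remaining, arrival, name = heapq.heappop(ready)
        # Run until completion or the next arrival, whichever comes first.
        next_arrival = events[i][1] if i < len(events) else float("inf")
        run = min(remaining, next_arrival - clock)
        clock += run
        if run == remaining:
            done[name] = clock              # finished
        else:                               # preempted: back into the heap
            heapq.heappush(ready, (remaining - run, arrival, name))
    return done

print(srt([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
# → {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}
```

Note that P1 is preempted at t=1 by the shorter P2, and the long P3 finishes last: exactly the long-process penalty and starvation risk described above.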
Fixed priority pre-emptive scheduling
The operating system assigns a fixed priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower priority processes get interrupted by incoming higher priority processes.
* Overhead is not minimal, nor is it significant.
* FPPS has no particular advantage in terms of throughput over FIFO scheduling.
* Waiting time and response time depend on the priority of the process. Higher priority processes have smaller waiting and response times.
* Deadlines can be met by giving processes with deadlines a higher priority.
* Starvation of lower priority processes is possible with large numbers of high priority processes queuing for CPU time.
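A fixed-priority preemptive scheduler can be sketched in the same event-driven style: the ready queue is a heap keyed on the fixed priority rank rather than on remaining time. The convention that a lower number means a higher priority, and all names and numbers, are hypothetical; the sketch has no aging, so a steady stream of high-priority work would starve low-priority processes, as noted above.

```python
import heapq

def fpps(processes):
    """Fixed-priority preemptive scheduling on (name, arrival, burst, prio)
    tuples, where a LOWER prio number means a HIGHER priority.

    Returns a dict of name -> completion time. Ties are broken by arrival
    time; an arriving higher-priority process preempts the running one.
    """
    events = sorted(processes, key=lambda p: p[1])  # by arrival time
    ready, clock, i, done = [], 0, 0, {}
    while i < len(events) or ready:
        if not ready:                       # CPU idle: jump to next arrival
            clock = max(clock, events[i][1])
        while i < len(events) and events[i][1] <= clock:
            name, arrival, burst, prio = events[i]
            heapq.heappush(ready, (prio, arrival, burst, name))
            i += 1
        prio, arrival, remaining, name = heapq.heappop(ready)
        # Run until completion or the next arrival (a possible preemption).
        next_arrival = events[i][1] if i < len(events) else float("inf")
        run = min(remaining, next_arrival - clock)
        clock += run
        if run == remaining:
            done[name] = clock
        else:
            heapq.heappush(ready, (prio, arrival, remaining - run, name))
    return done

# A low-priority P1 is preempted by higher-priority arrivals P2 and P3:
print(fpps([("P1", 0, 10, 3), ("P2", 2, 4, 1), ("P3", 4, 2, 2)]))
# → {'P2': 6, 'P3': 8, 'P1': 16}
```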
Round-robin scheduling
The scheduler assigns a fixed time unit per process, and cycles through them.
* RR scheduling involves extensive overhead, especially with a small time unit.
* Throughput is balanced between FCFS and SJF: shorter jobs are completed faster than in FCFS and longer processes are completed faster than in SJF.
* RR gives the fastest average response time; waiting time is dependent on the number of processes, not the average process length.
* Because of high waiting times, deadlines are rarely met in a pure RR system.
* Starvation can never occur, since no priority is given. The order of time unit allocation is based upon process arrival time, similar to FCFS.
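The cycling behaviour is easy to sketch with a plain queue. This minimal model assumes all processes are ready at time 0 and ignores context-switch cost; the names, burst times and quantum are hypothetical.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin with a fixed time quantum on (name, burst) pairs,
    all ready at time 0.

    Returns a list of (name, completion_time) in finish order. A process
    that does not finish within its quantum goes to the back of the queue.
    """
    queue = deque(processes)
    clock, finished = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for at most one quantum
        clock += run
        if remaining > quantum:
            queue.append((name, remaining - run))  # back of the queue
        else:
            finished.append((name, clock))
    return finished

# Short jobs finish quickly even behind a long one:
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# → [('P2', 7), ('P3', 10), ('P1', 30)]
```

With a quantum of 4, the short jobs P2 and P3 complete by t=10 instead of waiting for all 24 units of P1, illustrating the fast response times at the cost of extra context switches.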
Multilevel Queue Scheduling
This is used for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs.
Policy-driven scheduling
The idea is to make real promises to users about performance and then live up to them. To do this, the system must keep track of how much CPU time each user has had for all of that user's processes since login, and how long each user has been logged in.
Two-level scheduling
