
OPERATING SYSTEM

Scheduling

4.1. Scheduling
Scheduling is the set of criteria and mechanisms that determine which process is allowed to run next.
CPU scheduling is one of the most important functions of an operating system.
A main objective of the OS is to keep the CPU and I/O devices as busy as possible; scheduling is
required to achieve this.
Objective
Modern operating systems support multiprogramming and multithreading.
A main job of the OS is CPU scheduling.
CPU scheduling is the policy that decides which of the ready processes the CPU executes at a
given time.
In a multiprogramming OS, the CPU switches from process to process and is kept busy.
So the objectives of scheduling are:
1. To maximize CPU utilization & throughput (tasks completed per unit of time).
The operating system becomes more productive if the CPU is utilized properly.
2. To minimize turnaround time (submission to completion), waiting time (the sum of
the periods spent in the ready queue) and response time (the time until the
first response is produced).
3. To maintain fairness. Every task should be handled eventually (no starvation), and
tasks with similar characteristics should be treated equally.
Concept
Multiprogramming systems try to keep the CPU & I/O devices busy.
Almost all programs have some alternating cycle of CPU time and I/O time.
In a simple system running a single process, the time spent in waiting for I/O is
wasted, and those CPU cycles are lost forever.
A scheduling system allows one process to use the CPU while another is waiting for
I/O, thereby making full use of CPU & I/O devices.
The challenge is to make the overall system as "efficient" and "fair" as possible under
varying and dynamic conditions.
The OS makes scheduling decisions under the following four conditions:
1. When a process switches from Running to Waiting (I/O wait)
2. When a process switches from Running to Ready (time slice)
3. When a process switches from Waiting to Ready (I/O completion)
4. When a process terminates
CPU & I/O Burst Cycle
Process execution is an alternation of two activities, the CPU burst and the I/O burst.
A process cycles between CPU processing and I/O activity
Almost all processes alternate between two states in a continuing cycle, as shown in
the figure below:
1. A CPU burst of execution, and
2. An I/O burst, waiting for data transfer in or out of the system.

Fig. CPU & I/O Burst Cycle

Process execution begins with a CPU burst, followed by an I/O burst, then another CPU
burst, another I/O burst, and so on.
The final CPU burst ends with a system request to terminate execution, and thus the
process terminates.
A process generally has either many short CPU bursts or a few long CPU bursts, and on
that basis two categories of processes exist:
1. I/O bound processes - these have many short CPU bursts
2. CPU bound processes - these have a few long CPU bursts
This distinction can affect the choice of CPU scheduling algorithm used in an OS.
Some processes, such as the one in Fig. A below, spend most of their time on
execution (CPU-bound), while others, such as the one in Fig. B below, spend most of
their time waiting for I/O (I/O-bound).

Fig. Bursts of CPU usage alternate with periods of waiting for I/O.
(A) A CPU-bound process. (B) An I/O-bound process.
Keeping some CPU-bound processes and some I/O-bound processes in memory
together (a careful mix of processes) is therefore a good idea.
Types of Scheduling
CPU scheduling is broadly categorized into two types, i.e. preemptive & non-preemptive
scheduling.
CPU scheduling decisions may take place
1. when a process switches from the running to waiting state
2. when a process switches from the running to ready state
3. when a process switches from the waiting to ready state
4. when a process terminates
Under conditions 1 and 4 there is no choice: a new process must be selected. Scheduling
that takes place only under conditions 1 and 4 is called non-preemptive scheduling.
Scheduling under conditions 2 and 3 is called preemptive scheduling.


Preemptive Scheduling
In preemptive scheduling tasks are usually assigned with priorities.
At times it is necessary to run a task that has a higher priority before another task,
even though that other task is currently running.
The running task is therefore interrupted for some time and resumed later, when the
higher-priority task has finished its execution. This is called preemptive scheduling.
Preemptive scheduling is priority-driven: the highest priority process should
always get the chance to execute.
Suppose P1 (low priority) & P2 (high priority) are two processes.
While P1 is executing, P2 arrives; the OS pauses P1 and context switches to P2,
and P1 then enters the ready queue.
E.g. Round Robin scheduling
Windows used non-preemptive scheduling up to the Windows 3.x versions.
The pre-emptive scheduling can cause problems when two processes share data,
because one process may get interrupted in the middle of updating shared data
structures.
Preemption can also be a problem if the kernel is busy implementing a system call (
e.g. updating critical kernel data structures ) when the preemption occurs.

Non preemptive Scheduling
If scheduling takes place only under conditions 1 and 4, the system is said to be non-
preemptive, or cooperative. Under these conditions, once a process starts running
it keeps running, until it finishes.
In non-preemptive scheduling, once a process enters the running state, it is not removed
from the CPU until it finishes its service time.
A running task is thus executed till completion; it cannot be interrupted.
Suppose P1 & P2 are two processes. If P1 is executing and suddenly P2 arrives still
P1 is not interrupted, P1 executes to finish and only after complete execution of P1,
P2 starts its execution. So no context switching happens in between.
E.g. First In First Out
Windows 3.x and classic Mac OS used non-preemptive (cooperative) scheduling, whereas
Windows 95 and Mac OS X introduced preemptive scheduling.




Difference between Preemptive & Non preemptive Scheduling
Sr.No.  Preemptive Scheduling                                Non-preemptive Scheduling
1       Scheduling occurs when a process switches            Scheduling occurs when a process switches
        from running to ready state                          from running to waiting state
2       Scheduling occurs when a process switches            Scheduling occurs when a process switches
        from waiting to ready state                          from running to terminated state
3       A higher priority process can preempt a lower        Once a process starts its execution on the CPU,
        priority process which is being executed by          no other process can interrupt it until it
        the CPU                                              finishes its complete execution
4       Needs platform support, e.g. a timer interrupt       Platform independent
5       E.g. Round Robin Scheduling                          E.g. First Come First Serve Scheduling
6       E.g. Windows 95 and later, Mac OS X                  E.g. Windows 3.x, classic Mac OS

Scheduling Criteria
There are several different criteria to consider when an OS tries to select the "best"
scheduling algorithm for a particular situation and environment, including:
1. CPU utilization - It is important to keep the CPU as busy as possible so as to fully
utilize it. Ideally the CPU would be busy 100% of the time. On a real system CPU usage
should range from 40% (for lightly loaded systems) to 90% (for heavily loaded
systems).
2. Throughput - If the CPU is kept busy, work gets done. The measure of work done by
the CPU is the number of processes completed per unit time; this measure is called
throughput. Throughput may range from 10 per second for short processes to 1 per
hour for long processes.
3. Turnaround time - Turnaround time is the time required for a particular process
to complete, from submission time to completion time, i.e. the time spent between
process submission & process completion. It is also called wall-clock time.
4. Waiting time - How much time processes spend in the ready queue waiting their
turn to get on the CPU. It is the sum of the periods spent waiting in the ready
queue.
5. Response time - Turnaround time is not a good criterion for selecting an
algorithm in interactive systems. Another measure is the time from the
submission of a process until its first response is produced. This measure is
called response time: the time taken in an interactive program from the issuance
of a command to the commencement of a response to that command.
In general one wants to optimize the average value of a criterion.
Optimization Criteria:
1. Max. CPU utilization
2. Max. throughput
3. Min. turnaround time
4. Min. waiting time
5. Min. response time
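To make these criteria concrete, here is a small Python sketch (illustrative only; the
function name and sample values are not from the text) showing how turnaround, waiting
and response time are derived for a single process, assuming the process does no I/O so
that waiting time is simply turnaround time minus burst time.

# Sketch: deriving the three time-based criteria for one process.
# Assumes the process does no I/O, so waiting = turnaround - burst.

def metrics(arrival, burst, completion, first_run):
    turnaround = completion - arrival    # submission to completion
    waiting = turnaround - burst         # total time spent in the ready queue
    response = first_run - arrival       # submission until the first response
    return turnaround, waiting, response

# A process arriving at t=0 with a 24 ms burst that starts immediately:
print(metrics(arrival=0, burst=24, completion=24, first_run=0))  # (24, 0, 0)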


4.2. Scheduling Algorithms
A) First Come First Serve (FCFS)
FCFS is very simple to implement - just a FIFO (First In First Out) queue, like
customers waiting in line at the bank or the post office.
FCFS is a non-preemptive scheduling algorithm.
The process which requests the CPU first is allocated the CPU first.
FCFS can yield some very long average wait times, particularly if the first process
to get there takes a long time.
For example, consider the following three processes:
Process Burst Time
P1 24
P2 3
P3 3
If we take the sequence P1, P2, P3 then its Gantt chart representation will be:

| P1: 0-24 | P2: 24-27 | P3: 27-30 |
The average waiting time for the three processes is ( 0 + 24 + 27 ) / 3 = 17.0 ms.
If we take the sequence P2, P3, P1 then its Gantt chart representation will be:

| P2: 0-3 | P3: 3-6 | P1: 6-30 |
The same three processes have an average wait time of ( 0 + 3 + 6 ) / 3 = 3.0 ms. The
total run time for the three bursts is the same, but in the second case two of the
three finish much quicker, and the other process is only delayed by a short amount.
FCFS can also block the system in a busy dynamic system, a problem known as the convoy
effect: if there is one CPU-bound process and many other processes which are I/O bound,
the CPU-bound process will hold the CPU for its whole burst.
Meanwhile the I/O-bound processes finish their I/O operations and enter the ready
queue, and during this time the I/O devices are kept idle.
In this way FCFS performs poorly due to long average waiting times & the convoy effect
(CPU & I/O devices are not utilized properly).
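A minimal FCFS sketch in Python (the helper name is illustrative) reproduces both
averages from the example above; since all three processes arrive at time 0, each
process's waiting time is simply the total burst time of the processes ahead of it.

# Sketch of FCFS: processes run strictly in arrival order.
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waiting time = time elapsed before this burst starts
        clock += burst
    return waits

for order in ([24, 3, 3], [3, 3, 24]):     # P1,P2,P3 versus P2,P3,P1
    waits = fcfs_waiting_times(order)
    print(order, sum(waits) / len(waits))  # 17.0 ms and 3.0 ms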

B) Shortest Job First (SJF)
Technically this algorithm picks a process based on the next shortest CPU burst.
Whenever the CPU is free, it is assigned to the process with the smallest CPU burst
time, then to the next smallest, and so on.
The idea behind the SJF algorithm is to pick the quickest little job that needs to be
done, get it out of the way first, and then pick the next quickest job to do next.
When two processes have the same CPU burst time, FCFS is used to break the tie.
SJF can be either preemptive or non-preemptive. Preemption occurs when a new
process arrives in the ready queue with a burst time smaller than the remaining time
of the process currently being executed by the CPU. Preemptive SJF is sometimes
referred to as shortest remaining time first (SRTF) scheduling.
Example 1 (non-preemptive SJF): consider the following processes and assume that
all jobs arrive at the same time.

Process Burst Time
P1 6
P2 8
P3 7
P4 3

The Gantt chart representation is as follows:

| P4: 0-3 | P1: 3-9 | P3: 9-16 | P2: 16-24 |
In the case above the average wait time is ( 0 + 3 + 9 + 16 ) / 4 = 7.0 ms, ( as opposed
to 10.25 ms for FCFS for the same processes. )
SJF can be proven optimal: it gives the minimum average waiting time for a given set of processes.
Example 2 (preemptive SJF), consider the following data

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

The Gantt chart is as follows:

| P1: 0-1 | P2: 1-5 | P4: 5-10 | P1: 10-17 | P3: 17-26 |

The average wait time in this case is ( (10 - 1) + (1 - 1) + (17 - 2) + (5 - 3) ) / 4 = 26 / 4 =
6.5 ms. (As opposed to 7.75 ms for non-preemptive SJF or 8.75 ms for FCFS.)
Advantage - reduced average waiting time.
Disadvantage - knowing the length of the next CPU burst in advance is very difficult.
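The preemptive variant can be simulated directly. The sketch below (illustrative
Python, stepping the clock 1 ms at a time; ties are broken arbitrarily) reproduces the
6.5 ms average of Example 2.

# Sketch of shortest-remaining-time-first (preemptive SJF).
def srtf(procs):
    # procs: list of (name, arrival, burst); clock advances 1 ms per step.
    remaining = {name: burst for name, _, burst in procs}
    completion, clock = {}, 0
    while remaining:
        ready = [(remaining[n], n) for n, a, _ in procs
                 if n in remaining and a <= clock]
        if not ready:
            clock += 1                 # CPU idle until the next arrival
            continue
        _, name = min(ready)           # shortest remaining time wins (may preempt)
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            completion[name] = clock
            del remaining[name]
    return completion

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
done = srtf(procs)
waits = [done[n] - a - b for n, a, b in procs]   # wait = completion - arrival - burst
print(sum(waits) / len(waits))                   # 6.5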






Difference between FCFS & SJF

Sr.No.  FCFS                                                 SJF
1       Uses the simple First Come First Serve approach      Uses the Shortest Job First approach
2       Scheduling overhead is minimal, but average          Average turnaround time is lower; SJF is
        turnaround time can be high when long jobs           provably optimal with respect to average
        arrive first                                         waiting time
3       Waiting time depends only on arrival order           Waiting time of long processes grows, and
                                                             they can starve if short jobs keep arriving
4       Throughput can be low, since long processes          Designed for maximum throughput in most
        can hold the CPU (convoy effect)                     scenarios

C) Round Robin (RR)
The round robin scheduling algorithm is designed especially for time-sharing systems.
It is similar to FCFS scheduling, but adds preemption: each process is given the CPU
for a limited interval called the time quantum.
Time quantum is a small unit of time. It is generally from 10 ms to 100 ms.
In this method the ready queue is treated as a circular queue. The CPU scheduler goes
around this queue and allocates the CPU to each process for a time interval of up to
one time quantum.
For implementing the RR algorithm, the processes are kept in FIFO order. New
processes are added at the rear end of the queue.
The scheduler picks one job and assigns it to the CPU; the CPU executes it for up to
one time quantum.
Two possibilities exist:
If the time quantum is greater than the remaining burst time of the process, the
process itself releases the CPU.
If the time quantum is smaller than the remaining burst time of the process, the CPU
is interrupted: the process goes back to the ready queue and the next process is
dispatched for execution.
RR scheduling can give the effect of all processes sharing the CPU equally, although
the average wait time can be longer than with other scheduling algorithms.
Example: consider the following data, with a time quantum of 4 ms.

Process Burst Time
P1 24
P2 3
P3 3

The Gantt chart will be as follows:

| P1: 0-4 | P2: 4-7 | P3: 7-10 | P1: 10-14 | P1: 14-18 | P1: 18-22 | P1: 22-26 | P1: 26-30 |
The average wait time is (6+4+7)/3 = 5.66 ms
The performance of RR is sensitive to the time quantum selected.
If the time quantum is large enough, then RR becomes similar to the FCFS algorithm
and if it is very small, then each process gets 1/nth of the processor time and shares
the CPU equally.
But a real system incurs overhead for every context switch: the smaller the time
quantum, the more context switches there are.
Most modern systems use time quantum between 10 and 100 milliseconds.
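A round-robin simulation in Python (illustrative; using a deque as the FIFO ready
queue and assuming all processes arrive at time 0) reproduces the example above with
a 4 ms quantum.

# Sketch of round robin with a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())            # FIFO ready queue of (name, remaining)
    completion, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for one quantum or until finished
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: back to the rear
        else:
            completion[name] = clock
    return completion

bursts = {"P1": 24, "P2": 3, "P3": 3}
done = round_robin(bursts, quantum=4)
waits = [done[n] - b for n, b in bursts.items()]   # all arrive at t = 0
print(sum(waits) / len(waits))                     # 17/3, approx. 5.66 ms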

D) Priority Scheduling
Priority scheduling is a more general case of SJF, in which each job is assigned a
priority and the job with the highest priority gets scheduled first.
Note that in practice, priorities are implemented using integers within a fixed range;
whether a smaller number means higher or lower priority varies between systems (in the
example below, a smaller number means higher priority).
For example, the following Gantt chart is based upon these process burst times and
priorities:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Gantt chart: | P2: 0-1 | P5: 1-6 | P1: 6-16 | P3: 16-18 | P4: 18-19 |

Average waiting time is (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms.
Priorities can be assigned either internally or externally.
Internal priorities are assigned by the OS using criteria such as average burst time,
ratio of CPU to I/O activity, system resource use, and other factors available to the
kernel.
External priorities are assigned by users, based on the importance of the job, fees
paid, politics, etc.
Priority scheduling can be either preemptive or non-preemptive.
Priority scheduling can suffer from a major problem known as indefinite blocking,
or starvation, in which a low-priority task can wait forever because there are
always some other jobs around that have higher priority.
One common solution to this problem is aging, in which priorities of jobs increase
the longer they wait. Under this scheme a low-priority job will eventually get its
priority raised high enough that it gets run.
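The following Python sketch shows non-preemptive priority scheduling together with a
simple form of aging (the aging rate is a made-up parameter, and smaller numbers mean
higher priority, as in the example above). With aging disabled it reproduces the
schedule P2, P5, P1, P3, P4.

# Sketch of non-preemptive priority scheduling with optional aging.
def priority_with_aging(procs, age_rate=0):
    # procs: list of (name, burst, priority); all assumed to arrive at t = 0.
    ready = [list(p) for p in procs]
    clock, order = 0, []
    while ready:
        ready.sort(key=lambda p: p[2])        # smaller number = higher priority
        name, burst, _ = ready.pop(0)
        order.append((name, clock))           # record each job's start time
        clock += burst                        # non-preemptive: run to completion
        for p in ready:
            p[2] -= age_rate * burst          # aging: waiting jobs gain priority
    return order

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
print(priority_with_aging(procs))
# [('P2', 0), ('P5', 1), ('P1', 6), ('P3', 16), ('P4', 18)]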
E) Multilevel Queue Scheduling
Multilevel queue scheduling is a class of scheduling algorithms created for situations
in which processes can easily be categorized into different groups.
Multilevel queue scheduling partitions the ready queue into several separate queues.
Each queue implements a scheduling algorithm which is most appropriate for that
type of processes.
Scheduling must also be done between queues. Generally pre-emptive priority
scheduling is done between different queues.
Example: multilevel queue scheduling with five queues, as shown in the figure below.

Fig. Multilevel Queue Scheduling
There are two common options.
One is strict priority, in which no job in a lower-priority queue runs until all
higher-priority queues are empty.
For example, processes in the batch queue cannot execute until all the processes in the
system queue, interactive queue and interactive editing queue have executed.
Only when the system queue, interactive queue and interactive editing queue are empty
do batch queue processes get the chance to execute.
Moreover, if the CPU is currently executing a process from the batch queue and a new
process arrives in one of the higher-priority queues, the batch queue process is
preempted and the new process is executed by the CPU.
The second option is round-robin among the queues, i.e. each queue gets a time quantum
of a different size.
Note that under this algorithm processes cannot switch from queue to queue: once
assigned to a queue, they remain in that queue until they finish their execution.


Multilevel Feedback Queue Scheduling
In multilevel queue scheduling we assign a process to a queue and it remains in that
queue for its lifetime. That is, processes do not move between queues.
Multilevel feedback queue scheduling allows processes to move between queues.
Every queue will have its own scheduling algorithm. And scheduling between
different queues will be priority preemptive.
Example,
1. Consider processes with different CPU burst characteristics. If a process uses too
much of the CPU it will be moved to a lower priority queue. This will leave I/O
bound and (fast) interactive processes in the higher priority queue(s).
2. Assume we have three queues (Q0, Q1 and Q2). Q0 is the highest priority queue and
Q2 is the lowest priority queue
3. The scheduler first executes processes in Q0 and only considers Q1 and Q2 when Q0 is
empty. While running processes in Q1, if a new process arrives in Q0, the currently
running process is preempted so that the Q0 process can be serviced.

Fig. Multilevel Feedback Queue Scheduling
4. Suppose Q0 implements the Round Robin algorithm for itself; then any arriving job is
put into Q0 and given a time quantum of 8 ms (say). If the process does not complete, it
is preempted and placed at the end of the Q1 queue.
5. This queue (Q1) has a time quantum of 16 ms associated with it. Any processes not
finishing in this time are demoted to Q2.
6. In Q2, processes are executed on a FCFS basis.
The above description means that any jobs that require less than 8ms of the CPU are
serviced very quickly. Any processes that require between 8ms and 24ms are also
serviced fairly quickly. Any jobs that need more than 24ms are executed with any
spare CPU capacity once Q0 and Q1 processes have been serviced.
In implementing a multilevel feedback queue there are various parameters that
define the scheduler.
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to demote processes to lower priority queues
4. The method used to promote processes to a higher priority queue (presumably by
some form of aging)
5. The method used to determine which queue a process will enter
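Here is a minimal sketch of such a scheduler, using the three queues described above
(Q0 with an 8 ms quantum, Q1 with 16 ms, Q2 FCFS). It is illustrative only: all jobs are
assumed present at time 0, and promotion back to higher queues is omitted.

# Sketch of a three-level feedback queue: demote on quantum expiry.
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    queues = [deque(bursts.items()), deque(), deque()]       # Q0, Q1, Q2
    clock, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        slice_ = quanta[level] if level < 2 else remaining   # Q2 runs to completion (FCFS)
        run = min(slice_, remaining)
        clock += run
        if remaining > run:
            queues[level + 1].append((name, remaining - run))  # demote on expiry
        else:
            completion[name] = clock
    return completion

print(mlfq({"A": 5, "B": 20, "C": 40}))   # {'A': 5, 'B': 33, 'C': 65}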

4.3. Deadlock
In operating systems, processes compete for resources. A process requests resources;
if they are not available at that time, the process enters a wait state.
A deadlock situation arises when every process in a set of processes is waiting
for a resource that is currently allocated to another process in the set, and which can
only be released when that other waiting process makes progress.
The processes thus wait indefinitely, and the system is deadlocked.
Deadlock in operating systems is a common issue in multiprocessor systems and in
parallel and distributed computing setups.
Definition
A set of processes is in a deadlock state if each process in the set is waiting for an
event that can be caused only by another process in the set. In other words, each
member of the set of deadlocked processes is waiting for a resource that can be
released only by another deadlocked process.
None of the processes can run, none of them can release any resources, and none of
them can be awakened.
It is important to note that the number of processes and the number and kind of
resources possessed and requested are unimportant.
The resources may be either physical or logical. Examples of physical resources are
Printers, Tape Drivers, Memory Space, and CPU Cycles. Examples of logical
resources are Files, Semaphores, and Monitors.
Example
The simplest example of deadlock is where process 1 has been allocated a non-
shareable resource A, say a tape drive, and process 2 has been allocated a non-shareable
resource B, say a printer. Now, if it turns out that process 1 needs resource B
(the printer) to proceed and process 2 needs resource A (the tape drive) to proceed, and
these are the only two processes in the system, each blocks the other and all
useful work in the system stops. This situation is termed deadlock. The system is in a
deadlock state because each process holds a resource being requested by the other
process, and neither process is willing to release the resource it holds.

System Model
For the purposes of deadlock discussion, a system can be modeled as a collection of
limited resources, which can be partitioned into different categories.
These resources are to be allocated to a number of processes, each having different
needs.
Resource categories are
Sharable / Preemptible resources: these resources can be taken away from the
process owning them with no ill effects, e.g. memory, open files, etc.
Non-sharable / Non-preemptible resources: these resources cannot be taken
away from the process owning them without ill effects, e.g. printers, tape drives, etc.
By definition, all the resources within a category are equivalent, and a request of
this category can be equally satisfied by any one of the resources in that category.
In normal operation a process must request a resource before using it, and release it
when it is done, in the following sequence:
Request - A process first requests a resource; if the request cannot be immediately
granted, the process must wait until the resource(s) it needs become available.
Examples are the system calls open( ), malloc( ), new( ), and request( ).
Use - The process uses the resource, e.g. prints to the printer or reads from the file.
Release - The process releases the resource so that it becomes available for other
processes. For example, system calls close( ), free( ), delete( ), and release( ).
For all kernel-managed resources, the kernel keeps track of
1. what resources are free and which are allocated,
2. to which process they are allocated,
3. and a queue of processes waiting for this resource to become available.
Deadlock may occur among resources of the same type. For example, if the system has two
printers, each held by a different process, and each process then requests a second
printer, deadlock occurs.
Deadlock may also occur among resources of different types. For example, if the system
has one printer and one tape drive, process P1 holds the printer and process P2 holds
the tape drive, and P1 requests the tape drive while P2 requests the printer, deadlock
occurs.

Necessary Conditions
The following are the four conditions necessary for deadlock to occur.
These are the Coffman conditions; all four must hold simultaneously for there to be a
deadlock.
1. Mutual Exclusion Condition
The resources involved are non-shareable.
Explanation: Each resource can be held by only one process at a time. Only one
process at a time claims exclusive control of the resource. If another process
requests that resource, the requesting process must be delayed until the resource
has been released.
2. Hold and Wait Condition
A process already holds at least one resource while requesting additional resources.
Explanation: There is a process holding a resource already allocated to it
while waiting for additional resources that are currently being held by other
processes.
3. No Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be forcibly taken away from the process holding them.
No other process can preempt a resource that is already being used by a process.
4. Circular Wait Condition
The processes in the system form a circular list or chain where each process in the
list is waiting for a resource held by the next process in the list.
There must be a set of processes (P1, P2, ..., Pn) such that process P1 is requesting a
resource held by P2, process P2 is requesting a resource held by P3, and so on, with Pn
requesting a resource held by P1, closing the circle.

Fig. Four processes forming a circular wait: P1 requests a resource held by P2, P2 one
held by P3, P3 one held by P4, and P4 one held by P1.
Mutual Exclusion
In mutual exclusion the resources involved are non-shareable.
Each resource can be held by only one process at a time; only one process at a
time claims exclusive control of the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been
released.
Mutual exclusion can be achieved by different proposals, so that while one
process is busy with shared memory operations in its critical region, no other
process will enter its critical region and cause problems.
The following proposals can be used for that purpose:
1. Disabling Interrupts:
In this solution, when a process enters its critical region it disables interrupts,
and re-enables them just before it leaves the critical region.
The CPU switches from one process to another as a result of interrupts, so while
interrupts are turned off the CPU will not switch to another process.
The process can then complete its shared memory operations without any fear of
interference.
However, it is unwise to give user processes the power to disable interrupts; only
the kernel should be able to enable and disable them. So this is not a good approach.
2. Lock Variables:
Here one lock variable is declared, initialized to 0.
If a process finds that the lock is 0, it sets the lock to 1 and enters the critical
region; just before leaving, it sets the lock back to 0.
If another process tries to enter the critical region while one process is already
inside, it sees that the lock variable is 1 and waits until the lock is set back to 0.
Unfortunately this naive scheme has a fatal flaw: two processes may both read the lock
as 0 before either sets it to 1, and both then enter the critical region together.
3. Strict alternation:
A variable turn is used to run the two processes in alternation,
i.e. process 1 runs, then process 2, then process 1 again, and so on.
But this approach can waste CPU time in busy waiting.
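Here is a sketch of strict alternation using two Python threads and a shared turn
variable (illustrative; the inner while loop is exactly the busy waiting the text
warns about).

# Sketch of strict alternation with a shared "turn" variable.
import threading

turn = 0                                    # whose turn it is: process 0 or 1

def process(pid, rounds=3):
    global turn
    for i in range(rounds):
        while turn != pid:                  # busy waiting: burns CPU until our turn
            pass
        print(f"P{pid} in critical region, round {i}")
        turn = 1 - pid                      # strict alternation: hand over the turn

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()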
Critical Region
When two or more processes are reading or writing some shared data and the final
result depends on the precise order in which the processes run, the situation is
called a race condition. This condition occurs because files or memory are being
shared.
To avoid this race condition, we have to ensure that if one process is using a shared
file, the other process is not allowed to share it at the same time.
That is, we need mutual exclusion: a way of ensuring that if one process is using a
shared resource, other processes are excluded from doing the same.
The part of the program where the shared memory is accessed is known as the
critical region.
To avoid a race condition, we must ensure that no two processes are in their critical
regions at the same time.
A correct solution to the race condition problem must satisfy the following conditions:
1. No two processes may be inside their critical regions at the same time.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other processes.
4. No process should have to wait forever to enter its critical region.
Deadlock Handling
Generally speaking there are three ways of handling deadlocks:
Deadlock prevention or avoidance - Do not allow the system to get into a
deadlocked state.
Deadlock detection and recovery - Abort a process or preempt some resources
when deadlocks are detected.
Ignore the problem altogether - If deadlocks only occur once a year or so, it may
be better to simply let them happen and reboot the system as necessary. This is the
approach that both Windows and UNIX take.
In order to avoid deadlocks, the system must have additional information about all
processes. In particular, the system must know what resources a process may
request in the future.
Deadlock detection is fairly straightforward, but deadlock recovery requires either
aborting processes or preempting resources, neither of which is an attractive
alternative.
If deadlocks are neither prevented nor detected, then when a deadlock occurs the
system will gradually slow down, as more and more processes become stuck
waiting for resources currently held by the deadlocked and other waiting
processes. Unfortunately this slowdown can be indistinguishable from a general
system slowdown caused by, say, a real-time process with heavy computing needs.
Deadlock Prevention

1. Elimination of Mutual Exclusion Condition:
Resources in the system may be sharable or non-sharable.
Sharable resources, for example read-only files or memory, do not lead to
deadlocks, since more than one process can use them at the same time, so there
need be no mutual exclusion among processes.
But non-sharable resources, such as printers and tape drives, require exclusive
access by a single process: they can be assigned to only one process at a time. So in
the case of non-sharable resources mutual exclusion cannot be eliminated.
In general we cannot prevent deadlock by denying the mutual exclusion condition,
because some resources are inherently non-sharable.

2. Elimination of Hold and Wait Condition:
To prevent this condition processes must be prevented from holding one or
more resources while simultaneously waiting for other resources.
This requires that processes holding resources release them before
requesting new resources, and then re-acquire the released resources along with
the new ones in a single new request.
But forcing processes to release their resources can be a problem if a process
has partially completed an operation using a resource and then fails to get it re-
allocated after releasing it.
For example, imagine a process which needs to copy data from a DVD to hard disk,
sort the files on disk, and then print the sorted data. If it requests the DVD drive,
the disk and the printer all at once, it will hold the printer idle for the entire time
it works with the DVD drive and the disk.
Under the rule above, the process first requests only the DVD drive and the disk. It
finishes working with both, releases them, and only then requests the printer in a
new request. After printing, it releases the printer as well.
Disadvantage: resource utilization can be low, since resources may be held but not
used for long periods, and starvation is also possible.

3. Elimination of No Preemption Condition:
Preemption of process resource allocations can prevent this condition of
deadlocks, whenever it is possible.
One approach is that if a process is forced to wait when requesting a new
resource, then all other resources previously held by this process are released,
( i.e. preempted ), forcing this process to re-acquire the old resources along with
the new resources in a single request, similar to the solution to hold & wait.
Another approach is that when a resource is requested and is not available, the
system looks to see which processes currently hold such resources and whether they
are themselves blocked waiting for some other resource. If such a process is
found, some of its resources may be preempted and added to the list of
resources for which the requesting process is waiting.
Either of these approaches may be applicable for resources whose states are
easily saved and restored, such as registers and memory, but are generally not
applicable to other devices such as printers and tape drives.

4. Elimination of Circular Wait Condition:
One way to avoid circular wait is to number all resources, and to require that
processes request resources only in a strictly increasing (or decreasing) order of
resource number, as sketched after this list of approaches.
In other words, in order to request resource Rj, a process must first release all Ri
such that i >= j. One big challenge in this scheme is determining the relative
ordering of the different resources.
A second approach is to allocate resources to processes in a way that never closes a
cycle in the chain of waiting processes.
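A small Python sketch of the resource-ordering rule (the lock names and numbering are
illustrative): if every process acquires locks in increasing resource-number order, a
circular wait can never form.

# Sketch: acquire resources in a globally agreed numeric order.
import threading

printer = (1, threading.Lock())   # resource number 1
tape    = (2, threading.Lock())   # resource number 2

def acquire_in_order(*resources):
    # Always lock in increasing resource-number order, breaking circular wait.
    for _, lock in sorted(resources, key=lambda r: r[0]):
        lock.acquire()

def release_all(*resources):
    for _, lock in resources:
        lock.release()

# Whatever order a process names its resources, R1 is always locked before R2,
# so no process can hold R2 while waiting for R1.
acquire_in_order(tape, printer)
release_all(tape, printer)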

Deadlock Avoidance Algorithm

This approach to the deadlock problem anticipates deadlock before it actually occurs.
It employs an algorithm to assess the possibility that deadlock may occur and acts
accordingly. The most famous deadlock avoidance algorithm, due to Dijkstra [1965], is
the Banker's algorithm, so named because the procedure is analogous to that used by a
banker in deciding if a loan can be safely made.



Banker's Algorithm

The Banker's Algorithm gets its name because it is a method that bankers could use to
assure that when they allocate resources (money) they will still be able to satisfy all
their clients.
It works as follows:
When a process starts up, it must state in advance the maximum allocation of resources
it may request in future, up to the amount available on the system.
When a request is made, the scheduler determines whether granting the request would
leave the system in a safe state. If not, the process must wait until the
request can be granted safely.
For this purpose the banker's algorithm relies on several key data structures:
Suppose n is the number of processes and m is the number of resource categories.
1. Available[ m ]- This indicates how many resources are currently available of each
type.
2. Max[ n ][ m ]- This indicates the maximum demand of each process for each
resource.
3. Allocation[ n ][ m ]- This indicates the number of each resource category allocated
to each process.
4. Need[ n ][ m ]- This indicates the remaining resources needed of each type for each
process. ( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i, j. )
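Here is a minimal Python sketch of the safety check at the heart of the Banker's
algorithm, built from the four data structures above; Need is computed as
Max - Allocation, and the sample numbers follow a widely used textbook example
(5 processes, 3 resource types).

# Sketch of the Banker's safety algorithm.
def is_safe(available, max_, allocation):
    n, m = len(max_), len(available)
    need = [[max_[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish = list(available), [False] * n   # Work starts as Available
    sequence = []
    while True:
        # Find a process whose remaining need can be met with what is free.
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                # pretend it runs to completion
                    work[j] += allocation[i][j]   # and releases everything it holds
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                                 # no further process can proceed
    return all(finish), sequence

print(is_safe(available=[3, 3, 2],
              max_=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
# (True, [1, 3, 0, 2, 4]) - the system is in a safe state.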
