
CPU scheduling is the basis of multiprogrammed operating systems.

By switching the CPU among processes, the OS can make the computer more productive. Scheduling is a fundamental OS function. Let us study some of the basic concepts:

CPU I/O burst cycle
CPU scheduler
Preemptive scheduling
Dispatcher

CPU I/O BURST CYCLE

Process execution consists of a cycle of CPU execution and I/O wait, and processes alternate back and forth between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, and so on. The last CPU burst ends with a system request to terminate execution.

[Figure: Alternating sequence of CPU and I/O bursts. A process performs a CPU burst (e.g., load, store, add, store), then an I/O burst (e.g., read from file, wait for I/O), then another CPU burst, and so on.]

The durations of CPU bursts, when measured, tend to follow an exponential or hyper-exponential frequency curve. A CPU-bound program might have a few very long CPU bursts, while an I/O-bound program typically has many very short CPU bursts. This distribution plays an important role in the selection of an appropriate CPU scheduling algorithm.

CPU SCHEDULER

Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed. The CPU scheduler, also known as the short-term scheduler, carries out this selection: it selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.

PREEMPTIVE SCHEDULING

CPU scheduling decisions take place under the following four circumstances:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates

Scheduling that takes place only under circumstances 1 and 4 is called nonpreemptive scheduling. Scheduling that also takes place under circumstances 2 and 3 is called preemptive scheduling. Preemptive scheduling requires special hardware, such as a timer, and a special mechanism to coordinate access to shared data.

DISPATCHER

The dispatcher gives control of the CPU to the process selected by the short-term scheduler. Its function involves:

Switching context
Switching to user mode
Jumping to the proper location in the user program

The dispatcher should be as fast as possible, since it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another running is known as the dispatch latency.

SCHEDULING CRITERIA

Different CPU scheduling algorithms are available, suited to different classes of processes. These scheduling algorithms are compared mainly on the following criteria:

CPU utilization
Throughput
Turnaround time
Waiting time
Response time

CPU utilization:

The CPU should be kept as busy as possible. Conceptually, utilization may range from 0 to 100 percent; in a real system, it should range from about 40% to 90%.

Throughput:

Throughput is one measure of the work done by the CPU: the number of processes completed per time unit. For long processes, this rate may be one process per hour; for short processes, it may be ten processes per second.

Turnaround time:

The interval from the time of submission of a process to its time of completion is the turnaround time. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

Waiting time:

Waiting time is the sum of the periods spent waiting in the ready queue. It is the criterion most directly affected by the CPU scheduling algorithm.

Response time:

Response time is the amount of time it takes to start responding to a request, not the time it takes to output the complete response.

SCHEDULING ALGORITHMS

First-Come, First-Served (FCFS) scheduling
Shortest-Job-First (SJF) scheduling
Priority scheduling
Round-Robin (RR) scheduling
Multilevel queue scheduling
Multilevel feedback queue scheduling
Multiple-processor scheduling
Real-time scheduling

MULTILEVEL QUEUE SCHEDULING

This algorithm partitions the ready queue into several separate queues. For example, there may be five queues:

System processes
Interactive processes
Interactive editing processes
Batch processes
Student processes

Each queue has its own priority level, and CPU time can be divided among the queues; for example, each queue may receive a certain time slice of the CPU.

[Figure: Multilevel queue scheduling. From highest priority to lowest priority: system processes, interactive processes, interactive editing processes, batch processes, student processes.]

MULTILEVEL FEEDBACK QUEUE SCHEDULING


This scheme allows a process to move between queues. If a process uses too much CPU time, it is moved to a lower-priority queue. This leaves the I/O-bound and interactive processes in the higher-priority queues. A process entering the ready queue is put in queue 0. The scheduler first executes all processes in queue 0; only when queue 0 is empty will it execute processes in queue 1, and processes in queue 2 are executed only when queues 0 and 1 are empty.

MULTIPLE PROCESSOR SCHEDULING

If multiple CPUs are available, the scheduling problem becomes more complex. In such a system, one of the following scheduling approaches may be used.

One approach is to let each processor be self-scheduling: each processor examines the common ready queue and selects a process to execute. The other approach is to appoint one processor as the scheduler for the other processors, creating a master-slave structure.

REAL TIME SCHEDULING

Real-time scheduling is divided into:

Hard real-time computing
Soft real-time computing

Hard real-time systems are required to complete a critical task within a guaranteed amount of time. A process is submitted along with a statement of the amount of time in which it needs to complete its work. The scheduler either admits the process, guaranteeing the deadline, or rejects the request as impossible. This is known as resource reservation. Soft real-time systems require only that critical processes receive priority over less critical ones. To implement soft real-time computing, the system must have priority scheduling, and real-time processes must have the highest priority.

Evaluation methods

Deterministic modeling:

This method takes a particular predetermined workload and determines the performance of each algorithm for that workload. Consider the following five processes, all arriving at time 0:

Process   Burst time (ms)
P1        10
P2        29
P3        3
P4        7
P5        12

Let us consider the FCFS, SJF, and RR (quantum = 10 ms) scheduling algorithms and compare the average waiting time under each.

For the FCFS algorithm, the processes execute in the order:

P1 (0-10) | P2 (10-39) | P3 (39-42) | P4 (42-49) | P5 (49-61)

The average waiting time is (0 + 10 + 39 + 42 + 49)/5 = 28 ms.

For the nonpreemptive SJF algorithm, the order is:

P3 (0-3) | P4 (3-10) | P1 (10-20) | P5 (20-32) | P2 (32-61)

The average waiting time is (10 + 32 + 0 + 3 + 20)/5 = 13 ms.

For the RR algorithm, the order is:


P1 (0-10) | P2 (10-20) | P3 (20-23) | P4 (23-30) | P5 (30-40) | P2 (40-50) | P5 (50-52) | P2 (52-61)

The average waiting time is (0 + 32 + 20 + 23 + 40)/5 = 23 ms.
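The arithmetic above is easy to check mechanically. The following is a minimal C sketch, not part of the original text, that reproduces the three average waiting times for this workload; the function names are illustrative.

#include <stdio.h>

#define N 5

/* FCFS: each process waits for all bursts before it. */
static double fcfs_avg_wait(const int burst[]) {
    int wait = 0, total = 0;
    for (int i = 0; i < N; i++) { total += wait; wait += burst[i]; }
    return (double)total / N;
}

/* Nonpreemptive SJF with all arrivals at 0: sort bursts ascending,
 * then waiting times follow the same prefix-sum rule as FCFS. */
static double sjf_avg_wait(const int burst[]) {
    int b[N];
    for (int i = 0; i < N; i++) b[i] = burst[i];
    for (int i = 0; i < N; i++)            /* simple selection sort */
        for (int j = i + 1; j < N; j++)
            if (b[j] < b[i]) { int t = b[i]; b[i] = b[j]; b[j] = t; }
    return fcfs_avg_wait(b);
}

/* RR with quantum q: cycle through the ready processes; a process's
 * waiting time is its completion time minus its burst (arrivals at 0). */
static double rr_avg_wait(const int burst[], int q) {
    int rem[N], done = 0, clock = 0, total = 0;
    for (int i = 0; i < N; i++) rem[i] = burst[i];
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (rem[i] == 0) continue;
            int slice = rem[i] < q ? rem[i] : q;
            clock += slice;
            rem[i] -= slice;
            if (rem[i] == 0) { total += clock - burst[i]; done++; }
        }
    }
    return (double)total / N;
}

int main(void) {
    int burst[N] = {10, 29, 3, 7, 12};   /* P1..P5 from the example */
    printf("FCFS: %.0f ms\n", fcfs_avg_wait(burst));    /* prints 28 */
    printf("SJF : %.0f ms\n", sjf_avg_wait(burst));     /* prints 13 */
    printf("RR  : %.0f ms\n", rr_avg_wait(burst, 10));  /* prints 23 */
    return 0;
}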

Queuing model:

The computer system is described as a network of servers, each with a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues. Knowing the arrival rates and the service rates, we can compute utilization, average queue length, average waiting time, and so on. This is called queueing-network analysis. One such relation is Little's formula:

n = λ × W

where n is the average queue length, W is the average waiting time in the queue, and λ is the average arrival rate of new processes (for example, processes per second). For instance, if 7 processes arrive per second on average and the average waiting time is 2 seconds, the queue holds n = 14 processes on average.

Simulations:

Simulations give a more accurate evaluation of scheduling algorithms; they involve programming a model of the computer system. The simulator has a variable representing a clock; as this variable's value is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler. As the simulation executes, statistics that indicate algorithm performance are gathered and printed. Simulations can be expensive: they may require hours of computer time and large amounts of storage space, and the design, coding, and debugging of the simulator can be a major task.

[Figure: Evaluation of CPU schedulers by simulation. A trace tape recording actual process execution (e.g., CPU 10, I/O 213, CPU 12, I/O 112, CPU 2, I/O 147, CPU 173) drives an FCFS simulation, an SJF simulation, and an RR (q = 14) simulation, each producing its own performance statistics.]

Trace tapes provide an excellent way to compare two algorithms on exactly the same set of real inputs. This method can provide accurate results for its inputs.

Implementation:

The completely accurate way to evaluate a scheduling algorithm is to code it, put it in the OS, and see how it works. With this method, expense is incurred not only in coding the algorithm but also in modifying the OS to support it, along with its required data structures. A further difficulty with any algorithm evaluation is that the environment in which the algorithm is used will change as new programs are written and the types of problems change.

Process Synchronization
Critical section problem and its solutions:

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section; thus the execution of critical sections by the processes is mutually exclusive in time. The critical section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section is followed by an exit section, and the remaining code is the remainder section.

The general structure of a typical process Pi is:

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

The entry section and exit section are the important segments of code here: they implement the protocol.

A solution to the critical section problem must satisfy the following three requirements:

Mutual exclusion:

If process Pi is executing in its critical section, then no other process can be executing in its critical section.

Progress:

If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

Bounded waiting:

There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Two Process Solutions

Algorithms that are applicable to only two processes at a time are called two-process solutions. Consider two processes numbered P0 and P1. For the algorithms, let us use Pi and Pj, where j == 1 - i.

Algorithm 1:

Let the processes share a common integer variable turn, initialized to 0. If turn == i, then process Pi is allowed to execute in its critical section. The structure of process Pi is shown below:

do {
    while (turn != i);
        critical section
    turn = j;
        remainder section
} while (1);

This solution ensures that only one process at a time can be in its critical section. However, it does not satisfy the progress requirement, since it requires strict alternation of processes in the execution of the critical section. For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section. The problem with this algorithm is that it does not retain sufficient information about the state of each process; it remembers only which process is allowed to enter its critical section next.

Algorithm 2:

The problem with algorithm 1 is remedied by replacing the variable turn with the following array:

boolean flag[2];

The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is ready to enter its critical section. The structure of Pi is shown below:

do {
    flag[i] = true;
    while (flag[j]);
        critical section
    flag[i] = false;
        remainder section
} while (1);

In this algorithm, process Pi first sets flag[i] = true, signaling that it is ready to enter its critical section. Then Pi checks that process Pj is not also ready to enter its critical section. If Pj were ready, Pi would wait until Pj indicated that it no longer needed the critical section, that is, until flag[j] == false. At this point, Pi would enter the critical section. On exiting the critical section, Pi sets flag[i] to false, allowing the other process to enter if it is waiting. In this algorithm the mutual exclusion requirement is satisfied, but the progress requirement is again not met.

Consider the following execution sequence:


T0: P0 sets flag[0] = true
T1: P1 sets flag[1] = true

Now P0 and P1 are looping forever in their respective while statements. This situation can arise when the two processes execute steps T0 and T1 concurrently, or when an interrupt (such as a timer interrupt) occurs immediately after step T0 is executed. Note that reversing the order of setting flag[i] and testing flag[j] does not help: it would then be possible for both processes to be in their critical sections at the same time, violating the mutual exclusion requirement.

Algorithm 3: By combining the key ideas of algorithms 1 and 2, we obtain a correct solution to the critical section problem, in which all three requirements are met. The processes share two variables:

boolean flag[2];
int turn;

Initially flag[0] = flag[1] = false, and the value of turn is either 0 or 1. The structure of process Pi is shown below:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Multiple Process solutions (Bakery algorithm)

The algorithm that solves the critical section problem for n processes is called a multiple-process solution, or the bakery algorithm. On entering the system, each process receives a number, and the process with the lowest number is served next. The bakery algorithm cannot guarantee that two processes do not receive the same number; in case of a tie, the process with the lower process id is served first, i.e., if Pi and Pj receive the same number and i < j, then Pi is served first. The common data structures are:

boolean choosing[n];
int number[n];

Initially, these data structures are initialized to false and 0 respectively.


Let us define the following notation:

(a, b) < (c, d) if a < c, or if a == c and b < d.
max(a0, a1, ..., an-1) is a number k such that k >= ai for i = 0, 1, ..., n - 1.
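For concreteness, this lexicographic ordering can be written as a small C helper; ticket_less is a hypothetical name used only for illustration.

#include <stdbool.h>

/* Bakery ordering: compare ticket numbers first, then break ties by
 * process id, so (number[j], j) < (number[i], i) is
 * ticket_less(number[j], j, number[i], i). */
bool ticket_less(int a, int b, int c, int d) {
    return (a < c) || (a == c && b < d);
}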

The structure of process Pi in the bakery algorithm is as shown below:

To prove that the algorithm is correct, we first need to show that if Pi is in its critical section and Pk (k != i) has already chosen its number[k] != 0, then (number[i], i) < (number[k], k).

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);

Achieving Mutual Exclusion for Two Processes

Dekker's Algorithm:

The two processes, P0 and P1, share the following variables:

boolean flag[2];   // initially false
int turn;

This algorithm satisfies all three requirements for the critical section problem. The structure of process Pi (i == 0 or 1), with Pj (j == 1 or 0) being the other process, is shown below. The structure of process Pi in Dekker's algorithm is:

do {
    flag[i] = true;
    while (flag[j]) {
        if (turn == j) {
            flag[i] = false;
            while (turn == j);
            flag[i] = true;
        }
    }
        critical section
    turn = j;
    flag[i] = false;
        remainder section
} while (1);

Peterson's Solution: Assume that the LOAD and STORE instructions are atomic, i.e., they cannot be interrupted. The two processes share the variables int turn and boolean flag[2]. The variable turn indicates whose turn it is to enter the critical section. The flag array is used to indicate whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.

The algorithm for process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while (true);
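As a rough illustration, Peterson's algorithm can be exercised with real threads. The sketch below is an assumption-laden translation, not part of the original text: it uses C11 sequentially consistent atomics for the loads and stores, since plain variables do not behave atomically on modern hardware as the algorithm assumes, and the name worker is illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_bool flag[2];   /* both initially false */
static atomic_int turn;
static long counter = 0;      /* shared data protected by the algorithm */

static void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int n = 0; n < 1000000; n++) {
        atomic_store(&flag[i], true);        /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* busy wait */
        counter++;                           /* critical section */
        atomic_store(&flag[i], false);       /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* If mutual exclusion held, no increments were lost. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}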

Tools for Process Synchronization


Synchronization Hardware: The critical section problem can be solved by simple hardware instructions that are available on many systems. Generally, mutual exclusion is implemented using either the TestAndSet instruction or the Swap instruction.

The definition of the TestAndSet instruction:

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

The definition of the Swap instruction:

void Swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

We can use these special instructions to solve the critical section problem in a relatively simple manner. If a machine supports the Swap instruction, mutual exclusion can be provided as follows: a global boolean variable lock is declared and initialized to false, and each process has a local boolean variable key. The structure of process Pi is shown below:

do {
    key = true;
    while (key == true)
        Swap(lock, key);
        critical section
    lock = false;
        remainder section
} while (1);

These algorithms do not satisfy the bounded waiting requirement.

Bounded-waiting mutual exclusion with the TestAndSet instruction is as follows, where the shared data boolean waiting[n] and boolean lock are initialized to false:

do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;
        critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
        remainder section
} while (1);

Process Pi can enter its critical section only if either waiting[i] == false or key == false. The value of key can become false only if the TestAndSet is executed; the first process to execute the TestAndSet will find key == false, and all others must wait. The variable waiting[i] can become false only if another process leaves its critical section; exactly one waiting[i] is set to false, maintaining the mutual exclusion requirement. This algorithm satisfies all three requirements of process synchronization.
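On a modern system, the same idea is available portably through C11's atomic_flag, whose test-and-set operation corresponds to the TestAndSet instruction above. The sketch below shows only the simple spin lock, which, like the basic algorithm in the text, does not provide bounded waiting; acquire and release are illustrative names.

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquire(void) {
    /* test-and-set returns true while another thread holds the lock */
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock_flag);
}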

Semaphores

A semaphore is a synchronization tool that can solve the critical section problem. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. These operations were originally termed P (for wait) and V (for signal). The classical definition of wait in pseudocode is:

wait(S) {
    while (S <= 0);   // no-op
    S--;
}

The classical definition of signal in pseudocode is:

signal(S) {
    S++;
}

USAGE OF SEMAPHORES

Semaphores can be used to deal with the n-process critical section problem. The n processes share a semaphore, mutex (for mutual exclusion), initialized to 1. Each process Pi is organized as shown below:

Mutual Exclusion implementation with semaphores

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);
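A minimal sketch of this structure using POSIX semaphores, where sem_wait plays the role of wait(mutex) and sem_post the role of signal(mutex); the shared counter is just an illustrative critical section.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;
static int shared = 0;

static void *task(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);     /* entry section */
        shared++;             /* critical section */
        sem_post(&mutex);     /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);   /* counting semaphore used as a mutex */
    pthread_create(&a, NULL, task, NULL);
    pthread_create(&b, NULL, task, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared);   /* 200000 if exclusion held */
    sem_destroy(&mutex);
    return 0;
}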

Implementation

While a process is in its critical section, any other process that tries to enter the critical section must loop continuously in the entry code. This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spin lock. To overcome the need for busy waiting, we can modify the definition of the wait and signal semaphore operations.

When a process executes the wait operation and finds that the semaphore value is not positive, it must wait. Rather than busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore and the state of the process is switched to the waiting state. Then, control is transferred to the CPU scheduler, which selects another process to execute. A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal operation. The process is restarted by a wakeup operation, which changes the process from the waiting state to ready state.

The implementation of a semaphore using this definition is:

typedef struct {
    int value;
    struct process *L;
} semaphore;

The wait semaphore operation can now be defined as:

void wait(semaphore S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

The signal semaphore operation can now be defined as:

void signal(semaphore S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}

When a process must wait on a semaphore, it is added to the list of waiting processes, and a signal operation removes one process from that list. The block operation suspends the process that invokes it; the wakeup(P) operation resumes the execution of a blocked process P.

BINARY SEMAPHORE

A binary semaphore is a semaphore whose integer value can range only between 0 and 1; it can be simpler to implement, depending on the underlying hardware architecture. To implement a counting semaphore S in terms of binary semaphores, we need the following data structures:

binary-semaphore S1, S2;
int C;

Initially S1 = 1, S2 = 0, and the value of the integer C is set to the initial value of the counting semaphore S.

The wait operation on the counting semaphore S can be implemented as follows:

wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);

The signal operation on the counting semaphore S can be implemented as follows:

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);

Deadlock and Starvation


Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1:

P0:             P1:
wait(S);        wait(Q);
wait(Q);        wait(S);
...             ...
signal(S);      signal(Q);
signal(Q);      signal(S);

Suppose P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q); similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal operations cannot be executed, P0 and P1 are deadlocked.

Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

Monitors

A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization. A monitor is characterized by a set of programmer-defined operations. The syntax of a monitor is shown below:

monitor monitor-name {
    // shared variable declarations
    procedure P1 (...) { ... }
    ...
    procedure Pn (...) { ... }
    initialization code (...) { ... }
}

The monitor construct ensures that only one process may be active within the monitor at a time. For many process synchronization schemes, however, additional mechanisms are provided by the condition construct, as illustrated in the figure below.

[Figure: A monitor with condition variables x and y, showing the shared data, the operations, and the initialization code.]

Message passing

This method of interprocess communication uses two primitives, SEND and RECEIVE, which are system calls:

send(destination, &message);
receive(source, &message);

The former sends a message to a given destination, and the latter receives a message from a given source. If no message is available, the receiver can block until one arrives.
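As an illustrative sketch, POSIX message queues provide these two primitives directly: mq_send and mq_receive, with mq_receive blocking when the queue is empty. The queue name and sizes below are assumptions; on Linux, link with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) return 1;
    char buf[64];

    mq_send(q, "hello", 6, 0);               /* send(destination, &message) */
    mq_receive(q, buf, sizeof buf, NULL);    /* receive(source, &message);
                                                blocks if the queue is empty */
    printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}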

Classical Problems of Process Synchronization

The Bounded Buffer Problem:

It is commonly used to illustrate the power of synchronization primitives. Its general structure is:

a. Assume that the pool consists of n buffers, each capable of holding one item.
b. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1.
c. The empty and full semaphores count the number of empty and full buffers, respectively.
d. The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.
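A minimal pthreads sketch of this structure, assuming one producer and one consumer; the buffer size and item counts are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NSLOTS 5

static int buffer[NSLOTS];
static int in = 0, out = 0;        /* next free slot / next full slot */
static sem_t mutex, empty, full;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty);          /* wait for a free slot */
        sem_wait(&mutex);          /* lock the buffer pool */
        buffer[in] = item;
        in = (in + 1) % NSLOTS;
        sem_post(&mutex);
        sem_post(&full);           /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);           /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % NSLOTS;
        sem_post(&mutex);
        sem_post(&empty);          /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&mutex, 0, 1);        /* mutual exclusion, initialized to 1 */
    sem_init(&empty, 0, NSLOTS);   /* empty slots, initialized to n */
    sem_init(&full, 0, 0);         /* full slots, initialized to 0 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}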

The Readers-Writers Problem:

A data object such as a file or record can be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update it. Processes interested only in reading are referred to as readers; the rest are writers. The following priority rules are used:

No reader will be kept waiting unless a writer has already obtained permission to use the shared object.
Once a writer is ready, that writer performs its write as soon as possible.
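A sketch of the classic first readers-writers solution with semaphores, which implements exactly these two rules; the entry/exit function names are illustrative, and rw_init must be called before use.

#include <semaphore.h>

static sem_t mutex;          /* protects read_count */
static sem_t wrt;            /* writer / first-reader exclusion */
static int read_count = 0;

void rw_init(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
}

void writer_enter(void) { sem_wait(&wrt); }
void writer_exit(void)  { sem_post(&wrt); }

void reader_enter(void) {
    sem_wait(&mutex);
    if (++read_count == 1)   /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);
}

void reader_exit(void) {
    sem_wait(&mutex);
    if (--read_count == 0)   /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
}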

The Dining Philosophers Problem: The dining philosophers problem is considered a classic synchronization problem; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. Consider five philosophers who spend their lives eating and thinking. On the table there is a bowl of rice and five chopsticks. When a philosopher thinks, he does not interact with his colleagues. When a philosopher gets hungry, he tries to pick up the two chopsticks that are closest to him. He can pick up only one chopstick at a time; once he holds both chopsticks, he eats without releasing them. When he is finished eating, he puts down both of his chopsticks and starts thinking again.

Solution to the dining philosophers problem using semaphores:


Represent each chopstick by a semaphore. A philosopher grabs a chopstick by executing a wait operation on that semaphore and releases it by executing a signal operation on the appropriate semaphore. The shared data are:

semaphore chopstick[5];

where all the elements of chopstick are initialized to 1. This solution guarantees that no two neighbors eat simultaneously, but it has the possibility of creating a deadlock.

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
        // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
        // think
} while (1);
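One common way to remove the deadlock possibility, offered here as an addition rather than as the text's solution, is to break the circular wait by having philosophers acquire their chopsticks in different orders; a sketch:

#include <semaphore.h>

static sem_t chopstick[5];   /* each initialized to 1 with sem_init */

/* Odd philosophers pick up the left chopstick first, even ones the
 * right, so a cycle of philosophers each holding one chopstick and
 * waiting for the next cannot form. */
void philosopher_eat(int i) {
    int left = i, right = (i + 1) % 5;
    int first  = (i % 2) ? left : right;
    int second = (i % 2) ? right : left;
    sem_wait(&chopstick[first]);
    sem_wait(&chopstick[second]);
    /* eat */
    sem_post(&chopstick[first]);
    sem_post(&chopstick[second]);
}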

Solution to the dining philosophers problem using a monitor:


Data structure: enum { thinking, hungry, eating } state[5]; Philosopher i can set the variable state[i] = eating only if his two neighbors are not eating. We also need condition self[5], which allows philosopher i to delay himself when he is hungry but unable to obtain the chopsticks he needs. The distribution of the chopsticks is controlled by the monitor dp. Before starting to eat, each philosopher must invoke the operation pickup; when he finishes eating, he invokes the putdown operation and may start to think.

dp.pickup(i);
    // eat
dp.putdown(i);

monitor dp {
    enum { thinking, hungry, eating } state[5];
    condition self[5];
    void pickup(int i);
    void putdown(int i);
    void test(int i);
    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}

void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}

void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors
    test((i + 4) % 5);
    test((i + 1) % 5);
}

void test(int i) {
    if ((state[(i + 4) % 5] != eating) &&
        (state[i] == hungry) &&
        (state[(i + 1) % 5] != eating)) {
        state[i] = eating;
        self[i].signal();
    }
}

void init() {
    for (int i = 0; i < 5; i++)
        state[i] = thinking;
}
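A sketch of how this monitor could be expressed with POSIX threads, using a mutex for the one-process-active-in-the-monitor rule and one condition variable per philosopher. This is an illustrative translation under pthreads' signaling semantics, not the text's code; the while loop in pickup replaces the monitor's if, since a signaled pthread must recheck its condition.

#include <pthread.h>

enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[5];   /* zero-initialized: all THINKING */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t self[5] = {
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER
};

static void test(int i) {            /* may move philosopher i to EATING */
    if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i) {
    pthread_mutex_lock(&m);
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)       /* wait until both chopsticks free */
        pthread_cond_wait(&self[i], &m);
    pthread_mutex_unlock(&m);
}

void putdown(int i) {
    pthread_mutex_lock(&m);
    state[i] = THINKING;
    test((i + 4) % 5);               /* maybe wake the left neighbor */
    test((i + 1) % 5);               /* maybe wake the right neighbor */
    pthread_mutex_unlock(&m);
}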

DEADLOCKS


If a process requests resources and the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state if the resources they have requested are held by other waiting processes. This situation is called a deadlock. Example: let P1 and P2 be two processes and let R1 and R2 be two resources. P1 holds R1 and waits for R2, while P2 holds R2 and waits for R1.

Starvation

Suppose that three processes, P1, P2, and P3, each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for it. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that P3 is granted access, and that before it completes its critical section, P1 again requests access. If P1 is granted access after P3 has finished, and P1 and P3 repeatedly grant access to each other in this way, then P2 may be denied access to the resource indefinitely, even though there is no deadlock; we say that P2 is under starvation.

System Model

A system consists of a finite number of resources to be distributed among a number of competing processes. The number of resources requested may not exceed the total number of resources available in the system. Under the normal mode of operation, a process may utilize a resource only in the following sequence:

Request: If the request cannot be granted immediately, the requesting process must wait until it can acquire the resource.
Use: The process can operate on the resource.
Release: The process releases the resource.

Deadlock Characterization

Necessary Conditions Deadlock can arise if 4 conditions hold simultaneously:

Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait: P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, ..., and the last process is waiting for a resource held by P1.

Resource Allocation Graph

The graph consists of a set of vertices V and a set of edges E. The vertices are the processes P = {P1, P2, ..., Pn} and the resource types R = {R1, R2, ..., Rm}. A directed edge from process Pi to resource Rj, denoted Pi → Rj, is called a request edge; a directed edge Rj → Pi indicates that an instance of Rj has been allocated to Pi and is called an assignment edge.

[Figure: The resource-allocation graph. A circle denotes a process; a rectangle with one dot per instance denotes a resource type (which may have several instances, e.g., six). An edge P1 → R1 means P1 requests an instance of R1; an edge R1 → P1 means an instance of R1 is held by P1. The example graph contains processes P1, P2, P3 and resource types R1, R2, R3.]

The sets P, R, and E for the example graph:

P = {P1, P2, P3}
R = {R1, R2, R3}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

Methods of handling deadlocks

Ensure that the system will never enter a deadlock state.
Allow the system to enter a deadlock state and then recover.
Ignore the problem and pretend that deadlocks never occur; this approach is used by most operating systems, including UNIX.

Deadlock Avoidance

One method for avoiding deadlocks is to require additional information about how resources are to be used. The following are some of the algorithms and concepts used to avoid deadlock:

Safe state
Resource-allocation graph algorithm
Banker's algorithm
Safety algorithm
Resource-request algorithm
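The safety algorithm in this list can be sketched as follows, assuming n processes and m resource types with the usual Allocation, Need, and Available structures from the banker's scheme; the sizes and the function name are illustrative, not from the text.

#include <stdbool.h>
#include <string.h>

#define NPROC 5
#define NRES  3

/* Returns true if some ordering of the processes lets every one of
 * them acquire its remaining need and finish, i.e., a safe sequence
 * exists. */
bool is_safe(int alloc[NPROC][NRES], int need[NPROC][NRES],
             int avail[NRES]) {
    int work[NRES];
    bool finish[NPROC] = { false };
    memcpy(work, avail, sizeof work);

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                   /* Pi can run to completion */
                for (int j = 0; j < NRES; j++)
                    work[j] += alloc[i][j];  /* and releases its resources */
                finish[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed) return false;       /* no process can proceed */
    }
    return true;
}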

Safe State:

When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state. A system is in a safe state if there exists a safe sequence of all processes. If a system is in a safe state, no deadlock can occur; if it is in an unsafe state, there is a possibility of deadlock. Avoidance therefore ensures that a system never enters an unsafe state. Let us consider the following example for the safe and unsafe state.
