
Module 5

1. Variations in Subprogram Control


Variations in subprogram control make possible additional features in the subprogram
structure, such as coroutines, exceptions and tasks.
Exceptions
Exceptions provide a way for a subprogram to be called when a particular condition or event
occurs.
Such a condition or event is usually termed an exception or a signal.
The subprogram that performs the special processing is termed the exception handler.
The action of noticing the exception, interrupting the program execution, and transferring
control to the exception handler is called raising the exception.
Exception Handlers
Since it is not invoked explicitly, an exception handler requires neither a name nor parameters.
The definition of an exception handler contains
o A set of declarations of local variables
o A sequence of executable statements
Raising an Exception
An exception can be raised implicitly or explicitly.
In Ada there is an instruction called raise for raising an exception explicitly.
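As an illustration (a minimal sketch in Python rather than Ada; the exception name and function are hypothetical), declaring an exception and raising it explicitly looks like this:

```python
class StackEmpty(Exception):
    """A user-defined exception, analogous to an exception declaration."""

def pop(stack):
    # Explicitly raise the exception when the error condition is detected,
    # much like an Ada raise statement.
    if not stack:
        raise StackEmpty("pop from empty stack")
    return stack.pop()
```

A caller that contains a matching handler catches StackEmpty; otherwise the exception propagates up the call chain.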
Propagating an Exception
When an exception is handled in a subprogram other than the subprogram in which it
is raised, the exception is said to be propagated from the point at which it is raised to the point at
which it is handled.
Implementation
Once an exception is raised, control is transferred to the handler
The handler is invoked like a normal subprogram call




Exception Handling in Java
When an exception occurs within a method, the method creates an exception object and hands it
off to the runtime system
Creating an exception object and handing it to the runtime system is called throwing an
exception
The exception object contains information about the error, including its type and the state of
the program when the error occurred
The runtime system searches the call stack for a method that contains an exception handler
When an appropriate handler is found, the runtime system passes the exception to the handler
An exception handler is considered appropriate if the type of the exception object thrown
matches the type that can be handled by the handler
The exception handler chosen is said to catch the exception.
If the runtime system exhaustively searches all the methods on the call stack without finding an
appropriate exception handler, it falls back on the default exception handler and the program
terminates
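The call-stack search described above can be sketched as follows (in Python rather than Java, for brevity; the function names are hypothetical). The exception propagates from inner through middle, which has no handler, until the handler in outer whose type matches catches it:

```python
def inner():
    raise ValueError("bad input")   # exception object created here ("thrown")

def middle():
    return inner()                  # no handler here: the exception propagates

def outer():
    try:
        return middle()
    except ValueError as e:         # type matches, so this handler catches it
        return "handled: " + str(e)
```

If no frame on the call stack contained a matching handler, the runtime's default handler would print a traceback and terminate the program.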
Benefits of Java Exception handling framework
Separating Error-Handling code from regular business logic code
Propagating errors up the call stack
Grouping and differentiating error types

Assertions
Assertions are similar to exceptions
An assertion is a statement expressing a relationship that must hold among data objects
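A minimal sketch in Python (the function is hypothetical): the assert statement expresses a relationship among data objects and raises an exception when it does not hold:

```python
def average(values):
    # The assertion states a relationship the data must satisfy;
    # if it is false, an AssertionError exception is raised.
    assert len(values) > 0, "average of an empty sequence"
    return sum(values) / len(values)
```

Note that Python assertions can be disabled globally (with the -O interpreter flag), so they document and check invariants rather than replace ordinary error handling.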
Coroutines
Coroutines are subprograms that can return control to their calling program before
completing execution, and can later be resumed from the point where they left off
Implementation
The resume B instruction in coroutine A involves two steps:
1. The current value of the CIP (current-instruction pointer) is saved in the resume-point
location of A's activation record
2. The ip value in B's resume point is fetched from B's activation record and assigned to the
CIP, transferring control to the proper instruction in B.
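The resume mechanism can be sketched with Python generators (an assumed analogy, not the textbook's implementation: yield saves the resume point and returns control to the caller, and next transfers control back, much as the CIP is saved and restored):

```python
def coroutine_b():
    # Each yield suspends B: its resume point is saved, and control
    # returns to A before B has run to completion.
    yield "B1"
    yield "B2"

def coroutine_a():
    trace = []
    b = coroutine_b()        # create B's activation; it does not run yet
    trace.append(next(b))    # "resume B": control transfers into B ...
    trace.append("A1")       # ... and returns here when B yields
    trace.append(next(b))    # resume B again at its saved resume point
    trace.append("A2")
    return trace
```

Each transfer alternates control between A and B, so the trace interleaves work from both coroutines.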


2. Parallel Programming
Several programs may execute simultaneously
Such a program is termed a concurrent or parallel program
Each program unit that can execute concurrently with other program units is termed a task
(or a process)
Principles of Parallel Programming
a) Variable Definitions: Variables may be either mutable or definitional
mutable: values may be assigned to the variables and changed during program execution
(as in sequential languages).
definitional: a variable may be assigned a value only once
b) Parallel Composition: A parallel statement causes additional threads of control to
begin executing
c) Program Structure: Parallel programs generally follow one of two execution models:
transformational (the goal is to transform the input into the required output) or reactive (the
program reacts to external events)
d) Communication: Parallel programs must communicate with each other, either through
shared memory with common data objects accessed by each parallel program, or through
messages
e) Synchronization: A parallel program must be able to order the execution of its various
threads and to coordinate their actions.
Concurrent Execution
Programming constructs
o Using parallel execution primitives of the operating system (C can invoke the
fork operation of Unix )
o Using parallel constructs
A programming language parallel construct indicates parallel execution
Example
AND statement (programming language level)
Syntax:
statement1 and statement2 and ... and statementN


Semantics:
All statements execute in parallel.
call ReadProcess and
call WriteProcess and
call ExecuteUserProgram ;
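Python has no AND statement, but its semantics can be approximated with threads (a sketch; the three statements below stand in for the processes named above):

```python
import threading

def run_in_parallel(*statements):
    # Start one thread per statement, then wait for all of them to finish,
    # mirroring the semantics of "S1 and S2 and S3".
    threads = [threading.Thread(target=s) for s in statements]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

results = []
run_in_parallel(
    lambda: results.append("read"),
    lambda: results.append("write"),
    lambda: results.append("execute"),
)
```

All three statements run concurrently, so the order in which their results appear is not determined.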
Guarded Commands
o Guard: a condition that can be true or false
o Guards are associated with statements
o A statement is executed when its guard becomes true
Example
Guarded if:
if B1 -> S1 [] B2 -> S2 [] ... [] Bn -> Sn fi
Guarded repetition statement:
do B1 -> S1 [] B2 -> S2 [] ... [] Bn -> Sn od
Bi are guards, Si are statements
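A guarded if can be simulated in Python (a sketch; guards and statements are modeled as callables, and the helper name is hypothetical). Among the guards that are true, one statement is chosen nondeterministically:

```python
import random

def guarded_if(pairs):
    # pairs: list of (guard, statement) callables. All guards are evaluated,
    # then one statement whose guard is true is chosen nondeterministically.
    ready = [stmt for guard, stmt in pairs if guard()]
    if not ready:
        # In Dijkstra's guarded if, having no true guard is an error (abort).
        raise RuntimeError("no guard is true")
    return random.choice(ready)()

x = 5
result = guarded_if([
    (lambda: x >= 0, lambda: "nonnegative"),
    (lambda: x <= 0, lambda: "nonpositive"),
])
```

With x = 5 only the first guard is true, so its statement runs; with x = 0 both guards are true and either statement may be chosen.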

Tasks
A calls subprogram B
o Usually A is suspended while B executes
o But if B is initiated as a TASK
Execution of A continues while B executes
The original execution sequence is split into two
A and B may each initiate further tasks
Each task is dependent on the task that initiated it.
When a task is ready to terminate, it must wait until all its dependents have terminated.
The splitting process is reversed as tasks terminate
Task Management
Once a task is initiated
The statements in its body are executed sequentially
When a task terminates, it does not return control; its separate parallel execution
sequence simply ends


A task cannot terminate until its dependents have terminated
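Task initiation and dependent termination can be sketched with Python threads (the names are hypothetical); join expresses the rule that a task must wait for its dependents before terminating:

```python
import threading

log = []

def dependent_task(name):
    log.append(name)

def parent_task():
    # Initiate two dependent tasks; the parent's own execution continues.
    dependents = [threading.Thread(target=dependent_task, args=(n,))
                  for n in ("B", "C")]
    for t in dependents:
        t.start()
    # A task that is ready to terminate must wait for all its dependents.
    for t in dependents:
        t.join()
    log.append("A done")

parent_task()
```

The parent's final entry is guaranteed to come last, because join blocks until each dependent's execution sequence has ended.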
Synchronization of Tasks
During concurrent execution of different tasks, each task proceeds asynchronously
For two tasks running asynchronously to coordinate, the language must provide means
for synchronization
Synchronization can be done using different methods:
Interrupts
Semaphores
Messages
Guarded Commands
Rendezvous
Interrupts
Common mechanism used in computer hardware for synchronization
If Task A wants to signal Task B that a particular event has occurred, then
Task A executes an instruction that causes execution of Task B to be interrupted
immediately
Control is transferred to an interrupt handler
After execution of the interrupt handler, Task B continues execution
Disadvantages
The program structure is confusing
The interrupt handler is separate from the main body of the task
A task that waits for an interrupt must enter a busy-waiting loop
A task must be written so that an interrupt can be handled correctly at
any time
Data shared between the interrupt routine and the task must be properly
protected.
Semaphores
A semaphore is a data object used for synchronization between tasks
It consists of two parts
An integer counter
whose value can be positive or zero
A queue of tasks that are waiting for signals to be sent


Two types
Binary Semaphore
General Semaphore
Two primitive operations are defined for a semaphore data object P:
Signal(P), executed by Task A when a resource has become free:
If the task queue is nonempty, the first task is removed from the queue and
its execution is resumed
If the queue is empty, the counter is incremented by one
Task A continues after the signal operation is complete
Wait(P), executed by Task B when it wants to enter the critical section:
If the counter is nonzero, it is decremented by one and B continues
If the counter is zero, Task B is inserted at the end of the task queue for P
and execution of B is suspended
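The wait and signal operations correspond to acquire and release on Python's threading.Semaphore. A sketch (names hypothetical) of a binary semaphore protecting a shared counter:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: counter starts at 1
shared = {"count": 0}

def worker(iterations):
    for _ in range(iterations):
        mutex.acquire()          # Wait(P): decrement the counter or suspend
        shared["count"] += 1     # critical section on the shared data
        mutex.release()          # Signal(P): resume a waiting task, or increment

tasks = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
```

Because at most one task holds the semaphore at a time, the increments never interleave and the final count is exact.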
Disadvantages
A task can wait on only one semaphore at a time
If a task fails to signal at the appropriate point, the entire system of tasks may deadlock
Programs involving several tasks and semaphores become increasingly difficult to
understand, debug and verify
Semaphores require shared memory for signal and wait
A multiprocessing environment may not be able to provide this, which
motivates messages
Messages
A message is placed into the pipe by the send command
A task waiting for a message issues the receive command and accepts the message from
the other end of the pipe
The sending task is then free to continue executing and send more messages
The receiving task continues to execute as long as there are pending messages waiting to
be processed
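A sketch of message passing in Python, with queue.Queue playing the role of the pipe (the end-of-messages sentinel is a convention of this example, not part of the mechanism described above):

```python
import queue
import threading

pipe = queue.Queue()   # the "pipe" between sender and receiver

def sender():
    for msg in ("m1", "m2", "m3"):
        pipe.put(msg)          # send: place the message into the pipe
    pipe.put(None)             # sentinel marking the end of messages

received = []

def receiver():
    while True:
        msg = pipe.get()       # receive: wait for the next pending message
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
```

The sender never blocks on an unbounded queue, while the receiver suspends whenever the pipe is empty, exactly the asymmetry the notes describe.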
Guarded Commands


The guard that evaluates to true determines which statement is to be
executed next
Rendezvous
When two tasks synchronize their actions for a brief period, that synchronization is
termed a rendezvous
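A rendezvous between two tasks can be sketched with a barrier in Python (a simplification: Ada's rendezvous also passes parameters between the tasks, which is omitted here):

```python
import threading

barrier = threading.Barrier(2)   # both tasks must arrive before either proceeds
order = []

def task(name):
    order.append(name + " arrived")
    barrier.wait()               # the rendezvous point: wait for the partner
    order.append(name + " resumed")

a = threading.Thread(target=task, args=("A",))
b = threading.Thread(target=task, args=("B",))
a.start(); b.start()
a.join(); b.join()
```

Whichever task reaches the barrier first is suspended until the other arrives, so both "arrived" events always precede both "resumed" events.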
Tasks and Real-Time Processing
A program that must interact with other tasks within a fixed period of time is said to be
real time
Real-time processing requires special consideration of the TIME factor
For example, a task waiting for a rendezvous with another task may watch the clock and
give up if the rendezvous does not occur within its deadline
Tasks and Shared data
Two issues when dealing with shared data
Storage management
Mutual Exclusion
Execution begins with a single stack for the main program
As new tasks are initiated, each task requires a new stack area
The original stack is thus split into several new stacks
This storage structure is known as a cactus stack
Hardware Developments
A speed difference exists between the fast CPU and the large but slower main memory
The CPU is thus often waiting for data to be retrieved
This effect is called the von Neumann bottleneck
To improve the overall performance of computer hardware we can
Improve the performance of a single CPU
Develop alternative architectures
Processor Design
Improving performance of CPU
The standard CPU Design is Complex Instruction Set Computer (CISC)
Improved system is Reduced Instruction Set Computer (RISC)
CISC Computers


The early approach was to increase system performance by reducing the movement of
instructions and data between main memory and the CPU
By making each instruction more powerful, fewer instructions are needed
This gave rise to the CISC architecture
Instruction Syntax
[Operation] [source data] [destination data]
The CISC approach puts a heavy burden on the compiler writer
RISC Computers
Making instructions more and more complex results in spiraling hardware complexity
This motivated RISC
A RISC CPU incorporates several design principles
Single-Cycle Execution
Every instruction must execute in one cycle
Pipelined Architecture
Instruction execution consists of four stages
Retrieve the instruction from main memory
Decode the operation field, source data and destination data
Get the source data for the operation
Perform the operation
We can improve performance by overlapping these stages in pipelined fashion
Large Register Set
A large number of registers is provided to avoid main memory accesses
Fast Procedure Invocation
The activation record is handled in registers,
so procedure invocation is fast
System Design
Hardware speedup can thus be achieved by
Pipelining
Multiple Instruction, Multiple Data (MIMD) computers
Three MIMD architectures:
Single Bus


Crossbar Switch
Omega Network
MIMD machines form massively parallel computers:
hundreds or thousands of interconnected processors.
Software Architecture
In program execution there are two types of data
Transient data
Persistent data
Persistence requires some special consideration. It needs:
A mechanism to indicate that an object is persistent
A mechanism to address a persistent object
A way to synchronize simultaneous access to an individual persistent object
A way to check the type compatibility of persistent objects
