
ASSIGNMENT ON

OPERATING
SYSTEMS

By:
Karthik Mohan
IInd YEAR, CSE B

CONTENTS
1. Computer System Overview
2. Basic Elements
3. Instruction Execution
4. Interrupt
5. Memory Hierarchy
6. I/O

COMPUTER SYSTEM OVERVIEW
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. The operating system is a component of the system software in a computer system. Application programs usually require an operating system to function.

Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input and output and memory allocation,
the operating system acts as an intermediary between programs and the
computer hardware, although the application code is usually executed
directly by the hardware and frequently makes system calls to an OS
function or is interrupted by it. Operating systems are found on many
devices that contain a computer, from cellular phones and video game
consoles to web servers and supercomputers.

Examples of modern operating systems include Apple OS X, Linux and its variants, and Microsoft Windows.

TYPES OF OPERATING SYSTEMS


Single- and multi-tasking
A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing: the available processor time is divided between multiple processes, each of which is interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. UNIX-like operating systems such as Solaris and Linux, as well as AmigaOS, support pre-emptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used pre-emptive multi-tasking.
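To make the time-slice idea concrete, here is a minimal round-robin sketch in C. It is only an illustration: the three-entry process table, the quantum of 3 ticks, and the tick granularity are all invented, and a real kernel pre-empts processes via timer interrupts rather than a simple loop.

```c
#include <stdio.h>

/* Hypothetical process table: each entry is the CPU time (in ticks)
   the process still needs. A real scheduler tracks far more state. */
int remaining[3] = {5, 2, 8};
int quantum = 3;              /* length of one time slice, in ticks */

int main(void) {
    int finished = 0;
    while (finished < 3) {
        for (int p = 0; p < 3; p++) {
            if (remaining[p] <= 0)
                continue;                 /* process already done */
            /* let process p run for at most one quantum, then pre-empt */
            int used = remaining[p] < quantum ? remaining[p] : quantum;
            remaining[p] -= used;
            printf("process %d ran %d tick(s), %d left\n",
                   p, used, remaining[p]);
            if (remaining[p] == 0)
                finished++;
        }
    }
    return 0;
}
```

Cooperative multitasking differs only in who ends the slice: each process would run until it voluntarily yields, so a process that never yields stalls everyone else.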

Single- and multi-user
Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.

Distributed
A distributed operating system manages a group of distinct computers
and makes them appear to be a single computer. The development of
networked computers that could be linked and communicate with each
other gave rise to distributed computing. Distributed computations are
carried out on more than one machine. When computers in a group work
in cooperation, they form a distributed system.

Templated
In the distributed and cloud computing context of an OS, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses.

Embedded
Embedded operating systems are designed to be used in embedded
computer systems. They are designed to operate on small machines like
PDAs with less autonomy. They are able to operate with a limited
number of resources. They are very compact and extremely efficient by
design. Windows CE and Minix 3 are some examples of embedded
operating systems.

Real-time
A real-time operating system is an operating system that guarantees to
process events or data within a certain short amount of time. A real-time
operating system may be single- or multi-tasking, but when multitasking,
it uses specialized scheduling algorithms so that a deterministic nature
of behaviour is achieved. An event-driven system switches between
tasks based on their priorities or external events while time-sharing
operating systems switch tasks based on clock interrupts.

Library
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries. These libraries are composed with the application and configuration code to construct unikernels, which are specialised, single-address-space machine images that can be deployed to cloud or embedded environments.

HISTORY

Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.

In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general-purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).
In the early 1950s, a computer could execute only one program at a
time. Each user had sole use of the computer for a limited period of time
and would arrive at a scheduled time with program and data on punched
paper cards or punched tape. The program would be loaded into the
machine, and the machine would be set to work until the program
completed or crashed. Programs could generally be debugged via a
front panel using toggle switches and panel lights. It is said that Alan
Turing was a master of this on the early Manchester Mark 1 machine,
and he was already deriving the primitive conception of an operating
system from the principles of the Universal Turing machine.

BASIC ELEMENTS
Imagine, if you can, that an operating system is broken down into five layers. In the following list I'll start at the bottommost layer and work my way up to the very top.
Layer 1: The Kernel.
The kernel is the heart of the operating system. Amongst its responsibilities are ensuring that each running process is given a fair amount of time to execute, while controlling the amount of resources each process can use.

Layer 2: Memory Management.
The name of this layer gives you a good idea what it is all about. It is the responsibility of this layer to share your computer's physical memory among the processes which want to use it. It also has to manage situations where there may not be enough physical memory to share out.
Layer 3: Input/Output.
On this layer all the physical communication with your computer's hardware, such as disk drives, keyboards, mice, screens and so on, takes place.
Layer 4: File Management.
Again the name of this layer may give you a clue as to what it does. It is the job of this layer to control how the files on your computer's hard drive are stored and accessed by any application seeking to use them.
Layer 5: The User Interface.
The last element, or layer as we have been calling them, of an operating
system is the User Interface. This layer is probably the easiest of all to
understand since it is the first thing you see when your operating system
has logged you in. It is the job of this layer to provide a means for the
user to actually interact with the rest of the layers and as such the
system as a whole.
Keep in mind there are two different types of user interfaces. The first one is probably the one you are most familiar with: the graphical user interface, where you see windows and icons for each of your files and so on.

The second is a command line interface, or text-based interface, where a user would interact with the system using text-based commands.

INSTRUCTION EXECUTION

Step 1: Fetch instruction from memory
Step 2: Decode instruction and fetch operands
Step 3: Perform ALU operations
Step 4: Memory access (for load/store)
Step 5: Store ALU result to register file
Step 6: Update PC

IN DETAIL:
The main purpose of a CPU is to execute instructions. We've already seen some simple examples of instructions, i.e., add and addi. The CPU executes the binary representation of the instructions, i.e., machine code.
Since programs can be very large, and since CPUs have limited internal storage, programs are stored in memory (RAM). However, the CPU does its processing internally, so it must copy each instruction from memory into the CPU; once the instruction is in the CPU, it can be executed.
The PC is used to determine which instruction is executed, and after each execution the PC is updated to point to the next instruction to be run.
Essentially, a CPU repeatedly fetches instructions and executes them. What follows describes, in detail, the six steps used to execute a single instruction.

Step 1: Fetch instruction
For some reason, the verb "fetch" is always used with instruction. We don't "get an instruction" or "retrieve an instruction". We "fetch an instruction".
To fetch an instruction involves the following steps:
CPU must place an address in the MAR.
CPU must activate the tri-state buffer so MAR contents are placed on the address bus.
CPU sends R/W = 1 and CE = 1 to memory, to indicate it wants to do a read.
Memory eventually puts the instruction on the data bus.
Memory sends ACK = 1.
CPU loads the instruction into the MDR.
CPU transfers the instruction from MDR to IR.
CPU sets CE = 0 to indicate to memory that it's done fetching the instruction.
As you can see, the steps are rather involved. You can speed up this step if you assume instructions are in a fast instruction cache. For now, we won't assume that.
You should go back to the notes on memory if you have forgotten how it works, in particular, if you have forgotten the control signals used by memory.
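The sequence above can be mimicked in a few lines of C, with MAR, MDR, and IR modelled as plain variables. This is a toy sketch: the word-addressed memory array and the sample instruction word are invented, and the whole CE/ACK handshake collapses into a single array read.

```c
#include <stdint.h>
#include <stdio.h>

uint32_t memory[256];           /* toy word-addressed RAM */
uint32_t pc = 0, mar, mdr, ir;  /* the registers from the steps above */

void fetch(void) {
    mar = pc;          /* CPU places an address in the MAR             */
    mdr = memory[mar]; /* memory answers over the data bus -> MDR      */
    ir  = mdr;         /* CPU transfers the instruction from MDR to IR */
}

int main(void) {
    memory[0] = 0x01094020;  /* sample word: MIPS add $8, $8, $9 */
    fetch();
    printf("IR = 0x%08x\n", ir);
    return 0;
}
```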
Step 2: Decode instruction and fetch operands
In the second step, the bits used for the opcode (and function, for
R-type instructions) are used to determine how the instruction
should be executed. This is what is meant by "decoding" the
instruction.

Recall that operands are arguments to the assembly instruction. However, since R-type and I-type instructions both use registers, and those registers are in specific locations of the instruction, we can begin to fetch the values within the registers at the same time we are decoding.
In particular, we're going to do the following:
Get IR31-26, the opcode
Get IR25-21, which is $rs, the first source register.
Get IR20-16, which is $rt, the second source register.
Get IR15-11, which is $rd, the destination register.
Get IR15-0, the immediate value
Get IR5-0, the function code
You'll notice that we're extracting these bits directly from the
instruction register.
You'll also notice that we extracted IR15-11 and IR15-0. How can we
do both? Well, they're merely wires, so there's no reason you can't
get both quantities out.
The key is to realize that sometimes we use IR15-11 and sometimes
we use IR15-0. We need to have both of them ready because this is
hardware. It's easier to have everything we need, and then figure
out what we need, than to decide what we need and try to get it.
In particular, when we fetch the operands (i.e., the registers), we want to send the source and destination register bits to a device called the register file.
For example, if IR25-21 has value 00111, this means we want register $r7 from the register file. We send in 00111 to this circuit, and it returns the contents back to us.
We'll be discussing the register file soon.

If we are executing an I-type instruction, then typically, we'll sign-extend (or zero-extend, depending on the instruction) the immediate part (i.e., IR15-0) to 32 bits.
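Extracting these fields is nothing more than shifting and masking, so the decode step can be sketched directly in C. The instruction word below is an assumed sample (it happens to encode a MIPS add), and the field positions follow the IR31-26 ... IR5-0 layout given above.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t ir = 0x01094020;            /* sample word: add $8, $8, $9 */

    uint32_t opcode = (ir >> 26) & 0x3F; /* IR31-26: opcode             */
    uint32_t rs     = (ir >> 21) & 0x1F; /* IR25-21: first source reg   */
    uint32_t rt     = (ir >> 16) & 0x1F; /* IR20-16: second source reg  */
    uint32_t rd     = (ir >> 11) & 0x1F; /* IR15-11: destination reg    */
    uint32_t funct  = ir & 0x3F;         /* IR5-0: function code        */

    /* IR15-0: the immediate, sign-extended from 16 to 32 bits */
    int32_t imm = (int32_t)(int16_t)(ir & 0xFFFF);

    printf("op=%u rs=%u rt=%u rd=%u funct=0x%x imm=%d\n",
           opcode, rs, rt, rd, funct, imm);
    return 0;
}
```

Note that rd and imm overlap in the word; exactly as the text says, we extract both and let later logic pick whichever the instruction actually uses.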
Step 3: Perform ALU operation
The ALU has two 32-bit data inputs. It has a 32-bit output. The
purpose of the ALU is to perform a computation on the two 32-bit
data inputs, such as adding the two values. There are some
control bits on the ALU. These control bits specify what the ALU
should do.
For example, they may specify an addition, or a subtraction, or a
bitwise AND.
Where do the input values of the ALU come from?
Recall that an instruction stores information about its operands. In particular, it encodes registers as 5-bit unsigned binary numbers. These register encodings are sent to the register file as inputs. The register file then outputs the 32-bit values of these registers. These are then sent as inputs to the ALU.
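The role of the control bits can be sketched as a small C function. The encodings below (0 for add, 1 for subtract, 2 for AND) are made up for the example; real ALU control values depend on the particular datapath.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy ALU: two 32-bit data inputs, control bits select the operation. */
uint32_t alu(uint32_t a, uint32_t b, int ctrl) {
    switch (ctrl) {
    case 0:  return a + b;    /* add          */
    case 1:  return a - b;    /* subtract     */
    case 2:  return a & b;    /* bitwise AND  */
    default: return 0;        /* unused encoding in this sketch */
    }
}

int main(void) {
    printf("%u\n", alu(6, 3, 0));   /* prints 9 */
    return 0;
}
```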
Step 4: Access memory
There are only two kinds of instructions that access memory: load and store. Load copies a value from memory to a register. Store copies a register value to memory.
Any other instruction skips this step.
Step 5: Write back result to register file
At this point, the output of the ALU is written back to the register file. For example, if the instruction was add $r2, $r3, $r4, then the result of adding the contents of $r3 to the contents of $r4 would be stored back into $r2. The result could also be due to a load from memory.
Some instructions don't have results to store. For example, branch and jump instructions do not have any results to store.
Step 6: Update the PC
Finally, we need to update the program counter. Typically, we
perform the following update:
PC <- PC + 4
Recall that PC holds the current address of the instruction to be
executed. To update it means to set the value of this register to the
next instruction to be executed.
Unless the instruction is a branch or jump, the next instruction to
execute is the next instruction in memory. Since each instruction
takes up 4 bytes of memory, then the next address in memory is
PC + 4, which is the address of the current instruction plus 4.
The PC might change to some other address if there is a branch or
jump.
These are the six steps to executing an instruction. Not every instruction
goes through every step. However, we label each step so that you can
be aware they exist.
Some of these steps may not make much sense now, but hopefully, they'll be clearer once we start implementing the steps in depth.

INTERRUPT
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts.

Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, which is either a part of the computer itself, such as a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Unlike the software type (described below), hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).

A software interrupt is caused either by an exceptional condition in the processor itself, or a special instruction in the instruction set which causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, if the processor's arithmetic logic unit is commanded to divide a number by zero, this impossible demand will cause a divide-by-zero exception, perhaps causing the computer to abandon the calculation or display an error message. Software interrupt instructions function similarly to subroutine calls and are used for a variety of purposes, such as to request services from low-level system software such as device drivers. For example, computers often use software interrupt instructions to communicate with the disk controller to request data be read or written to the disk.
Each interrupt has its own interrupt handler. The number of hardware
interrupts is limited by the number of interrupt request (IRQ) lines to the
processor, but there may be hundreds of different software interrupts.
Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.
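Application code does not install real ISRs, but POSIX signals give a close userspace analogue of the handler pattern described above: register a handler, keep working, and let the handler run when the event arrives. A minimal C sketch, assuming a POSIX system (Ctrl-C stands in for the interrupt):

```c
#include <signal.h>
#include <unistd.h>

/* Analogue of an interrupt service routine: it runs when the signal
   arrives, then control returns to the interrupted code. */
void handler(int sig) {
    (void)sig;
    write(1, "interrupt handled, resuming\n", 28);  /* async-signal-safe */
}

int main(void) {
    signal(SIGINT, handler);  /* register the handler for SIGINT */
    for (;;)
        pause();              /* suspend; resumes after each signal */
}
```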

MEMORY HIERARCHY
Memory management refers to management of primary memory or main memory. Main memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. So, for a program to be executed, it must be in main memory. The operating system does the following activities for memory management:
Keeps track of primary memory, i.e., what parts of it are in use and by whom, and what parts are not in use.
In multiprogramming, the OS decides which process will get memory, when, and how much.
Allocates memory when a process requests it, as sketched below.
De-allocates memory when the process no longer needs it or has been terminated.
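As a hedged illustration of the allocate/de-allocate pair, a POSIX process can request pages straight from the OS with mmap and hand them back with munmap. The one-page size and the flags here are just for the example; most programs go through malloc, which makes this kind of request under the hood.

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;  /* one page, for illustration */

    /* the process requests memory from the OS ... */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("OS mapped a page at %p\n", p);

    /* ... and returns it when no longer needed */
    munmap(p, len);
    return 0;
}
```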
The term memory hierarchy is used in computer architecture when discussing performance issues in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference. A "memory hierarchy" in computer storage distinguishes each level in the "hierarchy" by response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by the controlling technology.
The many trade-offs in designing for high performance include the structure of the memory hierarchy, i.e., the size and technology of each component. The various components can be viewed as forming a hierarchy of memories (m1, m2, ..., mn) in which each member mi is in a sense subordinate to the next highest member mi+1 of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signalling to activate the transfer.
There are four major storage levels.
1. Internal: processor registers and cache.
2. Main: the system RAM and controller cards.
3. On-line mass storage: secondary storage.
4. Off-line bulk storage: tertiary and off-line storage.
This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage.
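The hierarchy only pays off when programs exhibit locality of reference, and the classic demonstration fits in a few lines of C. The matrix size is arbitrary, and the actual speed gap between the two loops depends entirely on the machine's caches:

```c
#include <stdio.h>

#define N 1024
static int a[N][N];

int main(void) {
    long sum = 0;

    /* Row-major traversal: consecutive accesses fall in the same
       cache lines, so the fast upper levels absorb most of them. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-major traversal: each access jumps N ints ahead, so
       far more requests fall through to the slower levels below. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}
```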

I/O
In computing, input/output or I/O (or, informally, io or IO) is the communication between an information processing system, such as a computer, and the outside world, possibly a human or another information processing system. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to "perform I/O" is to perform an input or output operation. I/O devices are used by a human (or other system) to communicate with a computer. For instance, a keyboard or mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations.

Note that the designation of a device as either input or output depends on perspective. Mice and keyboards take physical movements that the human user outputs and convert them into input signals that a computer can understand; the output from these devices is the computer's input. Similarly, printers and monitors take signals that a computer outputs as input, and they convert these signals into a representation that human users can understand. From the human user's perspective, the process of reading or seeing these representations is receiving input; this type of interaction between computers and humans is studied in the field of human-computer interaction.

In computer architecture, the combination of the CPU and main memory, to which the CPU can read or write directly using individual instructions, is considered the brain of a computer. Any transfer of information to or from the CPU/memory combo, for example by reading data from a disk drive, is considered I/O. The CPU and its supporting circuitry may provide memory-mapped I/O that is used in low-level computer programming, such as in the implementation of device drivers, or may provide access to I/O channels. An I/O algorithm is one designed to exploit locality and perform efficiently when exchanging data with a secondary storage device, such as a disk drive.
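To make the memory-mapped I/O idea concrete, here is a bare-metal-style C sketch. The register addresses and the ready bit are entirely invented (real values come from a specific device's datasheet), so this compiles but is only meaningful on hardware where such a device actually exists:

```c
#include <stdint.h>

/* Hypothetical device registers mapped into the address space.
   Both the addresses and the bit layout are made up for this sketch. */
#define UART_STATUS ((volatile uint32_t *)0x10000000)
#define UART_DATA   ((volatile uint32_t *)0x10000004)
#define TX_READY    0x1u   /* assumed "ready to transmit" bit */

void putc_mmio(char c) {
    while ((*UART_STATUS & TX_READY) == 0)
        ;                      /* spin until the device is ready  */
    *UART_DATA = (uint32_t)c;  /* an ordinary store is the output */
}
```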
