
Experiment #1

Objective: Case study of different phases of Compiler.

Description: Study different phases of compiler in detail.


Step 1: Define compiler
Step 2: Description of Language processing system.
Step 3: Other tools that work closely with compilers.
Step 4: Analysis and Synthesis model of compilation
Step 5: Various phases of compiler and their working.
Step 6: Compilation of C Programs.

I. Compiler: A compiler is a system program (utility) which translates source code into
equivalent target code, provided there is no error in the source code.
A compiler translates the code written in one language to some other language without changing
the meaning of the program. It is also expected that a compiler should make the target code
efficient and optimized in terms of time and space.

Input (Source Code) → Compiler → Output (Target Code)

Figure 1: Compiler
II. Language Processing System: We have learnt that any computer system is made of hardware and
software. The hardware understands a language, which humans cannot understand. So we write
programs in high-level language, which is easier for us to understand and remember. These
programs are then fed into a series of tools and OS components to get the desired code that can be
used by the machine. This is known as Language Processing System.
Figure 2: Cousins of compiler

The high-level language is converted into binary language in various phases. A compiler is a
program that converts high-level language to assembly language. Similarly, an assembler is a
program that converts the assembly language to machine-level language.

Before diving straight into the concepts of compilers, we should understand a few other tools that
work closely with compilers.

III. Other tools that work closely with compilers.


1. Preprocessor
A preprocessor, generally considered a part of the compiler, is a tool that produces input for
compilers. It deals with macro-processing, augmentation, file inclusion, language extension, etc.

2. Interpreter
An interpreter, like a compiler, translates high-level language into low-level machine language.
The difference lies in the way they read the source code. A compiler reads the whole
source code at once, creates tokens, checks semantics, generates intermediate code, and may
involve many passes. In contrast, an interpreter reads a statement from the input, converts it to
intermediate code, executes it, and then takes the next statement in sequence. If an error occurs,
an interpreter stops execution and reports it, whereas a compiler reads the whole program even if
it encounters several errors.

3. Assembler
An assembler translates assembly language programs into machine code. The output of an
assembler is called an object file, which contains a combination of machine instructions as well as
the data required to place these instructions in memory.

4. Linker
Linker is a computer program that links and merges various object files together in order to make
an executable file. All these files might have been compiled by separate assemblers. The major
task of a linker is to search and locate the referenced modules/routines in a program and to
determine the memory location where these codes will be loaded, so that the program
instructions have absolute references.

5. Loader
Loader is a part of the operating system and is responsible for loading executable files into
memory and executing them. It calculates the size of a program (instructions and data) and creates
memory space for it. It initializes various registers to initiate execution.

Cross-compiler
A compiler that runs on platform (A) and is capable of generating executable code for platform
(B) is called a cross-compiler.

Source-to-source Compiler
A compiler that takes the source code of one programming language and translates it into the
source code of another programming language is called a source-to-source compiler.

IV. Analysis and Synthesis model of compilation

A compiler can broadly be divided into two phases based on the way they compile.

Analysis Phase
Known as the front-end of the compiler, the analysis phase of the compiler reads the source
program, divides it into core parts and then checks for lexical, grammar and syntax errors. The
analysis phase generates an intermediate representation of the source program and a symbol table,
which are fed to the synthesis phase as input.

Synthesis Phase
Known as the back-end of the compiler, the synthesis phase generates the target program with the
help of intermediate source code representation and symbol table.

A compiler can have many phases and passes.

Pass : A pass refers to the traversal of a compiler through the entire program.

Phase : A phase of a compiler is a distinguishable stage, which takes input from the
previous stage, processes and yields output that can be used as input for the next stage. A
pass can have more than one phase.

The compilation process is a sequence of various phases. Each phase takes input from its previous
stage, has its own representation of source program, and feeds its output to the next phase of the
compiler. Let us understand the phases of a compiler.

V. Various phases of compiler and their working:
1. Lexical Analysis
The first phase of the compiler works as a text scanner. This phase scans the source code as a
stream of characters and converts it into meaningful lexemes. The lexical analyzer represents
these lexemes in the form of tokens as:

<token-name, attribute-value>

Lexical analysis is the first phase of a compiler. It takes the modified source code from language
preprocessors, written in the form of sentences, and breaks it into a series of tokens, removing any
whitespace and comments from the source code.
If the lexical analyzer finds a token invalid, it generates an error. The lexical analyzer works
closely with the syntax analyzer. It reads character streams from the source code, checks for legal
tokens, and passes the data to the syntax analyzer when the latter demands it.

Tokens
Lexemes are said to be a sequence of characters (alphanumeric) in a token. There are some
predefined rules for every lexeme to be identified as a valid token. These rules are defined by
grammar rules, by means of a pattern. A pattern explains what can be a token, and these patterns
are defined by means of regular expressions.
In a programming language, keywords, constants, identifiers, strings, numbers, operators and
punctuation symbols can be considered tokens.
For example, in C language, the variable declaration line
int value = 100;
contains the tokens:
int (keyword), value (identifier), = (operator), 100 (constant) and ; (symbol).
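
To make this concrete, the following minimal C++ sketch (an illustration only, not a production lexer; the token category names are our own) splits the declaration above into <token-name, lexeme> pairs:

#include <cctype>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::string src = "int value = 100;";
    std::vector<std::pair<std::string, std::string>> tokens; // <token-name, lexeme>
    for (std::size_t i = 0; i < src.size();) {
        unsigned char c = src[i];
        if (std::isspace(c)) { ++i; continue; }               // skip whitespace
        if (std::isalpha(c)) {                                // keyword or identifier
            std::size_t j = i;
            while (j < src.size() && std::isalnum(static_cast<unsigned char>(src[j]))) ++j;
            std::string word = src.substr(i, j - i);
            tokens.push_back({word == "int" ? "keyword" : "identifier", word});
            i = j;
        } else if (std::isdigit(c)) {                         // constant
            std::size_t j = i;
            while (j < src.size() && std::isdigit(static_cast<unsigned char>(src[j]))) ++j;
            tokens.push_back({"constant", src.substr(i, j - i)});
            i = j;
        } else {                                              // operator or symbol
            tokens.push_back({src[i] == '=' ? "operator" : "symbol", std::string(1, src[i])});
            ++i;
        }
    }
    for (const auto& t : tokens)
        std::cout << "<" << t.first << ", " << t.second << ">\n";
    return 0;
}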

The lexical analyzer needs to scan and identify only a finite set of valid strings/tokens/lexemes
that belong to the language at hand. It searches for the patterns defined by the language rules.
Regular expressions have the capability to express finite languages by defining a pattern for finite
strings of symbols. The grammar defined by regular expressions is known as regular grammar.
The language defined by regular grammar is known as regular language.
Regular expression is an important notation for specifying patterns. Each pattern matches a set of
strings, so regular expressions serve as names for a set of strings. Programming language tokens
can be described by regular languages. The specification of regular expressions is an example of a
recursive definition. Regular languages are easy to understand and have efficient implementation.
There are a number of algebraic laws that are obeyed by regular expressions, which can be used to
manipulate regular expressions into equivalent forms.
Operations
The various operations on languages are:
Union of two languages L and M is written as
L U M = {s | s is in L or s is in M}
Concatenation of two languages L and M is written as
LM = {st | s is in L and t is in M}
The Kleene Closure of a language L is written as
L* = Zero or more occurrence of language L.
Notations
If r and s are regular expressions denoting the languages L(r) and L(s), then
Union : (r)|(s) is a regular expression denoting L(r) U L(s)
Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
Kleene closure : (r)* is a regular expression denoting (L(r))*
(r) is a regular expression denoting L(r)
Precedence and Associativity
*, concatenation (.), and | (pipe sign) are left associative
* has the highest precedence
Concatenation (.) has the second highest precedence.
| (pipe sign) has the lowest precedence of all.
Representing valid tokens of a language in regular expression
If x is a regular expression, then:
x* means zero or more occurrences of x,
i.e., it can generate { ε, x, xx, xxx, xxxx, … }
x+ means one or more occurrences of x,
i.e., it can generate { x, xx, xxx, xxxx, … }, that is, x.x*
x? means at most one occurrence of x,
i.e., it can generate either {x} or {ε}.
[a-z] is all lower-case alphabets of English language.
[A-Z] is all upper-case alphabets of English language.
[0-9] is all natural digits used in mathematics.
Representing occurrence of symbols using regular expressions
letter = [a-z] or [A-Z]
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 or [0-9]
sign = [ + | - ]
Representing language tokens using regular expressions
Decimal = (sign)?(digit)+
Identifier = (letter)(letter | digit)*
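
As a quick illustration (a sketch using C++'s <regex> library rather than a generated lexer), the two patterns above can be checked directly:

#include <iostream>
#include <regex>
#include <string>

int main() {
    // Patterns from the text: Decimal = (sign)?(digit)+  and  Identifier = (letter)(letter | digit)*
    std::regex decimal("[+-]?[0-9]+");
    std::regex identifier("[A-Za-z][A-Za-z0-9]*");
    for (std::string s : {"-42", "value", "100", "9lives"}) {
        std::cout << s << ": "
                  << (std::regex_match(s, decimal)      ? "Decimal"
                      : std::regex_match(s, identifier) ? "Identifier"
                                                        : "no match")
                  << "\n";
    }
    return 0;
}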

The only problem left with the lexical analyzer is how to verify the validity of a regular expression
used in specifying the patterns of keywords of a language. A well-accepted solution is to use finite
automata for verification.
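
For illustration, here is a hand-coded finite automaton for the Identifier pattern above (a minimal sketch; the state numbering is our own):

#include <cctype>
#include <iostream>
#include <string>

// DFA for Identifier = (letter)(letter | digit)*
// State 0: start; state 1: accepting; state 2: dead (no valid continuation).
bool isIdentifier(const std::string& s) {
    int state = 0;
    for (unsigned char c : s) {
        switch (state) {
            case 0:  state = std::isalpha(c) ? 1 : 2; break;
            case 1:  state = std::isalnum(c) ? 1 : 2; break;
            default: return false;   // stuck in the dead state
        }
    }
    return state == 1;
}

int main() {
    std::cout << std::boolalpha
              << isIdentifier("value")  << "\n"   // true
              << isIdentifier("9lives") << "\n";  // false
    return 0;
}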

2. Syntax Analysis
The next phase is called syntax analysis or parsing. It takes the tokens produced by lexical
analysis as input and generates a parse tree (or syntax tree). In this phase, token arrangements are
checked against the source code grammar, i.e. the parser checks if the expression made by the
tokens is syntactically correct.

Syntax analysis or parsing is the second phase of a compiler. In this section, we shall learn the
basic concepts used in the construction of a parser.
We have seen that a lexical analyzer can identify tokens with the help of regular expressions and
pattern rules. But a lexical analyzer cannot check the syntax of a given sentence due to the
limitations of regular expressions: regular expressions cannot check balanced tokens, such as
parentheses. Therefore, this phase uses context-free grammar (CFG), which is recognized by
push-down automata.
CFG, on the other hand, is a superset of regular grammar.

This implies that every regular grammar is also context-free, but there exist some constructs that
are beyond the scope of regular grammar. CFG is a helpful tool in describing the syntax of
programming languages.
Context-Free Grammar
In this section, we will first see the definition of context-free grammar and introduce
terminologies used in parsing technology.
A context-free grammar has four components:
A set of non-terminals (V). Non-terminals are syntactic variables that denote sets of
strings. The non-terminals define sets of strings that help define the language generated by
the grammar.
A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from
which strings are formed.
A set of productions (P). The productions of a grammar specify the manner in which the
terminals and non-terminals can be combined to form strings. Each production consists of
a non-terminal called the left side of the production, an arrow, and a sequence of tokens
and/or non-terminals, called the right side of the production.

One of the non-terminals is designated as the start symbol (S); from where the production
begins.
The strings are derived from the start symbol by repeatedly replacing a non-terminal (initially the
start symbol) by the right side of a production, for that non-terminal.

Types of Parsing:
Syntax analyzers follow production rules defined by means of context-free grammar. The way the
production rules are implemented (derivation) divides parsing into two types : top-down parsing
and bottom-up parsing.

Top-down Parsing:
When the parser starts constructing the parse tree from the start symbol and then tries to transform
the start symbol to the input, it is called top-down parsing.
Recursive descent parsing : It is a common form of top-down parsing. It is called
recursive as it uses recursive procedures to process the input. Recursive descent parsing
suffers from backtracking.
Backtracking : It means, if one derivation of a production fails, the syntax analyzer
restarts the process using different rules of same production. This technique may process
the input string more than once to determine the right production.

Recursive Descent Parsing: Recursive descent is a top-down parsing technique that constructs
the parse tree from the top and the input is read from left to right. It uses procedures for every
terminal and non-terminal entity. This parsing technique recursively parses the input to make a
parse tree, which may or may not require back-tracking. But the grammar associated with it (if not
left factored) cannot avoid back-tracking. A form of recursive-descent parsing that does not
require any back-tracking is known as predictive parsing.
This parsing technique is regarded as recursive as it uses context-free grammar, which is recursive
in nature.
Back-tracking: Top- down parsers start from the root node (start symbol) and match the input
string against the production rules to replace them (if matched). To understand this, take the
following example of CFG:
S → rXd | rZd
X → oa | ea
Z → ai
For the input string read, a top-down parser will behave like this:
It will start with S from the production rules and will match its yield to the left-most letter of the
input, i.e. r. The very production of S (S → rXd) matches with it. So the top-down parser
advances to the next input letter (i.e. e). The parser tries to expand non-terminal X and checks
its production from the left (X → oa). It does not match with the next input symbol. So the top-
down parser backtracks to obtain the next production rule of X, (X → ea).
Now the parser matches all the input letters in an ordered manner. The string is accepted.
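
This behaviour can be sketched directly as a backtracking recursive-descent parser in C++ (a minimal illustration; the helper names are our own):

#include <iostream>
#include <string>

// Backtracking recursive-descent parser for:
//   S -> r X d | r Z d     X -> o a | e a     Z -> a i
std::string input;
std::size_t pos;

bool term(char c) {                        // consume one expected terminal
    if (pos < input.size() && input[pos] == c) { ++pos; return true; }
    return false;
}

bool X() {
    std::size_t save = pos;
    if (term('o') && term('a')) return true;
    pos = save;                            // backtrack, try the next alternative
    return term('e') && term('a');
}

bool Z() { return term('a') && term('i'); }

bool S() {
    std::size_t save = pos;
    if (term('r') && X() && term('d')) return true;
    pos = save;                            // backtrack
    return term('r') && Z() && term('d');
}

int main() {
    input = "read";
    pos = 0;
    bool ok = S() && pos == input.size();
    std::cout << (ok ? "accepted" : "rejected") << "\n";  // prints "accepted"
    return 0;
}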

Predictive Parser
Predictive parser is a recursive descent parser, which has the capability to predict which
production is to be used to replace the input string. The predictive parser does not suffer from
backtracking.
To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next
input symbols. To make the parser back-tracking free, the predictive parser puts some constraints
on the grammar and accepts only a class of grammar known as LL(k) grammar.
Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree.
Both the stack and the input contain an end symbol $ to denote that the stack is empty and the
input is consumed. The parser refers to the parsing table to take any decision on the input and
stack element combination.
In recursive descent parsing, the parser may have more than one production to choose from for a
single instance of input, whereas in predictive parsing, each step has at most one production to
choose. There might be instances where no production matches the input string, causing the
parsing procedure to fail.
LL Parser
An LL Parser accepts LL grammar. LL grammar is a subset of context-free grammar but with
some restrictions to get the simplified version, in order to achieve easy implementation. LL
grammar can be implemented by means of both algorithms namely, recursive-descent or table-
driven.
An LL parser is denoted as LL(k). The first L in LL(k) stands for parsing the input from left to
right, the second L stands for left-most derivation, and k itself represents the number of
look-aheads. Generally k = 1, so LL(k) may also be written as LL(1).
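
The stack-and-table mechanism described above can be sketched for the toy LL(1) grammar S → aSb | ε (the grammar and its table are our own illustrative assumptions, not taken from the text):

#include <iostream>
#include <map>
#include <stack>
#include <string>

int main() {
    // Parse table M[non-terminal][lookahead] = right-hand side ("" denotes ε).
    std::map<char, std::map<char, std::string>> M;
    M['S']['a'] = "aSb";   // S -> a S b  when the lookahead is 'a'
    M['S']['b'] = "";      // S -> ε      when the lookahead is 'b'
    M['S']['$'] = "";      // S -> ε      at the end of the input

    std::string input = "aabb";
    input += '$';                          // end symbol, as described above
    std::stack<char> st;
    st.push('$');
    st.push('S');                          // start symbol

    std::size_t i = 0;
    while (!st.empty()) {
        char top = st.top(), look = input[i];
        if (top == look) {                 // terminal (or $) matches the input
            st.pop(); ++i;
        } else if (M.count(top) && M[top].count(look)) {
            st.pop();                      // expand the non-terminal
            const std::string& rhs = M[top][look];
            for (auto it = rhs.rbegin(); it != rhs.rend(); ++it)
                st.push(*it);              // push the RHS in reverse order
        } else {
            std::cout << "rejected\n";
            return 0;
        }
    }
    std::cout << "accepted\n";             // prints "accepted" for "aabb"
    return 0;
}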

Bottom-up Parsing:
As the name suggests, bottom-up parsing starts with the input symbols and tries to construct the
parse tree up to the start symbol.
Bottom-up parsing starts from the leaf nodes of a tree and works in an upward direction until it
reaches the root node. Here, we start from a sentence and then apply production rules in reverse
in order to reach the start symbol.

3. Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the language: for
example, that values are assigned between compatible data types, and that a string is not added to
an integer. The semantic analyzer also keeps track of identifiers, their types and expressions:
whether identifiers are declared before use, and so on. The semantic analyzer produces an
annotated syntax tree as its output.

We have learnt how a parser constructs parse trees in the syntax analysis phase. The plain parse
tree constructed in that phase is generally of no use for a compiler, as it does not carry any
information on how to evaluate the tree. The productions of context-free grammar, which make up
the rules of the language, do not specify how to interpret them.

For example

E → E + T

The above CFG production has no semantic rule associated with it, and it cannot help in making
any sense of the production.

Semantics: Semantics of a language provide meaning to its constructs, like tokens and syntax
structure. Semantics help interpret symbols, their types, and their relations with each other.
Semantic analysis judges whether the syntax structure constructed in the source program derives
any meaning or not.

CFG + semantic rules = Syntax Directed Definitions

For example:

int a = value;

should not issue an error in lexical and syntax analysis phase, as it is lexically and structurally
correct, but it should generate a semantic error as the type of the assignment differs. These rules
are set by the grammar of the language and evaluated in semantic analysis. The following tasks
should be performed in semantic analysis:

Scope resolution

Type checking

Array-bound checking

Semantic Errors

Some of the semantic errors that the semantic analyzer is expected to recognize are:

Type mismatch

Undeclared variable

Reserved identifier misuse.

Multiple declaration of variable in a scope.

Accessing an out of scope variable.

Actual and formal parameter mismatch.
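
As an illustration of the type checking described here, the following toy C++ sketch (the declared-types table and names are assumptions) reports a type mismatch or an undeclared variable for an assignment lhs = rhs:

#include <iostream>
#include <map>
#include <string>

int main() {
    // Types recorded during declaration processing (a stand-in for the symbol table).
    std::map<std::string, std::string> declared = {
        {"a", "int"}, {"s", "string"}, {"value", "string"}};

    auto checkAssign = [&](const std::string& lhs, const std::string& rhs) {
        if (!declared.count(lhs) || !declared.count(rhs))
            std::cout << "semantic error: undeclared variable\n";
        else if (declared[lhs] != declared[rhs])
            std::cout << "semantic error: type mismatch (" << declared[lhs]
                      << " = " << declared[rhs] << ")\n";
        else
            std::cout << "ok\n";
    };

    checkAssign("a", "value");   // int = string    -> type mismatch
    checkAssign("s", "value");   // string = string -> ok
    checkAssign("a", "b");       // b not declared  -> undeclared variable
    return 0;
}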

4. Intermediate Code Generation:


After semantic analysis the compiler generates an intermediate code of the source code for the
target machine. It represents a program for some abstract machine. It is in between the high-level
language and the machine language. This intermediate code should be generated in such a way
that it makes it easier to be translated into the target machine code.

If a compiler translates the source language to its target machine language without having
the option for generating intermediate code, then for each new machine, a full native
compiler is required.

Intermediate code eliminates the need of a new full compiler for every unique machine by
keeping the analysis portion same for all the compilers.

The second part of compiler, synthesis, is changed according to the target machine.

It becomes easier to apply the source code modifications to improve code performance by
applying code optimization techniques on the intermediate code.

Intermediate Representation

Intermediate codes can be represented in a variety of ways and they have their own benefits.

High Level IR - High-level intermediate code representation is very close to the source
language itself. They can be easily generated from the source code and we can easily apply
code modifications to enhance performance. But for target machine optimization, it is less
preferred.

Low Level IR - This one is close to the target machine, which makes it suitable for
register and memory allocation, instruction set selection, etc. It is good for machine-
dependent optimizations.
Intermediate code can be either language specific (e.g., Byte Code for Java) or language
independent (three-address code).

Three-Address Code:

The intermediate code generator receives input from its predecessor phase, the semantic analyzer,
in the form of an annotated syntax tree. That syntax tree can then be converted into a linear
representation, e.g., postfix notation. Intermediate code tends to be machine-independent code.
Therefore, the code generator assumes an unlimited number of memory locations (registers) to
generate code.

For example:

a = b + c * d;

The intermediate code generator will try to divide this expression into sub-expressions and then
generate the corresponding code.

r1 = c * d;

r2 = b + r1;

a = r2

Here, r1 and r2 are used as registers in the target program.
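
The following C++ sketch (an illustration; node and helper names are our own) produces exactly this code from an expression tree for b + c * d, allocating a fresh temporary for each sub-expression:

#include <iostream>
#include <memory>
#include <string>

struct Node {
    std::string value;                     // operator or operand name
    std::unique_ptr<Node> left, right;     // null for leaves
};

int tempCount = 0;

// Post-order walk: emit code for the children, then one instruction for this node.
std::string emit(const Node& n) {
    if (!n.left) return n.value;           // leaf: just the operand name
    std::string l = emit(*n.left);
    std::string r = emit(*n.right);
    std::string t = "r" + std::to_string(++tempCount);
    std::cout << t << " = " << l << " " << n.value << " " << r << ";\n";
    return t;
}

std::unique_ptr<Node> leaf(const std::string& v) {
    return std::unique_ptr<Node>(new Node{v, nullptr, nullptr});
}
std::unique_ptr<Node> op(const std::string& v, std::unique_ptr<Node> l,
                         std::unique_ptr<Node> r) {
    return std::unique_ptr<Node>(new Node{v, std::move(l), std::move(r)});
}

int main() {
    auto expr = op("+", leaf("b"), op("*", leaf("c"), leaf("d")));  // b + c * d
    std::cout << "a = " << emit(*expr) << "\n";  // r1 = c * d;  r2 = b + r1;  a = r2
    return 0;
}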

A three-address code has at most three address locations to calculate the expression. A three-
address code can be represented in two forms: quadruples and triples.

Quadruples

Each instruction in the quadruples representation is divided into four fields: op, arg1, arg2, and
result. The above example is represented below in quadruples format:

Op    arg1    arg2    result
*     c       d       r1
+     b       r1      r2
=     r2              a

Triples:

Each instruction in the triples representation has three fields: op, arg1, and arg2. The result of a
sub-expression is denoted by the position of that expression. Triples are similar in structure to a
DAG or syntax tree; they are equivalent to a DAG when representing expressions.

      Op    arg1    arg2
(0)   *     c       d
(1)   +     b       (0)
(2)   =     a       (1)

Triples face the problem of code immovability during optimization, as results are positional:
changing the order or position of an expression may cause problems.
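
A sketch of the two record layouts in C++ (the field names follow the tables above; everything else is illustrative):

#include <iostream>
#include <string>
#include <vector>

struct Quadruple { std::string op, arg1, arg2, result; };
struct Triple    { std::string op, arg1, arg2; };  // the result is the instruction's own index

int main() {
    // a = b + c * d
    std::vector<Quadruple> quads = {
        {"*", "c",  "d",  "r1"},
        {"+", "b",  "r1", "r2"},
        {"=", "r2", "",   "a"}};
    std::vector<Triple> triples = {
        {"*", "c", "d"},     // (0)
        {"+", "b", "(0)"},   // (1) refers to triple 0 by position
        {"=", "a", "(1)"}};  // (2)
    for (const auto& q : quads)
        std::cout << q.op << "\t" << q.arg1 << "\t" << q.arg2 << "\t" << q.result << "\n";
    for (std::size_t i = 0; i < triples.size(); ++i)
        std::cout << "(" << i << ")\t" << triples[i].op << "\t"
                  << triples[i].arg1 << "\t" << triples[i].arg2 << "\n";
    return 0;
}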

Indirect Triples: This representation is an enhancement over the triples representation. It uses
pointers instead of positions to store results. This enables the optimizers to freely re-position a
sub-expression to produce optimized code.

5. Intermediate Code Optimization


The next phase does code optimization of the intermediate code. Optimization can be assumed as
something that removes unnecessary code lines, and arranges the sequence of statements in order
to speed up the program execution without wasting resources (CPU, memory).

6. Code Generation
In this phase, the code generator takes the optimized representation of the intermediate code and
maps it to the target machine language. The code generator translates the intermediate code into a
sequence of (generally) re-locatable machine code. This sequence of machine instructions
performs the same task as the intermediate code would.

Code Generator
A code generator is expected to have an understanding of the target machine's runtime
environment and its instruction set. The code generator should take the following things into
consideration to generate the code:
Target language : The code generator has to be aware of the nature of the target language
for which the code is to be transformed. That language may facilitate some machine-
specific instructions to help the compiler generate the code in a more convenient way. The
target machine can have either CISC or RISC processor architecture.
IR Type : Intermediate representation has various forms. It can be in Abstract Syntax Tree
(AST) structure, Reverse Polish Notation, or 3-address code.
Selection of instruction : The code generator takes Intermediate Representation as input
and converts (maps) it into the target machine's instruction set. One representation can have
many ways (instructions) to convert it, so it becomes the responsibility of the code
generator to choose the appropriate instructions wisely.
Register allocation : A program has a number of values to be maintained during the
execution. The target machine's architecture may not allow all of the values to be kept in
CPU registers or memory. The code generator decides what values to keep in the
Also, it decides the registers to be used to keep these values.
Ordering of instructions : Finally, the code generator decides the order in which the
instructions will be executed, and creates a schedule for their execution.
Descriptors
The code generator has to track both the registers (for availability) and addresses (location of
values) while generating the code. For both of them, the following two descriptors are used:
Register descriptor : Register descriptor is used to inform the code generator about the
availability of registers. Register descriptor keeps track of values stored in each register.
Whenever a new register is required during code generation, this descriptor is consulted for
register availability.
Address descriptor : Values of the names (identifiers) used in the program might be
stored at different locations while in execution. Address descriptors are used to keep track
of memory locations where the values of identifiers are stored. These locations may
include CPU registers, heaps, stacks, memory or a combination of the mentioned locations.

Symbol Table:
It is a data structure maintained throughout all the phases of a compiler. All the identifiers' names
along with their types are stored here. The symbol table makes it easier for the compiler to
quickly search the identifier record and retrieve it. The symbol table is also used for scope
management.

Implementation

If a compiler is to handle a small amount of data, then the symbol table can be implemented as an
unordered list, which is easy to code but suitable only for small tables. A symbol table
can be implemented in one of the following ways:

Linear (sorted or unsorted) list

Binary Search Tree

Hash table

Among all, symbol tables are mostly implemented as hash tables, where the source code symbol
itself is treated as a key for the hash function and the return value is the information about the
symbol.
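
A minimal sketch of such a hash-table symbol table in C++, assuming the stored record holds just a type and a scope depth:

#include <iostream>
#include <string>
#include <unordered_map>

struct SymbolInfo {
    std::string type;
    int scopeDepth;
};

int main() {
    // The symbol's name is the hash key; the mapped value is its record.
    std::unordered_map<std::string, SymbolInfo> symtab;
    symtab["value"] = {"int", 0};      // inserted while processing a declaration
    symtab["name"]  = {"string", 1};

    auto it = symtab.find("value");    // O(1) average lookup by name
    if (it != symtab.end())
        std::cout << "value: " << it->second.type
                  << " (scope " << it->second.scopeDepth << ")\n";
    else
        std::cout << "undeclared identifier\n";
    return 0;
}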

Error Recovery

A parser should be able to detect and report any error in the program. It is expected that when an
error is encountered, the parser should be able to handle it and carry on parsing the rest of the
input. The parser is mostly expected to check for errors, but errors may be encountered at
various stages of the compilation process. A program may have the following kinds of errors at
various stages:

Lexical : name of some identifier typed incorrectly

Syntactical : missing semicolon or unbalanced parenthesis

Semantical : incompatible value assignment

Logical : code not reachable, infinite loop

There are four common error-recovery strategies that can be implemented in the parser to deal with
errors in the code.

Panic mode

When a parser encounters an error anywhere in the statement, it ignores the rest of the statement
by not processing input from the erroneous token up to a delimiter, such as a semicolon. This is
the easiest way of error recovery, and it also prevents the parser from developing infinite loops.
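
A minimal sketch of this strategy, assuming ";" is the synchronizing delimiter:

#include <iostream>
#include <string>
#include <vector>

// Panic mode: discard tokens up to the delimiter, then resume with the next statement.
std::size_t recover(const std::vector<std::string>& tokens, std::size_t pos) {
    while (pos < tokens.size() && tokens[pos] != ";")
        ++pos;                                   // skip the erroneous statement's tail
    return pos < tokens.size() ? pos + 1 : pos;  // resume just after the ";"
}

int main() {
    std::vector<std::string> tokens = {"int", "x", "@", "junk", ";", "x", "=", "1", ";"};
    std::size_t pos = 2;                         // suppose the parser failed at "@"
    pos = recover(tokens, pos);
    std::cout << "resuming at token: " << tokens[pos] << "\n";  // prints "x"
    return 0;
}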

Statement mode

When a parser encounters an error, it tries to take corrective measures so that the rest of the
statement allows the parser to parse ahead: for example, inserting a missing semicolon or
replacing a comma with a semicolon. Parser designers have to be careful here, because one wrong
correction may lead to an infinite loop.

Error productions

Some common errors that may occur in the code are known to the compiler designers. The
designers can augment the grammar with productions that generate the erroneous constructs, so
these errors are recognized when they are encountered.

Global correction

The parser considers the program in hand as a whole and tries to figure out what the program is
intended to do and tries to find out a closest match for it, which is error-free. When an erroneous
input (statement) X is fed, it creates a parse tree for some closest error-free statement Y. This may
allow the parser to make minimal changes in the source code, but due to the complexity (time and
space) of this strategy, it has not been implemented in practice yet.

VI. Compilation of C Programs:

Let us understand how a program, using C compiler, is executed on a host machine.

User writes a program in C language (high-level language).

The C compiler compiles the program and translates it to assembly program (low-level
language).

An assembler then translates the assembly program into machine code (object).

A linker tool is used to link all the parts of the program together for execution (executable
machine code).

A loader loads all of them into memory and then the program is executed.

Experiment #2

Objective: Case study of working of Virtual Machines.

Description: Study of working of Virtual Machines (This study is focused on Process Level
Virtual Machine i.e. Java Virtual Machine)

Step1: Define Virtual machine

Step2: Basic virtual machine model.

Step3: Virtual Machine Classifications

Step4: Java Virtual Machine (JVM) and working of JVM

I. Virtual Machine:
In computing, a virtual machine (VM) is an emulation of a computer system. Virtual machines are
based on computer architectures and provide functionality of a physical computer. Their
implementations may involve specialized hardware, software, or a combination.
There are different kinds of virtual machines, each with different functions:
System virtual machines (also termed full virtualization VMs) provide a substitute for a real
machine. They provide functionality needed to execute entire operating systems. A hypervisor uses
native execution to share and manage hardware, allowing for multiple environments which are
isolated from one another, yet exist on the same physical machine. Modern hypervisors
use hardware-assisted virtualization, virtualization-specific hardware, primarily from the host
CPUs.
Process virtual machines are designed to execute computer programs in a platform-independent
environment.

II. Basic virtual machine model:

A virtual machine provides a fully protected and isolated replica of the underlying physical system.
It takes a layered approach to achieve this goal. We need a new layer above the original bare
systems to abstract the physical resources and provide interface to operating systems running on it.
This layer is called the Virtual Machine Monitor (VMM). The VMM is the essential part of the
virtual machine implementation, because it performs the translation between the bare hardware and
virtualized underlying platform: providing virtual processors, memory, and virtualized I/O devices.
Since all the virtual machines share the same bare hardware, the Virtual Machine Monitor should
also provide appropriate protection so that each virtual machine is an isolated replica.

The basic virtual machine model should be like Figure 1, where the virtual machine monitor sits
between the bare system hardware and the operating systems. System virtual machines are our
main focus and are also the real virtual machines commonly recognized when the term virtual
machine is used.

III. Virtual Machine Classifications:

Virtual machines are generally divided into two categories: one is system level virtual machines
and the other is process level virtual machines.

1. System Level Virtual Machines:

System virtual machines are the real virtual machines commonly recognized when the term
virtual machine is used. These are further classified as follows:

1.1 Classic Virtual Machines:


Classic virtual machines are the original model for system virtual machines. Like Figure 1, here
the VMM sits directly on bare hardware and provides hardware replica and resource management.
In most cases, this model will bring efficiency, but the VMM has to handle all the device drivers
and users have to install the VMM and guest OS after wiping the existing system clean.

1.2 Hosted Virtual Machines:


Hosted virtual machines, as the name infers, build a VMM on top of an existing host OS. So it's
convenient for users to install the VMM, which is just like the installation of an application
program, because the VMM doesn't run in the privileged mode. Also, the VMM can use the
facilities provided by the host OS, for example, device drivers. But this kind of virtual machine
implementation is in most cases less efficient than the classic virtual machines because of the
extra software layer.
In both the classic virtual machines and the hosted virtual machines, the ISA of the guest OS is the
same as the underlying hardware.

1.3 Whole system Virtual Machines:
Sometimes we need to run operating systems and applications on a different ISA. In these cases,
because of the different ISA, complete emulation and translation of the whole OS and application
are required, so it's called a whole system virtual machine. Usually the VMM stays on top of a
host OS running on the underlying ISA hardware.

2. Process Level Virtual Machines


Actually, most of the process level virtual machines mentioned below are not commonly known
as virtual machines. But they all have the defining property of virtual machines: they provide
virtual layers to the modules above.

1. Multiprogramming:
Multiprogramming is a standard feature in modern operating systems. The Operating system
provides a replicated ABI to each process and each process thinks it owns the whole machine. So
actually the concurrently executing applications are running on process level virtual machines. In
this type of virtual machine, the guest and host systems are in the same ISA and same OS.

2. Emulation:
The second type of process level virtual machines are to run program binaries compiled to a source
ISA, while the underlying hardware is a different ISA. The virtual machine needs to emulate the
execution of the source ISA, and the simplest way is interpretation, i.e., the VMM interprets every
source instruction by executing several native ISA instructions. Clearly this method has poor
performance. So binary translation is more commonly used, which converts
source instructions to native instructions with equivalent functions, and after the block of
instructions is translated, it can be cached and reused repeatedly. We can see that interpretation
has minimal start-up cost but a huge overhead for emulating each instruction, while binary
translation, on the contrary, has a bigger initial overhead but is fast in executing each instruction.

3. Dynamic Optimizers:
In the above type of virtual machines, the source and target ISA are different, so the purpose of
virtual machine is to emulate execution. Some virtual machines have the same source ISA and
target ISA, and the goal of the VM is to perform some optimizations in the process. The
implementations of the virtual machines for dynamic optimizers are very similar to those for
emulation.

4. High Level Virtual Machines:


The last type of process level virtual machine is the most commonly recognized one, partly due to
the popularity of Java. The purpose of the previous three virtual machines, except for dynamic
optimizer, is to improve cross platform portability. But their approaches need great effort for every
ISA, so a better way is to move the virtualization to a higher level: bring a process level virtual
machine to the high level language design. Two good examples for this
type of virtual machines are Pascal and Java. In a conventional system, HLL programs are
compiled to abstract intermediate codes, from which a code generator produces object code for a
specific ISA/OS. But in Pascal/Java, the code to be distributed is not the object code but the
intermediate code: P-code for Pascal and byte-code for Java. On every ISA/OS, there's a virtual
machine to interpret the intermediate codes into platform-specific host instructions. So this type
of process virtual machine provides the maximal platform independence.

From all the classification above, we can see that, based on the level of the virtual machine (ABI
or ISA) and on whether the host and guest VM use the same ISA, we can get an overall taxonomy
like Figure 3.

Figure 3: A taxonomy of virtual machine architecture

ISA and ABI


Since a virtual machine is a layer which abstracts all the layers below it and provides an interface
to the layer above it, the level at which the virtual machine does the abstraction is a good criterion
for classifying virtual machines.

There are two perspectives of what a machine is. One is that of a process, and the other is that of
the whole system. From the perspective of a process, the machine is the assigned memory address
space, instructions and user-level registers. A process doesn't have direct access to disk or other
secondary storage and I/O resources; it can access the I/O resources only through system calls.
The entire system, in contrast, provides a full environment that can support multiple processes
simultaneously, and allocates physical memory and I/O resources to the processes. Also, operating
system, as a part of the system, handles how the processes interact with their resources. So based
on the abstraction level, we have process virtual machines and system virtual machines. As the
names infer, a process virtual machine can support an individual process, while a system virtual
machine supports a complete operating system and its environment. To better understand these two
types of virtual machines, we need to know about two standardized interfaces: ISA and Application
Binary Interface (ABI).

We've talked about the ISA: it is the part of the processor that is visible to the programmer or
compiler writer, and it includes both user and system instructions. The ABI includes the whole set
of user instructions and the system call interfaces by which applications can access the hardware
resources. In other words, the ABI separates processes from the rest of the whole system, and the
ISA separates the hardware from the rest. Given the definitions of ISA and ABI, we can say that
process level virtual machines provide an ABI to applications, and system level virtual machines
provide an ISA to the operating system and applications running on it. Based on whether they
support an ABI or an ISA, and whether the host and guest systems are the same ISA, we can
classify virtual machines into different types.

IV. Java Virtual Machine (JVM) and Its Working:

JVM (Java Virtual Machine) is an abstract machine. It is a specification that provides runtime
environment in which java bytecode can be executed. JVMs are available for many hardware and
software platforms (i.e. JVM is platform dependent).

JVM is:

1. A specification, where the working of the Java Virtual Machine is specified. The
implementation provider is free to choose its own algorithms. Its implementation has been
provided by Sun and other companies.

2. An implementation: its implementation is known as the JRE (Java Runtime Environment).

3. A runtime instance: whenever you write the java command on the command prompt to run a
java class, an instance of the JVM is created.

The JVM performs following operation:

o Loads code

o Verifies code

o Executes code

o Provides runtime environment

JVM provides definitions for the:

o Memory area

o Class file format

o Register set

o Garbage-collected heap

o Fatal error reporting etc.

Internal Architecture of JVM

Let's understand the internal architecture of JVM. It contains classloader, memory area, execution
engine etc.

1) Classloader: Classloader is a subsystem of the JVM that is used to load class files.


2) Class(Method) Area: Class(Method) Area stores per-class structures such as the runtime
constant pool, field and method data, the code for methods.
3) Heap: It is the runtime data area in which objects are allocated.
4) Stack: Java Stack stores frames. It holds local variables and partial results, and plays a part in
method invocation and return.
Each thread has a private JVM stack, created at the same time as the thread.
A new frame is created each time a method is invoked; a frame is destroyed when its method
invocation completes.

5) Program Counter Register: The PC (program counter) register contains the address of the Java
virtual machine instruction currently being executed.

6) Native Method Stack: It contains all the native methods used in the application.

7) Execution Engine: It contains:


1) A virtual processor
2) Interpreter: Reads the bytecode stream, then executes the instructions.
3) Just-In-Time (JIT) compiler: It is used to improve performance. JIT compiles parts of
the byte code that have similar functionality at the same time, and hence reduces the amount of
time needed for compilation. Here the term "compiler" refers to a translator from the instruction
set of a Java virtual machine (JVM) to the instruction set of a specific CPU.

Comparing JVM with JRE and JDK


Understanding the difference between JDK, JRE and JVM is important in Java. We have given a
brief overview of the JVM here.

Firstly, let's see the basic differences between the JDK, JRE and JVM:

JVM: JVM (Java Virtual Machine) is an abstract machine. It is a specification that provides
runtime environment in which java bytecode can be executed.

JVMs are available for many hardware and software platforms. JVM, JRE and JDK are platform
dependent because configuration of each OS differs. But, Java is platform independent.

The JVM performs following main tasks:

o Loads code

o Verifies code

o Executes code

o Provides runtime environment

JRE: JRE is an acronym for Java Runtime Environment. It is used to provide a runtime
environment. It is the implementation of the JVM. It physically exists. It contains a set of
libraries and other files that the JVM uses at runtime. Implementations of the JVM are also
actively released by companies other than Sun Microsystems.

JDK: JDK is an acronym for Java Development Kit. It physically exists. It contains the JRE plus
development tools.

Experiment #3

Objective: Write a program in C++ to implement Encapsulation.


Description:

Step1: Include required header/library files

Step2. Create Employee class to bind data members and functions. Use data hiding concepts.
Create functions to store employee information like name and age. Also use some functions to
display employee details.

Step3: Create main function. Use some local variables and take input from user for employee
details like name and age (at least two employee details).

Step4: Create objects of employee class inside the main function and call member functions of
employee class to set employee details.

Step5: Call some display function of employee class with the help of object to display employee
details.

Program Code:

#include<iostream>
#include<string>
using namespace std;
class employee
{
private:
int age;
string name;
public:
void setdata(string n, int a)
{
name = n;
age = a;

}
void display()
{
cout<<"Employee name: "<<name<<"\tAge: "<<age<<"\n";
}
};
int main()
{
string name1, name2;
int age1, age2;
cout<<"******* Enter Employee details ********\n";
cout<<"\nFirst employee name: ";
cin>>name1;
cout<<"\nFirst employee age: ";
cin>>age1;
cout<<"\nSecond employee name: ";
cin>>name2;
cout<<"\nSecond employee age: ";
cin>>age2;
employee e1,e2;
e1.setdata(name1, age1);
e2.setdata(name2, age2);
cout<<"\n\n.............. Display Employee Details ..................\n";
e1.display();
e2.display();
cout<<"\n......................................................";
return(0);
}

Output of Program #3

Experiment #4

Objective: Write a program in Java to implement Encapsulation.

Description:
Step1: import java.util.Scanner class

Step2. Create Employee class to bind data and methods. Use data hiding concepts through access
modifier. Create methods to store employee information like name and age. Also use some
method to display employee details.

Step3: Create another class called MainClass which contain main method. Use some local
variables and take input from user for employee details like name and age (at least two employee
details).

Step4: Create objects of employee class inside the main method and call method of employee class
to set employee details.

Step5: Call display method of employee class with the help of object to display employee details.

Program Code:
import java.util.Scanner;

class employee
{
private int age;
private String name;
void setdata(String n, int a)
{
name = n;
age = a;
}
void display()
{
System.out.println("Employee name: "+name+"\tAge: "+age+"\n");
}
}
class MainClass
{
public static void main(String args[])
{
employee e1 = new employee();
employee e2 = new employee();
Scanner sc = new Scanner(System.in);
System.out.println(".............. Enter Employee Details ..................");
System.out.print("Enter First Employee Name:");
String name1 = sc.nextLine();
System.out.print("Enter Second Employee Name:");
String name2 = sc.nextLine();
System.out.print("\nEnter First Employee Age:");
int age1 = sc.nextInt();
e1.setdata(name1, age1);
System.out.print("\nEnter Second Employee Age:");
int age2 = sc.nextInt();
e2.setdata(name2, age2);
System.out.println("\n.............. Employee Details ..................\n");
e1.display();
e2.display();
System.out.println("...................................................");
sc.close();
}
}

Output of Program #4

Experiment #5

Objective: Write a program in C# to implement Encapsulation.


Description:

Step1: Create Employee class to bind data and methods. Use data hiding concepts through
access modifier. Create methods to store employee information like name and age. Also use
some method to display employee details.

Step2: Create another class called MainClass which contain main method. Use some local
variables and take input from user for employee details like name and age (at least two
employee details).

Step3: Create objects of employee class inside the main method and call method of employee
class to set employee details.

Step4: Call display method of employee class with the help of object to display employee
details.

Program Code:

using System;
class Employee
{
private int age;
private String name;
public void setdata(String n, int a)
{
name = n;
age = a;
}
public void display()
{
Console.WriteLine("Employee name: "+name+"\tAge: "+age+"\n");
}
}
class MainClass
{
static void Main(string[] args)
{
Employee e1 = new Employee();
Employee e2 = new Employee();
Console.WriteLine(".............. Enter Employee Details ..................");
Console.WriteLine("Enter First Employee Name: ");
String name1 = Console.ReadLine();
Console.WriteLine("Enter First Employee Age: ");
int age1 = int.Parse(Console.ReadLine());   // Console.Read() would return a character code, not a number
Console.WriteLine("Enter Second Employee Name: ");
String name2 = Console.ReadLine();
Console.WriteLine("Enter Second Employee Age: ");
int age2 = int.Parse(Console.ReadLine());
e1.setdata(name1, age1);
e2.setdata(name2, age2);
Console.WriteLine(".............. Employee Details ..................");
e1.display();
e2.display();
Console.WriteLine("..........................................................");
Console.Read();
}
}

Output of Program #5

Experiment #6

Objective: Write a program in C++ to implement Inheritance.

Description:
Step1: include header files

Step2: Create a base class called Person having private data like name and age.

Step3: Create a constructor of Person class to initialize state of object called Person.

Step4: Create display method to display Person's information.

Step5: Create a child/derived class called Employee

Step6: Create constructor of Employee to initialize object state and also use some mechanism to
call super/base class constructor to initialize some instance variables.

Step7: Also create some display method to display Employee details and also call base class
methods from derived class.

Step8: Create a new class which contains the main method; create an object of the derived class
and call methods to store and display employee details.

Program Code:

#include<iostream>
#include<string>

using namespace std;

class person
{
private:
int age;
string name;
public:
void setdata(string n, int a)
{
name = n;
age = a;
}
void display()
{
cout<<"\n\nEmployee name: "<<name<<"\tAge: "<<age;
}
};
class employee:public person
{
private:
string empID;
public:
void setempID(string id)
{
empID = id;
}
void displayempID()
{
cout<<"\nEmployee ID: "<<empID<<"\n";
}
};
int main()
{
employee e1, e2;
e1.setdata("Mr. R Pandey",40);
e1.setempID("UIT-CSE001");
e2.setdata("Mr. P Yadav", 33);
e2.setempID("UIT-CSE002");

cout<<"\n.............. Employee Details ..................\n";


e1.display();
e1.displayempID();
e2.display();
e2.displayempID();
cout<<"\n.......................................................";
return(0);
}

Output of Program #6

Experiment #7

Objective: Write a program in Java to implement Inheritance.


Description:
Step1: Create a base class called Person having private data like name and age.

Step2: Create a constructor of Person class to initialize state of object called Person.

Step3: Create display method to display Person's information.

Step4: Create a child/derived class called Employee

Step5: Create constructor of Employee to initialize object state and also use some mechanism to
call super/base class constructor to initialize some instance variables.

Step6: Also create some display method to display Employee details and also call base class
methods from derived class.

Step7: Create a new class which contains the main method; create an object of the derived class
and call methods to store and display employee details.

Program Code:
class Person
{
private int age;
private String name;
Person(String n, int a)
{
name = n;
age = a;
}
void displayPersonDetails()
{
System.out.println("Name: "+name);
System.out.println("Age: "+age);
}
}
class Employee extends Person
{
int emp_no;
float salary;
Employee(int e_no, float e_salary, int a, String nm)
{
super(nm, a);
emp_no = e_no;
salary = e_salary;
}
void displayEmployeeDetails()
{
displayPersonDetails();
System.out.println("Employee no: "+emp_no);
System.out.println("Employee Salary: "+salary);
}
}
class Inheritance
{
public static void main(String args[])
{
Employee e = new Employee(101, 15000, 26, "Mr. Adom");
e.displayEmployeeDetails();
}
}

Output of Program #7

Experiment #8

Objective: Write a program in C# to implement Inheritance.


Description:
Step1: Create a base class called Person having private data like name and age.

Step2: Create a constructor of Person class to initialize state of object called Person.

Step3: Create display method to display Person's information.

Step4: Create a child/derived class called Employee

Step5: Create constructor of Employee to initialize object state and also use some mechanism to
call super/base class constructor to initialize some instance variables.

Step6: Also create some display method to display Employee details and also call base class
methods from derived class.

Step7: Create a new class which contains the main method; create an object of the derived class
and call methods to store and display employee details.

Program Code:
using System;
namespace ConsoleApplication3
{

class Base
{
public void show()
{
Console.WriteLine(" This is base class show method");
}
}
class Derived : Base
{
public void show1()
{
Console.WriteLine(" This is derived class method");
}
}
class Program

{
static void Main(string[] args)
{
Derived D1=new Derived();
Console.WriteLine("********* Inheritance Application *********\n");
Console.WriteLine(" inherited Show method call on derived class object\n ");
D1.show();
D1.show1();
Console.ReadLine();
}
}
}

Output of Program #8

Experiment #9

Objective: Write a program in C++ to implement Dynamic Polymorphism.

Description:
Step1: Create a base class called Person having data members like name and age.

Step2: Create a constructor of Person class to initialize state of object called Person.

Step3: Create display method to display Person's information.

Step4: Create a child/derived class called Employee.

Step5: Create constructor of Employee to initialize inherited data members.

Step6: In Derived class override display method of base class Person.

Step7: Create a new class which contains the main method, and create an object of the derived
class whose reference is stored in a base class pointer variable. Also create a base class object.

Step8: Now call the overridden method on the base class pointer which stores the address of the
derived class object, and on the base class pointer which refers to the base class object itself.

Program Code:
#include<iostream>
#include<string>
using namespace std;

class Person
{
public:
string name;
int age;
Person()
{
name = "Mr.R Pandey";
age = 40;
}
virtual void displayInfo()
{
cout<<"\nBaseclass dispalyInfo method is call \n";

44
cout<<"Name: "<<name<<"\tAge: "<<age;

}
};

class Employee: public Person


{
public:
Employee()
{
name = "Mr. Yadav";
age = 33;
}
void displayInfo()
{
cout<<"\nDerived class dispalyInfo method is call \n";
cout<<"Name: "<<name<<"\tAge: "<<age;
}
};

int main(void)
{
Person *bp = new Employee();
bp->displayInfo(); // RUN-TIME POLYMORPHISM
Person *bp1 = new Person();
bp1->displayInfo();
return 0;
}

Output of Program #9
Experiment #10
Objective: Write a program in Java to implement Dynamic Polymorphism.

Description:
Step1: Create a base class called Person having data members like name and age.

Step2: Create a constructor of Person class to initialize state of object called Person.

Step3: Create display method to display Person's information.

Step4: Create a child/derived class called Employee.

Step5: Create constructor of Employee to initialize inherited data members.

Step6: In Derived class override display method of base class Person.

Step7: Create a new class which contains the main method, and create an object of the derived
class whose reference is stored in a base class pointer variable. Also create a base class object.

Step8: Now call the overridden method on the base class pointer which stores the address of the
derived class object, and on the base class pointer which refers to the base class object itself.

Program Code:
class Person
{
int age;
String name;
Person(String n, int a)
{
name = n;
age = a;
}
void displayDetails()
{
System.out.println("Base class method is call");
System.out.println("Name: "+name);
System.out.println("Age: "+age);
}
}
class Employee extends Person

{
Employee(String n, int a)
{
super(n, a);
}
void displayDetails()
{
System.out.println("Derived class method is call");
System.out.println("Name: "+name);
System.out.println("age: "+age);
}
}
class DynamicPolymorphism
{
public static void main(String args[])
{
Person p = new Employee("Mr. Yadav", 32);
System.out.println("***********Display Employee Details **************");
Person p1 = new Person("Mr. Pandey", 35);
p.displayDetails();
p1.displayDetails();
}
}

Output of Program #10

Experiment #11

Objective: Write a program in C# to implement Dynamic Polymorphism.

Description:
Step1: Create a base class called Person having data members like name and age.

Step2: Create a constructor of Person class to initialize state of object called Person.

Step3: Create a virtual method to display Person's information.

Step4: Create a child/derived class called Employee.

Step5: Create constructor of Employee to initialize data members.

Step6: In Derived class override display method of base class Person.

Step7: Create a new class which contains the main method, and create an object of the derived
class whose reference is stored in a base class variable of type Person. Also create a base class
object.

Step8: Now call the overridden method on the base class variable of type Person which stores the
address of the derived class object, and call the base class method using the base class variable
which refers to the base class object itself.

Program Code:
using System;

namespace ConsoleApplication1
{
public class Person
{
public int age;
public String name;
public Person() { }
public Person(String n, int a)
{
name = n;
age = a;
}
public virtual void displayDetails()
{
Console.WriteLine("Name: "+name);
Console.WriteLine("Age: " + age);
}

}

class Employee : Person


{

public Employee(String s, int a)


{
name = s;
age = a;
}
public override void displayDetails()
{
Console.WriteLine("Employee Name: "+ name);
Console.WriteLine("Employee Age: " + age);
}
}

class Program
{
static void Main(string[] args)
{
Person p = new Employee("Mr. Yadav", 33);
Console.WriteLine("********** Dynamic Polymorphism Example **************");
Console.WriteLine("************* Employee Details *******************");
Console.WriteLine("*\nFirst Employee Details*\n");
p.displayDetails();
Person p1 = new Employee("Mr. Pandey", 35);
Console.WriteLine("*\nSecond Employee Details*\n");
p1.displayDetails();
Console.WriteLine("*************************************************");

Console.ReadKey();

}
}
}

Output of Program #11

Experiment #12

Objective: Write a program in C++ to handle Exceptions.

Description:
Step1: Create a division method which returns the result.

Step2: In the division method, write exception handling code, i.e. write the code in which an
exception may occur inside a try block.

Step3: Catch the exception in a catch block and use the what() method to display exception details.

Step4: Create main method and take input from user.

Step5: Call the division method, passing two parameters.

Step6: If the second number is 0, an exception occurs.

Program Code:
#include<iostream>
#include<stdexcept>
using namespace std;
int division(int a, int b)
{
// Integer division by zero is undefined behaviour in C++ and does not
// throw on its own, so we throw an exception explicitly when b is 0.
if(b == 0)
throw runtime_error("attempt to divide by zero");
return(a/b);
}
int main()
{
int num1, num2, result = 0; // initialized so the final print is defined even if an exception occurred
cout<<"******* Enter Two Numbers ********\n";
cout<<"\nEnter First Number: ";
cin>>num1;

cout<<"\nEnter Second Number: ";


cin>>num2;
try
{
result = division(num1, num2);
}

catch(exception& e)
{
cout<<"Exception is Caught: Division by 0: ";
cout<<e.what();
}
cout<<"\n\n.............. Display Result ..................\n";
cout<<"Result: "<<result;
cout<<"\n......................................................";
return(0);
}

Output of Program #12

If an Exception Occurs

Experiment #13

Objective: Write a program in Java to handle Exceptions.

Description: Handle ArithmeticException (Case: Division by Zero)


Step1: Create a class having a division method and a variable to store the result.

Step2: In the division method, write exception handling code, i.e. write the code in which an
exception may occur inside a try block.

Step3: catch the exception in catch block.

Step4: Write Resource closing code inside finally block.

Step5: Create main method and take input from user.

Step6: create an object of class and call division method on that object and pass two variables.

Step7: If the second number is 0, an exception occurs.

Program Code:

import java.util.Scanner;
class ExceptionHandling
{
int result;
public void division(int num1, int num2)
{
try
{
result = num1/num2;
}
catch (ArithmeticException e)
{
System.out.println("Exception is caught: "+e);
}
finally
{
System.out.println("Result: "+result);
}

}

public static void main(String args[])


{
int num1, num2;
Scanner sc = new Scanner(System.in);
System.out.println(" Enter two Numbers: ");
System.out.print(" Enter First Number: ");
num1 = sc.nextInt();
System.out.print(" \nEnter Second Number: ");
num2 = sc.nextInt();
System.out.print("\n\nResult of Division: ");
ExceptionHandling eh = new ExceptionHandling();
eh.division(num1, num2);

}
}

Output of Program #13

Experiment #14

Objective: Write a program in C# to handle Exceptions.

Description: Handle DivideByZeroException


Step1: Create a class having a division method and a variable to store the result.

Step2: In the division method, write exception handling code, i.e. write the code in which an
exception may occur inside a try block.

Step3: catch the exception in catch block.

Step4: Write Resource closing code inside finally block.

Step5: Create main method and take input from user.

Step6: create an object of class and call division method on that object and pass two variables.

Step7: If the second number is 0, an exception occurs.

Program Code:
using System;

namespace ConsoleApplication2
{
class Program
{
int result;
Program()
{
result = 0;
}
public void division(int num1, int num2)
{
try
{
result = num1/num2;
}
catch (DivideByZeroException e)
{
Console.WriteLine("Exception is caught: {0}", e);
}
finally
{

Console.WriteLine("Result: {0}", result);
}
}
static void Main(string[] args)
{
int n1, n2;
Program p = new Program();
Console.WriteLine("\n*** Exception Handling Program ***");
Console.WriteLine("Enter Two Numbers: \n");
Console.Write("Enter First Number: ");

n1 = int.Parse(Console.ReadLine());
Console.Write("\nEnter Second Number: ");
n2 = int.Parse(Console.ReadLine());

Console.Write("\nResult of Division: ");


p.division(n1, n2);
Console.ReadKey();
}
}
}

Output of Program #14
