
1. EMBEDDED SYSTEMS

1.1 What are embedded systems?

• An embedded system is a system which takes some inputs, processes them based on the software written for it, and outputs the corresponding results. It is part of a larger system.

• It is a system that has computer hardware with software embedded in it as one of its most important components.

Figure 1.1 Embedded system

• Embedded systems are computer systems that monitor, respond to, or control an external environment.

• The environment is connected to the system through sensors, actuators and other I/O interfaces.

• Embedded systems must meet timing and other constraints imposed on them by the environment.

• “Any sort of device which includes a programmable computer, but itself is not intended to be a general-purpose computer” (Wayne Wolf).

• It does not need secondary memory as general-purpose computers do.

• It has one single dedicated operation.

1.2 Embedded System Composition

Figure 1.2 Embedded System Composition

1.3 Classification of Embedded Systems

• Embedded systems are classified into 3 types:

1.3.1 Small scale (8 to 16 bit)

Little software and hardware complexity on board.

Usually battery operated; an editor, a compiler specific to the processor, and an assembler are the basic tools.

Less software, limiting power dissipation.

1.3.2 Medium scale (16 to 32 bit)

Contains RISC processors with greater hardware and software complexity.

Tools additionally include debuggers, simulators and IDEs.

1.3.3 Sophisticated embedded systems

More software and hardware complexity than medium scale, as they contain scalable processors, and they have processing speed constraints.

1.4 Embedded System Components

• The hardware for an embedded system can come in many styles and flavours. It can be as complex as a Linux system with 64 megabytes of memory, a disc, specialty input/output devices, and Internet or other communications capability. It can be as simple as a single microprocessor with the requisite input/output devices integrated on the microprocessor integrated circuit.

• Microprocessors (µP)

– The “engine” that processes instructions

– These instructions are stored in Read Only Memory (ROM)

– No real time capability.


• Busses (data, address, Input/Output)

• System clock - steps µP through each instruction

• Read Only Memory (ROM):

– Permanently loaded with instructions (firmware): the Basic Input Output System (BIOS), a primitive operating system that allows chips to interface with each other (also known as a microkernel), and application code (provides functionality)

• Random Access Memory (RAM)

– Stores data in processing (e.g. temperature readings)

– Shares data with external partners (e.g. sensors)

• Real Time Clock (RTC)

– Required for keeping long times and for operation with a Real Time Operating System (RTOS), which controls the coordinated operation of multiple systems.

• Communication Circuitry

– Ethernet Port, Printer Port

– Communication Port (RS232C, RS422, IEEE488)

1.5 Von-Neumann architecture

Fig.1.3 Von-Neumann architecture


1.6 Harvard Architecture

Fig.1.4 Harvard Architecture

1.7 Examples of embedded systems

Keyboard and other controllers for computers, CD players and consumer electronics; timing and control electronics in microwave ovens and coffee makers; controllers for vacuum cleaners and washing machines for sensing dirt loads; control of the dashboard, ignition, fuel injection, suspension stiffness and environmental temperature and noise in automobiles; parking meter controllers; elevator, environmental and security systems in buildings; internal operation of medical instrumentation such as infusion pumps and pulse oximeters; disc controllers for computer systems; control of data communications routers for wide-band communications; and many more. Aibo, the robotic dog of Sony Corp., incorporates emotion-detecting sensors, a voice recognition system, a CCD camera, and a distance measurement sensor. It has 18 joints that can generate different types of movements. LG Electronics has introduced refrigerators that can check e-mail, surf the Web, watch TV, and make videophone calls. Users have access to information like real-time grocery prices and stock prices. Carrier and IBM are developing air-conditioners that can be controlled over the Internet.
2. COMPONENTS OF A COMPUTER SYSTEM

Fig.2.1 Components of a computer system

2.1 Types of computer

• Microcomputers
- desktop, laptop, notebook and palmtop personal computers (PCs)
- used in businesses, schools/colleges and homes
- cost from a few hundred pounds to a few thousand

• Minicomputers

- often used as multi-user systems, with hundreds of workstations or terminals attached to a central minicomputer, e.g. EPOS
- cost from £10,000 to about £150,000

• Mainframe computers

- used by large organisations which may have thousands of terminals, often remote
- cost hundreds of thousands of pounds
• Supercomputers

- largest category of computer, used mostly by scientific & industrial research departments, NASA, the Weather Centre, stock exchanges
- cost millions of pounds

2.2 CPU

Controls the transmission of data from input devices to memory. Processes the data held in main memory. Controls the transmission of information from main memory to output devices. A microprocessor -- also known as a CPU or central processing unit -- is a complete computation engine that is fabricated on a single chip. Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers. A microprocessor can move data from one memory location to another. A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

Figure 2.2 CPU


2.2.1 Clock Speed

To synchronize the steps of the fetch-decode-execute cycle, all processors have an internal clock which generates regularly timed pulses. All activities of the fetch-decode-execute cycle must begin on a clock pulse.

2.2.2 Word Size

The number of bits that the CPU can process simultaneously. Normally groups of 8, 16, 32, 64 or 128 bits are processed as a unit during input, output and logic instructions. Word size is a major factor in determining the speed of a processor.

2.2.3 Bus Size

Buses are the lines along which data is transmitted. This data can be in the form of data and instructions as well as the addresses of the data and the instructions. The width of a data bus determines how many bits can be transmitted simultaneously, and the width of an address bus determines the maximum address which can be referenced. For 8 bits, 2^8 = 256 locations can be addressed, numbered 0 to 2^8 - 1 = 255. An address bus (that may be 8, 16 or 32 bits wide) sends an address to memory. A data bus (that may be 8, 16 or 32 bits wide) can send data to memory or receive data from memory. An RD (read) and WR (write) line tell the memory whether it wants to set or get the addressed location. A clock line lets a clock pulse sequence the processor. A reset line resets the program counter to zero (or wherever) and restarts execution.
2.2.4 Instruction Set

Instructions in machine language are in the form of binary codes, with each different
processor using different codes for the instruction set supported by its hardware. The
instruction set for a typical computer includes the following types of instructions:

• Data Transfer
• Arithmetic Operations
• Logical Operations
• Test and Branch Instructions

2.3 Memory

2.3.1 Main memory

Instructions and data are held in main memory, which is divided into millions of
individually-addressable storage units called bytes. One byte can hold one character, or
it can be used to hold a code representing, for example, a tiny part of a picture, a sound,
or part of a computer program instruction. The total number of bytes in main memory is
referred to as the computer’s memory size. Computer memory sizes are measured as
follows:

1 Kilobyte (KB) = 1000 bytes (to be exact, 1024 bytes)

1 Megabyte (MB) = 1,000,000 (1 million) bytes
1 Gigabyte (GB) = 1,000,000,000 (1 billion) bytes
1 Terabyte (TB) = 1,000,000,000,000 (1 trillion) bytes
2.3.2 Random Access Memory (RAM)

It is an ‘Ordinary’ memory. Used for storing programs which are currently running and
data which is being processed. This type of memory is volatile - it loses all its contents
as soon as the machine is switched off.

2.3.3 Read Only Memory (ROM)

It is a non-volatile memory with contents permanently etched into the memory chip at
the manufacturing stage. Used for example to hold the bootstrap loader, the program
which runs as soon as the computer is switched on and instructs it to load the operating
system from disk into memory.

2.3.4 Programmable read only memory (PROM)

Short for programmable read-only memory, a memory chip on which data can be written
only once. Once a program has been written onto a PROM, it remains there forever.
Unlike RAM, PROMs retain their contents when the computer is turned off. The
difference between a PROM and a ROM (read-only memory) is that a PROM is
manufactured as blank memory, whereas a ROM is programmed during the
manufacturing process. To write data onto a PROM chip, you need a special device
called a PROM programmer or PROM burner. The process of programming a PROM is
sometimes called burning the PROM. An EPROM (erasable programmable read-only
memory) is a special type of PROM that can be erased by exposing it to ultraviolet light.
Once it is erased, it can be reprogrammed. An EEPROM is similar to a PROM, but
requires only electricity to be erased.
2.3.5 Erasable programmable read only memory (EPROM)

An EPROM (Erasable Programmable Read Only Memory), pronounced ee-prom, is a special type of memory that can be programmed and erased, enabling it to be re-used. It retains its contents until it is exposed to ultraviolet light: erasure is accomplished using a UV light source that shines through a quartz erasing window in the EPROM package, and once erased, the device can be reprogrammed. To write to and erase an EPROM, you need a special device called a PROM programmer or PROM burner. An EPROM differs from a PROM in that a PROM can be written to only once and cannot be erased. EPROMs are used widely in personal computers because they enable the manufacturer to change the contents of the memory before the computer is actually shipped. This means that bugs can be removed and new versions installed shortly before delivery. An EEPROM, pronounced double-ee-prom or e-e-prom, is short for electrically erasable programmable read-only memory: a special type of PROM that can be erased by exposing it to an electrical charge. Like other types of PROM, EEPROM retains its contents even when the power is turned off. Also like other types of ROM, EEPROM is not as fast as RAM. EEPROM is similar to flash memory (sometimes called flash EEPROM). The principal difference is that EEPROM requires data to be written or erased one byte at a time, whereas flash memory allows data to be written or erased in blocks, which makes flash memory faster.
2.3.6 Flash EPROM

A flash EPROM is similar to an EEPROM except that flash EPROMs are erased all at once, while regular EEPROMs can erase one byte at a time. In-circuit writing and erasing is possible because no special voltages are required. To accomplish in-circuit operation, you have to write special application software routines. Flash EPROMs are a form of nonvolatile memory.

2.3.7 Cache Memory

It is a very fast memory used to improve the speed of a computer, doubling it in some cases. It acts as an intermediate store between the CPU and main memory, and stores the most frequently or recently used instructions and data for rapid retrieval. It is generally between 1 KB and 512 KB in size and is much more expensive than normal RAM.

Figure 2.3 Cache Memory


2.3.8 Virtual memory

Figure 2.4 Virtual Memory

2.3.9 Auxiliary Storage

• Hard disks

- all standalone PCs have an in-built hard disk
- typical capacity for a Pentium PC is >= 40 GB
- used for storing software including the operating system, other systems software, application programs and data

• Floppy disks

- thin sheet of mylar plastic in a hard 3½” casing
- capacity 1.44 MB

• CD-ROM

- holds about 700 MB

• Zip disks

- hold up to 250 MB
3. ‘C’ LANGUAGE

3.1 Machine Language through High-Level Languages and C

Compilers turn high-level languages into object code, and the linker changes the object code into machine code. In many IDEs the compile and link steps are done with one command. On the command line, it is normally a two-step process. High-level languages are usually only platform dependent after compiling, assuming you do not use specific OS or compiler functions. If you stick to the language standards, nearly every language is platform independent. If you write a standard C or C++ program, the only thing you have to do is recompile it for a different platform.

• The C and C++ compilation is done in 3-stages.

• Stage 1. The source files are processed by the pre-processor, whose job it is to
combine all the include files and source files into one file. It also strips all
comments and expands all defines and other macros.

• Stage 2. The compiler takes the output of the pre-processor to produce the *.obj file.

• Stage 3. The linker combines all the *.obj files in the project with *.lib libraries to
produce a *.exe program, or a DLL.
• How is a program converted into a form that a computer can use?

• What is the form of data that the computer uses?


– All data is converted into a set of binary codes, strings of 1s and 0s
– Binary codes are distinguished by the manner in which the computer uses
them.
– Machine language

• Machine language to High-level languages

• Machine language
o made up of binary-coded instructions used directly by the computer
o tedious and error prone
o programs were difficult to read and modify
• Assembly language
o Low-level programming language in which a mnemonic is used to
represent each of the machine language instructions for a particular
computer
Assembly Language Machine Language
ADD 100101
SUB 010011

• Assembly language
– Computer not able to process instructions directly hence a program,
written in machine language, was used to translate from assembly
language to machine language: Assembler
– Easier for humans to use than machine language however programmers
still forced to think in terms of individual machine instructions
• High-level languages
– Closer to English, and other natural languages, than assembly and
machine languages.
3.1.1 Language-Processing System

3.1.2 Memory model

• Whilst near pointers simplify memory access in the segmented memory of Intel
processors by allowing direct arithmetic on pointers, they limit accessible
memory to 64 kilobytes.
• Far pointers can be used to access multiple code segments, each 64K, up to 1
megabyte, by using segment and offset addressing.
• The cost is that the programmer cannot use the simple pointer arithmetic that is
possible with near pointers.
• The default model is Small, which is effective for the majority of applications.
• The Tiny model is specifically designed for the production of TSR (memory-
resident) programs, which must fit into one code segment and be compiled as
.COM rather than .EXE files.
• The remaining models are selectable in the compilation process, through the IDE
or in the make file.
• Careful use of alternative memory models is one of the hallmarks of thoughtful C programming.
3.2 C Data Types

All C variables are defined with a specific type. C has the following built-in types.

char - characters
int - integers (whole numbers)
float - single-precision real numbers
double - double-precision real numbers
void - valueless
3.4 Data Modifiers

• Data Modifiers enable you to have greater control over the data:
1. signed 2. unsigned 3. short 4. long

• The signed Modifier:


• Used to enable the sign bit.
• Used to indicate to the compiler that the int or char data type uses the sign
bit.
• All int variables in C are signed by default.
• The unsigned Modifier:
• used to tell the C compiler that no sign bit is needed in the specified data
type.
• Like the signed modifier, the unsigned modifier is meaningful only to the
int and char data types
• By default, ‘unsigned’ means ‘unsigned int’.
• The short Modifier:
• A data type can be modified to take less memory by using the short
modifier.
• By default, a short int data type is a signed number.
• %hd, %hi or %hu is used to specify that the corresponding number is a short int or unsigned short int.
• The long modifier:
• It is used to define a data type with increased storage space.
• The ANSI standard allows you to indicate that a constant has type long by
suffixing l or L to the constant.
• %ld specifies that the corresponding datum is a long int; %lu is then used for unsigned long int data.
3.5 Control Flow statements

• The if statement

• The if-else statement

• The switch statement

• The break statement

• The continue statement

• The goto statement

3.6 Call by Value and Call by Reference

• The arguments passed to function can be of two types namely

• 1. Values passed
2. Addresses passed

• The first type refers to call by value and the second type refers to call by
reference.
3.7 Array

• An array is a collection of variables that are of the same data type. Each item in
an array is called an element. All elements in an array are referenced by the
name of the array and are stored in a set of consecutive memory slots.

• The general form to declare an array:

data-type Array-Name[Array-Size];

eg. int array_int[8];

• In C, array subscripts start at 0 and end one less than the array size.

• In the above example, the subscripts of array_int start at 0 and end at 7.

• Indexing: all arrays in C are indexed starting at 0.

3.8 Pointers

• We can make a pointer that refers to the first element of an array by simply
assigning the array name to the pointer variable.

– A pointer is said to refer to an array when the address of the first element
in the array is assigned to the pointer. The address of the first element in
an array is also called the start address of the array.

– To assign the start address of an array to a pointer, you can either put the
combination of the address-of operator (&) and the first element name of
the array, or simply use the array name, on the right side of an assignment
operator (=).
3.9 Sorting Algorithm

One of the fundamental problems of computer science is ordering a list of items. There is a plethora of solutions to this problem, known as sorting algorithms. Some sorting algorithms are simple and intuitive, such as the bubble sort. Others, such as the quick sort, are extremely complicated but produce lightning-fast results. The common sorting algorithms can be divided into two classes by the complexity of their algorithms.

• Algorithmic complexity

• It is generally written in a form known as Big-O notation, where the O


represents the complexity of the algorithm and a value n represents the
size of the set the algorithm is run against.

• Relative Efficiency

• The run times on your system will almost certainly vary from these results,
but the relative speeds should be the same - the selection sort runs in
roughly half the time of the bubble sort.

There are four common sorting techniques:

1. Bubble sort

2. Insertion sort

3. Selection sort

4. Quick Sort
3.9.1 Bubble sort

The bubble sort is the oldest and simplest sort in use. Unfortunately, it is also the slowest. The bubble sort works by comparing each item in the list with the item next to it, and swapping them if required. The algorithm repeats this process until it makes a pass all the way through the list without swapping any items (in other words, all items are in the correct order). This causes larger values to "bubble" to the end of the list while smaller values "sink" towards the beginning of the list. The bubble sort is generally considered to be the most inefficient sorting algorithm in common usage.

3.9.2 Insertion Sort

The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures - the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place. Although it has the same complexity, the insertion sort is a little over twice as efficient as the bubble sort, and almost 40% faster than the selection sort. The insertion sort is a good middle-of-the-road choice for sorting lists of a few thousand items or less. The algorithm is significantly simpler than the shell sort, with only a small trade-off in efficiency. The insertion sort shouldn't be used for sorting lists larger than a couple of thousand items or for repetitive sorting of lists larger than a couple of hundred items.

3.9.3 Selection Sort

The selection sort works by selecting the smallest unsorted item remaining in the list, and then swapping it with the item in the next position to be filled. It is simple and easy to implement. It yields a 60% performance improvement over the bubble sort, but the insertion sort is over twice as fast as the bubble sort and is just as easy to implement as the selection sort. In short, there really isn't any reason to use the selection sort - use the insertion sort instead. If it is used at all, avoid sorting lists of more than about 1000 items.
3.9.4 Quick Sort

The quick sort is an in-place, divide-and-conquer, massively recursive sort. Put simply, it is essentially a faster in-place version of the merge sort. The quick sort algorithm is simple in theory, but very difficult to put into code (computer scientists tied themselves into knots for years trying to write a practical implementation of the algorithm, and it still has that effect on university students). The efficiency of the algorithm is strongly affected by which element is chosen as the pivot point. The worst-case efficiency of the quick sort occurs when the list is sorted and the left-most element is chosen. Randomly choosing a pivot point rather than using the left-most element is recommended if the data to be sorted isn't random.

• The recursive algorithm consists of four steps (which closely resemble the merge
sort):

• If there are one or fewer elements in the array to be sorted, return immediately.

• Pick an element in the array to serve as a "pivot" point. (Usually the left-
most element in the array is used.)

• Split the array into two parts - one with elements larger than the pivot and
the other with elements smaller than the pivot.

• Recursively repeat the algorithm for both halves of the original array.
4. 8051 MICROCONTROLLER

The AT89C51 is a low-power, high-performance CMOS 8-bit microcomputer with 4K bytes of Flash programmable and erasable read only memory (PEROM). The device is manufactured using Atmel’s high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set and pin out. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. The Atmel AT89C51 is a powerful microcomputer which provides a highly flexible and cost-effective solution to many embedded control applications. It also provides 32 I/O lines and 128 bytes of RAM for data storage.

A microprocessor by itself is completely useless: it has no RAM, ROM or I/O ports on the chip itself and must have external peripherals to interact with the outside world, which makes the system bulkier and more expensive. A microcontroller has a CPU along with a fixed amount of RAM, ROM, I/O ports and a timer on a single chip.

• There are four major 8-bit microcontrollers:

» Freescale 68HC11

» Intel 8051

» Zilog Z8

» Microchip PIC 16x

Each one has its own instruction set and register set, so they are not compatible with each other.
4.1 Features Of Microcontroller

• The 8051 is a 40-pin IC.

• The 8051 is an 8-bit microcontroller.

• 128 bytes of RAM.

• 4K bytes of inbuilt ROM.

• It has one serial port (a UART).

• Four parallel ports: P0, P1, P2 & P3.

• It has two 16-bit timers: Timer0 and Timer1.

• It has five interrupt sources (six interrupts including reset).

• It has four register banks (Bank0 to Bank3).

• 16-bit program counter.

• DPTR (16-bit data pointer).

• 8-bit stack pointer (SP).

• External code and data memory up to 64 KB each.

• 8-bit PSW (program status word).

• On-chip oscillator.

• 4K bytes of in-system reprogrammable Flash memory.

• Low-power Idle and Power-down modes.

• 4 V to 5.5 V operating range.


4.2 8051 Architecture
4.2.1 Pin Configuration of AT89C51

• VCC -Supply voltage.

• GND- Ground.

• RST -Reset input.

– A high on this pin for two machine cycles while the oscillator is running
resets the device.

• Port 0

– Port 0 is an 8-bit open drain bidirectional I/O port.

– When 1s are written to port 0 pins, the pins can be used as high
impedance inputs.

– Port 0 may also be configured to be the multiplexed low order


address/data bus during accesses to external program and data memory.

– Port 0 also receives the code bytes during Flash programming, and
outputs the code bytes during program verification.

– External pullups are required during program verification.

• Port 1

– Port 1 is an 8-bit bidirectional I/O port with internal pullups .

– When 1s are written to Port 1 pins they are pulled high by the internal
pullups and can be used as inputs.

– It also receives the low-order address bytes during Flash programming


and verification.
• Port 2

– Port 2 is an 8-bit bidirectional I/O port with internal pullups.

– When 1s are written to Port 2 pins they are pulled high by the internal
pullups and can be used as inputs.

– Port 2 emits the high-order address byte during fetches from external
program memory and during accesses to external data memory that use
16-bit addresses (MOVX @DPTR).

– During accesses to external data memory that use 8-bit addresses (MOVX
@ RI), Port 2 emits the contents of the P2 Special Function Register.

– Port 2 also receives the high-order address bits and some control signals
during Flash programming and verification.

• Port 3

– Port 3 is an 8-bit bidirectional I/O port with internal pullups.

– When 1s are written to Port 3 pins they are pulled high by the internal
pullups and can be used as inputs.

– Port 3 also receives some control signals for Flash programming and
verification.

• Port Pin Alternate Functions:
– P3.0 RXD (serial input port)
– P3.1 TXD (serial output port)
– P3.2 INT0 (external interrupt 0)
– P3.3 INT1 (external interrupt 1)
– P3.4 T0 (timer 0 external input)
– P3.5 T1 (timer 1 external input)
– P3.6 WR (external data memory write strobe)
– P3.7 RD (external data memory read strobe)

• ALE/PROG:
– Address Latch Enable is an output pulse for latching the low byte of the
address during accesses to external memory. This pin is also the program
pulse input (PROG) during Flash programming.
– In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator
frequency and may be used for external timing or clocking purposes. Note,
however, that one ALE pulse is skipped during each access to external
data memory. If desired, ALE operation can be disabled by setting bit 0 of
SFR location 8EH. With the bit set, ALE is active only during a MOVX or
MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the
ALE-disable bit has no effect if the Microcontroller is in external execution
mode.

• PSEN:
– Program Store Enable is the read strobe to external program memory.
When the AT89C52 is executing code from external program memory,
PSEN is activated twice each machine cycle, except that two PSEN
activations are skipped during each access to external data memory.
• EA/VPP:
– External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset. EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming when 12-volt programming is selected.

• XTAL1:
– Input to the inverting oscillator amplifier and input to the Internal clock
operating circuit.

• XTAL2:
– Output from the inverting oscillator amplifier.

4.2.2 Memory Organisation

• MCS-51 devices have a separate address space for program and data memory
up to 64k bytes each of external program and data memory can be addressed.

Program Memory:
• If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89C51RC, if EA is connected to VCC, program fetches to addresses 0000H through 7FFFH are directed to internal memory and fetches to higher addresses are directed to external memory.
Data Memory:

• The AT89C51RC has internal data memory that is mapped into four separate segments: the lower 128 bytes of RAM, the upper 128 bytes of RAM, the special function registers (SFRs) and 256 bytes of expanded RAM (ERAM).

The four segments are:

1. The lower 128bytes of RAM (addresses 00H to 7FH) are directly and indirectly
addressable.
2. The upper 128 bytes of RAM (addresses 80H to FFH) are indirectly addressable
only.
3. The special function registers, (SFR’s) (addresses 80H to FFH) are directly
addressable only.
4. 256 bytes expanded RAM (ERAM, 00H-FFH) is indirectly accessed by MOVX
instruction and with the EXTRAM bit cleared.
Either direct or indirect addressing can access the lower 128 bytes. The upper 128 bytes can be accessed by indirect addressing only. The upper 128 bytes occupy the same addresses as, but are physically separate from, the SFR space. When an instruction accesses an internal location above address 7FH, the CPU knows whether the access is to the upper 128 bytes of data RAM or to SFR space by the addressing mode used in the instruction. Instructions that use direct addressing access SFR space; for example, MOV 0A0H, #data.
• SFR (Special Function Register):

The on-chip memory area containing these registers is called the special function register (SFR) space. Note that not all addresses in the SFR space are occupied. Unoccupied addresses are not implemented on the chip: read accesses to these addresses will in general return random data, and write accesses will have no effect. The functions of the SFRs are described as follows:

• Accumulator:

ACC is the accumulator register. The mnemonics for accumulator-specific instructions, however, refer to the accumulator simply as "A".
• PSW (Program Status Word):
The PSW contains several status bits that reflect the current state of the CPU. The PSW resides in the SFR space.

• Stack Pointer:
This is 8 bits wide. It is incremented before data is stored during PUSH and CALL executions. While the stack may reside anywhere in on-chip RAM, the stack pointer is initialized to 07H after a reset. This causes the stack to begin at location 08H.

• Data Pointer (DPTR):

This consists of a high byte and a low byte. Its intended function is to hold a 16-bit address. It may be manipulated as a 16-bit register or as two independent 8-bit registers.

• Ports 0 to 3:
P0, P1, P2 and P3 are the SFRs of ports 0, 1, 2 & 3 respectively. Writing a one to a bit of a port SFR causes the corresponding port output pin to switch high; writing a zero switches it low. When used as an input, the external state of a pin is held in the port SFR.

• Serial Data Buffer:

The serial buffer is actually two separate registers: a transmit buffer and a receive buffer. When data is moved to SBUF, it goes to the transmit buffer and is held for serial transmission. When data is moved from SBUF, it comes from the receive buffer.

• Timer Registers:

Register pairs (TH0, TL0), (TH1, TL1), and (TH2, TL2) are the 16-bit counting registers
for timer/counters 0, 1, and 2, respectively.
• Control Registers for the 89C51:

The SFRs IP, IE, TMOD, TCON, SCON, and PCON contain control and status bits for
the interrupt system, the timer/counters, and the serial port.

4.3.3 Instruction Set:

The 89C52 instruction set has 111 instructions. It includes:

• Arithmetic
• Logical
• Data transfer
• Boolean and
• Branching instructions

4.3.4 Addressing Modes:

There are five addressing modes in the 89C52 instruction set, explained as follows:
• Direct addressing
• Indirect addressing
• Register addressing
• Immediate addressing
• Indexed addressing
5. REAL TIME OPERATING SYSTEM

Introduction
Timeliness is the single most important aspect of a real-time system. These systems
respond to a series of external inputs, which arrive in an unpredictable fashion. The
real-time system processes these inputs, takes appropriate decisions, and generates
the output necessary to control the peripherals connected to it. As defined by Donald
Gillies, "A real-time system is one in which the correctness of the computations not only
depends upon the logical correctness of the computation but also upon the time at
which the result is produced. If the timing constraints are not met, system failure is said
to have occurred."
It is essential that the timing constraints of the system are guaranteed to be met.
Guaranteeing timing behavior requires that the system be predictable.
The design of a real-time system must specify the timing requirements of the system
and ensure that the system performance is both correct and timely. There are three
types of time constraints:

 Hard: A late response is incorrect and implies a system failure. An example of
such a system is medical equipment monitoring the vital functions of a human
body, where a late response would be considered a failure.
 Soft: Timeliness requirements are defined by using an average response time. If
a single computation is late, it is not usually significant, although repeated late
computations can result in system failures. An example of such a system is an
airline reservation system.
 Firm: This is a combination of both hard and soft timeliness requirements. The
computation has a shorter soft requirement and a longer hard requirement. For
example, a patient ventilator must mechanically ventilate the patient a certain
amount in a given time period. A few seconds' delay in the initiation of a breath is
allowed, but not more than that.
One needs to distinguish between on-line systems such as an airline reservation
system, which operates in real time but with much less severe timeliness constraints
than, say, a missile control system or a telephone switch. An interactive system with
fast response times is not necessarily a real-time system. Such systems are often
referred to as soft real-time systems. In a soft real-time system (such as the airline
reservation system) late data is still good data. However, for hard real-time systems,
late data is bad data.
Most real-time systems interface with and control hardware directly. The software for
such systems is mostly custom-developed. Real-time applications can be either
embedded applications or non-embedded (desktop) applications. Real-time systems
often do not have the standard peripherals associated with a desktop computer, namely
the keyboard, mouse, or conventional display monitor. In most instances, real-time
systems have customized versions of these devices.

The following table compares some key features of real-time software systems
with other conventional software systems.

Feature           Sequential           Concurrent            Real Time
                  Programming          Programming           Programming

Execution order   Predetermined        Multiple sequential   Usually composed
                                       programs executing    of concurrent
                                       in parallel           programs

Numeric results   Independent of       Generally dependent   Dependent on
                  program execution    on program execution  program execution
                  speed                speed                 speed

Examples          Accounting,          UNIX operating        Air flight
                  payroll              system                controller
5.1 Real-time Programs: The Computational Model

A simple real-time program can be defined as a program P that receives an event from
a sensor every T units of time and, in the worst case, requires C units of computation
time per event.

Assume that the processing of each event must always be completed before the arrival
of the next event (i.e., there is no buffering). Let the deadline for completing the
computation be D. If D < C, the deadline cannot be met. If T < D, the program must still
process each event in a time ≤ T if no events are to be lost. Thus the deadline is
effectively bounded by T, and we need to handle those cases where C ≤ D ≤ T.
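The feasibility condition above, C ≤ min(D, T), can be sketched as a small C check (the function name is ours, for illustration):

```c
#include <stdbool.h>

/* Feasibility check for the single-task model above: each event needs
 * C units of computation, arrives every T units, and must finish within
 * deadline D. The usable deadline is bounded by the period T, so the
 * task is feasible only when C <= min(D, T). */
bool event_feasible(double c, double d, double t)
{
    double effective = (d < t) ? d : t;  /* deadline bounded by period */
    return c <= effective;
}
```

For instance, C = 2, D = 5, T = 10 is feasible, while C = 6 with the same D and T is not, since 6 exceeds the effective deadline of 5.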

5.2 Design Issues of Real-Time Systems

Real-time systems are defined as those systems in which the correctness of the system
depends not only on the logical result of computation, but also on the time at which the
results are produced. A common misconception is to consider real-time computing
equivalent to fast computing. In traditional non-real-time computer systems, the
performance goal is throughput: as many tasks as possible should be processed in a
given time period. Real-time systems have a different goal to meet: as many tasks as
possible should be executed such that they complete and produce results before their
time limits expire. In other words, the behavior of a real-time system must be predictable
in all situations.
To achieve predictability, all components of the real-time system must be time bounded.
The predictability of the system depends on many different aspects.

 The computer hardware must not introduce unpredictable delays into program
execution. For example, caching and swapping as well as DMA cycle stealing
are often problematic when determining process execution timing.

 The operating system must have predictable behavior in all situations. Often
general-purpose operating systems, like UNIX, are too large and complex, and
they have too much unpredictability. Thus, special microkernel operating
systems like the Chorus microkernel have been designed for real-time purposes.

Traditional programming concepts and languages are also often not well suited to
real-time programming. No language construct should take arbitrarily long to execute,
and all synchronization, communication, or device accessing should be expressible
through time-bounded constructs. However, even if all these real-time requirements are
satisfied, a human factor - the real-time programmer - can always introduce
unpredictability into the system. To assist the programming process, numerous methods
have been produced for real-time system design, specification, verification, and
debugging.

Typically, a real-time system consists of a controlling system and a controlled system.
The controlled system can be viewed as the environment with which the computer
interacts. The typical real-time system gathers information from various sensors,
processes the information, and produces results. The environment of a real-time system
may be highly nondeterministic: events may have unpredictable starting times,
durations, and frequencies. Nevertheless, the real-time system must react to all of
these events within a prespecified time and produce an adequate reaction.
To guarantee that a real-time system always has a correct view of its environment,
consistency must be maintained between them. The consistency is time-based: the
controlling system must scan its environment fast enough to keep track of changes in
the system. The adequate rate depends on the application. For example, sensing a
temperature often needs a slower tempo than sensing a moving object.

The need to maintain consistency between the environment and the controlling system
leads to the notion of temporal consistency. Temporal consistency has two components:

1. Absolute consistency between the state of the environment and the controlling
system. The controlling system's view of the environment must be temporally
consistent; it must not be too old.

2. Relative consistency among the data used to derive other data. Sometimes, data
items depend on each other. If a real-time system uses such dependent values, they
must be temporally consistent with each other.

There are several ways to maintain temporal consistency. The state of the
environment is often scanned on a periodic basis, and an image of the environment is
maintained in the controlling system. A timestamp methodology is often used to
determine the validity of the system's image of the environment.

A typical real-time system consists of several tasks, which must be executed
simultaneously. Each task has a value which the system gains if the computation
finishes within a specific time. Each task has a deadline, which indicates the time limit
after which the result of the computation becomes useless, i.e., gains zero or negative
value for the system. Nevertheless, a task may still have some value after its deadline
has expired.
Deadlines are divided into three types: hard, soft, and firm. A hard deadline means that
a task may cause a very high negative value to the system if the computation is not
completed before the deadline. In contrast, with a soft deadline, a computation has a
decreasing value after the deadline, and the value may become zero at a later time. In
the middle of these extremes, a firm deadline is defined: a task loses its value after the
deadline, but no negative consequence occurs. Figure 1 plots the value versus time
behavior for the different deadline types. A real-time system must guarantee that all
hard deadlines, and as many as possible of the other deadlines, are met.

Examples of the deadline types are quite intuitive. A typical example of a hard deadline
is found in the system controlling a nuclear power plant: adding coolant to the reactor
must be done before the temperature gets too high. A typical example of a firm
deadline comes from industrial automation: a real-time system attempts to recognize a
moving object, which can be scanned only while it is in the sight of the scanning device.
If the object is not recognized, it can be rejected or rescanned later. Thus, the operation
loses its value after a certain time period, but no harm results from the failure. A typical
example of a soft deadline is a combined operation sequence: the whole sequence has
a firm deadline, but if some components of the sequence miss their deadlines, the
overall sequence might still be able to make its deadline.

In summary, the timing requirements for real-time systems can be expressed as the
following constraints on processing:

1. Response time, deadline: the system must respond to the environment within a
specified time after input (or stimuli) is recognized.

2. Validity of data: in some cases, the validity of the input or output is a function of time.
That is, some stimulus and the corresponding response become obsolete with time, and
the time interval for which data is valid must be accounted for in the processing
requirements.
3. Periodic execution: in many control systems, sensors collect data at predetermined
time intervals.

4. Coordinating inputs and outputs: in some applications, input data from various
sensors must be synchronized. Otherwise, decisions would be made based on
inconsistent information.

An example of a real-time system is the simplified unmanned vehicle system (SUVS)
[20]. The SUVS controls a vehicle with no assistance from a human driver. It periodically
receives data from sensors such as the speedometer, temperature sensor, and
direction sensor. It controls the vehicle by generating appropriate signals to actuators
such as the accelerator, brake, and steering wheel.

Decisions are made based on the current inputs from the sensors and the current status
of the road and the vehicle, and must be made within a specified time. Events can
occur very unexpectedly. Consider a scenario where an obstacle suddenly falls onto
the road: the braking and steering decisions must be made within a short time period to
avoid a crash. These decisions must also be synchronized, and the status of the road
and other traffic must be taken into account. Thus, most of the tasks in the SUVS have
hard deadlines. This is typical of many safety-critical systems; real-time behavior is
often essential when designing a safety-critical system.
5.3 Scheduling

To support timeliness in a real-time system, special real-time task scheduling methods
have been introduced. Traditional real-time research has been concerned with
uniprocessor scheduling. As the complexity and scale of real-time systems grow, the
processing power of a real-time system is increased by adding new processors or by
distribution. These issues introduce several new concepts to scheduling research, and
numerous schemes have been introduced for multiprocessor and distributed
scheduling. In this section, we discuss two main topics in real-time scheduling:
scheduling paradigms and the priority inversion problem.

5.3.1 Scheduling paradigms

The simplest real-time scheduling method is not to schedule at all. This may sound
trivial, but in many real-time systems only one task exists. A common example is a
typical programmable logic controller, widely used in industrial automation. A
programmable logic controller's program is executed periodically: during every
execution, the program reads all inputs, makes simple calculations, and sets the
appropriate outputs. The same program is executed every time, and the state of the
system depends on internal variables set in previous runs. Interrupts can be used to
catch asynchronous events, but any advanced processing must be done within the
standard run periods.

In more advanced real-time systems, several simultaneous tasks can be executed.
Every task may have different timing constraints, and each may have periodic or
aperiodic execution behavior. Periodic behavior means that a task must be executed at
prespecified time intervals. A task with aperiodic behavior executes when an external
stimulus occurs. In a typical real-time system, both types of tasks exist; however,
aperiodic tasks can always be transformed into periodic ones. The task structure of the
system describes when tasks can be started. Many systems have a static task
structure, where tasks are installed during system startup and no new tasks can be
started afterwards. In a dynamic task structure, tasks can be started and ended during
system uptime.

Depending on a particular system's behavior, different scheduling paradigms have
been introduced:

1. Static table-driven approaches: These perform static schedulability analysis, and the
resulting schedule (table) is used at run time to decide when a task must begin
execution. This is a highly predictable approach, but it is very inflexible, because the
table must be reconstructed whenever a new task is added to the system. Due to its
predictability, this approach is often used when absolute hard deadlines must be met.

2. Static priority-driven pre-emptive approaches: These perform static schedulability
analysis, but unlike the previous approach, no explicit schedule is constructed. At run
time, tasks are executed "highest priority first". This is quite a commonly used approach
in concrete real-time systems.

3. Dynamic planning-based approaches: Unlike the previous two approaches, feasibility
is checked at run time. A dynamically arriving task is accepted for execution only if it is
found feasible.

4. Dynamic best effort approaches: Here no feasibility checking is done. The system
tries to do its best to meet deadlines, but a task may be aborted during its execution.

Unfortunately, most scheduling problems are NP-complete. However, many good
heuristic approaches have been presented, and numerous algorithms have been
introduced to support these scheduling paradigms. Algorithms are either based on a
single scheduling paradigm or spread over several paradigms. These algorithms
include the least common multiple (LCM) method, the earliest deadline first (EDF)
method, rate monotonic (RM) scheduling, and many others. A survey of scheduling
methods can be found in the literature.
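The core of the EDF rule mentioned above is simply "among the ready tasks, run the one whose deadline is nearest." A minimal C sketch (the Task struct and function name are ours, for illustration):

```c
#include <stddef.h>

/* A ready task: an identifier and its absolute deadline. */
typedef struct {
    int id;
    double deadline;
} Task;

/* Earliest-deadline-first selection: return the id of the ready task
 * with the nearest deadline, or -1 if no task is ready. */
int edf_pick(const Task *ready, size_t n)
{
    if (n == 0)
        return -1;               /* nothing to run */
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (ready[i].deadline < ready[best].deadline)
            best = i;
    return ready[best].id;
}
```

A real dispatcher would call this on every scheduling event (task release or completion) and pre-empt the running task if a nearer deadline arrives.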
5.3.2 Priority inversion problem

In a multitasking environment, shared resources are often used, and their usage must
be protected with well-known methods such as semaphores and mutexes. However, a
priority inversion problem arises if these methods are used in a real-time system with
pre-emptive priority-driven scheduling. In priority-driven scheduling, the eligible task
with the highest priority is always executed. A task is eligible when it is not waiting for
any event, i.e., when it is runnable. When a higher priority task becomes eligible, it
always starts to run, and the task with a lower priority is pre-empted: it is released from
the processor in favor of the higher priority task.

A priority inversion problem arises when a lower priority task has reserved a shared
resource. If a higher priority task needs the same resource, the resource cannot be
acquired, because it is already reserved by another task. This forces the higher priority
task to block, which may lead to it missing its deadline. A deadlock situation is also
possible.

Several approaches have been introduced to rectify the priority inversion problem. The
use of priority inheritance protocols is one approach. The basic idea of priority
inheritance protocols is that when a task blocks one or more higher priority tasks, it
ignores its original priority assignment and executes its critical section at the highest
priority level of all the tasks it blocks. After exiting its critical section, the task returns to
its original priority.
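The inheritance rule reduces to a simple computation of the lock holder's effective priority. A C sketch (a simulation of the rule, not a real kernel API; larger numbers mean higher priority here):

```c
#include <stddef.h>

/* Effective priority of a task holding a resource: the maximum of its
 * own priority and the priorities of all tasks it currently blocks.
 * When the blocked set is empty (after releasing the resource), the
 * task runs at its own priority again. */
int inherited_priority(int own, const int *blocked, size_t n)
{
    int p = own;
    for (size_t i = 0; i < n; i++)
        if (blocked[i] > p)
            p = blocked[i];
    return p;
}
```

In practice this is implemented inside the kernel's mutex code; for example, POSIX exposes it via the PTHREAD_PRIO_INHERIT mutex protocol.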
5.4 Real-time operating systems

Real-time operating systems are an integral part of real-time systems. Examples of
these systems are process control systems, flight control systems, and all other
systems that require the results of computation to be produced on a certain time
schedule. Nowadays real-time computing systems are applied to more dynamic and
complex missions, and their timing constraints and characteristics are becoming more
difficult to manage. Early systems consisted of a small number of processors
performing a statically determined set of duties. Future systems will be much larger,
more widely distributed, and will be expected to perform a constantly changing set of
duties in dynamic environments. This also sets more requirements for future real-time
operating systems.

Real-time operating systems need to be significantly different from traditional
time-sharing system architectures because they have to handle the added complexity
of time constraints, flexible operations, predictability, and dependability.
5.5 Real-time operating system requirements and basic abstractions

In real-time systems, the operating system plays a considerable role. Its most important
task is to schedule system execution (processes) and to make sure that all
requirements placed on the system are met.

5.5.1 General terms

Real-time operating systems require certain tasks to be computed within strict time
constraints. Time constraints define the deadline, the response time within which the
computation has to be completed. There are three kinds of deadlines: hard, soft, and
firm. A hard deadline means that if the computation is not completed before the
deadline, it may cause a total system failure. A soft deadline means that the
computation has a decreasing value after the deadline, rather than immediately
dropping to zero or becoming negative. If a firm deadline is missed, the value of the
task is lost, but no negative consequence occurs.

Fault tolerance, the capability of a computer, subsystem, or program to withstand
the effects of internal faults, is also a common requirement for real-time operating
systems.

Several real-time systems collect data from their environment at predetermined
time intervals, which requires periodic execution.
5.5.2 Predictability

One common denominator in real-time systems seems to be that all designers want
their real-time systems to be predictable. Predictability means that it should be
possible to show, demonstrate, or prove that requirements are met subject to any
assumptions made, for example, concerning failures and workloads. In other words,
predictability is always subject to the underlying assumptions made. [2]

For static real-time systems in deterministic, stable environments, we can easily
predict the overall system performance over large time frames as well as the
performance of individual tasks. In a more complicated, changing, nondeterministic
environment, for example a future system of robots co-operating on Mars, it is a far
more complicated task to predict at design time how the system will actually behave in
its real environment. At the operating system level, the system must be designed to be
simple enough that worst-case execution situations can be predicted.

5.5.3 Temporal consistency

The controlling system must scan its environment fast enough to maintain a
correct view of it. If that is taken care of, consistency is achieved between the
real-time system and its environment. This leads to the notion of temporal
consistency, which has two components:

 Absolute consistency, which is achieved between the controlling system and
the state of the environment if the system's view of the environment is
temporally consistent.

 Relative consistency, which is achieved if data items in the system that
depend on each other are temporally consistent with each other.
5.6 Real-time operating system structure

As systems have become larger and more unwieldy, they have become difficult to
comprehend, develop, and maintain.

A recent trend in operating system development is to structure the operating system
as a modular set of system servers which sit on top of a minimal kernel (a
microkernel), rather than using the traditional monolithic kernel, in which all
functionality of the system is provided.
Fig : Monolithic kernel of the UNIX system

The idea in microkernel-based operating systems is that those functions which are
needed universally, by every component of the system, form the microkernel. Other
functionality is handled outside the kernel and can be specially tailored for each
application. The terms closed and open system are also used in this context.
Real-time operating systems are also usually based on a microkernel architecture and
must support these main functional areas:

 process management and synchronization
 memory management
 inter-process communication
 I/O
5.7 Real Time Operating System Types

Real-time operating systems can be divided into three categories:

 small proprietary kernels

 real-time extensions to commercial timesharing operating systems

 research kernels.

5.7.1 Small Proprietary Kernels

Small, proprietary kernels are often used for small embedded systems when very
fast and highly predictable execution must be guaranteed. These kernels are often
stripped down and optimized to reduce the run-time overhead caused by the kernel.
Their usual characteristics are:

 fast context switching.

 small size and stripped-down functionality.

 quick response to external interrupts.

 minimized intervals, when interrupts are disabled.

 fixed or variable sized partitions for memory management.

 ability to lock code and data into memory.


To deal with timing requirements, the kernel should provide a priority scheduling
mechanism and a bounded execution time for most primitives. For timing purposes, the
kernel should maintain a real-time clock and provide special alarm and timeout services,
as well as primitives for delaying by a fixed amount of time. In general, the kernel also
performs multitasking and intertask communication and synchronization via standard
well-known constructs such as mailboxes, events, signals, and semaphores.
Additionally, the kernel should support real-time queuing methods, such as delivering
messages in priority order. These kernels are suitable for small applications, such as
instrumentation, communication front ends, intelligent peripherals, and process control.
Since the applications are simple, it is relatively easy to determine that all timing
constraints are met. As the complexity of the system increases, it becomes more and
more difficult to map all timing, computation time, resource, and value requirements to a
single priority for each task. In these situations, demonstrating predictability becomes
very difficult. For these reasons, some researchers believe that more sophisticated
kernels are needed to address timing and fault tolerance constraints.
Recently there have been efforts to produce scalable kernels: the smallest level of
support is a microkernel, and the support can be broadened according to the demands
of the application. An example of this type of kernel is the Chorus microkernel, which
can be scaled from a simple embedded microkernel to full POSIX/UNIX support.
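The priority-ordered message delivery mentioned above can be sketched as a sorted-insert into a mailbox queue (a minimal illustration; the Msg type and mq_insert name are ours, not from any particular kernel):

```c
#include <stddef.h>

/* A queued message; higher priority numbers are delivered first. */
typedef struct Msg {
    int priority;
    struct Msg *next;
} Msg;

/* Insert m into the mailbox queue, keeping higher priorities first.
 * Messages of equal priority stay in FIFO order. Returns the new
 * head of the queue; the receiver always takes the head. */
Msg *mq_insert(Msg *head, Msg *m)
{
    if (head == NULL || m->priority > head->priority) {
        m->next = head;          /* new highest-priority message */
        return m;
    }
    Msg *p = head;
    while (p->next != NULL && p->next->priority >= m->priority)
        p = p->next;             /* walk past equal/higher priorities */
    m->next = p->next;
    p->next = m;
    return head;
}
```

A bounded-time variant would cap the queue length so that the insert's worst-case execution time stays predictable, in line with the bounded-primitive requirement above.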
6. USB

6.1 Introduction:

Universal Serial Bus (USB) is a set of interface specifications for high speed wired
communication between electronics systems peripherals and devices with or without
PC/computer. The USB was originally developed in 1995 by many of the industry
leading companies like Intel, Compaq, Microsoft, Digital, IBM, and Northern Telecom.
The major goal of USB was to define an external expansion bus to add peripherals to a
PC in easy and simple manner. The new external expansion architecture, highlights,

1. PC host controller hardware and software


2. Robust connectors and cable assemblies
3. Peripheral friendly master-slave protocols
4. Expandable through multi-port hubs.

USB offers users simple connectivity. It eliminates the mix of different connectors for
different devices like printers, keyboards, mice, and other peripherals: the USB bus
allows many peripherals to be connected using a single standardized interface socket.
Another main advantage is that, in a USB environment, DIP switches are not necessary
for setting peripheral addresses and IRQs. USB supports all kinds of data, from slow
mouse inputs to digitized audio and compressed video.

USB also allows hot swapping. "Hot-swapping" means that devices can be plugged and
unplugged without rebooting the computer or turning off the device: when plugged in,
everything configures automatically. So the user need not worry about terminations,
terms such as IRQs and port addresses, or rebooting the computer. Once finished, the
user can simply unplug the cable; the host will detect its absence and automatically
unload the driver. This makes USB a plug-and-play interface between a computer and
add-on devices.
The loading of the appropriate driver is done using a PID/VID (Product ID/Vendor ID)
combination. The VID is supplied by the USB Implementers Forum.

Fig : The USB "trident" logo

USB has already replaced RS-232 and other old parallel communication interfaces in
many applications. USB is now the most used interface for connecting devices like
mice, keyboards, PDAs, game-pads and joysticks, scanners, digital cameras, printers,
personal media players, and flash drives to personal computers. Generally speaking,
USB is the most successful interconnect in the history of personal computing and has
migrated into consumer electronics and mobile products.
USB sends data in serial mode, i.e., the parallel data is serialized before sending and
de-serialized after reception.

The benefits of USB are low cost, expandability, auto-configuration, hot-plugging and
outstanding performance. It also provides power to the bus, enabling many peripherals
to operate without the added need for an AC power adapter.
6.2 Various versions of USB

As USB technology advanced, new versions of USB were unveiled over time. Let us
now try to understand more about the different versions of USB.

USB 1.0: Version 0.7 of the USB interface definition was released in November 1994,
but USB 1.0 is the original release of USB, having the capability of transferring 12 Mbps
and supporting up to 127 devices. As we know, it was a combined effort of some large
players on the market to define a new general device interface for computers. The USB
1.0 specification was introduced in January 1996. The data transfer rate of this version
can accommodate a wide range of devices, including MPEG video devices, data
gloves, and digitizers. This version of USB is known as full-speed USB.

Since October 1996, Windows operating systems have been equipped with USB
drivers, or special software designed to work with specific I/O device types. USB
became integrated into Windows 98 and later versions. Today, most new computers
and peripheral devices are equipped with USB.

USB 1.1: USB 1.1 came out in September 1998 to help rectify the adoption problems
that occurred with earlier versions, mostly those relating to hubs.
USB 1.1 is also known as full-speed USB. This version is similar to the original release
of USB; however, there are minor modifications to the hardware and the specifications.
USB version 1.1 supported two speeds: a full-speed mode of 12 Mbit/s and a low-speed
mode of 1.5 Mbit/s. The 1.5 Mbit/s mode is slower but less susceptible to EMI, reducing
the cost of ferrite beads and quality components.

USB 2.0: Hewlett-Packard, Intel, LSI Corporation, Microsoft, NEC, and Philips jointly led
the initiative to develop a higher data transfer rate than the 1.1 specification. The USB
2.0 specification was released in April 2000 and was standardized at the end of 2001.
This standardization of the new device specification retained backward compatibility,
meaning it is also capable of supporting USB 1.0 and 1.1 devices and cables.
Supporting three speed modes (1.5, 12, and 480 megabits per second), USB 2.0
supports low-bandwidth devices such as keyboards and mice, as well as high-
bandwidth ones like high-resolution webcams, scanners, printers, and high-capacity
storage systems.

USB 2.0 is also known as Hi-Speed USB. Hi-Speed USB is capable of supporting a
transfer rate of up to 480 Mbps, compared to the 12 Mbps of USB 1.1. That's about 40
times as fast!

USB 3.0: USB 3.0 is the latest version of the USB release. It is also called SuperSpeed
USB, having a data transfer rate of 4.8 Gbit/s (600 MB/s). That means it can deliver
over 10x the speed of today's Hi-Speed USB connections.

The USB 3.0 specification was released by Intel and its partners in August 2008.
Products using the 3.0 specification are likely to arrive in 2009 or 2010. The technology
targets fast PC sync-and-go transfer applications, to meet the demands of the
consumer electronics and mobile segments focused on high-density digital content and
media. USB 3.0 is also a backward-compatible standard with the same plug-and-play
and other capabilities of previous USB technologies. The technology draws from the
same architecture as wired USB. In addition, the USB 3.0 specification is optimized for
low power and improved protocol efficiency.

6.3 USB system overview

The USB system is made up of a host, multiple USB ports, and multiple peripheral
devices connected in a tiered-star topology. To expand the number of USB ports, USB
hubs can be included in the tiers, allowing branching into a tree structure with up to five
tier levels.

The tiered-star topology has some benefits. Firstly, power to each device can be
monitored and even switched off if an overcurrent condition occurs, without disrupting
other USB devices. High, full, and low speed devices can all be supported, with the
hub filtering out high-speed and full-speed transactions so lower speed devices do not
receive them.

USB is actually an addressable bus system with a seven-bit address code, so it can
support up to 127 different devices or nodes at once (the "all zeroes" code is not a valid
device address). However, it can have only one host: the PC itself. So a PC with its
peripherals connected via USB forms a star local area network (LAN).

On the other hand, any device connected to the USB can have a number of other
nodes connected to it in daisy-chain fashion, so it can also form the hub for a
mini-star sub-network. Similarly, it is possible to have a device that functions
purely as a hub for other node devices, with no separate function of its own. This
expansion via hubs is possible because the USB supports a tiered star topology.
Each USB hub acts as a kind of traffic cop for its part of the network, routing
data from the host to the correct address and preventing bus contention between
devices trying to send data at the same time.

On a USB hub device, the single port used to connect to the host PC, either
directly or via another hub, is known as the upstream port, while the ports used
for connecting other devices to the USB are known as the downstream ports. USB
hubs work transparently as far as the host PC and its operating system are
concerned. Most hubs provide either four or seven downstream ports, or fewer if
they already include a USB device of their own.

The host is the USB system's master, and as such, controls and schedules all
communications activities. Peripherals, the devices controlled by USB, are slaves
responding to commands from the host. USB devices are linked in series through hubs.
There always exists one hub known as the root hub, which is built in to the host
controller.
A physical USB device may consist of several logical sub-devices that are referred
to as device functions. A single device may provide several functions; for
example, a web-cam (video device function) with a built-in microphone (audio
device function). In short, the USB specification recognizes two kinds of
peripherals: stand-alone single-function units, like a mouse, and compound
devices, like a video camera with a separate audio processor.

The logical channel connecting the host to an endpoint on a peripheral is called a
pipe in USB. A USB device can have 16 pipes coming into the host controller and 16
going out of the controller.

The pipes are unidirectional. Each interface is associated with a single device
function and is formed by grouping endpoints.

Fig: The USB "tiered star" topology


The hubs are bridges. They expand the logical and physical fan-out of the network.
A hub has a single upstream connection (going to the root hub, or to the next hub
closer to the root) and one or more downstream connections.

Hubs are themselves USB devices, and may incorporate some amount of intelligence.
Recall that in USB, users may connect and remove peripherals without powering the
entire system down; hubs detect these topology changes. They also source power to
the USB network. The power can come from the hub itself (if it has a built-in
power supply) or can be passed through from an upstream hub.

6.4 USB connectors & the power supply:

Connecting a USB device to a computer is very simple -- you find a USB port on the
back of your machine and plug the device's USB connector into it. If it is a new
device, the operating system auto-detects it and asks for the driver disk. If the
device has already been installed, the computer activates it and starts talking to
it.

The USB standard specifies two kinds of cables and connectors. A USB cable will
usually have an "A" connector on one end and a "B" connector on the other. A
device either has a cable with an "A" connector attached, or a socket on it that
accepts a USB "B" connector.

Fig: USB Type A & B Connectors


The USB standard uses "A" and "B" connectors mainly to avoid confusion:

1. "A" connectors head "upstream" toward the computer.

2. "B" connectors head "downstream" and connect to individual devices.

By using different connectors on the upstream and downstream ends, it is
impossible to install a cable incorrectly, because the two types are physically
different.

Individual USB cables can run as long as 5 meters for 12 Mbps connections and 3
meters for 1.5 Mbps. With hubs, devices can be up to 30 meters (six cables' worth)
away from the host. The high-speed cables for 12 Mbps communication are better
shielded than their less expensive 1.5 Mbps counterparts. The USB 2.0
specification requires the cable delay to be less than 5.2 ns per meter.

Inside the USB cable there are two wires that supply power to the peripherals, +5
volts (red) and ground (black or brown), plus a twisted pair of wires that carry
the data (the colours are listed in the table below). On the power wires, the
computer can supply up to 500 mA of power at 5 volts. A peripheral that draws up
to 100 mA can extract all of its power from the bus wiring all of the time. If the
device needs more than half an amp, it must have its own power supply. That means
low-power devices such as mice can draw their power directly from the bus, while
high-power devices such as printers have their own power supplies and draw minimal
power from the bus. Hubs can have their own power supplies to provide power to the
devices connected to them.

Pin No.   Signal    Colour of the cable

1         +5V       Red
2         -Data     White / Yellow
3         +Data     Green / Blue
4         Ground    Black / Brown

Table: USB pin connections


USB hosts and hubs manage power by enabling and disabling power to individual
devices, to electrically remove ill-behaved peripherals from the system. Further,
they can instruct devices to enter the suspend state, which reduces maximum power
consumption to 500 µA (for low-power, 1.5 Mbps peripherals) or 2.5 mA for 12 Mbps
devices.


In short, the USB is a serial protocol and physical link, which transmits all data
differentially on a single pair of wires. Another pair provides power to downstream
peripherals.

Note that although USB cables with a Type A plug at each end are available, they
should never be used to connect two PCs together via their USB ports. This is
because a USB network can have only one host, and both PCs would try to claim that
role. In any case, such a cable would also short their 5 V power rails together,
which could cause a damaging current to flow. USB is not designed for direct data
transfer between PCs. The "sharing hubs" technique, however, allows multiple
computers to access the same peripheral device(s); it works by switching access
between the PCs, either automatically or manually.
6.5 USB Electrical signaling

The serial data is sent along the USB in differential or push-pull mode, with
opposite polarities on the two signal lines. This improves the signal-to-noise
ratio by doubling the effective signal amplitude, and also allows the cancellation
of any common-mode noise induced into the cable. The data is sent in
non-return-to-zero inverted (NRZI) format. To ensure a minimum density of signal
transitions, USB uses bit stuffing: an extra 0 bit is inserted into the data
stream after any appearance of six consecutive 1 bits. Seven consecutive 1 bits
are always considered an error.
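The bit-stuffing rule just described can be sketched in a few lines of C. This is an illustrative model working on arrays of single bits, not driver code (stuffing happens in the transceiver hardware); the function name is ours:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of USB bit stuffing: after six consecutive 1 bits the transmitter
 * inserts an extra 0 so the NRZI stream keeps a minimum density of
 * transitions. in and out are arrays of single bits (0 or 1); out must have
 * room for n + n/6 entries. Returns the stuffed length. */
size_t bit_stuff(const uint8_t *in, size_t n, uint8_t *out) {
    size_t ones = 0, j = 0;
    for (size_t i = 0; i < n; i++) {
        out[j++] = in[i];
        if (in[i]) {
            if (++ones == 6) {        /* six 1s in a row: stuff a 0 */
                out[j++] = 0;
                ones = 0;
            }
        } else {
            ones = 0;                 /* a 0 resets the run counter */
        }
    }
    return j;
}
```

A receiver performs the inverse: after seeing six 1s it discards the next bit, and treats a seventh 1 as the error mentioned above.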

The low speed/full speed USB bus (twisted pair data cable) has characteristic
impedance of 90 ohms +/- 15%. The data cable signal lines are labeled as D+ and D-.

Transmitted signal levels are as follows.

1. 0.0V to 0.3V for low level and 2.8V to 3.6V for high level in Full Speed (FS) and Low
Speed (LS) modes
2. -10mV to 10 mV for low level and 360mV to 440 mV for high level in High Speed (HS)
mode.

In FS mode the cable wires are not terminated, but the HS mode has a termination
of 45 Ω to ground, or 90 Ω differential, to match the data cable impedance. The
USB connection is always between a host / hub at the "A" connector end and a
device or hub's upstream port at the other end. The host includes 15 kΩ pull-down
resistors on each data line. When no device is connected, these pull both data
lines low into the so-called "single-ended zero" state (SE0), which indicates a
reset or a disconnected connection.

A USB device pulls one of the data lines high with a 1.5 kΩ resistor. This
overpowers one of the pull-down resistors in the host and leaves the data lines in
an idle state called "J". The choice of data line indicates a device's speed
support: full-speed devices pull D+ high, while low-speed devices pull D- high.
The data is then transmitted by toggling the data lines between the J state and
the opposite K state.

A USB bus is reset using a prolonged (10 to 20 millisecond) SE0 signal. USB 2.0
devices use a special protocol during reset, called "chirping", to negotiate
High-Speed mode with the host/hub. A device that is HS-capable first connects as
an FS device (D+ pulled high), but upon receiving a USB RESET (both D+ and D-
driven LOW by the host for 10 to 20 ms) it pulls the D- line high. If the host/hub
is also HS-capable, it chirps (returns alternating J and K states on the D- and D+
lines), letting the device know that the hub will operate at High Speed.

6.6 How does USB communicate?

When a USB peripheral device is first attached to the network, a process called
enumeration starts. This is the way the host communicates with the device to learn
its identity and to discover which device driver is required. Enumeration begins
with the host sending a reset signal to the newly connected USB device; the speed
of the device is determined during this reset signaling. After reset, the host
reads the USB device's information, and the device is then assigned a unique 7-bit
address (discussed in the next section). This avoids the DIP-switch and IRQ
headaches of the past device communication methods. If the device is supported by the
host, the device drivers needed for communicating with the device are loaded and the
device is set to a configured state. Once a hub detects a new peripheral (or even the
removal of one), it actually reports the new information about the peripheral to the host,
and enables communications with it. If the USB host is restarted, the enumeration
process is repeated for all connected devices.
In other words, the enumeration process is initiated both when the host is powered
up and whenever a device is connected to or removed from the network.

Technically speaking, the USB communications takes place between the host and
endpoints located in the peripherals. An endpoint is a uniquely addressable portion of
the peripheral that is the source or receiver of data. Four bits define the device's
endpoint address; codes also indicate transfer direction and whether the transaction is a
"control" transfer (will be discussed later in detail). Endpoint 0 is reserved for control
transfers, leaving up to 15 bi-directional destinations or sources of data within each
device. All devices must support endpoint zero, because this is the endpoint that
receives all of the device's control and status requests, both during enumeration
and throughout the device's operation on the bus.

All the transfers in USB occur through virtual pipes that connect the peripheral's
endpoints with the host. When establishing communications with the peripheral, each
endpoint returns a descriptor, a data structure that tells the host about the endpoint's
configuration and expectations. Descriptors include transfer type, max size of data
packets, perhaps the interval for data transfers, and in some cases, the bandwidth
needed. Given this data, the host establishes connections to the endpoints through
virtual pipes.

Though physically configured as a tiered star, logically (to the application code) a direct
connection exists between the host and each device.

The host controller polls the bus for traffic, usually in a round-robin fashion, so no USB
device can transfer any data on the bus without an explicit request from the host
controller.
USB can support four data transfer types, or transfer modes, which are listed below.

1. Control
2. Isochronous
3. Bulk
4. Interrupt

Control transfers exchange configuration, setup and command information between the
device and the host. The host can also send commands or query parameters with
control packets.

Isochronous transfer is used by time-critical, streaming devices such as speakers
and video cameras. Because the information is time-sensitive, it has, within
limits, guaranteed access to the USB bus. Data streams between the device and the
host in real time, so there is no error correction.

Bulk transfer is used by devices like printers and scanners, which receive data in
one big packet. Here timely delivery is not critical. Bulk transfers are fillers,
claiming unused USB bandwidth when nothing more important is going on. Error
correction protects these packets.

Interrupt transfer is used by peripherals exchanging small amounts of data that
need immediate attention. It is used by devices to request servicing from the
PC/host. Devices like mice and keyboards come in this category. Error checking
validates the data.

As devices are enumerated, the host is keeping track of the total bandwidth that all of
the isochronous and interrupt devices are requesting. They can consume up to 90
percent of the 480 Mbps of bandwidth that is available. After 90 percent is used up, the
host denies access to any other isochronous or interrupt devices. Control packets and
packets for bulk transfers use any bandwidth left over (at least 10 percent).
The USB divides the available bandwidth into frames, and the host controls the frames.
Frames contain 1,500 bytes, and a new frame starts every millisecond. During a frame,
isochronous and interrupt devices get a slot so they are guaranteed the bandwidth they
need. Bulk and control transfers use whatever space is left.
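As a quick sanity check on the frame numbers: a 1,500-byte frame every millisecond works out to exactly 12 Mbit/s, the full-speed signaling rate, and high-speed USB subdivides each frame into eight 125 µs micro-frames. The one-line helper below is ours, just to make the arithmetic explicit:

```c
/* Arithmetic check: bytes per frame * 8 bits per byte * frames per second.
 * 1,500-byte frames at 1,000 frames/s give 12,000,000 bits/s, i.e. the
 * 12 Mbit/s full-speed wire rate. */
long frame_rate_bps(long bytes_per_frame, long frames_per_second) {
    return bytes_per_frame * 8 * frames_per_second;
}
```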

6.7 USB packets & formats

All USB data is sent serially, of course, and least significant bit (LSB) first. USB data
transfer is essentially in the form of packets of data, sent back and forth between the
host and peripheral devices. Initially, all packets are sent from the host, via the root hub
and possibly more hubs, to devices. Some of those packets direct a device to send
some packets in reply.

Each USB data transfer consists of a:


1. Token Packet (Header defining what it expects to follow)
2. Optional Data Packet, (Containing the payload)
3. Status Packet (Used to acknowledge transactions and to provide a means of error
correction)

The host initiates all transactions. The first packet, also called a token, is
generated by the host to describe what is to follow: whether the data transfer
will be a read or a write, and what the device's address and designated endpoint
are. The next packet is generally a data packet carrying the payload, and is
followed by a handshaking packet reporting whether the data or token was received
successfully, or whether the endpoint is stalled or unavailable to accept data.
USB packets may consist of the following fields:

1. Sync field: All packets start with a sync field. The sync field is 8 bits long
at low and full speed, or 32 bits long at high speed, and is used to synchronize
the clock of the receiver with that of the transmitter. The last two bits indicate
where the PID field starts.

2. PID field: This field (Packet ID) is used to identify the type of packet that is being
sent. The PID is actually 4 bits; the byte consists of the 4-bit PID followed by its bit-wise
complement, making an 8-bit PID in total. This redundancy helps detect errors.

3. ADDR field: The address field specifies which device the packet is designated for.
Being 7 bits in length allows for 127 devices to be supported.

4. ENDP field: This field is made up of 4 bits, allowing 16 possible endpoints. Low
speed devices however can only have 2 additional endpoints on top of the default pipe.

5. CRC field: Cyclic Redundancy Checks are performed on the data within the packet
payload. All token packets have a 5-bit CRC while data packets have a 16-bit CRC.

6. EOP field: This indicates End of packet. Signaled by a Single Ended Zero (SE0) for
approximately 2 bit times followed by a J for 1 bit time.
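The nibble-and-complement redundancy of the PID field (item 2 above) lets a receiver validate the byte with a single XOR. A minimal sketch; the helper name pid_valid is ours:

```c
#include <stdint.h>

/* Sketch of the PID integrity check: a PID byte carries the 4-bit PID in
 * its low nibble (fields are sent LSB first) and the bit-wise complement of
 * the PID in its high nibble. The byte is valid only if the two nibbles are
 * exact complements, i.e. their XOR is 0b1111. */
int pid_valid(uint8_t pid_byte) {
    uint8_t pid   = pid_byte & 0x0F;         /* the 4-bit PID */
    uint8_t check = (pid_byte >> 4) & 0x0F;  /* its transmitted complement */
    return (pid ^ check) == 0x0F;
}
```

Any single-bit error in the byte breaks the complement relation, which is exactly the error-detection role the text ascribes to this redundancy.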

USB packets come in the following basic types, each with a different format and
CRC field:

1. Handshake packets
2. Token packets
3. Data packets
4. PRE packets
5. Start of Frame packets
6.7.1 Handshake packets:

Handshake packets consist of a PID byte, and are generally sent in response to data
packets. The three basic types of handshake packets are

1. ACK, indicating that data was successfully received,


2. NAK, indicating that the data cannot be received at this time and should be retried,
3. STALL, indicating that the device has an error and will never be able to successfully
transfer data until some corrective action is performed.

USB 2.0 added two additional handshake packets.

1. NYET which indicates that a split transaction is not yet complete,


2. ERR handshake to indicate that a split transaction failed.

The only handshake packet the USB host may generate is ACK; if it is not ready to
receive data, it should not instruct a device to send any.

6.7.2 Token packets:
Token packets consist of a PID byte followed by 11 bits of address and a 5-bit CRC.
Tokens are only sent by the host, not by a device.
There are three types of token packets.

1. IN token - informs the USB device that the host wishes to read information.
2. OUT token - informs the USB device that the host wishes to send information.
3. SETUP token - used to begin control transfers.
IN and OUT tokens contain a 7-bit device number and 4-bit function number (for
multifunction devices) and command the device to transmit DATA-packets, or receive
the following DATA-packets, respectively.

An IN token expects a response from a device. The response may be a NAK or STALL
response, or a DATA frame. In the latter case, the host issues an ACK handshake if
appropriate. An OUT token is followed immediately by a DATA frame. The device
responds with ACK, NAK, or STALL, as appropriate.

SETUP operates much like an OUT token, but is used for initial device setup.

USB 2.0 added a PING token, which asks a device if it is ready to receive an
OUT/DATA packet pair. The device responds with ACK, NAK, or STALL, as
appropriate. This avoids the need to send the DATA packet if the device knows that it
will just respond with NAK.

USB 2.0 also added a larger SPLIT token with a 7-bit hub number, 12 bits of control
flags, and a 5-bit CRC. This is used to perform split transactions. Rather than tie up the
high-speed USB bus sending data to a slower USB device, the nearest high-speed
capable hub receives a SPLIT token followed by one or two USB packets at high speed,
performs the data transfer at full or low speed, and provides the response at high speed
when prompted by a second SPLIT token.
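The 5-bit CRC that closes a token packet covers its 11 address/endpoint bits. The sketch below uses a common LSB-first software formulation of the USB CRC-5 (polynomial x^5 + x^2 + 1, all-ones seed, inverted remainder); the function name and the bit ordering of the appended CRC are our assumptions for illustration, not details stated in the text:

```c
#include <stdint.h>

/* Sketch of a USB-style CRC-5 over nbits of data, fed least-significant bit
 * first. 0x14 is the bit-reversed polynomial x^5 + x^2 + 1; the shift
 * register is seeded with all ones and the remainder is inverted before
 * transmission, as the USB token CRC requires. */
uint8_t usb_crc5(uint32_t data, int nbits) {
    uint8_t crc = 0x1F;                    /* seed: all ones */
    for (int i = 0; i < nbits; i++, data >>= 1) {
        if ((data ^ crc) & 1)              /* feedback bit */
            crc = (crc >> 1) ^ 0x14;
        else
            crc >>= 1;
    }
    return (crc ^ 0x1F) & 0x1F;            /* invert for transmission */
}
```

A useful property of this construction is that running the checker over a message with its CRC appended always leaves the same residual, regardless of the message, which is how a receiver detects corruption without recomputing anything message-specific.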
6.7.3 Data packets:

There are two basic data packets, DATA0 and DATA1. Both consist of a DATA PID
field, 0-1023 bytes of data payload and a 16-bit CRC. They must always be preceded
by an address token, and are usually followed by a handshake token from the receiver
back to the transmitter.

1. Maximum data payload size for low-speed devices is 8 bytes.


2. Maximum data payload size for full-speed devices is 1023 bytes.
3. Maximum data payload size for high-speed devices is 1024 bytes.
4. Data must be sent in multiples of bytes

USB 2.0 added DATA2 and MDATA packet types as well. They are used only by
high-speed devices doing high-bandwidth isochronous transfers, which need to
transfer more than 1024 bytes per 125 µs "micro-frame" (8192 kB/s).

6.7.4 PRE packet:


Low-speed devices are supported with a special PID value, PRE. This marks the
beginning of a low-speed packet, and is used by hubs, which normally do not send full-
speed packets to low-speed devices. Since all PID bytes include four 0 bits, they leave
the bus in the full-speed K state, which is the same as the low-speed J state. It is
followed by a brief pause during which hubs enable their low-speed outputs, already
idling in the J state, then a low-speed packet follows, beginning with a sync sequence
and PID byte, and ending with a brief period of SE0. Full-speed devices other than hubs
can simply ignore the PRE packet and its low-speed contents, until the final SE0
indicates that a new packet follows.
6.7.5 Start of Frame Packets:

Every 1ms (12000 full-speed bit times), the USB host transmits a special SOF (start of
frame) token, containing an 11-bit incrementing frame number in place of a device
address. This is used to synchronize isochronous data flows. High-speed USB 2.0
devices receive 7 additional duplicate SOF tokens per frame, each introducing a 125 µs
"micro-frame".

6.8 The Host controllers

As we know, the host controller and the root hub are part of the computer
hardware. The interfacing between the programmer and the host controller is done
by software called the Host Controller Driver (HCD), whose interface is defined by
the hardware implementer.

In the version 1.x age, there were two competing HCD implementations, Open Host
Controller Interface (OHCI) and Universal Host Controller Interface (UHCI). OHCI was
developed by Compaq, Microsoft and National Semiconductor. UHCI and its open
software stack were developed by Intel. VIA Technologies licensed the UHCI standard
from Intel; all other chipset implementers use OHCI. UHCI is more software-driven,
making UHCI slightly more processor-intensive than OHCI but cheaper to implement.
With the introduction of USB 2.0, a new host controller interface specification
was needed to describe the register-level details specific to USB 2.0. The USB 2.0
HCD implementation is called the Enhanced Host Controller Interface (EHCI). Only
EHCI can support hi-speed (480 Mbit/s) transfers. Most PCI-based EHCI controllers
also contain other HCD implementations, called 'companion host controllers', to
support Full Speed (12 Mbit/s) and Low Speed (1.5 Mbit/s) devices. An operating
system is supposed to implement the standard device classes so as to provide
generic drivers for any USB device that claims to be a member of one of those
classes.

But remember, the USB specification does not specify any HCD interface. The USB
defines the format of data transfer through the port, but not the system by which
the USB hardware communicates with the computer it sits in.

6.9 Device classes

USB defines class codes used to identify a device's functionality and to load a device
driver based on that functionality. This enables a device driver writer to support devices
from different manufacturers that comply with a given class code.

There are two places on a device where class code information can be placed. One
place is in the Device Descriptor, and the other is in Interface Descriptors. Some
defined class codes are allowed to be used only in a Device Descriptor, others can be
used in both Device and Interface Descriptors, and some can only be used in Interface
Descriptors.
7. SOCKETS

7.1 Introduction

In the classic client-server model, the client sends out requests to the server, and the
server does some processing with the request(s) received, and returns a reply (or
replies) to the client. The terms request and reply here may take on different meanings
depending upon the context, and method of operation. An example of a simple and
ubiquitous client-server application would be that of a web-server. A client (Internet
Explorer, or Netscape) sends out a request for a particular web page, and the web-
server (which may be geographically distant, often in a different continent!) receives and
processes this request, and sends out a reply, which in this case, is the web page that
was requested. The web page is then displayed on the browser (client).

Fig: Client Server Model


Further, servers may be broadly classified into two types based on the way they serve
requests from clients. Iterative Servers can serve only one client at a time. If two or
more clients send in their requests at the same time, one of them has to wait until the
other client has received service. On the other hand, Concurrent Servers can serve
multiple clients at the same time. Typically, this is done by spawning off a new server
process on the receipt of a request - the original process goes back to listening to new
connections, and the newly spawned off process serves the request received. We can
realize the client-server communication described above with a set of network protocols,
like the TCP/IP protocol suite, for instance. In this tutorial, we will look at the issue of
developing applications for realizing such communication over a network. In order to
write such applications, we need to understand sockets.
7.2 What are sockets?

Sockets (also called Berkeley sockets, owing to their origin) can simply be
defined as end-points for communication. To provide a rather crude visualization,
we could imagine the client and server hosts in the figure above being connected
by a pipe through which data flow takes place, with each end of the pipe construed
as an "end-point". Thus, a socket provides us with an abstraction, or a logical
end-point, for communication. There are different types of sockets: stream
sockets, of type SOCK_STREAM, are used for connection-oriented TCP connections,
whereas datagram sockets, of type SOCK_DGRAM, are used for UDP-based applications.
Apart from these two, other socket types like SOCK_RAW and SOCK_SEQPACKET are also
defined.

7.3 TCP/IP and UDP/IP communications

There are two communication protocols that one can use for socket programming:
datagram communication and stream communication.

7.3.1 Datagram communication:


The datagram communication protocol, known as UDP (user datagram protocol), is a
connectionless protocol, meaning that each time you send a datagram, you also need
to send the local socket descriptor and the receiving socket's address. As you can
tell, additional data must be sent each time a communication is made.
7.3.2 Stream communication:

The stream communication protocol is known as TCP (transmission control protocol). Unlike
UDP, TCP is a connection-oriented protocol. In order to do communication over the
TCP protocol, a connection must first be established between the pair of sockets. While
one of the sockets listens for a connection request (server), the other asks for a
connection (client). Once two sockets have been connected, they can be used to
transmit data in both (or either one of the) directions.

Fig. TCP/IP Protocol Stack


7.4 Basic Socket system calls

Figure below shows the sequence of system calls between a client and server for a
connection- oriented protocol. Let us take a detailed look at some of the socket system
calls:

Fig. Socket system calls for connection-oriented case.

7.4.1 The socket() system call (API):


socket() system call syntax:
int sd = socket (int domain, int type, int protocol);

The socket() system call creates a socket and returns a socket file descriptor to the
socket created. The descriptor is of data type int.
Here, domain is the address family specification, type is the socket type, and
the protocol field specifies the protocol to be used with the given address
family. The address family can be AF_INET (for Internet protocols like TCP and
UDP, which is what we are going to use), AF_UNIX (for Unix internal protocols),
AF_NS (for Xerox network protocols) or AF_IMPLINK (for the IMP link layer). The
type field is the socket type, which may be SOCK_STREAM for stream sockets (TCP
connections) or SOCK_DGRAM for datagram connections. Other socket types are
defined too: SOCK_RAW is used for raw sockets, and SOCK_SEQPACKET for a sequenced
packet socket. The protocol argument is typically set to 0; you may also specify a
protocol to use a specific protocol for your application.

7.4.2 The bind() system call:

The bind() system call is used to specify the association <Local-Address,
Local-Port>. It is used to bind either connection-oriented or connectionless
sockets. The bind() function basically associates a name with an unnamed socket.
"Name" here refers to three components: the address family, the host address, and
the port number at which the application will provide its service. The syntax and
arguments taken by the bind() system call are given below:

int result = bind(int sd, struct sockaddr *address, int addrlen);

Here, sd is the socket file descriptor returned by the socket() system call
before, address points to the sockaddr structure, and addrlen is the size of the
sockaddr structure. Like all other socket system calls, upon success, bind()
returns 0. In case of error, bind() returns -1.
7.4.3 The listen() system call:

After creation of the socket, and binding to a local port, the server has to wait on
incoming connection requests. The listen() system call is used to specify the queue or
backlog of waiting (or incomplete) connections. The syntax and arguments taken by the
listen() system call is given below:

int result = listen(int sd, int backlog);

Here, sd is the socket file descriptor returned by the socket() system call before, and
backlog is the number of incoming connections that can be queued. Upon success,
listen() returns 0. In case of error, listen() returns -1.

7.4.4. The accept() system call:

After executing the listen() system call, a server waits for incoming connections. An
actual connection setup is completed by a call to accept(). accept() takes the first
connection request on the queue, and creates another socket descriptor with the same
properties as sd (the socket descriptor returned earlier by the socket() system call). The
new socket descriptor handles communications with the new client while the earlier
socket descriptor goes back to listening for new connections. In a sense, the accept()
system call completes the connection, and at the end of a successful accept(), all
elements of the four tuple (or the five tuple - if you consider “protocol" as one of the
elements) of a connection are filled. The “four-tuple" that we talk about here is <Local
Addr, Local Port, Remote Addr, Remote Port>. This combination of fields is used to
uniquely identify a flow or a connection. The fifth tuple element can be the protocol field.
No two connections can have the same values for all the four (or five) fields of the tuple.
accept() syntax:
int newsd = accept(int sd, void *addr, int *addrlen);

Here, sd is the socket file descriptor returned by the socket() system call before, and
addr is a pointer to a structure that receives the address of the connecting entity, and
addrlen is the length of that structure.

Upon success, accept() returns a socket file descriptor to the new socket created.
In case of error, accept() returns -1.
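The server-side sequence described so far (socket(), then bind(), then listen()) can be collected into a short sketch. The helper name make_listener is ours, and error handling is kept minimal for brevity:

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of server-side setup: socket() -> bind() -> listen().
 * Returns a listening descriptor ready for accept(), or -1 on error.
 * Passing port 0 asks the kernel for an ephemeral port. */
int make_listener(unsigned short port, int backlog) {
    struct sockaddr_in addr;
    int sd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP stream socket */
    if (sd < 0)
        return -1;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* any local interface */
    addr.sin_port = htons(port);
    if (bind(sd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(sd, backlog) < 0) {
        close(sd);
        return -1;
    }
    return sd;                                  /* ready for accept() */
}
```

A real server would now call accept() in a loop on the returned descriptor, handing each new socket that accept() creates to the code that serves that client.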

7.4.5 The connect() system call

A client process also starts out by creating a socket with the socket() system
call. It then uses connect() on that socket descriptor to establish a connection
with a server. For a connection-oriented protocol like TCP/IP, the connect()
system call results in the actual establishment of a connection between the two
hosts; in the case of TCP, the three-way handshake to establish the connection is
completed following this call. Note that the client does not necessarily have to
bind to a local port in order to call connect(). Clients typically choose
ephemeral port numbers for their end of the connection, while servers have to
provide service on well-known (premeditated) port numbers. connect() syntax:

int result = connect(int sd, struct sockaddr *servaddr, int addrlen);

Here, sd is the socket file descriptor returned by the socket() system call before,
servaddr is a pointer to the server's address structure (port number and IP address).
addrlen holds the length of this parameter and can be set to sizeof(struct sockaddr).
Upon success, connect() returns 0. In case of error, connect() returns -1.
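The client side pairs socket() with connect(), as described above. A minimal sketch, assuming an IPv4 dotted-quad address string; the helper name tcp_connect is ours:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of client-side setup: socket() -> connect(). ip is a dotted-quad
 * string such as "127.0.0.1". Returns a connected descriptor, or -1 on
 * error. Note there is no bind(): the kernel picks an ephemeral local port,
 * exactly as the text describes. */
int tcp_connect(const char *ip, unsigned short port) {
    struct sockaddr_in servaddr;
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    if (sd < 0)
        return -1;
    memset(&servaddr, 0, sizeof servaddr);
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons(port);            /* server's well-known port */
    if (inet_pton(AF_INET, ip, &servaddr.sin_addr) != 1 ||
        connect(sd, (struct sockaddr *)&servaddr, sizeof servaddr) < 0) {
        close(sd);
        return -1;
    }
    return sd;   /* for TCP, the three-way handshake has now completed */
}
```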
7.4.6 The send(), recv(), sendto() and recvfrom() system calls:

After connection establishment, data is exchanged between the server and client using
the system calls send(), recv(), sendto() and recvfrom(). The syntax of the system calls
are as below:

int nbytes = send(int sd, const void *buf, int len, int flags);

int nbytes = recv(int sd, void *buf, int len, unsigned int flags);

int nbytes = sendto(int sd, const void *buf, int len, unsigned int flags, const struct
sockaddr *to, int tolen);

int nbytes = recvfrom(int sd, void *buf, int len, unsigned int flags, struct sockaddr
*from, int *fromlen);

Here, sd is the socket file descriptor returned by the socket() system call before, buf is the buffer to be sent or received, and flags specifies the way the call is to be made (usually set to 0). sendto() and recvfrom() are used with connectionless sockets; they do the same job as send() and recv() except that they take extra arguments (the "to" and "from" addresses, since the socket is connectionless).
Upon success, all these calls return the number of bytes written or read. Upon failure,
all the above calls return -1. recv() returns 0 if the connection was closed by the remote
side.
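One detail worth showing in code: send() may transmit fewer bytes than requested, so robust code loops until the whole buffer has gone out. The wrapper below is a sketch and the name send_all is my own; the test exercises it over a POSIX socketpair, which behaves like a connected stream socket.

```c
#include <sys/socket.h>
#include <sys/types.h>

/* send() may transmit fewer than len bytes; loop until the whole
 * buffer has been written (or an error occurs).
 * Returns 0 on success, -1 on error. */
int send_all(int sd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sd, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;   /* real code would also examine errno */
        sent += (size_t)n;
    }
    return 0;
}
```

For connectionless (UDP) sockets the same idea applies with sendto()/recvfrom(), passing the peer address on every call.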
7.4.7 The close() system call

The close() system call is used to close the connection. In some cases, any remaining
data that is queued is sent out before the close() is executed. The close() system call
prevents further reads or writes to the socket. close() syntax:

int result = close (int sd);

Here, sd is the socket file descriptor returned by the socket() system call before.

7.4.8 The shutdown() system call:

The shutdown() system call is used to disable sends or receives on a socket.


shutdown() syntax:
int result = shutdown (int sd, int how);

Here, sd is the socket file descriptor returned by the socket() system call before. The
parameter how determines how the shutdown is achieved. If the value of the parameter
how is 0, further receives on the socket will be disallowed. If how is 1, further sends are
disallowed, and finally, if how is 2, both sends and receives on the socket are
disallowed. Remember that the shutdown() function does not close the socket. The
socket is closed and all the associated resources are freed only after a close() system
call. Upon success, shutdown() returns 0. In case of error, shutdown() returns -1.
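The effect of the how parameter can be sketched with a connected socket pair (half_close is a hypothetical helper name; on POSIX systems the constant SHUT_WR equals 1).

```c
#include <sys/socket.h>

/* Disable further sends on sd.  The peer's next recv() will return 0
 * (end-of-stream) once all queued data has been read.
 * how: 0 = no more receives, 1 = no more sends, 2 = both. */
int half_close(int sd)
{
    return shutdown(sd, 1);   /* 1 == SHUT_WR on POSIX systems */
}
```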

7.5 Byte ordering, and byte ordering routines

When data is transmitted on networks, the byte ordering of data becomes an issue.
There are predominantly two kinds of byte orderings - Network Byte Order (Big- Endian
byte order) and Host Byte Order (Little-Endian byte order). The network byte order has
the most significant byte first, while the host byte order has the least significant byte
first. Different processor architectures use different kinds of byte orderings. Data
transmitted on a network is sent in the network byte order. Hence, because of
disparities among different machines, when data needs to be transmitted over a
network, we need to change the byte ordering to the network byte order. The following
routines help in changing the byte order of the data:

u_short result = ntohs(u_short netshort);

u_short result = htons(u_short hostshort);

u_long result = ntohl(u_long netlong);

u_long result = htonl(u_long hostlong);

The hton* routines convert host byte order to network byte order. The ntoh* routines do the opposite.
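A short sketch of what "most significant byte first" means in memory; write_be16 is an illustrative helper of my own, and htons() should produce the same byte layout regardless of the host's endianness.

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Store a 16-bit value in network (big-endian) byte order:
 * most significant byte first. */
void write_be16(uint8_t out[2], uint16_t v)
{
    out[0] = (uint8_t)(v >> 8);    /* most significant byte  */
    out[1] = (uint8_t)(v & 0xff);  /* least significant byte */
}
```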

7.6 Important structs

This section has definitions for the structs used in the socket system calls.
struct sockaddr {
    unsigned short sa_family;   // address family, AF_xxx
    char sa_data[14];           // 14 bytes of protocol address
};

The sockaddr structure holds the socket address information for all types of sockets. The sa_family field can take many address-family values. The Internet address family is denoted by AF_INET, and it encompasses the most popular protocols we use (TCP/UDP); the parallel sockaddr structure specific to Internet addresses is called sockaddr_in. Its fields are self-explanatory. The sin_zero field serves as padding and is typically set to all zeroes using memset().

struct sockaddr_in {
    short int sin_family;           // address family, AF_INET
    unsigned short int sin_port;    // port number (network byte order)
    struct in_addr sin_addr;        // IP address
    unsigned char sin_zero[8];      // padding, set to all zeroes
};
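A typical way to fill this structure, as a sketch: memset() zeroes the sin_zero padding as the text recommends, and htons()/htonl() put the port and address into network byte order. The port number in the test is an arbitrary example.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>

/* Build an Internet socket address for the given port, accepting
 * connections on all local interfaces (INADDR_ANY). */
struct sockaddr_in make_inet_addr(unsigned short port)
{
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));          /* zeroes sin_zero too */
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);           /* network byte order! */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    return sa;
}
```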
8. PROJECT: DC MOTOR CONTROL USING PWM

8.1 Pulse Width Modulation (PWM) Basics

There are many forms of modulation used for communicating information. When a high-frequency signal has its amplitude varied in response to a lower-frequency signal we have AM (amplitude modulation). When the signal frequency is varied in response to the modulating signal we have FM (frequency modulation). These schemes are used for radio because the high-frequency carrier is needed for efficient radiation of the signal. When communication by pulses was introduced, the amplitude, frequency and pulse width became possible modulation options. In many power electronic converters, where the output voltage can take only one of two values, the only option is modulation of the average conduction time.

Fig. Unmodulated, sine modulated pulses

1. Linear Modulation

The simplest modulation to interpret is where the average ON time of the pulses varies proportionally with the modulating signal. The advantage of linear processing for this application lies in the ease of demodulation: the modulating signal can be recovered from the PWM by low-pass filtering. For a single low-frequency sine wave modulating the width of a fixed-frequency (fs) pulse train, the spectrum is as shown in Fig 1. Clearly a low-pass filter can extract the modulating component fm.
Fig. Spectra of PWM

2. Sawtooth PWM

The simplest analog way of generating fixed-frequency PWM is by comparison with a linear-slope waveform such as a sawtooth. As seen in Fig 1.2, the output signal goes high when the sine wave is higher than the sawtooth. This is implemented using a comparator whose output goes to logic HIGH when one input is greater than the other. Other signals with straight edges can also be used as the carrier: a rising-ramp carrier will generate PWM with trailing-edge modulation.

Fig. 1.3 Sine Sawtooth PWM

It is easier to have an integrator with a reset to generate the ramp in Fig 1.4, but the modulation is inferior to double-edge modulation.
Fig.Trailing Edge Modulation

3. Regular Sampled PWM

The scheme illustrated above generates a switching edge at the instant the sine wave crosses the triangle. This is easy to implement using analog electronics, but it suffers the imprecision and drift of all analog computation, and it has difficulty generating clean edges when the signal carries even a small amount of added noise. Many modulators are now implemented digitally, but there is difficulty in computing the precise intercept of the modulating wave and the carrier. Regular sampled PWM makes the width of the pulse proportional to the value of the modulating signal at the beginning of the carrier period. In Fig 1.5 the intercepts of the sample values with the triangle determine the edges of the pulses. For a sawtooth wave of frequency fs the samples are at 2fs.

Fig. Regular Sampled PWM
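The rule "pulse width proportional to the sample taken at the start of the carrier period" can be sketched as a duty-cycle computation; the function name and the mapping of the sample range onto [0, 1] are my own assumptions.

```c
/* Regular sampled PWM: the pulse width for one carrier period is set
 * by the modulating signal sampled at the start of that period.
 * sample ranges over [-peak, +peak]; the result is the ON time as a
 * fraction of the carrier period, clamped when the modulator
 * saturates. */
double sampled_duty(double sample, double peak)
{
    double d = (sample + peak) / (2.0 * peak); /* [-peak,peak] -> [0,1] */
    if (d < 0.0) d = 0.0;                      /* saturated low  */
    if (d > 1.0) d = 1.0;                      /* saturated high */
    return d;
}
```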


There are many ways to generate a pulse-width-modulated signal other than fixed-frequency sine-sawtooth comparison. For three-phase systems, the modulation of a voltage source inverter can generate a PWM signal for each phase leg by comparing the desired output voltage waveform for each phase with the same sawtooth. One alternative, which is easier to implement in a computer and gives a larger modulation depth, is space vector modulation.

4. Modulation Depth

Fig. 1.6 Saturated Pulse Width Modulation

For a single-phase inverter modulated by a sine-sawtooth comparison, if we compare a sine wave spanning -2 to +2 with a triangle spanning -1 to +1, the linear relation between the input signal and the average output signal will be lost. Once the sine wave exceeds the peak of the triangle the pulses will be of maximum width and the modulation saturates. The modulation depth is the ratio of the current signal to the case when saturation is just starting. Thus a sine wave of peak 1.2 compared with a triangle of peak 2.0 has a modulation depth of m = 0.6.
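The definition can be written as a one-liner (a sketch; the names are my own), with the convention that a depth greater than 1 means the modulator is saturating.

```c
/* Modulation depth m: ratio of the modulating-signal peak to the
 * carrier peak at which saturation just begins.  m > 1 means the
 * modulator saturates and the pulses clip at maximum width. */
double modulation_depth(double signal_peak, double carrier_peak)
{
    return signal_peak / carrier_peak;
}
```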
8.2 DPDT RELAY

Relays are switches that can be turned on and off using electricity. They are electromagnetic switches, which means that an electromagnet is responsible for moving the switch. A relay can switch a voltage +V as large as you need (using the appropriate relay, of course) while being controlled by a supply +Vcc that can be small (usually 5 V). This means that with a computer (working on 5 V, drawing a very small current of about 0.05 A) you can control a motor of 9 V, 1 A, like the Lego motors.

The previous diagram illustrates the different parts of a DPDT relay. They all have 8 pins: two on one side and the remaining 6 on the other side. Normally, a supply voltage is applied to the pin labeled +V, and the opposite pin is connected to ground (not pictured here). This current comes out through pin a, can be sent to a load (such as a motor), and must be returned to c so that it can reach ground. In this situation, pins b and d are passive. If current is applied to +Vcc and the opposite pin is connected to ground, things are reversed: a and c become passive, while b and d carry the +V power. Therefore, with this kind of relay, you can choose to power either one appliance or another.

For a test: using a power source, connect one of the pins on one side of the relay to the positive terminal, and the facing pin to the other terminal. Immediately, you should hear an audible click: the relay has just switched from a-c to b-d, which means it is working properly. If not, reverse the connections.
8.2.1 Controlling the position of the relay

The first thing is to be able to trigger the relay using the computer. You can use pin number 2. +Vcc must be a power source compatible with the relay specification. If things work fine, you should be able to hear the click when the relay turns on.

8.2.2 Making an H-bridge circuit with a DPDT relay

The DPDT relay is in fact a very convenient way of implementing an H-bridge. When the relay is not energized, a and c conduct current; therefore +V makes the motor turn clockwise. If the relay is activated, b and d conduct, and the motor turns counterclockwise. The only way current from b can return to ground is to pass through the load (the motor). The nice thing is that (a and d) or (b and c) cannot both be on at the same time.

8.2.3 The final schematic

In order to realize an electronic project, you need to go through several steps. First, make a schematic of the circuit, that is, an abstract representation of the pieces and how they are connected together (the diagrams above are all schematics). Next, you need to decide how the pieces will actually be connected physically: you have to think both of the component placement on one side of the circuit board and of the traces (the connections) on the other side of the board. Finally, you have to actually make the circuit. This is certainly the most difficult part, the one that requires the largest amount of patience.

8.2.4 Driver IC Used For Driving The Relay:

The output of the microcontroller, being in the range of a few mA, is insufficient to drive the relay directly. Hence the output of the microcontroller has to be buffered to make the relay work. The driver IC used is the ULN2003A. The ULN2003 is a high-voltage, high-current Darlington array containing seven open-collector Darlington pairs with common emitters. Each channel is rated at 500 mA and can withstand peak currents of 600 mA. This versatile device is useful for driving a wide range of loads including solenoids, relays, LED displays, filament lamps, thermal print heads and high-power buffers. The ULN2003A is supplied in a 16-pin plastic DIP package with a copper lead frame to reduce thermal resistance.
8.3 Principle of Working of Dc Motor:

An Electric motor is a machine, which converts electric energy into mechanical energy.
Its action is based on the principle that when a current-carrying conductor is placed in a
magnetic field, it experiences a mechanical force whose direction is given by Fleming's
Left-hand Rule and whose magnitude is given by F = BIl newton.

As regards construction, there is no basic difference between a d.c. generator and a d.c. motor. In fact, the same d.c. machine can be used interchangeably as a generator or as a motor. Like generators, d.c. motors are shunt-wound, series-wound or compound-wound.

The figure shown below is a part of multi polar d.c motor. When its field magnets are
excited and its armature conductors are supplied with current from the supply mains,
they experience a force tending to rotate the armature. Armature conductors under N-
pole are assumed to carry current downwards (crosses) and those under S-poles, to
carry current upwards (dots). By applying Fleming's Left-hand Rule, the direction of the
force on each conductor can be found. It is shown by small arrows placed above each
conductor. It will be seen that each conductor experiences a force F which tends to
rotate the armature in the anticlockwise direction. These forces collectively produce a
driving torque, which sets the armature rotating.

By reversing current in each conductor as it passes from one pole to another, the
commutator helps to develop a continuous and unidirectional torque.
8.3.1 Load Test Of Dc Shunt Motor

In order to determine the maximum torque and speed of a shunt motor, the load test is performed. The motor is mounted and fixed onto a rigid base (wooden plank). A long thread tied to the shaft is suspended, and its other end is tied to a one-end-fixed spring balance. For different voltages the force is calculated and the corresponding current drawn is noted. The reading of the spring balance gives the weight, and the weight multiplied by the acceleration due to gravity gives the force. The radius of the shaft is then found and the torque calculated as follows:

Torque, T = Weight × 9.8 × r newton-metre (N-m),

where r = radius of the shaft (3 mm)

8.4 Characteristics Of Dc Shunt Motor

8.4.1 Voltage – Speed Characteristics:

The motor is rated for 150 rpm @ 12V. The plot shows the variation of speed of the
motor with input voltage at no load condition. It is seen that it is a straight line varying
linearly with the input voltage.

Voltage (v) Speed (rpm)

5 70
6 75
7 90
8 100
9 115
10 128
11 140
12 150

Fig. Voltage-speed graph: speed (rpm) vs voltage (V)
8.4.2 Torque – Current Characteristics:

Assuming the flux Φ to be practically constant, we find that Torque ∝ Armature Current. Hence the electrical characteristic is as shown in the plot: a straight line through the origin.
Current (A) Torque (N-m)
0.5 .1176
0.55 .1176
0.6 .1176
0.68 .1323
0.77 .147
0.85 .1764
0.91 .2058
1.01 .2352

Fig. Current-torque graph: torque (N-m) vs current (A)

8.4.3 Gear Arrangement:
Examination itself reveals the transfer functions. The (right) driving gear, with 20 teeth, 1 inch radius and 2 ounce-inches of torque, rotates counterclockwise at 200 RPM and exerts a 2 ounce force on the driven gear, which has 40 teeth, for a ratio of 2:1. If the teeth are of equal size to permit proper mesh, the driven gear must have a 2 inch radius. During one revolution of the driver, 20 teeth pass 20 teeth on the driven gear, resulting in only one half revolution clockwise, which means the driver must turn twice for the driven gear to turn once. This produces 100 RPM, a 2:1 speed reduction.
Since the driver output force is at a radius of 1 inch:
F = T / R = 2 / 1 = 2 oz.

This same force is applied to the driven at a radius of 2 inches:


T = F x R = 2 x 2 = 4 oz-in.

Thus the torque is increased in the ratio 2:1.

The basic rules of gear systems are:


• RPM is divided by the gear ratio.
• TORQUE is multiplied by the gear ratio.
• POWER does not change during the transfer.
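The three rules can be sketched as a small calculator (the names are my own); the test reproduces the worked example above: 200 RPM and 2 oz-in through a 2:1 ratio give 100 RPM and 4 oz-in, with power unchanged.

```c
/* Gear transfer rules from the text: speed is divided by the gear
 * ratio, torque is multiplied by it, and power is unchanged. */
void gear_transfer(double rpm_in, double torque_in, double ratio,
                   double *rpm_out, double *torque_out)
{
    *rpm_out    = rpm_in / ratio;     /* RPM divided by gear ratio  */
    *torque_out = torque_in * ratio;  /* torque multiplied by ratio */
}
```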
8.5 Introduction to the 556 Timer

A popular version is the NE555 and this is suitable in most cases where a 555 timer is
specified. The 556 is a dual version of the 555 housed in a 14-pin package, the two
timers (A and B) share the same power supply pins. The circuit diagrams show a 555,
but they could all be adapted to use one half of a 556.

The circuit symbol for a 556 is a box with the pins arranged to suit the circuit diagram:
for example 555 pin 8 at the top for the +Vs supply, 555 pin 3 output on the right.
Usually just the pin numbers are used and they are not labeled with their function.

The 556 can be used with a supply voltage (Vs) in the range 4.5 to 15V (18V absolute
maximum).

8.5.1 PIN DESCRIPTION

Fig. Pin Diagram

The IC 556 is a dual-timer 14-pin IC as shown in the figure above. Its two sets of six pins (pins 1-6 and pins 8-13) correspond to pins 2-7 of the IC 555. A brief description of each pin follows.

Pin 1 & 13: Discharge. This pin is connected internally to the collector of transistor Q1.
When the output is high Q1 is OFF and acts as an open circuit to external capacitor C
connected across it. On the other hand, when the output is low, Q1 is saturated and
acts as a short circuit, shorting out the external capacitor C to ground.

Pin 2 & 12: Threshold. This is the non-inverting input of comparator 1, which monitors the voltage across the external capacitor. When the voltage at this pin is greater than or equal to the threshold voltage 2/3 VCC, the output of comparator 1 goes high, which in turn switches the output of the timer low.
Pin 3 & 11: Control. An external voltage applied to this terminal changes the threshold
as well as trigger voltage. Thus by imposing a voltage on this pin or by connecting a pot
between this pin and ground, the pulse width of the output waveform can be varied.
When not used, the control pin should be bypassed to ground with a 0.01µF Capacitor
to prevent any noise problems.

Pin 4 & 10: Reset. The timer can be reset (disabled) by applying a negative pulse to this pin. When the reset function is not in use, the reset terminal should be connected to +VCC to avoid any possibility of false triggering.

Pin 5 & 9: Output. There are two ways by which a load can be connected to the output terminal: either between the output pin and ground or between the output pin and the supply voltage +VCC. When the output is low, the load current flows through the load connected between the output pin and +VCC into the output terminal and is called sink current; the current through the grounded load is zero. For this reason the load connected between the output pin and +VCC is called the normally-on load and that connected between the output pin and ground is called the normally-off load. On the other hand, when the output is high, the current through the load connected between the output pin and +VCC is zero, and the output terminal supplies current to the normally-off load. This current is called source current. The maximum value of sink or source current is 200 mA.

Pin 6 & 8: Trigger. The output of the timer depends on the amplitude of the external trigger pulse applied to this pin. The output stays unaffected as long as the voltage at this pin is above 1/3 VCC. When a negative-going pulse takes this pin below 1/3 VCC, comparator 2 changes state, which in turn switches the output of the timer high. The output remains high as long as the trigger terminal is held at a low voltage.

Pin 7: Ground. All voltages are measured with respect to this terminal.

Pin 14: +VCC. The supply voltage of +5V to + 18V is applied to this pin with respect to
ground.

8.5.2 INPUTS OF 556

 Trigger input: when < 1/3 Vs ('active low') this makes the output high (+Vs). It
monitors the discharging of the timing capacitor in an astable circuit. It has a high
input impedance > 2 MΩ.

 Threshold input: when > 2/3 Vs ('active high') this makes the output low (0V). It
monitors the charging of the timing capacitor in astable and monostable circuits.
It has a high input impedance > 10 MΩ.

 Reset input: when less than about 0.7V ('active low') this makes the output low
(0V), overriding other inputs. When not required it should be connected to +Vs. It
has an input impedance of about 10 kΩ.

 Control input: this can be used to adjust the threshold voltage, which is set
internally to 2/3 Vs. Usually this function is not required and the control input is
connected to 0V with a 0.01µF capacitor to eliminate electrical noise. It can be
left unconnected if noise is not a problem.

 The discharge pin is not an input, but it is listed here for convenience. It is
connected to 0V when the timer output is low and is used to discharge the timing
capacitor in astable and monostable circuits.

8.5.3 OUTPUT OF 556

The output of a standard 556 can sink and source up to 200mA. This is more than most
chips and it is sufficient to supply many output transducers directly, including LEDs (with
a resistor in series), low current lamps, piezo transducers, loudspeakers (with a
capacitor in series), relay coils (with diode protection) and some motors (with diode
protection). The output voltage does not quite reach 0V and +Vs, especially if a large
current is flowing.

8.5.4 APPLICATION

• Astable - producing a square wave


• Monostable - producing a single pulse when triggered

8.5.5 ASTABLE OPERATION

If we rearrange the circuit slightly so that both the trigger and threshold inputs are
controlled by the capacitor voltage, we can cause the 555 to trigger itself repeatedly. In
this case, we need two resistors in the capacitor charging path so that one of them can
also be in the capacitor discharge path. This gives us the circuit shown to the left.

Fig. Astable Operation


In this mode, the initial pulse when power is first applied is a bit longer than the others, having a duration of T = 1.1(Ra + Rb) * C.

However, from then on, the capacitor alternately charges and discharges between the
two comparator threshold voltages. When charging, C starts at (1/3)Vcc and charges
towards VCC. However, it is interrupted exactly halfway there, at (2/3)VCC. Therefore, the
charging time,

t1 = 0.693( Ra + Rb ) * C

When the capacitor voltage reaches (2/3)VCC, the discharge transistor is enabled (pin
7), and this point in the circuit becomes grounded. Capacitor C now discharges through
Rb alone. Starting at (2/3)VCC, it discharges towards ground, but again is interrupted
halfway there, at (1/3)VCC. The discharge time,

t2 = 0.693 * Rb * C

The total period of the pulse train is t1 + t 2 = 0.693( Ra + 2 Rb ) * C

The output frequency of this circuit is the inverse of the period:

f = 1.45 / ((Ra + 2 Rb) * C)

Note that the duty cycle of the 555 timer circuit in astable mode cannot reach 50%. On
time must always be longer than off time, because Ra must have a resistance value
greater than zero to prevent the discharge transistor from directly shorting VCC to
ground. Such an action would immediately destroy the 555 IC.
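The astable timing equations above can be collected into a small calculator (a sketch; the function names and the component values in the test are my own, arbitrary choices). With Ra = Rb = 10 kΩ and C = 0.1 µF, t1 ≈ 1.386 ms, t2 ≈ 0.693 ms, and f ≈ 481 Hz, which agrees with the 1.45/((Ra + 2Rb)C) approximation to within about 1%.

```c
/* 555/556 astable timing, per the text:
 *   t1 (charge,    output HIGH) = 0.693 (Ra + Rb) C
 *   t2 (discharge, output LOW)  = 0.693 Rb C
 *   f  = 1 / (t1 + t2)   (approx. 1.45 / ((Ra + 2 Rb) C))
 * Resistances in ohms, capacitance in farads, times in seconds. */
double astable_t_high(double ra, double rb, double c)
{
    return 0.693 * (ra + rb) * c;
}

double astable_t_low(double rb, double c)
{
    return 0.693 * rb * c;
}

double astable_freq(double ra, double rb, double c)
{
    return 1.0 / (astable_t_high(ra, rb, c) + astable_t_low(rb, c));
}
```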

One interesting and very useful feature of the 555 timer in either mode is that the timing
interval for either charge or discharge is independent of the supply voltage, VCC. This is
because the same VCC is used both as the charging voltage and as the basis of the
reference voltages for the two comparators inside the 555. Thus, the timing equations
above depend only on the values for R and C in either operating mode.

In addition, since all three of the internal resistors used to make up the reference
voltage divider are manufactured next to each other on the same chip at the same time,
they are as nearly identical as can be. Therefore, changes in temperature will also have
very little effect on the timing intervals, provided the external components are
temperature stable. A typical commercial 555 timer will show a drift of 50 parts per
million per Centigrade degree of temperature change (50 ppm/°C) and 0.01%/Volt
change in VCC. This is negligible in most practical applications.
8.5.6 MONOSTABLE OPERATION

The 555 timer configured for monostable operation is shown in figure.

Fig. Monostable Operation


A monostable multivibrator is often called a one-shot multivibrator. In monostable mode, the timing interval t is set by a single resistor and capacitor, as shown to the right. Both the threshold input and the discharge transistor (pins 6 & 7) are connected directly to the capacitor, while the trigger input is held at +VCC through a resistor. In the absence of any input, the output at pin 3 remains low and the discharge transistor prevents capacitor C from charging.
When an input pulse arrives, it is capacitively coupled to pin 2, the trigger input. The pulse can be of either polarity; its falling edge will trigger the 555. At this point, the output rises to +VCC and the discharge transistor turns off. Capacitor C charges through R towards +VCC. During this interval, additional pulses received at pin 2 will have no effect on circuit operation.
Time period, T = 1.1RC

The value of 1.1RC isn't exactly precise, of course, but the round off error amounts to
about 0.126%, which is much closer than component tolerances in practical circuits, and
is very easy to use. The values of R and C must be given in Ohms and Farads,
respectively, and the time will be in seconds. You can scale the values as needed and
appropriate for your application, provided you keep proper track of your powers of 10.
For example, if you specify R in megohms and C in microfarads, t will still be in
seconds. But if you specify R in kilohms and C in microfarads, t will be in milliseconds.
It's not difficult to keep track of this, but you must be sure to do it accurately in order to
correctly calculate the component values you need for any given time interval.

The timing interval is completed when the capacitor voltage reaches the +(2/3)VCC upper threshold as monitored at pin 6. When this threshold voltage is reached, the output at pin 3 goes low again, the discharge transistor (pin 7) is turned on, and the capacitor rapidly discharges back to ground once more. The circuit is now ready to be triggered again.
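The monostable timing rule can be sketched as follows (the component values in the test are arbitrary examples). The test also checks the text's round-off claim: 1.1 approximates ln 3 ≈ 1.0986 with an error of about 0.126%.

```c
#include <math.h>

/* Monostable pulse width T = 1.1 R C, with R in ohms and C in
 * farads giving T in seconds.  The factor 1.1 is a rounded-off
 * ln(3) = 1.0986. */
double monostable_period(double r, double c)
{
    return 1.1 * r * c;
}
```

As the text notes, scaling works out automatically: R in kilohms and C in microfarads yields T in milliseconds.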
CIRCUIT DIAGRAM

Fig. Circuit Diagram

Fig. PWM signals of varying duty cycles


As shown in the circuit diagram, all the timing components are placed as per the calculations carried out in the Circuit Design section.

A diode D1 is added in parallel with R5 to improve the duty cycle of the astable multivibrator. D1 bypasses R2 during the discharging part of the cycle so that TOFF depends only on R2 and C1. Hence, the discharging time reduces and the duty cycle improves.

Resistor R4 (22Ω, 2W) serves as a current-limiting resistor. It avoids overheating of transistor T1 by limiting the load current.

Transistor T1 drives the motor. T1 turns ON and OFF according to the output pulses of the monostable oscillator at pin no. 9. As the transistor gets pulses on its base, it turns ON and the motor runs.

A diode D2 acts as a freewheeling diode. As T1 turns ON and OFF at high frequency, energy is stored in the winding of the motor. During the OFF period this energy is dissipated in the form of a circulating current through D2 and the motor winding. If the freewheeling diode is not provided, this energy may damage transistor T1.

The speed can be varied by adjusting VR1, which changes the threshold value to which
capacitor C1 in the monostable circuit is charged. This, in turn, determines its output
pulse width and hence the average voltage applied to the motor.

The position of DPDT switch determines the direction of rotation of motor. By changing
the position of switch, we can make the motor to rotate in forward or reverse direction.

For effective speed control, the ON period of the astable should be equal to the maximum pulse width of the monostable.
