
Online course on Embedded Systems

MODULE -1 (Introduction)

An embedded system is, simply put, the brain of most electronics-based systems: it acquires,
processes, stores and controls data. A few simple electronic circuits can be cleverly designed
in pure hardware, without a microprocessor or microcontroller, but beyond simple passive
operations this is rarely economical. So it is more or less a must to put this so-called silicon
brain, which we engineers call a microcontroller, into every electronics system.

The embedded market is experiencing the best of times, and double-digit growth is expected to
continue for some years to come; it will keep growing as long as semiconductor ICs are used for
data processing. To be blunt, career growth in this field looks assured for the next several
years, though that is only an extrapolation from current industry trends. For India in
particular, where engineers have an extra edge in programming over other regions, the field
should grow strongly. But although programming is a major task in embedded systems, programming
knowledge alone will not get you far. The real challenge is understanding the electronics
hardware and the interface hardware, which can range from an automobile engine or a heart
patient's ECG to a motor in a satellite.

The objective of this free online course is to train you in embedded system design to
entry level, and to prepare you to dive deeper into the embedded world if you find it easy and
interesting.

We recommend that readers/attendees of this course hold a Bachelor of Engineering/Technology or
a Bachelor of Science in Electronics or a closely allied branch. Otherwise, if you are strong in
the theory of analog circuits, digital circuits and microprocessors, you can also grasp this
content easily. One thing is a must: you should love C programming.

What is an Embedded System?

As an electronics engineer, you have probably seen a desktop PC's motherboard; it is itself an
embedded system. It has a microprocessor (Pentium or Athlon), memory (DRAM DIMM modules and
on-board SRAM), I/O interfaces (keyboard, mouse, etc.) and peripheral communication interfaces
(PCI, USB port, etc.). The PC's architecture is designed for applications such as net surfing,
spreadsheets, word processing, presentations, and you know the rest! Now say you want to use a
computer to monitor the engine of your bike or car. Can you imagine using a big PC for that
purpose? It is utterly impractical: the inputs and outputs are totally different. This is where
customizing your own microprocessor/microcontroller, memory, display, I/O and peripheral
interfaces, and even the operating system, comes in. This field of designing application-specific
computer systems is called embedded systems development. If the response of such a computer
system must be real-time and highly reliable, it is called a real-time embedded system.
Real-time means, for example, a control system where the speed of a motor must be adjusted the
moment some parameter deviates from its set value: no waiting, no hanging.
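That motor example can be sketched as a tiny control step in C. This is only an illustration, not code for any real board: the set-point, the gain and the duty-cycle numbers are invented, and on real hardware the measured speed would come from a sensor while the returned duty cycle would be written to a PWM register.

```c
/* Hypothetical set-point: the speed the motor should hold. */
#define TARGET_RPM 3000

/* One step of a very simplified proportional controller.  The moment the
   measured speed deviates from the set-point, the output is corrected:
   no waiting, no hanging. */
int control_output(int current_duty, int measured_rpm)
{
    int error = TARGET_RPM - measured_rpm;   /* deviation from set-point */
    return current_duty + error / 100;       /* crude proportional gain  */
}
```

For example, with the motor running at 2900 rpm against the 3000 rpm set-point, control_output(50, 2900) raises a 50% duty cycle to 51%; at exactly 3000 rpm the duty cycle is left unchanged.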

To define it in a sentence: an embedded system is a special-purpose computer system/board that
packs all the required devices, such as processor, memory, interfaces and control logic, into a
single package or board to perform a specific set of application tasks.

Figure 1: Sample block diagram of a typical embedded system.

Figure 2: PowerPC based embedded board.

The most common examples are,

Cell-phones
Automatic Teller Machine
The Digital Interfaced Gasoline Station
Airborne Flight Control System
Automotive Engine Health Monitoring System
Home Security Systems
Modern Air-conditioners
Washing Machines
Medical Equipment
DVD Players
Printers
The list goes on: wherever a microcontroller is used, it is an embedded computer.

The leading applications of the embedded market are,

Communication
Computer Peripherals
Industrial Control and Automotive
Consumer Electronics
Test and Measurement
Medical
Military/Aerospace

This is only a list of the popular applications. Embedded technology is now getting into many
more interesting areas such as RFID and agriculture. Each application needs some domain
knowledge of its interface hardware. Say the project is to develop a coffee vending machine
controller: the embedded programmer/designer has to know how the valves dispensing hot water and
milk operate, and their technical specifications.
The user interface design also differs for each application; some applications may need no
graphic interface at all, while others may need an audio interface.

Embedded systems - Learning curve


Development for embedded systems differs from common practice in many ways. For developers new
to the embedded systems world, there is a learning curve in understanding where conventional
practices are no longer valid. To be an embedded systems developer, you need to know a great
deal about the hardware on which your software will execute. Often embedded systems are
connected to some sort of control system (activating a switch, rotating a motor), and the
developer needs knowledge of that system as well. If the CPU and/or operating system differ on
the target embedded platform, you have to do cross-platform development, which has its own
issues. Testing techniques differ too, because most embedded systems have no monitor screen on
which error messages or test results can be displayed.

All of these issues make embedded systems development much more complicated than writing
a program on a UNIX machine (or a Windows PC) and then executing it.


MODULE -2 (Microcontroller and programming)

Microprocessor, Microcontroller and System on Chip

Microprocessor:
The microprocessor is the Central Processing Unit (CPU) of an embedded system. It performs arithmetic and logic
operations on digital binary data. Very old embedded circuits/boards were generally built from a separate
microprocessor (such as the 8085), input interface, output interface, memory, clock and timing devices, power
supply devices, and analog/linear devices.

Microcontroller:
In the early days of embedded systems, engineers built them from a separate set of devices connected on a
printed circuit board. With many integrated circuits and other components on board, the complexity of
manufacturing and re-engineering was very high. Advances in technology then enabled processor manufacturers to
add one device after another into a single IC: it started with I/O interfaces and memory, and today we see many
more functions inside the processor chip. Microprocessors with all this additional support built in are called
microcontrollers.

To define it: a microcontroller is an integrated circuit with a CPU, memory, I/O interfaces and possibly other
logic and analog functions on a single chip.

System On Chip (SOC)

Even though an MCU holds most of the functions, it still lacks a few special analog and application-specific
functions. The idea of putting the entire system (all the semiconductor IC functions) on a single chip is called
System on Chip. On such a printed circuit board, you see a single IC accompanied by a few discrete and passive
components.

If you look at the recent microcontrollers released in the market, most of them are very close to System on
Chip, and the SOC concept is doing well commercially.

SOC is a common-sense solution: why build a complex board when everything can go into a single IC? SOC saves
board space, eases manufacturing, and scores higher in reliability than non-SOC solutions. Its drawbacks are
that the manufacturer profits from the product only if it is used in the millions, and that it takes away some
design flexibility from the design engineer.

Processor Architecture

The two most popular architectures in the embedded world are Harvard and von Neumann. Read this separate
article describing the differences between the two:

The von Neumann vs. Harvard processor architecture.

The popular microcontrollers and companies

There are plenty of microcontroller manufacturers around the world. In India we do not have a local IC
manufacturer who can supply microcontroller chips, but all the major microcontroller vendors in the world have
support offices in our metros. To learn about the latest trends in microcontrollers, read this report:

Market and technology trends of microcontrollers in the year 2006

Programming: machine language, assembly language, and C programming

Throughout this course the books below will be suggested for further reference. We will mention the page number
and book title wherever required in the course material. Please buy them, or refer to them in any nearby
library.
The books are:
1) Embedded Systems Building Blocks, by Jean LaBrosse
2) An Embedded Software Primer, by David Simon
3) The Art of Designing Embedded Systems, by Jack Ganssle
4) Fundamentals of Embedded Software, by Daniel Lewis

Assembly and machine language

Now let's start embedding! We will begin with programming, and in the coming modules we will cover the
functional blocks available on a microcontroller in detail.

The microcontroller is the one that decides what needs to be done, what need not be done, and how it is to be
done. The basic rule to keep in mind while "instructing" a microcontroller is this: a microcontroller is like a
very intelligent child. The child (controller) does exactly what it is told to do, nothing more and nothing
less. If the instruction is ambiguous, the microcontroller's behaviour will go haywire.

Example: in a bread toaster, the sequence of operations is:

a) Turn on the heater.
b) Check whether the bread is properly toasted (by checking the temperature or a set time).
c) If the bread is not yet completely toasted, go back to step (b).
d) Stop the heater once the bread is toasted properly.
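The four steps above can be sketched directly as code. This is only an illustration: the functions turn_on_heater, stop_heater and bread_is_toasted are hypothetical stubs, simulated here with plain variables; on a real toaster they would drive a relay and read a thermistor or a timer.

```c
/* Hypothetical hardware stubs, simulated with plain variables. */
static int heater_on = 0;
static int checks = 0;
static void turn_on_heater(void)   { heater_on = 1; }
static void stop_heater(void)      { heater_on = 0; }
static int  bread_is_toasted(void) { return ++checks >= 5; } /* "done" after 5 checks */

/* Steps (a)-(d) from the text, expressed as code. */
void toast(void)
{
    turn_on_heater();               /* (a) turn on the heater            */
    while (!bread_is_toasted())     /* (b)+(c) keep checking until done  */
        ;
    stop_heater();                  /* (d) stop the heater               */
}
```

Notice that the loop in the middle is exactly the "go back to step (b)" of the sequence: the controller does nothing but check, over and over, until the condition is met.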

Now, how do you convey this sequence to the microcontroller inside a bread toaster? You must tell it in a way it
understands. It is like speaking to a person whose language you cannot speak: the obvious option is to find a
translator who knows both languages and converts yours into the other's.

The language that all microcontrollers understand is called machine language. Here are a few lines of machine
language for Freescale's 6812 microcontroller:

CF0C00180B8000024D008018030FA008009600847FB1F033260EFE080009
7E080026EE4C008020E918030FA008004D008020DE23F000

Does this jumble of hexadecimal codes dismay you? It probably should! But don't be disheartened by this magic
series of numbers. Just be clear that machine language is the native language of all microcontrollers, and you
must "instruct" them only in their language. This machine language also differs for each microcontroller family
(8051, PIC, ARM, etc.).

Computer scientists and chip designers noted this problem at a very early stage and came up with a solution:
for each operation the microcontroller can execute, they assigned an English-like word so that the
programmer/designer can instruct it easily. This is called assembly language.

Below is the table of assembly language instructions for the popular PIC16xx microcontroller; in total it has
only 35 instructions. In the table, the English-like words in the first column are assembly language
instructions, and the binary codes in the fourth column are the machine language instructions.

With this background, let us do a small exercise: adding two numbers, say 3 and 4.

Again, remember this:

[A microcontroller is like a very intelligent child. The child (microcontroller) does exactly what it is told
to do, nothing more and nothing less. If the instruction is ambiguous, its behaviour will go haywire.]

MOVLW 3 ; Move the value 3 into register W (the working register).

What is a register?
A register, in the context of microcontrollers, is a small temporary storage space that can hold a value.
Generally every microcontroller has several registers, and some have special-purpose capabilities. The register
we are using here is called the "W" or working register.

ADDLW 4 ; The working register's contents are now added to the value 4.

So the result, 7, is now in the working register, and the programmer can use it in any way he or she wants
(display it, store it, use it in further arithmetic operations, and so on).

Basically, assembly language programming is all about knowing which instructions are available on a particular
microcontroller and writing the program (code) to the requirement using that instruction set. So by now we have
a little knowledge of how to speak to microcontrollers in their own language.

Now a question arises. We programmers use assembly language to "instruct" the microcontroller, but as already
mentioned, microcontrollers understand only machine language, their native tongue. How does the assembly
language become machine language?

Here comes the "assembler" (your language-translator friend): the assembler is a program that converts assembly
instructions into machine language. It is like the translator used when two people communicate in completely
different languages.

The C language - very essential

Now we know some basics of machine and assembly language, so we can instruct microcontrollers in their own
language.

Now let us C!

Assembly language programming works only for simple embedded applications. As you develop bigger and more
complex applications, the assembly code becomes very difficult to manage, and the time and effort required to
program and debug (fix errors) rise steeply with the total code size.

Assembly can still be used for simple programs, or if you simply wish to experiment; otherwise C is the only
practical and efficient solution. A rule of thumb for deciding whether a program is simple or complex: less
than about 1000 lines of assembly code is simple; more than 1000 lines, better call it complex.

Here are a few examples from the assembly instruction sets of some popular microcontrollers.

680x0-based microcontrollers (the 680x0 is a popular series of 16/32-bit microprocessors/microcontrollers from
Motorola):
BRA - Branch Always
JSR - Jump to Subroutine
DBcc - Test Condition, Decrement, and Branch
MOVEM - Move Multiple Registers

Some PIC assembly instructions:

DECFSZ f,d - Decrement f, Skip if 0
BTFSS f,b - Bit Test f, Skip if Set
IORWF f,d - Inclusive OR W with f
RETLW k - Return with Literal in W

Some 8051 instructions:

DJNZ - Decrement Register and Jump if Not Zero
JBC - Jump if Bit Set and Clear Bit
LCALL - Long Call
LJMP - Long Jump
XCHD - Exchange Digits

What do you think of these instructions? They look like arbitrary combinations of letters, don't they? One
immediately feels intimidated seeing them in code. However many comments and explanations are provided, it is
very difficult for a "new" person to understand the logic. (And "new" is relative: look at assembly code you
yourself wrote six months ago, and you will be the "new guy". Early in my career, I (the author) became the
"new guy" several times and abandoned the difficult portions of my own code, simply rewriting the module, which
saved time and effort.) So the first problem is readability: no amount of clarity in the comments and
explanations makes a newcomer comfortable with the code. Now imagine a complex embedded application written in
assembly by multiple developers. It is simply hell.

The next problem is compatibility. Assume that, with great difficulty, the embedded application has been
developed in assembly language and is working fairly well. Suddenly the market scenario changes, and instead of
microcontroller X (the one you used), microcontroller Y becomes cheaper (we have seen cases where the
microcontroller, and its associated hardware design, changed overnight because another microcontroller was
available for 10 cents less). Now there is no quick way to complete the project, because you need to:
--> completely unlearn the assembly language of microcontroller X and learn that of microcontroller Y;
--> redesign the logic flow and implement the code in the new assembly language;
--> test the entire setup again.

Here comes the silver bullet: the C language. C is universally known, and any "new guy" can learn its basics in
a couple of weeks and understand the design and flow. Also, if the hardware (controller) is changed or
redesigned, all you need to do is recompile your program for the new microcontroller. Life becomes very easy.
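One common way to make such a recompile painless is to confine every chip-specific detail to a few definitions, so the application logic never mentions a hardware address. The sketch below illustrates the idea; the macro names and addresses are invented for this example, and the "register" is mapped onto a plain variable so the sketch can run anywhere (on real hardware it would be a volatile pointer to the actual address).

```c
#include <stdint.h>

/* All chip-specific details live in one place.  Moving from
   "microcontroller X" to "microcontroller Y" means editing only these
   lines.  The addresses below are invented for illustration. */
#ifdef CHIP_X
#define LED_PORT_ADDRESS 0x40001000u
#else /* CHIP_Y */
#define LED_PORT_ADDRESS 0xB7A04000u
#endif

/* On real hardware LED_PORT would be
       (*(volatile uint8_t *)LED_PORT_ADDRESS)
   For this runnable sketch we map it onto a plain variable instead. */
static uint8_t fake_port;
#define LED_PORT fake_port

/* The application code below never mentions an address, so it
   recompiles unchanged for the new chip. */
void led_on(void)  { LED_PORT = 0x01; }
void led_off(void) { LED_PORT = 0x00; }
```

Porting then reduces to updating the address block (or a single header) and recompiling, exactly the workflow described above.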

Again, let us remember our postulate: microcontrollers understand only machine language, their native tongue,
and we must "instruct" them only in that language. So how does C code become machine language?

Here comes the "compiler": a compiler is a program that converts C language into machine language. It is like
another, higher-level translator.

But bear in mind that assembly language (machine language) gives the fastest and most compact code. Assembly is
basically used for two things.

The first is hardware access. Writing routines that interact with the hardware can be easier and cleaner in
assembly than the equivalent operation in C. It is not difficult to export assembly routines so that they are
callable from C or another language, so you get precise control over the hardware without writing your entire
program in assembly.

The other thing assembly is good for is optimizing certain parts of a program. If you have an extremely
time-critical routine that is called a lot, it makes sense to go through it with a fine-toothed comb and
squeeze every possible cycle out of it. But weigh the speed gain against the time spent optimizing: if you
spend three hours optimizing a routine and gain only 2 microseconds, the routine must be called billions of
times to make it worthwhile. In most cases it is simply not worth the effort. Embedded applications and DSP are
areas where hand optimization can make a significant difference, but unless you are doing extreme number
crunching, it is probably not worth it on a PC.

Other than these two uses, assembly language does not play much of a role in embedded systems. There are
extreme cases where complete Windows programs have been written in assembly; this only shows how much
complexity one person can handle, and remember that the next (other) person usually cannot maintain it.

OK, now we are a little way into embedded systems. At this point it is ideal to have some hardware, an
assembler and a compiler to play with. We will describe the example hardware we will be using (it would be
better if you have it and try the coming sections and examples yourself; otherwise it is like hearing a nice
story and forgetting everything).

Kits to buy to practice this course:

From this module onwards our teaching gets more practical. If you own a personal computer at home, you can set
up your own lab for this course; you just need to do a little shopping to establish an embedded lab.

You need a microcontroller development kit and support material. We have decided to tailor this course to ARM7
based microcontrollers. The kit we are using is the AME-51 Lite (ML67Q4050) from OKI Semiconductor.

Here are brief specs of the kit:

Kit Name: AME-51 Lite, the kit consists of,

AME-51 Lite CPU Board with processor ML67Q4050 (ARM 7)


RS232 Serial Cable
OKI AME-51 Lite CD (GNU Compiler)
Quick Start Guide

The cost of the kit is within Rs. 6000 (inclusive of taxes).

The contact details to buy this kit are,

Contact person: Amit Agarwal


OKI Semiconductor Singapore Pte. Ltd.
906 Prestige Meridian -1
29 MG Road Bangalore
Ph: 91 -80- 41530990/91/92
Mobile: 91- 99001-59714

However, you can continue with this course without buying the kit; you lose nothing except the practical
exposure. If you wish to write some sample code yourself (after reading a few more modules), send us your
code/program to get it validated. We may publish and reward good programs (both C and assembly).

Also, you can buy any other ARM7 kit and adapt this course material to it.

Our email: editor@eeherald.com

MODULE -3 (Development kit and installation)

As already mentioned in Module-2, we are using the AME-51 Lite microcontroller board in this
course. It is based on the 32-bit ML67Q4051 ARM processor from OKI Semiconductor.

The items bundled with this kit are,


AME-51 Lite evaluation board with ML67Q4051 MCU on board
Serial RS232 cable - 9-pin male/female
OKI AME-51 Lite CD
Quick Start Guide

The kit does not include a 5V DC power supply; it has to be purchased separately from any
electronics shop. The ratings of this power supply are:
Output voltage = 5 to 7.5V unregulated DC

Current rating = 1 A

with a 2.1 mm power plug (centre pin positive).

The components on the AME-51 Lite evaluation board are:

MCU: ML67Q4051
Oscillator: 32.768 MHz (main clock); 32.768 kHz (sub-clock)
SRAM: 1 MB (256K x 32-bit)
Serial ports: UART0 (J1, DB9 female); UART1 (P1, DB9 male)
JTAG interface: CONN1, 20-pin header (10 x 2 dual row)
Power supply: 3.3 V and 2.5 V regulated on-board supplies
Power connector: J2, connects an optional external 5-to-9 VDC, 400 mA supply
Power indicator: D6, red LED
DIP switch: SW1, 8-position, configures the MCU operating modes
Pushbutton switches: SW2 EFIQ; SW3 EXIRQ1; SW4 RESET
LEDs: D2 green; D4 yellow; D5 red
LED display: LED1, 7-segment numeric
Inter-board connectors: CONN2, CONN3, CONN4, CONN5

Here is a picture of the board.

1. The white cable above the number '1' is the RS-232 cable.

2. The black plug below the number '2' is the power plug.

Kit installation
Please follow the steps below to connect the board to your PC.

Place the board in a convenient spot next to your PC.
Plug the 5V DC adapter into the CPU board. Ensure that the power plug's centre pin is positive
and 2.1 mm in size, and that the adapter's current rating is 1 A.
Connect the provided serial cable from your PC's serial port to the board's serial port UART0.
An 8-position DIP switch tells the board what to do: the board can enter four operating modes
through four different configurations of the switch.
Here is the switch-position table for the four modes.

SW1       Serial Flash  Stand-alone  JTAG debug  SRAM debug
          Program
FWJ       OFF           OFF          OFF         OFF
ROMSEL    X             ON           ON          ON
EXBUSE    X             ON           ON          ON
EXIROME   OFF           OFF          OFF         OFF
BOOT1     OFF           OFF          ON          ON
BOOT0     ON            OFF          OFF         OFF
BOOTCLK   X             X            X           X
JTAGE     X             X            ON          X

X - Don't care (can be either OFF or ON)

The default switch position is stand-alone mode. If the board is not in stand-alone mode, set it to stand-alone mode.

Installation of the software (compiler):

Load the CD provided with the kit into your PC's CD-ROM drive, look for ame51setup.exe, and
double-click it to install. Follow the easy instructions and complete the installation. You must
install it on the C: drive to avoid the complexity of changing root settings in the makefiles.

Unzip the file ttermp23 and extract it to a folder. Among the unzipped files, click setup.exe to
install the Tera Term terminal emulator software. Follow the simple instructions and complete
the installation.

This software works only on the following operating systems:

Windows XP Professional
Windows 2000 with Service Pack 1 installed
Windows 98 Second Edition

At this stage both the hardware and software installation are complete.

PS: We have chosen an ARM-based kit over 8051 and PIC because of ARM's rising popularity and
growing importance.

MODULE -4 (Our first embedded program)

OK, let us now start with "real" hardware (the board) and real software.
Make sure you have the hardware board and the necessary software installed on your PC (if you
have purchased the OKI development board), and that you have read the various documents supplied
with it.

Reading Module-3 is good enough to install the hardware and software. If you need further
details, search for AME-51 Lite on www.okisemi.com for the user manual of this kit, or click on
this link to download it from our website.

The first thing you need is an "editor": a program with which you can view and alter source
code. Plenty of editors are available, from plain Notepad to the highly sophisticated editors
that come as part of an IDE (Integrated Development Environment). To start with, let us use an
editor called "Notepad++". It is freeware (no need to pay any money for it), and you can
download it from the following link. If you cannot find this freeware, the ordinary Notepad of
any Windows operating system can be used.

http://sourceforge.net/project/showfiles.php?group_id=95717&package_id=102072

Once you have installed the development environment provided by OKI, you will have an
"ame51gnu" directory (folder) on the C:\ drive. You can browse the various folders under
"C:\ame51gnu\" to see how the source code for the examples is organized.

The 674051 directory
There are at least four subdirectories within the 674051 directory. The table below explains
each of them.

Folder name             Explanation
COMMON                  Common assembler and C sources
NewProj                 Template project directory
Hello                   "Hello world" sample program
Bootimage               SRAM-downloaded binary
LED                     LED sample program
TestLED                 Created for this discussion (you will not have this folder yet)
Other example folders   Contain the corresponding source files

The COMMON directory holds two subfolders: (i) INC and (ii) SRC.

The INC folder holds all the header files for the source programs.

SRC holds the common assembler and C programs.

They are located at "C:\ame51gnu\Examples\674051\COMMON\INC" and
"C:\ame51gnu\Examples\674051\COMMON\SRC" respectively.

The contents of these folders are shown below.

Project directory (example: Hello)

The "Hello" directory is taken as an example folder (most of you will remember the "Hello
world" program from the beginning of your C language learning days). This folder (like the
others) contains "hello.c" (the name differs for other example programs), "flash.ld" and
"Makefile"; the latter two files have the same names in all the other example folders too.
Hello.c: the main source program.
Flash.ld: the linker script, which defines how the program is placed into the chip's RAM and ROM.
Makefile: defines the compiler and linker settings, and automates the whole compiling process
when you simply type "gnumake".
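To give a feel for what such a Makefile contains, here is a minimal hand-written sketch. The tool names (arm-elf-gcc, arm-elf-objcopy) and the flags are assumptions for illustration only; the actual Makefile shipped with the kit will differ.

```make
# Hypothetical minimal makefile sketch; the kit's real Makefile differs.
OUT    = hello
CC     = arm-elf-gcc
CFLAGS = -O2 -mcpu=arm7tdmi

# Convert the linked ELF image into a loadable hex file.
$(OUT).hex: $(OUT).elf
	arm-elf-objcopy -O ihex $(OUT).elf $(OUT).hex

# Compile and link using the flash.ld linker script.
$(OUT).elf: $(OUT).c flash.ld
	$(CC) $(CFLAGS) -T flash.ld -o $(OUT).elf $(OUT).c
```

Each rule says "this output file depends on these inputs; run this command to build it", which is what lets a single "gnumake" rebuild everything in the right order.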

With this background on the file structure, how the files are organized, and what a typical
source-code directory looks like (such as the Hello directory), let us embed ourselves in our
first embedded program.

You should know what output device is used, how it is connected, the intricacies of that
device, and so on. So let us start our first program with an LED: let us turn an LED ON. Here
are the basic steps to follow.

1) First copy the LED directory as TestLED. You will now also have a TestLED folder in the
"C:\ame51gnu\Examples\674051" directory.
2) Rename LED.c in this directory to TestLED.c (use the DOS command "ren" or press F2 in
Windows).
3) Open the TestLED.c file with Notepad++ and delete all its contents.
4) Insert the following code into the file (TestLED.c):

int main(void)
{
    volatile int i;   /* delay counter; volatile so the busy-wait loops are not optimized away */
    volatile unsigned char *ModeRegister;
    volatile unsigned char *OutputRegister;

    ModeRegister   = (volatile unsigned char *)0xB7A04008;
    OutputRegister = (volatile unsigned char *)0xB7A04000;
    *ModeRegister = 0x01;             /* Configure Port E bit 0 as output */
    while (1)
    {
        *OutputRegister = 0x01;       /* Set Port E bit 0 to 1 (LED on) */
        for (i = 0; i < 1000000; i++) /* delay */
            ;
        *OutputRegister = 0x00;       /* Set Port E bit 0 to 0 (LED off) */
        for (i = 0; i < 1000000; i++) /* delay */
            ;
    }
}

5) Open the "Makefile" (in the same directory) and change line 12 to OUT = TestLED. This tells
the build to name the output file TestLED.hex.

6) Now compile the code.

To compile, open the DOS prompt of your system (click the Start button/icon of your Windows OS,
look for Run in the menu list and click on it, type "cmd" or "command" in the entry box and
click OK). The DOS prompt opens at a default location; now type the DOS command
"cd \ame51gnu\examples\674051\TestLED\" at the prompt.
Then type "gnumake" at the prompt to compile. The program should now compile (any error? Read
embedded_kit_manual.pdf for detailed guidance on compiling and running programs).

7) After compiling, check in the folder \ame51gnu\examples\674051\TestLED\; you will see a new
file called TestLED.hex. It stores the hexadecimal machine language code of this program.

8) Load this TestLED.hex file into the board through the serial port connecting the PC to the
board, using the Tera Term software already installed on your PC. To learn how to load the
program, read embedded_kit_manual.pdf.

9) After loading the program, change the switch positions from stand-alone mode to SRAM mode
and press the reset button.

10) Now you will see the red LED blinking: it turns ON and OFF continuously.

Here is the explanation of the code.

1) Generally any processor or microcontroller has some ports, called General Purpose
Input/Output (GPIO) ports.

Why are they called GPIO, or ports? Recall that in earlier days any goods entering or leaving a
country travelled by ship, and those ships entered or left the ports of that country. In the
same way, if you want to send a signal out, you put it on a port and it is sent out;
conversely, if you wish to receive a signal, you read it from a port.

2) The ARM chip used on this board has Port 0 to Port 15. The ports have different bit widths
(some have 8 bits, some 7, some 6, and some only 5), and they can be used for any purpose
(hence "general purpose").
3) In this example we are using Port E, bit 0. Each port contains a number of individually
usable bits (generally eight), which is like saying eight ships can arrive at or depart from
the port. Note that these bits (like almost everything in the embedded world) are counted from
0, so 8 bits means bits 0 to 7 are available.
4) In a way we are forced to use Port E bit 0, because that is the bit connected to the red LED
on our kit. So embedded engineers/programmers must always fully understand the hardware: how is
it connected? what is connected? where is it connected, and why? Read the manual
embedded_kit_hw_manual.pdf to see how the LEDs are connected.
5) So we need to drive Port E bit 0 high (sometimes called logic 1; the pin voltage goes to the
supply level) to make the LED glow, and drive it low (sometimes called logic 0; the voltage
goes to 0V) to turn the LED off.
6) You cannot just write to or read from a port like that! You need to tell the processor (in our
case the ARM chipset) that we are using a particular port and a particular bit as output or input.
This is like a two-way road - we "go" out on the left and "come" back in on the right. The ARM
chipset provides a "Mode control" register, which does this job.
7) So in summary - we need to "tell" the processor that we are using Port E - bit (0) as output
and make this bit high and low in a continuous loop.
8) Now look at the code once again.
I. "int i" is used as a general purpose variable (used for the delay).

II. ModeRegister and OutputRegister are used as 8-bit pointers (the port has 8 GPIO bits).

III. ModeRegister is at 0xB7A04008 and OutputRegister is at 0xB7A04000. (Every port has an
address; this address is used to access the particular port.)
IV. Now configure Port E - bit 0 as output by writing 0x01 (the last bit is made 1).
V. Now write 0x01 to OutputRegister to light the LED, and 0x00 to turn it off.
VI. Do this in a loop so that the LED turns ON and OFF continuously.
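For reference, steps I-VI can be sketched in C as below. This is a sketch, not the exact module-4 listing: the register addresses are the ones quoted above (0xB7A04008 / 0xB7A04000), but here the "registers" are passed in as pointers so the routine can also be exercised on a host PC. On the kit you would pass (volatile unsigned char *)0xB7A04008 and (volatile unsigned char *)0xB7A04000, and cycles would be replaced by an endless while (1).

```c
/* Sketch of the module-4 TestLED logic. On the kit:
 *   mode = (volatile unsigned char *)0xB7A04008;  PE mode register
 *   out  = (volatile unsigned char *)0xB7A04000;  PE output register
 * `cycles` bounds the loop here; real firmware loops forever. */
void blink(volatile unsigned char *mode, volatile unsigned char *out,
           int cycles, long delay)
{
    volatile long i;                 /* volatile so the delay loop is not optimized away */

    *mode = 0x01;                    /* step IV: configure PE bit 0 as output */

    while (cycles-- > 0)
    {
        *out = 0x01;                 /* step V: bit high, LED glows */
        for (i = 0; i < delay; i++); /* crude software delay */
        *out = 0x00;                 /* bit low, LED off */
        for (i = 0; i < delay; i++);
    }
}
```

On the board, main() would simply call blink() with the two register addresses cast as shown in the comment.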


The background of ARM:

ARM is the acronym for Advanced RISC Machine, a UK-based company that has pioneered the growth of
the RISC processor architecture.
What is RISC? RISC stands for Reduced Instruction Set Computer. Although the RISC architecture was
invented quite early, it became popular and overtook its rival, the Complex Instruction Set Computer (CISC),
sometime during the late nineties. The most popular CISC architecture is the 80x86 processor family from Intel.
Recently Intel too has adopted RISC-like features and architecture in its latest CPUs.

The advantage of RISC lies in the simplicity of its instructions (in terms of processor resources consumed) and
in processing time: each instruction typically takes only a single clock cycle, and overall power consumption is
very low. Due to this fast response, low power consumption and coding flexibility, the RISC architecture is highly
suitable for embedded systems. RISC has one drawback, however: the compiled code is longer and takes more
memory. With the growth in memory technology, this issue is no longer a concern.

As mentioned in the previous modules, we are using OKI Semi's ML67Q4051 microcontroller in this course
material. The processor core used in the ML67Q4051 is the ARM7TDMI, the most widely used RISC core from
ARM.

The architecture of ARM7TDMI is shown in figure below


Highlights of ARM7TDMI:
--There are 37 registers, each 32 bits wide, in this processor core; 16 registers are available to the programmer at any time.
--It has a pipelined architecture: 3 instructions are processed simultaneously at 3 different stages.
--The bus architecture is of the Von Neumann type, where a single 32-bit data bus carries both instructions and data.
--Data types can be 8, 16 or 32 bits wide.
--The processor can run in seven different modes based on the application requirement.
--Has a built-in 32x8 multiplier and a 32-bit barrel shifter (both much needed for DSP functionality).
--The processor can also execute another instruction set, the THUMB state (16 bit), which gives the
programmer the option to use this processor like a CISC processor. The resulting code is tidier and
takes less memory space.

To study the architecture and other capabilities of the ARM7TDMI in depth, please read the pdf file from the link below.
The content of this pdf file is simple and self-explanatory.
http://www.arm.com/pdfs/DDI0210B_7TDMI_R4.pdf

OKI's Microcontroller ML67Q4051:

Now let us read about OKI Semiconductor's MCU ML67Q4051. It has quite a good list of up-to-date features.

The features of this MCU are:


--Built-in SRAM of 16 KB and Flash ROM of 128 KB.
--Robust clock network.
--Interrupt controller supporting 41 interrupt sources.
--External memory controller to access ROM, SRAM, and I/O connected to the external memory space.
--System timer: a 16-bit auto-reload timer whose interrupt is given high priority.
--Built-in DMA controller enabling direct memory-to-memory, I/O-to-memory, and I/O-to-I/O data
transfer, sparing the CPU from simple data transfer burdens.
--Watchdog timer to catch a program running out of control and generate an interrupt or reset signal.
--Built-in 4-channel, 10-bit resolution analog-to-digital converter supporting two modes of operation: scan
mode sequentially converts input from the selected range of channels; select mode converts input from a single
channel.
--15 general purpose I/O ports: 8 ports of 8 bits, 3 ports of 7 bits, 3 ports of 6 bits, and one
port of 5 bits.
--One channel each of I2C bus interface, I2S (serial audio) bus interface, UART interface,
SIO interface and SPI interface.
--Real time clock (RTC) with a 10,000-year calendar and a resolution down to 1 second.
--Flexible timer block with 6 channels of 16-bit timers.
--JTAG interface to debug the program from a host computer.

The internal architecture of this MCU is shown below.


To learn in-depth about this MCU please visit following link:
http://www.okisemi.com/eu/docbox/ML67Q4050_4060-DS_rev1.2.pdf

Sample Program 2: Display decimal numbers from 0-9 in the seven segment display.

As we learned the basics of programming and how to build and run a program in the previous
module (module 4), it's time to move on to the next level of programming. Instead of turning an
LED ON and OFF, let us use the seven segment display on the kit to display, or count, the
numbers from 0 to 9.

Before going into the programming steps, let us get an idea of the seven segment display and
its connectivity to the driver and the MCU.
The following figure (Fig 6.a) shows the various elements of a 7-segment display and how they
are connected to different ports of the MCU GPIO (General Purpose Input and Output). Please note
that segments A-F are connected to one port (PF) while segments G and DP are connected to another
port (PD).

Fig 6.a

For circuit details, download the circuit diagram file. Reading a circuit diagram is a little
stressful, but it gives more insight into how the ports are connected.

The display used is a common anode type RED display unit, which is connected to the MCU
through a low-voltage octal bus buffer (inverting), the TC74LCX240F.

So now what we have to do is find the hexadecimal code word for each number (from 0 to
9) to get it displayed on the 7-segment LED display unit.

Port  Segment    1    2    3    4    5    6    7    8    9    0
PF0   A         OFF  ON   ON   OFF  ON   ON   ON   ON   ON   ON
PF1   B         ON   ON   ON   ON   OFF  OFF  ON   ON   ON   ON
PF2   C         ON   OFF  ON   ON   ON   ON   ON   ON   ON   ON
PF3   D         OFF  ON   ON   OFF  ON   ON   OFF  ON   OFF  ON
PF4   E         OFF  ON   OFF  OFF  OFF  ON   OFF  ON   OFF  ON
PF5   F         OFF  OFF  OFF  ON   ON   ON   OFF  ON   ON   ON
PD3   G         OFF  ON   ON   ON   ON   ON   OFF  ON   ON   OFF
PD4   dp        X    X    X    X    X    X    X    X    X    X
Chart 1

The above chart (Chart 1) tells us which output lines/pins must be made high or low to
display each number. We have not used the dp (decimal point) segment, so its status is
given as 'X', meaning don't care.

Now, let us figure out the hexadecimal code for each number.

For example, let us take the number "2". From the chart given above, it can be seen that,
to display 2, we need to drive the a, b, d, e and g segments of the display "HIGH" and the
remaining segments "LOW".

Thus the GPIO output registers PF and PD should hold the data values shown below.

Bit    PF5  PF4  PF3  PF2  PF1  PF0
Data    X    1    1    0    1    1

Bit    PD5  PD4  PD3  PD2  PD1  PD0
Data    X    0    1    X    X    X

X - don't care

Here let us substitute 0 for each X. The hexadecimal equivalent of "011011" is "1B"
and that of "001000" is "08". So to display the digit "2" we need to load 1B into the register of
port PF and 08 into the register of port PD.
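The hand derivation above can also be expressed in code. The segment names below are illustrative constants (not from any kit header): each segment gets the bit position it occupies on port PF per Chart 1, and OR-ing together the lit segments of "2" reproduces the 1B computed above.

```c
/* Bit positions of the segments on port PF, per Chart 1 */
enum {
    SEG_A = 1 << 0,   /* PF0 */
    SEG_B = 1 << 1,   /* PF1 */
    SEG_C = 1 << 2,   /* PF2 */
    SEG_D = 1 << 3,   /* PF3 */
    SEG_E = 1 << 4,   /* PF4 */
    SEG_F = 1 << 5    /* PF5 */
};

/* "2" lights segments A, B, D and E on PF (G is on port PD) */
unsigned char pf_code_for_2(void)
{
    return SEG_A | SEG_B | SEG_D | SEG_E;
}
```

The same OR of constants gives every other digit's PF code; for example all six segments together give 3F, the code for "0" and "8".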

Similarly, we can find the hexadecimal equivalent code for each digit to be displayed.

Once we have the hexadecimal code for each number, it is time to move on to the programming
section. Here are the basic steps to follow.

1. As we did in Module-4, copy the TestLED directory as COUNTER. You should now
have a COUNTER folder as well in the "C:\ame51gnu\Examples\674051" directory.
2. Rename the TestLED.c in this directory as counter.c.

3. Open the counter.c file with "Notepad++" or any other text editor, delete all the
content and insert the following code in the file:

int main(void)
{
    volatile int i;
    volatile unsigned char * ModeRegister1;
    volatile unsigned char * OutputRegister1;
    volatile unsigned char * ModeRegister2;
    volatile unsigned char * OutputRegister2;

    ModeRegister1   = (volatile unsigned char *)0xB7A05008; // PF GPIO mode register address
    OutputRegister1 = (volatile unsigned char *)0xB7A05000; // PF GPIO output register address
    ModeRegister2   = (volatile unsigned char *)0xB7A03008; // PD GPIO mode register address
    OutputRegister2 = (volatile unsigned char *)0xB7A03000; // PD GPIO output register address

    *ModeRegister1 = 0xFF; // Configure all Port F bits as outputs
    *ModeRegister2 = 0xFF; // Configure all Port D bits as outputs

    while (1)
    {
        *OutputRegister1 = 0x3F; // Display 0
        *OutputRegister2 = 0x00;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x06; // Display 1
        *OutputRegister2 = 0x00;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x1B; // Display 2
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x0F; // Display 3
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x26; // Display 4
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x2D; // Display 5
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x3D; // Display 6
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x07; // Display 7
        *OutputRegister2 = 0x00;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x3F; // Display 8
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x27; // Display 9
        *OutputRegister2 = 0x08;
        for (i = 0; i < 1000000; i++); // Delay
    }
}

4. Now compile and run the program by following the same steps as given in Module-4.

5. Follow the procedure to load the program into the board as mentioned in Module-4, or read
the user manual.

Now you should see the 7-segment LED display counting from 0 to 9, over and over.

By now you should have a good idea of the ARM chipset used in this kit: its ports and bits,
the different modes in which they can operate (or be configured), and how to use the mode
register and output register to make the MCU work according to our needs. The same kind of
configuration is done in this sample program too, except that the ports used are PF and PD.
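As a side note, the ten pairs of register writes in counter.c can be condensed with lookup tables. The sketch below is a suggested refactoring, not part of the kit's sources; the arrays hold the hex codes derived from Chart 1 (index = digit), and on the board the loop body would write them to the PF and PD output registers exactly as before.

```c
/* Segment codes per digit, taken from the counter.c listing above */
static const unsigned char seg_pf[10] =
    { 0x3F, 0x06, 0x1B, 0x0F, 0x26, 0x2D, 0x3D, 0x07, 0x3F, 0x27 };
static const unsigned char seg_pd[10] =
    { 0x00, 0x00, 0x08, 0x08, 0x08, 0x08, 0x08, 0x00, 0x08, 0x08 };

/* Look up the PF/PD output codes for one digit (0-9) */
void digit_codes(int digit, unsigned char *pf, unsigned char *pd)
{
    *pf = seg_pf[digit];
    *pd = seg_pd[digit];
}
```

With these tables, the while (1) body shrinks to a loop over the digits: for each d from 0 to 9, write seg_pf[d] to the PF output register and seg_pd[d] to the PD output register, then delay.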

Sample Program 3: Simultaneously display decimal numbers from 0-9 on the seven
segment display and light the three LEDs available on the board/kit.

The following figure (Fig 6.b) shows the connectivity of the 3 LEDs (Green, Red & Yellow) to the
microcontroller ports.

Fig 6.b

Hopefully you remember that we used port PE in the sample program in module 4;
the same port connects the three LEDs to the MCU.
And, as you saw in the previous sample program in this module (module 6), ports PF and
PD connect the 7-segment display unit to the MCU.

So let us do something a little different. Make the green LED glow while the 7-segment displays
the digits '2' to '4', both the green and yellow LEDs glow while displaying '5', '6' and '7', and all
three LEDs glow while displaying '8' and '9' on the 7-segment display.

So let us configure the output register of port PE to make the three LEDs work along with the 7-
segment display unit.

As you can see in Fig 6.b, the port bits PE0, PE1 and PE2 drive the Red, Yellow and
Green LEDs respectively. And remember, they are wired (in the kit) as "active low".

Bit    PE6  PE5  PE4  PE3  PE2  PE1  PE0
Data    X    X    X    X    0    0    0

Suppose we want all three LEDs to glow at the same time. For this, all three bits PE0, PE1
and PE2 are made low (0), with bits PE3 to PE6 kept as "don't care" in this case.

It is clear that the hexadecimal code to light all three LEDs is 00, and to turn all of them
off, the code is 07.
Now it's time to focus on the program code. Let us change the previous program as shown
below.

int main(void)
{
    volatile int i;
    volatile unsigned char * ModeRegister1;
    volatile unsigned char * OutputRegister1;
    volatile unsigned char * ModeRegister2;
    volatile unsigned char * OutputRegister2;
    volatile unsigned char * ModeRegister3;
    volatile unsigned char * OutputRegister3;

    ModeRegister1   = (volatile unsigned char *)0xB7A05008; // PF GPIO mode register address
    OutputRegister1 = (volatile unsigned char *)0xB7A05000; // PF GPIO output register address
    ModeRegister2   = (volatile unsigned char *)0xB7A03008; // PD GPIO mode register address
    OutputRegister2 = (volatile unsigned char *)0xB7A03000; // PD GPIO output register address
    ModeRegister3   = (volatile unsigned char *)0xB7A04008; // PE GPIO mode register address
    OutputRegister3 = (volatile unsigned char *)0xB7A04000; // PE GPIO output register address

    *ModeRegister1 = 0xFF; // Configure all Port F bits as outputs
    *ModeRegister2 = 0xFF; // Configure all Port D bits as outputs
    *ModeRegister3 = 0xFF; // Configure all Port E bits as outputs

    while (1)
    {
        *OutputRegister1 = 0x3F; // Display 0
        *OutputRegister2 = 0x00;
        *OutputRegister3 = 0x07; // All three LEDs OFF
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x06; // Display 1
        *OutputRegister2 = 0x00;
        *OutputRegister3 = 0x07; // All three LEDs OFF
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x1B; // Display 2
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x03; // Green LED ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x0F; // Display 3
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x03; // Green LED ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x26; // Display 4
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x03; // Green LED ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x2D; // Display 5
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x01; // Green & Yellow LEDs ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x3D; // Display 6
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x01; // Green & Yellow LEDs ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x07; // Display 7
        *OutputRegister2 = 0x00;
        *OutputRegister3 = 0x01; // Green & Yellow LEDs ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x3F; // Display 8
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x00; // Green, Yellow & Red LEDs ON
        for (i = 0; i < 1000000; i++); // Delay

        *OutputRegister1 = 0x27; // Display 9
        *OutputRegister2 = 0x08;
        *OutputRegister3 = 0x00; // Green, Yellow & Red LEDs ON
        for (i = 0; i < 1000000; i++); // Delay
    }
}

Now all you have to do is edit the "counter.c" file in the COUNTER folder and alter the
program as shown in the code above, or delete the previous code and copy this program
into counter.c entirely. Compile and run the code as explained for the previous
sample programs. You should now see the LEDs turning on in the planned sequence while the
seven segment display counts from 0 to 9.

Now it's time to write your own program.

Write a simple program to display the hexadecimal numbers F to 0 in descending order.

SERIAL COMMUNICATION - Part 1(Basics)

(I) Introduction:
Serial communication is a common method of transmitting data between a computer and a peripheral device such
as a programmable instrument, or even another computer. Serial communication transmits data one bit at a time,
sequentially, over a single communication line to a receiver. Serial is also a very popular communication method
used by many devices for instrumentation; numerous GPIB-compatible devices also come with an RS-232
based port. This method is used when data transfer rates are low or the data must be transferred over long
distances, and where the cost of cabling and synchronization difficulties make parallel communication
impractical. Serial communication is popular because most computers have one or more serial ports, so no extra
hardware is needed other than a cable to connect the instrument to the computer, or two computers together.

(II) Serial Vs Parallel:


Let us now try to have a comparative study on parallel and serial communications to understand the differences
and advantages & disadvantages of both in detail.
We know that parallel ports are typically used to connect a PC to a printer and are rarely used for other
connections. A parallel port sends and receives data eight bits at a time over eight separate wires or lines. This
allows data to be transferred very quickly. However, the setup looks more bulky because of the number of
individual wires it must contain. But, in the case of a serial communication, as stated earlier, a serial port sends
and receives data, one bit at a time over one wire. While it takes eight times as long to transfer each byte of data
this way, only a few wires are required. Although this is slower than parallel communication, which allows the
transmission of an entire byte at once, it is simpler and can be used over longer distances. For example, the IEEE
488 specification for parallel communication states that the cabling between equipment can be no more than 20
meters in total, with no more than 2 meters between any two devices; serial links, however, can extend as much
as 1200 meters (with high-quality cable).
So, at first sight it would seem that a serial link must be inferior to a parallel one, because it can transmit less data
on each clock tick. However, it is often the case that, in modern technology, serial links can be clocked
considerably faster than parallel links, and achieve a higher data rate.
Even in shorter distance communications, serial computer buses are becoming more common because of a
tipping point where the disadvantages of parallel busses (clock skew, interconnect density) outweigh their
advantage of simplicity (no need for serializer and deserializer).
The serial port on your PC is a full-duplex device meaning that it can send and receive data at the same time. In
order to be able to do this, it uses separate lines for transmitting and receiving data.
From the above discussion we can see that serial communication has many advantages over parallel
communication:
a) It requires fewer interconnecting cables and hence occupies less space.
b) "Cross talk" is less of an issue, because there are fewer conductors than in parallel
communication cables.
c) Many ICs and peripheral devices have serial interfaces.
d) Clock skew between different channels is not an issue.
e) No serializer/deserializer (SERDES) is needed.
f) It is cheaper to implement.

Clock skew:
Clock skew is a phenomenon in synchronous circuits in which the clock signal sent from the clock circuit arrives
at different components at different times. It can be caused by many things, such as:
a) Wire-interconnect length,
b) Temperature variations,
c) Variation in intermediate devices,
d) Capacitive coupling,
e) Material imperfections.
As the clock rate of a circuit increases, timing becomes more critical and less variation can be tolerated if the
circuit is to function properly.
There are two types of clock skew: positive skew, which occurs when the clock reaches the receiving register
later than it reaches the register sending data to it, and negative skew, which is just the opposite: the
receiving register gets the clock earlier than the sending register.
Two types of violation can be caused by clock skew. One problem arises when the clock travels more slowly
than the path from one register to another, allowing data to penetrate two registers in the same clock pulse, or
perhaps destroying the integrity of the latched data. This is called a hold violation, because the previous data is not
held long enough at the destination flip-flop to be properly clocked through. Another problem arises if the
destination flip-flop receives the clock pulse earlier than the source flip-flop: the data signal has that much less
time to reach the destination flip-flop before the next clock tick. If it fails to do so, a setup violation occurs, so
called because the new data was not set up and stable before the next clock tick arrived. A hold violation is more
serious than a setup violation because it cannot be fixed by increasing the clock period. Note that positive skew
cannot cause setup violations, and negative skew cannot cause hold violations.

(III) Asynchronous Vs Synchronous data transmission:


Like any data transfer method, serial communication requires coordination between the sender and
receiver: when to start the transmission and when to end it, when one particular bit or byte ends and
another begins, when the receiver's capacity has been exceeded, and so on. Here comes the need for
synchronization between the sender and the receiver. A protocol defines the specific methods of coordinating
transmission between a sender and a receiver.
Let us take an example. A serial data signal between two PCs must have individual bits and bytes that the
receiving PC can distinguish. If it doesn't, the receiving PC can't tell where one byte ends and the next one
begins, or where one bit ends and the next begins. So the signal must be synchronized in such a way that the
receiver can distinguish the bits and bytes as the transmitter intends them to be distinguished.
There are two ways to synchronize the two ends of the communication, namely synchronous and asynchronous.
Synchronous signaling methods use two different signals: a pulse on one signal line indicates when another
bit of information is ready on the other signal line. Asynchronous signaling methods use only one signal: the
receiver uses transitions on that signal to figure out the transmitter bit rate (known as autobaud) and timing, and
sets a local clock to the proper timing, typically using a PLL to synchronize with the transmission rate. A pulse from
the local clock indicates when another bit is ready. In other words, synchronous transmissions use an external clock,
while asynchronous transmissions use special signals along the transmission medium. (Refer to Fig 1.a.)
Asynchronous communication is the prevailing communication method in the personal computer
industry, because it is easier to implement and has the unique advantage that bytes can be sent
whenever they are ready, with no need to wait for blocks of data to accumulate.
Mode of connection:
In a simplex connection, the hardware configuration allows only one-way communication - for
example, from a computer to a printer that cannot send status signals back to the computer. In a half-duplex
connection, two-way transfer of data is possible, but only in one direction at a time. In a full-duplex
configuration, both ends can send and receive data simultaneously - the technique common in our PCs.
Synchronous transmission - a brief explanation:
In synchronous transmission, the stream of data to be transferred is encoded and sent on one line, and a periodic
pulse of voltage, often called the "clock" or "strobe", is put on another line that tells the receiver about the
beginning and the ending of each bit (or byte). In general, synchronous transmission protocols of this kind are used
for all types of parallel communication. For example, in a computer, address information is transmitted
synchronously: the address bits over the address bus, and the read strobe in the control bus.
Synchronization can also be embedded into a signal on a single wire. In differential Manchester encoding, used in
video-tape systems, each transition from low to high or high to low represents a logical zero. A logical one is
indicated when there are two transitions in the same time frame.

(Fig above) Synchronous Vs Asynchronous

The advantages & disadvantages:


The main advantage of synchronous data transfer is lower overhead and thus greater throughput, compared
to asynchronous transfer. But it has some disadvantages:
1) It is slightly more complex, and
2) Its hardware is more expensive.
The main disadvantage of the asynchronous technique is its large relative overhead: a high proportion
of the transmitted bits serve purely for control and thus carry no useful information. But it holds some
advantages:
1) It is simple and doesn't require much synchronization on either side of the communication.
2) The timing is not as critical as for synchronous transmission; therefore the hardware can be made cheaper.
3) Set-up is very fast, so it is well suited for applications where messages are generated at irregular intervals, for
example data entry from a keyboard.

The DCE & the DTE:


The terms DTE and DCE are very common in data communications. DTE is short for Data
Terminal Equipment and DCE stands for Data Communications Equipment. But what do they really mean? As the
full DTE name indicates, it is a device that terminates a communication line, whereas a DCE provides a
path for communication.
Let us try to understand the functions of both through an example. Take a computer that wants to
communicate with the Internet through a modem and a dial-up connection. To get to
the Internet, you tell your modem to dial the number of your provider. After your modem has dialed the number,
the provider's modem answers the call, and your connection is established. Now you have a connection
to your provider's server and you can use the Internet. In this example, your PC is Data Terminal
Equipment (DTE). The two modems (yours and your provider's) are DCEs; they make the communication
between you and your provider possible. But now consider your provider's server. Is that a DTE or a
DCE? The answer is a DTE: it terminates the communication line between you and the server, even though it
gives you the possibility to surf around the globe. The reason it is a DTE is that when traffic goes from your
provider's server to another place, the server uses another interface. So DTE and DCE are interface dependent:
for your connection to the server, the server is a DTE, but the same server is a DCE for the equipment that
attaches it to the rest of the Net.

(IV) How does it (serial data transfer) work?


Let us come back to serial communication and see how data is transferred serially. Serial
communication requires that you specify the following five parameters:
1) The speed or baud rate of the transmission
2) The number of data bits encoding a character
3) The sense of the optional parity bit (whether it is used, and if so, odd or even)
4) The number of stop bits
5) Full or half-duplex operation

Each transmitted character is packaged in a character frame that consists of a single start bit followed by the data
bits, the optional parity bit, and the stop bit or bits, as shown in Fig-1.a and Fig-1.b.
After the stop bit, the line may remain idle indefinitely, or another character may be started immediately. The
minimum stop-bit length required by the system can be longer than a single bit time: 1.5 or 2 stop bits are
common, and newer hardware that doesn't support fractional stop bits can be configured to send 2 stop bits
when transmitting while requiring only 1 stop bit when receiving.

Fig-1.b
Typically, serial communication is carried out using the ASCII form of the data. Communication is completed using 3
transmission lines: Ground, Transmit, and Receive. Since serial is asynchronous (in many applications), the port
is able to transmit data on one line while receiving data on another. Other lines are available for handshaking, but
are not required. As we have seen, the important serial characteristics are baud rate, data bits, stop
bits, and parity, and for two ports to communicate, these parameters must match.
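To make the frame layout concrete, here is a small sketch (names are illustrative, not from any real UART API) that packs one 8-bit character into an 11-bit frame: start bit first, then the data bits LSB first, then an even-parity bit and one stop bit. Bit 0 of the returned value is the first bit transmitted on the line.

```c
/* Pack one character: start(0) + 8 data bits (LSB first) + even parity + stop(1).
 * Bit 0 of the result is sent first on the line; 11 bits are used in total. */
unsigned short make_frame(unsigned char data)
{
    unsigned short frame = 0;   /* bit 0: start bit = 0 */
    int ones = 0;

    for (int i = 0; i < 8; i++) {
        int bit = (data >> i) & 1;
        ones += bit;
        frame |= (unsigned short)(bit << (1 + i));  /* data in bits 1..8 */
    }
    frame |= (unsigned short)((ones & 1) << 9);     /* even parity bit   */
    frame |= (unsigned short)(1 << 10);             /* stop bit = 1      */
    return frame;
}
```

A real UART builds exactly this kind of frame in hardware; the sketch only shows where each configurable parameter (data bits, parity, stop bits) ends up in the bit stream.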

Let us now try to understand each term in detail.


Baud rate: Used to specify data speed; it is a measure of the number of times a digital signal changes state
in one second - the number of signal events or signal transitions occurring per second. The changes can be:
a) From positive voltage to zero voltage,
b) From zero voltage to negative voltage, or
c) From positive voltage to negative voltage.
The baud rate can never be higher than the raw bandwidth of the channel, as measured in Hz. Baud rate and bit
rate are often, and incorrectly, used interchangeably. The relationship between baud rate and bit rate depends on
the sophistication of the modulation scheme used to manipulate the carrier. The bit rate (bits per second, or bps)
and the baud rate can be the same if each bit is represented by a signal transition, as in a single-bit modulation scheme.
The baud rate is almost always a lower figure than the bps for a given digital signal, because some modulation
techniques allow more than one data bit to be transmitted per state change.
So the bit rate (bps) and the baud rate are connected by the formula:
bps = baud rate x number of bits per baud
The number of bits per baud is determined by the modulation technique. The following two examples show how:
1) When FSK ("Frequency Shift Keying", a transmission technique) is used, each baud transmits one bit; only one
change in state is required to send a bit. Thus the modem's bps rate is equal to the baud rate.
2) When a modem uses a baud rate of 2400 with a modulation technique called phase modulation that transmits four
bits per baud:
2400 baud x 4 bits per baud = 9600 bps
Such modems are capable of 9600 bps operation.
Common baud rates for telephone lines are 14400, 28800, and 33600. Baud rates greater than these are
possible, but they reduce the distance by which devices can be separated. Such high baud rates are
used for device communication where the devices are located close together, as is typically the case with GPIB
devices.
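The formula can be checked with a one-liner (a trivial helper written purely for illustration):

```c
/* bps = baud rate x bits carried per baud, per the formula above */
long bits_per_second(long baud, int bits_per_baud)
{
    return baud * bits_per_baud;
}
```

With FSK (1 bit per baud) the two rates coincide; with the 4-bit phase-modulation example, 2400 baud yields 9600 bps.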
Data bits: When the computer sends a packet of information, the amount of actual data may not be a full 8 bits.
Standard values for the data packets are 5, 7, and 8 bits. Which setting you choose depends on what information
you are transferring. For example, standard ASCII has values from 0 to 127 (7 bits), while extended ASCII uses 0 to
255 (8 bits). If the data being transferred is simple text (standard ASCII), then sending 7 bits of data per packet is
sufficient for communication. A packet refers to a single byte transfer, including start/stop bits, data bits, and
parity; since the number of actual bits depends on the protocol selected, the term packet is used to cover all
instances.
Start & stop bits: The stop bit is used to indicate the end of a single packet. Typical values are 1, 1.5, and 2 bits.
Since the data is clocked across the lines and each device has its own clock, it is possible for the two devices to
drift slightly out of sync. The stop bits therefore not only indicate the end of transmission but also give the
computers some room for error between their clock speeds. The more bits used for stop bits, the greater the
tolerance in synchronizing the different clocks, but the slower the data transmission rate. The start bit is the bit that
signals the receiver that data is coming. Every byte of data in an asynchronous serial transmission is preceded by
a start bit and followed by a stop bit.
Parity: Used for error checking in serial communication. There are two types of parity, even and odd, and the
option of using no parity is also available. For even or odd parity, the serial port sets the parity bit (the last bit
after the data bits) to a value that makes the total number of logic-high bits in the transmission even or odd. For
example, take the data 011. For even parity, the parity bit would be 0, keeping the number of logic-high bits even;
for odd parity, the parity bit would be 1, resulting in three logic-high bits.
Another category, mark and space parity, does not actually check the data bits but simply sets the
parity bit high (mark parity) or low (space parity). This allows the receiving device to determine whether noise is
corrupting the data or the transmitting and receiving devices' clocks are out of sync.
Note:
If an odd number of bits (including the parity bit) is changed while transmitting a set of bits, the parity bit will be incorrect and will thus indicate that an error occurred in transmission. This makes the parity bit an error-detecting code, but not an error-correcting code, as there is no way to determine which particular bit was corrupted. The data must then be discarded entirely and re-transmitted from scratch.
(V) Various types of serial communication Standards:
1. RS-232
2. RS-423
3. RS-485
4. USB
5. FireWire
6. Ethernet
7. MIDI
8. PCI Express
9. SPI & SCI
10. IIC
11. IrDA
PS: Each of the above standards is explained in the coming modules.

SERIAL COMMUNICATION - Part 2

RS-232 interface
By the term "serial port" we will usually mean the hardware RS-232 interface, its signal levels, connections, etc., because many modern devices still connect to the serial port even after the development of more advanced serial communication technologies. The reasons include ease of debugging, low cost, and so on. The serial port is also termed the COM port.
RS-232 is a standard for serial data communication between a host system, commonly known as Data Terminal Equipment (DTE), and a peripheral system, termed Data Communication Equipment (also known as Data Circuit-terminating Equipment) or DCE. To be more specific, the device that connects to the RS-232 interface is called the Data Communications Equipment (DCE), and the device to which it connects (e.g. the computer) is called the Data Terminal Equipment (DTE).

The standard was first introduced by the Electronic Industries Alliance (EIA) in the early 1960s and is commonly known as RS-232 (Recommended Standard 232). EIA-232, RS-232, or RS-232C is a complete serial communication standard, which specifies signal voltages, signal timing, signal function, pin wiring, and the mechanical connections (i.e. either the 25-pin DB-25 or the 9-pin DB-9). In 1987, the EIA released a new version of the standard and changed the name to EIA-232-D, and in 1991 the EIA teamed up with the Telecommunications Industry Association (TIA) and issued a new version called EIA/TIA-232-E. Many people, however, still refer to the standard as RS-232C, or just RS-232.
Let us now try to understand the electrical, mechanical and functional interface characteristics of this standard.

Mechanical & functional characteristics:
We already know that the RS-232 standard supports two types of connectors: a 25-pin D-type connector (DB-25) and a 9-pin D-type connector (DB-9). The figure below shows the pin diagram of the connectors. Although RS-232 specifies a 25-pin connector, this connector is often not used. Most applications do not require all the defined signals, so a 25-pin connector is larger than necessary. The most popular connector is the 9-pin DB-9, which provides the signals necessary for serial communication in modem applications. So let us analyze the function of each pin of the 9-pin RS-232 connector in comparison with that of the DB-25. RS-232 signals have a direction (in or out) depending on whether they are viewed with respect to the DTE or the DCE.
DB-9 Pin  DB-25 Pin     Signal  Type     Direction    Function
1         8             CD      Control  DCE to DTE   Carrier Detect. Indicates whether the DCE is connected to a working phone line. (Only used in connection with a modem.)
2         3             RD      Data     DCE to DTE   Receive Data. The computer receives information sent from the DCE.
3         2             TD      Data     DTE to DCE   Transmit Data. The computer sends information to the DCE.
4         20            DTR     Control  DTE to DCE   Data Terminal Ready. The computer tells the DCE that it is ready to communicate. Raised by the DTE when powered on; in auto-answer mode, raised only when RI arrives from the DCE.
5         7             SG      Ground   --           Signal Ground.
6         6             DSR     Control  DCE to DTE   Data Set Ready. The modem tells the computer that it is ready to talk.
7         4             RTS     Control  DTE to DCE   Request To Send. The computer asks the modem whether it can send information. Raised by the DTE when it wishes to send; expects CTS from the DCE.
8         5             CTS     Control  DCE to DTE   Clear To Send. The modem tells the computer that it can send information. Raised by the DCE in response to RTS from the DTE.
9         22            RI      Control  DCE to DTE   Ring Indicator. Set when an incoming ring is detected; used for auto-answer applications (the DTE raises DTR to answer). (Only used in connection with a modem.)
--        12, 13, 14,   --      --       --           Only needed if a second channel is being used.
          16, 19, 24
--        15            DB      Timing   DTE to DCE   Transmit clock (synchronous mode only): Transmitter Signal Element Timing.
--        17            DD      Timing   DCE to DTE   Receive clock (synchronous mode only): Receiver Signal Element Timing.
--        18            LL      Control  DTE to DCE   Local Loopback.
--        21            RL/SQ   Control  DTE to DCE   Remote Loopback / Signal Quality detector.
Electrical characteristics:

RS-232 defines the purpose, signal timing and signal levels for each line. It is an active-low, voltage-driven interface: it transmits a positive voltage for a 0 bit and a negative voltage for a 1. The output signal level usually swings between +12 V and -12 V. At the transmitter, a high level is defined as between +5 V and +12 V, and a low level as between -5 V and -12 V. With 2 V of noise margin, a high level at the receiver is defined as between +3 V and +12 V, and a low level as between -3 V and -12 V. The region between +3 V and -3 V, called the "dead area", is designed to absorb line noise. A low level is defined as logic 1 and is referred to as "marking"; similarly, a high level is defined as logic 0 and is referred to as "spacing". The following figure illustrates the logic levels.
Note:
In an embedded system, if the device communicates at TTL levels, the connection between the embedded system and the external device is simple. But if the device needs RS-232-level signaling, we will have to insert an RS-232 line driver/receiver between the processor and the device. Most devices used nowadays need only three wires: Transmit Data, Receive Data and Signal Ground, with no hardware flow-control signals. This simplifies the hardware connection as well as the software design. The figure below shows what the data looks like for ASCII "A".
Connections & signal flow control:
Flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver. For example, because the DTE-to-DCE speed is a few times faster than the DCE-to-DCE speed, the PC can send data to the modem at a higher rate than the modem can pass it on. Sooner or later data would be lost through buffer overflow, so flow control must be implemented.
Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node, that is, through a "handshake": an exchange of characters between a transmitter and a receiver used to postpone transmission until the receiver is ready to receive the data.
This flow control is of two main types: hardware and software.
Software Flow Control:
One example is "XON/XOFF". Two characters, XON and XOFF, are used here to control the data flow. XON is usually character 17 (DC1) and XOFF character 19 (DC3). The modem has only a small buffer, so when the computer fills it, the modem sends an XOFF character to tell the computer to stop sending. As soon as the modem has emptied most of that buffer, it sends the XON character to the computer to start the transfer again. The main advantage of this type of flow control is that it needs no extra wires, since the characters are sent over the TD/RD lines. On slow connections, however, every character needs 10 bits, and the extra control characters can reduce the effective connection speed.
Hardware Flow Control:
Most serial communication uses software flow control, but there is an alternative: hardware handshaking, also known as RTS/CTS flow control. To realize this control, two additional wires (RTS and CTS) in the serial cable are used. This can increase the data transmission rate, since no time is spent transmitting XON/XOFF characters. The flow control mode (whether XON/XOFF or hardware) can be selected as indicated below.

Here the transmitter activates the Request To Send (RTS) line when the computer is ready to send data. If the modem has free buffer space for this data, it activates the Clear To Send (CTS) line in response, and the computer starts sending data. If the modem lacks free memory, it will not activate CTS.
One of the main drawbacks of hardware handshaking is that some modems cannot deal correctly with binary data streams that contain characters which look like DC1/DC3. Hardware handshaking is also known as out-of-band flow control, because the signals are generated and observed outside the flow of the data.
Care must be taken when enabling hardware handshaking: both parties have to agree and be compatibly configured before you start. Software flow control is reactive; things proceed normally until one party says "Stop!". Hardware flow control, by contrast, is proactive: you do not start transmitting until you receive the Go signal, and if you never get it, you never start. Fortunately, with most modems the hardware handshake rules are not turned on until a connection has been established.
Let us now try to understand how a serial port can be connected to another one.
Null Modem connection:
The serial communication standards describe DTE/DCE communication, the way a computer should communicate with a peripheral device such as a modem. In a null modem connection, however, two PCs are connected back to back with a cable, each acting as a DTE, which means there is no DCE at all. This type of connection finds many uses nowadays. The null modem can be configured in many ways, depending on the number of signal lines available. In most situations, the original modem signal lines are reused to perform some sort of handshaking. Handshaking has many advantages: it can increase the maximum usable communication speed, because the computer is able to control the flow of information. In a null modem connection without flow control, communication is possible only at a speed the receiving side can always keep up with.
Before we talk about the null-modem connections, let us refresh our knowledge of the different flow control signals used in RS-232. The first two flow control pins are RTS (Request To Send), an output from the DTE that arrives as an input at the DCE, and CTS (Clear To Send), the answering signal from the DCE side. Before sending a character, the DTE asks permission by setting its RTS output; no information will be sent until the DCE grants permission by making the CTS line high.
The other two flow control signals, DTR (Data Terminal Ready) and DSR (Data Set Ready), are used to signal the status of one communication side to the other. The DTE uses the DTR signal to indicate that it is ready to accept information, whereas the DCE uses the DSR signal for the same purpose. The last signal present in DTE/DCE communication is CD (Carrier Detect). It is not used directly for flow control, but indicates the existence of a communication link between two modem devices.
a) Without handshaking:
This is the simplest and most commonly used connection, shown in the following figure. As you can see, only the data lines and signal ground are cross-connected. All other pins have no connection, which means there is no handshaking.

This type of connection can be used to communicate with devices that do not have modem control signals.
b) Loop back handshaking:
Before talking about loop-back handshaking, let us point out the issues associated with the simple null modem without handshaking.
1. Suppose the software on both sides of the communication is well structured. What happens if either the DCE or the DTE checks the DSR or CD inputs? These signal levels will never go high, since the pins are left unconnected, which may cause problems.
2. The same may happen in the RTS/CTS handshaking sequence. The RTS output is set high by the DTE, which then waits for a ready signal on the CTS line. Because no physical connection to the CTS line is present, the software may hang.
To overcome these problems and still use a cheap null modem, we can use the connection layout shown above, termed a null modem with loop-back handshaking, which makes well-structured software think that handshaking is available (even though it is faked).
c) Partial handshaking:
The problems associated with loop-back handshaking are as follows.
1. The DSR input indicates that the other side is ready for communication, but the line is connected back to the DTR output of the same side. This means the software does not see the ready signal of the other device, but its own, which may prevent proper communication. The same happens with the CD input.
2. This scheme offers no functional enhancement over the simple connection: there is no way for the devices to control the data flow other than by using XON/XOFF handshaking.
3. If the software is designed to use hardware flow control, there is a chance of data loss with loop-back handshaking. When the data rate reaches the limit the receiver can handle, communication may stop abruptly for no apparent reason.

With CTS/RTS in the loop-back connection the software will not hang, because the CTS input on the same connector side receives clearance immediately when RTS is set high. But both the simple null modem connection and the null modem with loop-back handshaking have no provision for hardware flow control. If hardware flow control is necessary, the null modem with partial handshaking can be used.
d) Full handshaking:
Software that uses only the RTS/CTS protocol for flow control cannot use the partial-handshaking null modem connection. The solution is "full handshaking". This is the most expensive null modem connection: all pins except RI and CD are used. The main advantage of this connection is that there are two signaling lines in each direction; both RTS and DTR are available to send flow control information to the other device. This makes it possible to achieve very high communication speeds, provided the software has been designed for it.
Virtual null modem
This is a method of connecting two computer applications directly using virtual serial ports. Unlike a null modem cable, a virtual null modem is a software solution that emulates a hardware null modem within the computer. All features of a hardware null modem can be made available in a virtual null modem as well. Some of its main advantages are:

1. No serial cable is needed.
2. The physical serial ports of the computer are not used.
3. An unlimited number of virtual connections is possible.
4. The transmission speed of the serial data is limited only by computer performance.
5. Virtual connection over a network or the Internet is possible, so an unlimited communication distance can be achieved.
Communication Interfaces Continued:

Controller Area Network (CAN):

History:
CAN, the Controller Area Network or CAN-bus, is an ISO-standard computer network protocol and bus standard designed to let microcontrollers and devices communicate with each other without a host computer. Originally designed for automotive applications, CAN has gained widespread popularity for embedded control in areas such as industrial automation, automobiles, mobile machines, medical, military and other harsh-environment network applications.

Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was officially released in 1986, and the first CAN controller chips, produced by Intel and Philips, reached the market in 1987.
Introduction:
CAN is a "broadcast" type of bus: there is no explicit address in the messages, and all nodes in the network pick up every transmission. There is no way to send a message to just one specific node. To be precise, the messages transmitted on a CAN bus contain addresses of neither the transmitting node nor any intended receiving node. Instead, an identifier that is unique throughout the network labels the content of the message. This identifier is a numeric value that controls the message's priority on the bus and may also identify its contents. Each receiving node performs an acceptance test (local filtering) on the identifier to determine whether the message, and thus its content, is relevant to that particular node, so that each node reacts only to the intended messages. If the message is relevant, it is processed; otherwise it is ignored.
How do they communicate?
If the bus is free, any node may begin to transmit. But what happens when two or more nodes attempt to transmit a message onto the CAN bus at the same time? The identifier field, unique throughout the network, determines the priority of the message. A "non-destructive arbitration technique" is used to ensure that messages are sent in order of priority and that no messages are lost. The lower the numerical value of the identifier, the higher the priority: a message whose identifier has more dominant bits (i.e. 0 bits) early on overwrites the less dominant identifiers of the other nodes, so that after the arbitration on the ID only the dominant message remains, and it is received by all nodes.
As stated earlier, CAN does not use an address-based format for communication, but a message-based one. Information is transferred from one location to another by sending a group of bytes at a time (in order of priority). This makes CAN ideally suited to applications requiring a large number of short messages (e.g. temperature and rpm information) that may be needed at more than one location and where system-wide data consistency is mandatory. (Traditional networks such as USB or Ethernet are used to send large blocks of data point-to-point, from node A to node B, under the supervision of a central bus master.)
Let us now try to understand how these nodes are interconnected physically, with some examples. A modern automobile will have many electronic control units for various subsystems (fig 1-a); typically the biggest processor is the engine control unit (the host processor). The CAN standard lets each subsystem control actuators or receive signals from sensors. A CAN message never reaches these devices directly; instead, a host processor and a CAN controller (with a CAN transceiver) sit between these devices and the bus. (In some cases the network need not have a separate controller node; each node can be connected to the main bus directly.)
The CAN controller stores the bits received from the bus, one by one, until an entire message block is available, which can then be fetched by the host processor (usually after the CAN controller has triggered an interrupt). The CAN transceiver adapts the signal levels of the bus to the levels the CAN controller expects, and also provides protective circuitry for the CAN controller. The host processor decides what the received messages mean and which messages it wants to transmit itself.
Fig 1-a
It is likely that the more rapidly changing parameters need to be transmitted more frequently and must therefore be given a higher priority. How is this high priority achieved? As we know, the priority of a CAN message is determined by the numerical value of its identifier; the numerical value of each message identifier (and thus the priority of the message) is assigned during the initial phase of system design. To arbitrate priority during communication, CAN uses the established method known as CSMA/CD, with the enhanced capability of non-destructive bit-wise arbitration, to provide collision resolution and exploit the maximum available capacity of the bus. "Carrier Sense" describes the fact that a transmitter listens for a carrier before trying to send: it tries to detect the presence of an encoded signal from another station before attempting to transmit, and if a carrier is sensed, the node waits for the transmission in progress to finish before initiating its own. "Multiple Access" describes the fact that multiple nodes send and receive on the same medium; transmissions by one node are generally received by all other nodes using the medium. "Collision Detection" (CD) means that collisions are resolved through bit-wise arbitration, based on the preprogrammed priority of each message carried in its identifier field.
Fig 1-b
Let us now see how the term "priority" becomes important in the network. Each node can have one or more functions, and different nodes may transmit messages at different times (depending on how the system is configured) based on those functions. For example, a node may transmit:
1) only when a system failure (communication failure) occurs;
2) continually, as when it is monitoring a temperature; or
3) only when instructed by another node, as when a fan controller is told to turn a fan on because the temperature-monitoring node has detected an elevated temperature.
Note:
When one node transmits a message, sometimes many nodes may accept it and act on it (though this is not the usual case). For example, a temperature-sensing node may send out temperature data that is accepted and acted on only by a temperature-display node; but if the sensor detects an over-temperature situation, many nodes might act on the information.
CAN uses "Non-Return to Zero" (NRZ) encoding (with bit stuffing) for data communication on a differential two-wire bus, usually a twisted pair (shielded or unshielded). Flat (telephone-type) pair cable also performs well, but generates more noise itself and may be more susceptible to external noise sources.
Main Features:
a) A two-wire, half duplex, high-speed network system mainly suited for high-speed applications using "short
messages". (The message is transmitted serially onto the bus, one bit after another in a specified format).
b) The CAN bus offers a high-speed communication rate of up to 1 Mbit/s for bus lengths up to about 40 m, thus facilitating real-time control. (Increasing the distance decreases the achievable bit rate.)
c) With the message-based format and the error confinement employed, it is possible to add nodes to the bus without reprogramming the other nodes to recognize the addition, and without changing existing hardware. This can be done even while the system is in operation; the new node will start receiving messages from the network immediately. This is called "hot-plugging".
d) Another useful feature built into the CAN protocol is the ability of a node to request information from other
nodes. This is called a remote transmit request, or RTR.
e) The use of NRZ encoding ensures compact messages with a minimum number of transitions and high
resilience to external disturbance.
f) The CAN protocol can link up to 2032 devices (assuming one node per identifier) on a single network, although owing to the practical limitations of the hardware (transceivers) it may link only up to about 110 nodes on a single network.
g) Extensive and unique error-checking mechanisms.
h) High immunity to electromagnetic interference, and the ability to self-diagnose and repair data errors.
i) Non-destructive bit-wise arbitration provides bus allocation on the basis of need, delivering efficiency benefits that cannot be gained from either fixed time-schedule allocation (e.g. Token Ring) or destructive bus allocation (e.g. Ethernet).
j) Fault confinement is a major advantage of CAN. Faulty nodes are automatically dropped from the bus. This
helps to prevent any single node from bringing the entire network down, and thus ensures that bandwidth is
always available for critical message transmission.
k) The use of differential signaling (a method of transmitting information electrically by means of two
complementary signals sent on two separate wires) gives resistance to EMI & tolerance of ground offsets.
l) CAN is able to operate in extremely harsh environments. Communication can still continue (but with reduced
signal to noise ratio) even if:
1. Either of the two wires in the bus is broken
2. Either wire is shorted to ground
3. Either wire is shorted to power supply.
CAN protocol Layers & message Frames:
Like other network applications, CAN follows a layered approach to system implementation, conforming to the Open Systems Interconnection (OSI) model, which is defined in terms of layers. The ISO 11898 architecture for CAN defines the lowest two layers of the seven-layer OSI model: the data-link layer and the physical layer. The remaining layers (the higher layers) are left to be implemented by system software developers, and are used to adapt and optimize the protocol for multiple media (twisted pair, single wire, optical, RF or IR). Higher Layer Protocols (HLPs) are used to implement the upper five layers of the OSI model in CAN.
CAN uses a specific message frame format for receiving and transmitting data. The two frame formats available are:

a) Standard CAN, or base frame format
b) Extended CAN, or extended frame format

The following figure (Fig 2) illustrates the standard CAN frame format, which consists of seven different bit fields.
a) A Start of Frame (SOF) field - indicates the beginning of a message frame.
b) An Arbitration field, containing a message identifier and the Remote Transmission Request (RTR) bit. The RTR bit discriminates between a transmitted Data Frame and a request for data from a remote node.
c) A Control field of six bits, containing two reserved bits (r0 and r1) and a four-bit Data Length Code (DLC). The DLC indicates the number of bytes in the Data Field that follows.
d) A Data field, containing from zero to eight bytes.
e) A CRC field, containing a fifteen-bit cyclic redundancy check code and a recessive delimiter bit.
f) An Acknowledge field, consisting of two bits. The first is the Slot bit, which is transmitted as recessive but is subsequently overwritten by dominant bits transmitted from any node that successfully receives the transmitted message. The second is a recessive delimiter bit.
g) An End of Frame field, consisting of seven recessive bits.
An Intermission field consisting of three recessive bits is then added after the EOF field, after which the bus is recognized to be free.
(Fig 2)
The extended frame format provides the Arbitration field with two identifier bit fields. The first (the base ID) is eleven (11) bits long and the second (the ID extension) is eighteen (18) bits long, giving a total length of twenty-nine (29) bits. The distinction between the two formats is made using an Identifier Extension (IDE) bit, and a Substitute Remote Request (SRR) bit is also included in the Arbitration field.
Error detection & correction:

This mechanism detects errors in messages appearing on the CAN bus so that the transmitter can retransmit the message. The CAN protocol defines five ways of detecting errors; two of these work at the bit level, and the other three at the message level.
1. Bit Monitoring.
2. Bit Stuffing.
3. Frame Check.
4. Acknowledgement Check.
5. Cyclic Redundancy Check
1. Each transmitter on the CAN bus monitors (i.e. reads back) the transmitted signal level. If the signal level read
differs from the one transmitted, a Bit Error is signaled. Note that no bit error is raised during the arbitration
process.
2. When five consecutive bits of the same level have been transmitted by a node, it adds a sixth bit of the opposite level to the outgoing bit stream, and the receivers remove this extra bit. This is done to avoid excessive DC components on the bus, but it also gives the receivers an extra opportunity to detect errors: if more than five consecutive bits of the same level occur on the bus, a Stuff Error is signaled.
3. Some parts of the CAN message have a fixed format, i.e. the standard defines exactly what levels must occur
and when. (Those parts are the CRC Delimiter, ACK Delimiter, End of Frame, and also the Intermission). If a CAN
controller detects an invalid value in one of these fixed fields, a Frame Error is signaled.
4. All nodes on the bus that correctly receive a message (regardless of whether they are "interested" in its contents) are expected to send a dominant level in the so-called Acknowledgement Slot of the message, where the transmitter transmits a recessive level. If the transmitter cannot detect a dominant level in the ACK slot, an Acknowledgement Error is signaled.
5. Each message carries a 15-bit Cyclic Redundancy Checksum, and any node that detects a CRC in the message different from the one it has calculated itself signals a CRC Error.
Error confinement:
Error confinement is a technique unique to CAN that provides a method for discriminating between temporary errors and permanent failures in the communication network. Temporary errors may be caused by spurious external conditions, voltage spikes, etc. Permanent failures are likely to be caused by bad connections, faulty cables, defective transmitters or receivers, or long-lasting external disturbances.
Let us now try to understand how this works.
Each node on the bus has two error counters, the transmit error counter (TEC) and the receive error counter (REC), which are incremented and/or decremented according to the errors detected. If a transmitting node detects a fault, it increments its TEC faster than the listening nodes increment their RECs, because there is a good chance that it is the transmitter that is at fault.
A node normally operates in a state known as "Error Active". In this condition the node is fully functional and both error counters contain counts of less than 128. When either of the two error counters rises above 127, the node enters a state known as "Error Passive": it no longer actively destroys the bus traffic when it detects an error. A node in error-passive mode can still transmit and receive messages, but is restricted in how it may flag any errors it detects. When the transmit error counter rises above 255, the node enters the Bus Off state and no longer participates in the bus traffic at all; communication between the other nodes continues unhindered.
To be more specific, an "Error Active" node transmits Active Error Flags when it detects errors, an "Error Passive" node transmits Passive Error Flags, and a node in the "Bus Off" state transmits nothing on the bus at all. Transmit errors add 8 error points and receive errors add 1 error point; correctly transmitted and/or received messages cause the counters to decrease. The other nodes detect the error caused by the Error Flag (if they have not already detected the original error) and take appropriate action, i.e. discard the current message.
Confused? Let us simplify slightly.
Let us assume that whenever node A on a bus tries to transmit a message, it fails (for whatever reason). Each time this happens it increases its transmit error counter by 8, transmits an Active Error Flag, and then attempts to retransmit the message; suppose the same thing happens again. When the transmit error counter rises above 127 (i.e. after 16 attempts), node A goes Error Passive. It now transmits Passive Error Flags on the bus. A Passive Error Flag comprises six recessive bits and does not destroy other bus traffic, so the other nodes do not hear node A complaining about bus errors. However, A continues to increase its TEC, and when it rises above 255, node A finally gives up and goes Bus Off.
What do the other nodes think about node A? For every Active Error Flag that A transmitted, the other nodes increased their receive error counters by 1. By the time A goes Bus Off, the other nodes have counts in their receive error counters well below the Error Passive limit of 127, and this count decreases by one for every correctly received message. Node A, however, stays Bus Off. Most CAN controllers provide status bits (and corresponding interrupts) for two states: "Error Warning" (one or both error counters above 96) and "Bus Off".

Bit Timing and Synchronization:

The time for each bit in a CAN message frame is made up of four non-overlapping time segments as shown
below.
The following points may be relevant as far as the "bit timing" is concerned.

1. The Synchronization segment is used to synchronize the nodes on the bus. It is always one time quantum
long.
2. One time quantum (also known as the system clock period) is the period of the local oscillator multiplied by
the value in the Baud Rate Prescaler (BRP) register of the CAN controller.
3. A bit edge is expected to take place during the synchronization segment when the data changes on the bus.
4. The Propagation segment is used to compensate for physical delay times within the network bus lines. It is
programmable from one to eight time quanta long.
5. Phase-segment 1 is a buffer segment that can be lengthened during resynchronization to compensate for
oscillator drift and positive phase differences between the oscillators of the transmitting and receiving nodes. It
is also programmable from one to eight time quanta long.
6. Phase-segment 2 can be shortened during resynchronization to compensate for negative phase errors and
oscillator drift. Its length is the maximum of Phase-segment 1 and the Information Processing Time.

7. The Sample point is always at the end of Phase-segment 1. It is the time at which the bus level is read and
interpreted as the value of the current bit.
8. The Information Processing Time is less than or equal to 2 time quanta.

This bit time is programmable at each node on a CAN bus, but be aware that all nodes on a single bus must use
the same bit time, regardless of whether they are transmitting or receiving. The bit time is a function of the
period of each node's local oscillator, the value programmed into the BRP register of each node's controller, and
the programmed number of time quanta per bit.

How do they synchronize:

Suppose a node receives a data frame. The receiver must synchronize with the transmitter to achieve proper
communication, but there is no explicit clock signal that a CAN system can use as a timing reference. Instead,
two mechanisms are used to maintain synchronization, as explained below.

Hard synchronization:
It occurs at the Start-of-Frame, i.e. at the transition of the start bit. The bit time is restarted from that edge.

Resynchronization:

To compensate for oscillator drift and phase differences between transmitter and receiver oscillators, additional
synchronization is needed. Resynchronization for the subsequent bits of a received frame occurs when a bit
edge does not fall within the Synchronization Segment. It is invoked automatically, and one of the Phase
Segments is shortened or lengthened by an amount that depends on the phase error in the signal. The
maximum amount that can be used is determined by a user-programmable number of time quanta known as
the Synchronization Jump Width (SJW).
Higher Layer Protocols:
A higher layer protocol (HLP) is required to manage the communication within a system. The term HLP is derived
from the OSI model and its seven layers. The CAN protocol itself only specifies how small packets of data may be
transported safely from one point to another over a shared communications medium. It says nothing about topics
such as flow control, transportation of data larger than can fit in an 8-byte message, node addresses,
establishment of communication, etc. An HLP provides solutions for these topics.
Higher layer protocols are used in order to
1. Standardize startup procedures including bit rate setting
2. Distribute addresses among participating nodes or kinds of messages
3. Determine the layout of the messages
4. Provide routines for error handling on system level
Different Higher Layer Protocols
There are many higher layer protocols for the CAN bus. Some of the most commonly used ones are given below.
1. CAN Kingdom
2. CANopen
3. CCP/XCP
4. DeviceNet
5. J1939
6. OSEK
7. SDS
Note:
Many recently released microcontrollers from Freescale, Renesas, Microchip, NEC, Fujitsu, Infineon, Atmel, and
other leading MCU vendors come with an integrated CAN interface.

Communication Interfaces Continued:

LIN (Local Interconnect Network):


History:

LIN (Local Interconnect Network) was developed as a cost-effective alternative to the CAN protocol. In
1998 a group of companies including Volvo, Motorola, Audi, BMW, Daimler Chrysler, and
Volkswagen formed a consortium to develop LIN.

The latest version of LIN is LIN 2.0, released in 2003. LIN 2.0 adds interesting features such as
diagnostics.

Introduction:

LIN is an SCI/UART-based, serial, byte-oriented, time-triggered communication protocol
designed to support automotive networks in conjunction with the Controller Area Network (CAN).
It enables cost-effective communication with sensors and actuators when all the features of
CAN are not required. Compared to CAN, its main characteristics are low cost and low speed,
and it is used for short-distance networks.

Usually, in automotive applications, the LIN bus connects smart sensors or actuators to an
Electronic Control Unit (ECU), which is often a gateway to a CAN bus. Like CAN, LIN is
a broadcast-type serial network, but with a single master and multiple (up to 16) slaves. No
collision detection exists in LIN; therefore all messages are initiated by the master, with at most
one slave replying to a given message identifier. The master is typically a moderately powerful
microcontroller, whereas the slaves can be less powerful, cheaper microcontrollers or
dedicated ASICs.

Moreover, LIN is a single-wire 12 V bus whose communication protocol is based on the
ISO 9141 NRZ standard. An important feature of LIN is the synchronization mechanism that
allows clock recovery by slave nodes without a quartz or ceramic resonator; only the master
node needs such an oscillator. Nodes can be added to the LIN network without requiring
hardware or software changes in the other slave nodes. The maximum transmission speed
is 20 kbit/s.

(Fig 1-a)

How do they transmit & receive data?


As we have already seen, a LIN network comprises one master node and one or more
slave nodes. Only the master node initiates communication on the LIN bus. The master
node defines the transmission speed, sends synchronization pulses, monitors the data, and
switches slave nodes into sleep or wake-up mode. It also receives a wake-up break from slave
nodes when the bus is inactive and they request some action.
A slave node waits for the synchronization pulse, synchronizes itself using it, and then
processes the message identifier. According to the ID, the slave decides what to do: receive
data, transmit, or do nothing. While transmitting, it sends 2, 4 or 8 data bytes depending on the
ID received, plus a checksum byte. The two types of data messages in a LIN network are the
signal message (data sent in the data frame) and the diagnostic message.

The network consists of one master task and several slave tasks. The master node contains
both a master task and a slave task, whereas a slave node contains only a slave task.
Communication can take place from the master node (using its slave task) to one or more
slave nodes, and from one slave node to the master node and/or other slave nodes. Direct
slave-to-slave communication, without routing through the master node, is also possible.
LIN uses frames for data communication. A frame consists of a header, a response, and some
response space so that the slave has time to answer. The master sends out the message
header, which is generated by its master task and contains the synchronization break, the
synchronization byte, and the message identifier; each byte field begins with a start bit and
ends with a stop bit. The response contains one to eight data bytes and one checksum byte.
The slave task associated with the identifier receives the response, verifies the checksum, and
uses the transported data. Messages are created when the master node sends a frame
containing a header; the slave node(s) then fill the frame with data depending on the header
sent by the master.

This system of communication using headers and responses has several advantages:
1. Nodes can be added to the network without requiring hardware or software changes in
other slave nodes.
2. The identifier defines the content of a message.
3. Any number of nodes can simultaneously receive and act upon a single frame.

The LIN protocol is byte oriented, which means that data is sent one byte at a time. One byte
field contains a start bit (dominant), 8 data bits, and a stop bit (recessive). The data bits are sent
LSB first.
The synch break marks the beginning of a message frame. It contains at least 13 bits of
dominant value, including the start bit, followed by a break delimiter of at least one recessive
bit. The synch byte (synch field) lets a slave measure the time between two falling edges and
thereby determine the transmission rate the master node is using.
The identifier of a message, a 6-bit identifier plus 2 parity bits, denotes the content
of a message but not the destination. It incorporates information about the sender, the
receivers, the purpose, and the data-field length of the response (the three classes of 2/4/8
data bytes). The length coding is done in the 2 MSBs of the ID field. A total of 64 message
identifiers is possible. The two parity bits protect this ID field.

(Fig 2)

The parity bits are calculated as follows:

P0 = ID0 XOR ID1 XOR ID2 XOR ID4
P1 = NOT (ID1 XOR ID3 XOR ID4 XOR ID5)

The length of the response data field from the slave can be 2, 4 or 8 bytes, depending on the
two MSBs of the ID field sent by the master node (in LIN 2.0 the length is instead defined by
the network configuration and can be one to eight bytes).

Two types of checksum are used in LIN. The first, used in LIN 1.3, is the inverted eight-bit sum
(with carry) of all data bytes in a message. The new checksum used in LIN 2.0 also
incorporates the protected identifier in the calculation.

The power management in LIN network:

The network management in a LIN cluster covers "wake up" and "go to sleep".

All the slave nodes in an active LIN cluster can be put into sleep mode by sending a
diagnostic master request frame with the first data byte equal to zero. This special use of a
diagnostic frame is called the go-to-sleep command. Slave nodes also enter sleep mode
automatically if the LIN bus is inactive for more than 4 seconds.

Any node in a sleeping LIN cluster can request a cluster wake-up. The wake-up request forces
the bus into the dominant state for 250 µs to 5 ms.
Every slave node detects the wake-up request (a dominant pulse longer than 150 µs) and is
ready to listen to bus commands within 100 ms, measured from the ending edge of the
dominant pulse.

The master node also wakes up and, when the slave nodes are ready, starts sending frame
headers to find out the cause of the wake-up. If the master does not begin sending frame
headers within 150 ms of the wake-up request, the requesting node may send a new wake-up
request. After three failed requests, the node must wait a minimum of 1.5 seconds before
sending a fourth wake-up request.

LIN Versus CAN:


Strictly speaking, there is little meaning in comparing CAN and LIN, since they do not
address the same issues. A comparison can, however, give you a general view of where
LIN fits in the big picture.
As we already saw, LIN targets low-end applications where the communication cost
per node must be two to three times lower than CAN, and where the performance,
robustness, and versatility of CAN are not required. The main economic factor in favor
of LIN is the avoidance of costly quartz or ceramic resonators in slave nodes, since
they can self-synchronize. A small comparative study gives the following points.

Feature                              LIN                        CAN
Media access control                 Single master              Multiple masters
Bus speed                            2.4, 9.6 and 19.2 kbps     62.5 kbps to 1 Mbps
Multicast message routing            6-bit identifier           11/29-bit identifier
Size of network                      2 to 16 nodes              4 to 20 nodes
Data bytes per frame                 2 to 8                     0 to 8
Transmission time for 4 data bytes   6 ms at 19.2 kbps          0.8 ms at 125 kbps
Error detection (data field)         8-bit checksum             15-bit CRC
Physical layer                       Single wire, 40 V          Shielded twisted pair, 5 V
Quartz/ceramic resonator             Master only                All nodes

The main drawbacks of LIN are its lower bandwidth and the less efficient bus access scheme
that follows from the master-slave configuration.

MODULE -11

I2C Bus interface

How it arrived:

Almost 25 years ago, in the early 1980s, Philips designed and developed a new bus standard, the I2C bus, for
easy communication between integrated circuits (especially in TV circuits) residing on the same circuit board.
The name I2C stands for Inter-Integrated Circuit; it is a bi-directional 2-wire bus standard for efficient inter-IC
control, which is why the bus is commonly known as the Inter-IC or I2C bus.
Previously, when connecting multiple devices together, the address and data lines of each device had to be
connected individually. This resulted in a lot of traces on the PCB and required more components, making
systems expensive and also susceptible to interference and disturbance by electromagnetic interference (EMI)
and electrostatic discharge (ESD). The I2C bus standard is a remedy to this problem.

Introduction:

I2C is a multi-master, low-bandwidth, short-distance, serial communication bus protocol. Nowadays it is used
not only on single boards but also to attach low-speed peripheral devices and components to a motherboard,
embedded system, or cell phone, as the newer versions provide many advanced features and much higher speeds.
Features such as simplicity and flexibility make this bus attractive for consumer and automotive electronics.

Details:
The basic design of I2C has a 7-bit address space with 16 reserved addresses, which makes the maximum
number of nodes that can communicate on the same bus 112. In other words, each I2C device is recognized by a
unique 7-bit address. Note that the maximum number of nodes is limited not only by the address space but also
by the total bus capacitance of 400 pF.

The two bi-directional lines that carry information between the devices connected to the bus are known as the
Serial Data line (SDA) and the Serial Clock line (SCL). As the names indicate, the SDA line carries the data and
the SCL line the clock signal used for synchronization. The typical voltages used are +5 V or +3.3 V.

Like the CAN and LIN protocols, I2C also follows a master-slave communication model. But the I2C bus is a
multi-master bus, which means that more than one IC/device capable of initiating a data transfer can be
connected to it. The device that initiates the communication is called the MASTER, whereas the device being
addressed by the master is called the SLAVE. It is always the master that generates the clock signal; each
master generates its own clock when transferring data on the bus.

The real communication:

As we saw already, the active lines used for communication in the I2C protocol are bi-directional. Every device
connected to the bus (for example: MCU, LCD driver, ASIC, remote I/O port, RAM, EEPROM, data converter)
has a unique address. Each of these devices can act as a receiver and/or transmitter, depending on its
functionality, and each device is software-addressable by its unique address.

Since the nodes and other peripheral devices (whether master or slave) use microcontrollers to connect to and
communicate over the bus, this communication can in general be considered inter-IC communication.

Let us assume that the master MCU (as always, it's the master who initiates the communication) wants to send
data to one of its slaves. The step-by-step procedure will be as follows.

1. Wait until there is no activity on the I2C bus: the SDA and SCL lines are both high, so the bus is 'free'.
2. The master MCU issues a start condition, saying "it's mine - I have started to use the bus". This condition
tells all the slave devices to listen on the serial data line for instructions/data.
3. The master provides a clock signal on the SCL line. It is used by all the ICs as the time reference at which
each bit of data on the SDA line is correct (valid) and can be used. The data on the data line must be valid at
the time the clock line switches from 'low' to 'high'.
4. The master MCU sends the unique binary address of the target device it wants to access.
5. The master MCU puts a one-bit message (the read/write flag) on the bus, telling whether it wants to SEND
data to or RECEIVE data from the other chip. This read/write flag indicates to the slave whether the access is
a read or a write operation.
6. Each slave IC compares the received address with its own address. The slave device with the matching
address responds with an acknowledgement signal. Slaves whose address does not match simply wait until
the bus is released by the stop condition.
7. Once the master MCU receives the acknowledgement, data communication proceeds between the master
and the slave on the data bus. Both the master and the slave can receive or transmit data, depending on
whether the communication is a read or a write. The transmitter sends 8 bits of data to the receiver, which
replies with a 1-bit acknowledgement, and the transfer continues in this way.
8. When the communication is complete, the master issues a stop condition, indicating that everything is done.
This action frees up the bus. The stop signal is just one bit of information transferred by a special 'wiggling' of
the SDA/SCL wires.

Notes:
1. Devices with master capability can identify themselves to other specific master devices and advertise their
own specific address and functionality.
2. Only two devices exchange data during one 'conversation'.

The trick of open-drain lines & pull-up resistors:

The bus interface in I2C is built around an input buffer and an open-drain transistor. When the bus is in the "idle"
state, the bus lines are kept at the logic "high" level; external pull-up resistors are used to achieve this. The
pull-up resistor, as seen in fig-1, acts as a small current source. If a device wants to put a signal on the bus, it
drives its output transistor, pulling the bus to the "low" level. If the bus is already occupied by another chip
driving a "low" state onto it, all other chips lose their right to access the bus. A chip handles this with a built-in
bus arbitration technique.

Both bus lines, SDA and SCL, are bi-directional, which means that in a particular device these lines can be
driven by the IC itself or by an external device. To achieve this, the signals use open-collector or open-drain
outputs.

The weak point of the open-collector technique is that on a long bus the transmission speed drops drastically
due to the capacitive load. The signal edges are shaped by the RC time constant: the higher the RC constant,
the slower the transmission. At some point, the ICs are no longer able to distinguish logic 1 from logic 0.

It can also cause reflections at high speed, which create "ghost signals" and corrupt the data being transmitted.

This problem can be overcome by using an active I2C terminator. This device consists of a twin charge pump,
which can be considered a dynamic resistor (in place of the passive pull-up resistors). The moment the bus
state changes, it supplies a large current (low dynamic resistance) to the bus, charging the parasitic
capacitance very quickly. Once the voltage has risen above a certain level, the high-current mode cuts out and
the output current drops sharply.

Different states, conditions & events on the bus:

We saw several distinct states and conditions on the bus in our explanation: START, ADDRESS, ACK, DATA and
STOP. Let us now look at each of these in more detail.

START:
The Start condition must be issued on the bus before any transaction. The master chip first pulls the data line
(SDA) low, and next pulls the clock line (SCL) low. This condition signals all connected chips to listen to the bus
and expect a transmission.
A single message can contain multiple Start conditions; this is called a "repeated start".

ADDRESS & DATA:

After the "start" bit, a byte (7+1 bits) is transmitted by the master. This byte carries the address that identifies a
particular slave on the bus; bit 0 determines the slave access mode ('1' = read / '0' = write). Remember, bytes
are always transmitted MSB first. An R/W bit of '0' indicates that the master wants to send data to the slave.
The addressed slave then responds with an ACK signal, indicating that it is ready to receive, and the
communication continues.

In the same way, a byte can be received from the slave if the R/W bit in the address was set to '1', i.e. 'read'.
But now the master is not allowed to touch the SDA line: it sends the 8 clock pulses needed to clock in a byte on
the SCL line and releases the SDA line, and the slave takes control of SDA for the data transfer. All the master
has to do is generate a rising edge on the SCL line, read the level on SDA, and generate a falling edge on the
SCL line. The slave will not change the data while SCL is high. This sequence is performed 8 times to complete
the data byte.

Some addresses are reserved for "extended addressing mode", which uses 10-bit addressing. A standard slave
node that cannot resolve this extended addressing simply ignores such an address.

ACK:

As we know, the ACK signal is sent back to the master whenever an address or data byte has been transmitted
on the bus and received by the intended slave node.
To acknowledge, the slave drives the SDA line low immediately after receiving the 8th data bit transmitted by
the master, or after evaluating the received address byte. So at the completion of a transmission, SCL is pulled
low by the master, whereas SDA is pulled low by the slave.

To continue the transmission, the master puts a clock pulse on the SCL line, and the slave releases the SDA
line after receiving the clock. The bus is now ready again for the master to send data or to initiate a stop
condition.

In the same way, the master must acknowledge the slave device upon successful reception of a byte from the
slave.
Here the SDA and SCL lines are under the full control of the master. The slave releases the SDA line after
sending its last bit, letting the line go high. The master then pulls the SDA line low and puts a clock pulse on the
SCL line. After completion of this clock pulse, the master releases the SDA line again so that the slave can
regain control of it.

The master can stop receiving data from the slave at any time, simply by sending a stop condition.

NACK:

NACK means "Not Acknowledge". Confused? Don't mix it up with "No Acknowledge": a "Not Acknowledge"
occurs only after a master has read a byte from a slave, whereas "No Acknowledge" occurs after a master has
written a byte to a slave. Still confused? Let's analyze this in detail.

A NACK can occur when the slave regains control of the SDA line after the ACK cycle issued by the master.

Let's assume the next bit ready to be sent to the master is a 0. The slave pulls the SDA line low immediately
after the master takes the SCL line low. The master now attempts to generate a Stop condition on the bus: it
releases the SCL line first and then tries to release the SDA line, which is still held low by the slave. In short, no
Stop condition has been generated on the bus. This condition is called a NACK.

No ACK:

If, after transmission of the 8th bit from the master to the slave, the slave does not pull the SDA line low, this is
considered a No-ACK condition.
It may arise for the following reasons:
1. The slave is not present (in the case of an address).
2. The slave missed a pulse and got out of sync with the master's SCL line.
3. The bus is "stuck": one of the lines could be held low permanently.

In any case the master should abort by attempting to send a stop condition on the bus.

STOP:

The Stop condition is sent on the bus only after the message transfer has been completed. The master MCU
first releases the SCL line and then the SDA line. This is a clear indication to all chips and devices on the bus
that the bus is idle and available again for another communication.
A Stop condition denotes the END of a transmission even if it is issued in the middle of a transaction or in the
middle of a byte. In this case, the chip disregards the information sent and goes to the IDLE state, waiting for a
new start condition.

Modes of operation:

The I2C bus can operate in three modes; in other words, data on the I2C bus can be transferred at three
different rates:
1. Standard mode
2. Fast mode
3. High-Speed (Hs) mode
Standard mode:

1. This is the original standard mode, released in the early 1980s.
2. It has a maximum data rate of 100 kbps.
3. It uses 7-bit addressing, which provides 112 usable slave addresses.

Enhanced or Fast mode:

Fast mode added some more requirements for the slave devices:
1. The maximum data rate was increased to 400 kbps.
2. To suppress noise spikes, Fast-mode devices were given Schmitt-triggered inputs.
3. The SDA and SCL lines of an I2C-bus slave device were made to exhibit high impedance when power is
removed.

High-Speed mode:

This mode was created mainly to increase the data rate, up to 36 times faster than standard mode. It provides
1.7 Mbps (with a bus capacitance Cb of 400 pF) and 3.4 Mbps (with Cb = 100 pF).

The major difference of High-Speed (HS) mode compared to standard mode is that HS-mode systems must
include an active pull-up on the SCL line. The other difference is that a master entering HS mode first sends a
special "master code" at Fast-mode speed; since no slave is allowed to acknowledge this code, the ACK bit
remains high, after which the bus switches to HS-mode operation.

The risk of data corruption:

The operation of the bus with one master node seems very easy. But what happens if there are two masters
connected to the bus and both of them start communicating at the same time? Let us analyze this situation in
detail.
When the first MCU issues a start condition and sends an address, all slaves listen (including the second MCU,
which at that time is considered a slave as well). If the address does not match the address of the second MCU,
it holds back any activity until the bus becomes idle again after a stop condition.
As long as the two MCUs monitor what is going on on the bus (start and stop conditions), and as long as they
are aware that a transaction is in progress because the last issued command was not a STOP, there is no
problem.
But what happens if one of the MCUs missed the START condition and still thinks the bus is idle, or it just came
out of reset and wants to communicate?

The physical bus setup of I2C helps to solve this problem. Since the bus structure is a wired AND (if one device
pulls a line low, it stays low), it is possible to detect whether the bus is idle or occupied.

Different versions of the I2C bus

Some of the specifications of the different versions of the I2C bus are explained below.

Version 1.0 - 1992

1. Programming of a slave address by software has been omitted.
2. The "low-speed mode" has been omitted.
3. Fast-mode is added; Fast-mode devices are downward compatible.
4. 10-bit addressing is added, allowing 1024 additional slave addresses.
5. Slope control and input filtering for Fast-mode devices is specified to improve the EMC behaviour.

Version 2.0 - 1998


1. The High-speed mode (Hs-mode) is added.
2. The low output level and hysteresis of devices with a supply voltage of 2 V and below has been adapted to
meet the required noise margins and to remain compatible with higher supply voltage devices.
3. The 0.6 V at 6 mA requirement for the output stages of Fast-mode devices has been omitted.
4. The fixed input levels for new devices are replaced by bus voltage-related levels.
5. Application information for bi-directional level shifter is added.

Version 2.1 - 2000

1. After a repeated START condition in Hs-mode, it is possible to stretch the clock signal SCLH (see Section 13.2
and Figs 22, 25 and 32).
2. Some timing parameters in Hs-mode have been relaxed

Benefits and Drawbacks:

Since only two wires are required, I2C is well suited for boards with many devices connected on the bus. This
helps reduce the cost and complexity of the circuit as additional devices are added to the system.

Due to the presence of only two wires, there is additional complexity in handling the overhead of addressing and
acknowledgments. This can be inefficient in simple configurations and a direct-link interface such as SPI might be
preferred.


MODULE -12

SPI Bus interface

Introduction:

The Serial Peripheral Interface (SPI) is a hardware/firmware communications protocol developed
by Motorola and later adopted by others in the industry. National Semiconductor's Microwire is
essentially the same as SPI. SPI is also sometimes called a "four-wire" serial bus.

The Serial Peripheral Interface, or SPI bus, is a simple 4-wire serial communications interface
used by many microprocessor/microcontroller peripheral chips; it enables controllers and
peripheral devices to communicate with each other. Even though it was developed primarily for
communication between a host processor and peripherals, a connection of two processors via
SPI is just as possible.

The SPI bus operates in full duplex (signals carrying data can go in both directions
simultaneously). It is a synchronous data link with a master/slave interface and can support
speeds from about 1 Mbps up to 10 Mbps and beyond, depending on the device. Both
single-master and multi-master protocols are possible, but the multi-master bus is rarely
used, looks awkward, and is usually limited to a single slave.

The SPI bus is usually used only on the PCB. Several factors prevent us from using it outside
the PCB area: the SPI bus was designed to transfer data between IC chips at very high speeds,
and because of this high-speed aspect, the bus lines cannot be too long, since their reactance
increases too much and the bus becomes unusable. It is, however, possible to use the SPI bus
outside the PCB at low speeds, although this is not quite practical.

The peripherals can be real-time clocks, converters such as ADCs and DACs, memory modules
such as EEPROM and Flash, sensors such as temperature and pressure sensors, or other
devices such as signal mixers, potentiometers, LCD controllers, UARTs, CAN controllers, USB
controllers and amplifiers.

Data and control lines of the SPI and the basic connection:

An SPI protocol specifies 4 signal wires.

1. Master Out Slave In (MOSI) - the MOSI signal is generated by the master; the recipient is the slave.
2. Master In Slave Out (MISO) - the MISO signal is generated by the slave; the recipient is the master.
3. Serial Clock (SCLK or SCK) - the SCLK signal is generated by the master to synchronize data
transfers between the master and the slave.
4. Slave Select (SS) from master to Chip Select (CS) of slave - the SS signal is generated by the
master to select individual slave/peripheral devices. SS/CS is an active-low signal.

There may be other naming conventions such as Serial Data In [SDI] in place of MOSI and
Serial Data Out [SDO] for MISO.

Among these four logic signals, two of them MOSI & MISO can be grouped as data lines and
other two SS & SCLK as control lines.

As we already know, SPI-bus communication can involve one master with multiple slaves. In the
single-master protocol, one SPI device acts as the SPI master and controls the data flow by
generating the clock signal (SCLK) and activating the slave it wants to communicate with via the
slave-select signal (SS); it then receives and/or transmits data over the two data lines. The
master, usually the host microcontroller, always provides the clock signal to every device on the
bus, whether that device is selected or not.

The usage of each of these four pins depends on the device. For example, the SDI pin may not
be present if a device does not require an input (an ADC, for example), and the SDO pin may
not be present if a device does not require an output (LCD controllers, for example). If a
microcontroller only needs to talk to one SPI peripheral, the CS pin of that slave may be tied to
ground (permanently selected). With multiple slave devices, an independent SS signal is
needed from the master for each slave device.
How do they communicate:

The communication is always initiated by the master. The master first configures the clock,
using a frequency less than or equal to the maximum frequency the slave device supports. The
master then selects the desired slave by pulling that slave's chip-select (SS) line low. If a
waiting period is required (such as for an analog-to-digital conversion), the master must wait at
least that long before starting to issue clock cycles.

Slaves that have not been activated by the master via their slave-select lines disregard the
clock and MOSI signals from the master, and must not drive MISO. In other words, the master
selects only one slave at a time.

Most devices/peripherals have tri-state outputs, which go to a high-impedance (disconnected)
state when the device is not selected. Devices without tri-state outputs cannot share the SPI
bus with other devices, because their outputs would keep driving the shared MISO line even
when they are not selected.

A full duplex data transmission can occur during each clock cycle. That means the master
sends a bit on the MOSI line; the slave reads it from that same line and the slave sends a bit on
the MISO line; the master reads it from that same line.

Data transfer is organized using a shift register of some given word size, such as 8 bits
(remember, it's not limited to 8 bits), in both master and slave; the two registers are connected
in a ring. While the master shifts its register value out through the MOSI line, the slave shifts
data into its own shift register.

Data are usually shifted out MSB first, while a new LSB is shifted into the same register. After
the register has been shifted out, the master and slave have exchanged their register values.
Each device then takes that value and does the necessary operation with it (for example,
writing it to memory). If there are more data to exchange, the shift registers are loaded with new
data and the process is repeated. When there are no more data to transmit, the master stops its
clock and normally deselects the slave.
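
The ring of shift registers described above can be modeled in a few lines of C. This is only a software sketch (the function name and both roles are ours, not from any SPI driver API); it demonstrates that after eight clocks, MSB first, the master and slave have swapped register contents.

```c
#include <stdint.h>
#include <assert.h>

/* Software model of one 8-bit SPI transfer: master and slave each hold a
   shift register; on every clock the MSB goes out and the partner's bit
   shifts in at the LSB end. After 8 clocks the two values are exchanged. */
void spi_exchange(uint8_t *master_reg, uint8_t *slave_reg)
{
    for (int i = 0; i < 8; i++) {
        int mosi = (*master_reg >> 7) & 1;   /* master's MSB onto MOSI */
        int miso = (*slave_reg  >> 7) & 1;   /* slave's MSB onto MISO  */
        *master_reg = (uint8_t)((*master_reg << 1) | miso);
        *slave_reg  = (uint8_t)((*slave_reg  << 1) | mosi);
    }
}
```

Starting with 0xA5 in the master and 0x3C in the slave, one call leaves 0x3C in the master and 0xA5 in the slave.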

There is a "multiple byte stream mode" available on the SPI bus. In this mode the master can
shift bytes continuously; the slave select (SS) line is kept low until the whole stream transfer is
finished.

SPI devices sometimes use an extra signal line to send an interrupt to the host CPU. Examples
of such signals are pen-down interrupts from touch-screen sensors, thermal-limit alerts from
temperature sensors, alarms issued by real-time clock chips, and headset-jack insertion signals
from the audio codec in a cell phone.

Significance of the clock polarity and phase:

Another pair of parameters called clock polarity (CPOL) and clock phase (CPHA)
determine the edges of the clock signal on which the data are driven and sampled.
That means, in addition to setting the clock frequency, the master must also
configure the clock polarity (CPOL) and phase (CPHA) with respect to the data. Since
the clock serves as synchronization of the data communication, there are four
possible modes that can be used in an SPI protocol, based on this CPOL and CPHA.

SPI Mode   CPOL   CPHA

    0        0      0
    1        0      1
    2        1      0
    3        1      1

If the phase of the clock is zero (i.e. CPHA = 0) data is latched at the rising edge of the clock
with CPOL = 0, and at the falling edge of the clock with CPOL = 1.
If CPHA = 1, the polarities are reversed. Data is latched at the falling edge of the clock with
CPOL = 0, and at the rising edge with CPOL = 1.
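
These rules can be captured in a small helper. This is an illustrative sketch (the type and function names are ours): the mode number's upper bit is CPOL, its lower bit is CPHA, and data is latched on the rising edge exactly when CPOL equals CPHA (modes 0 and 3).

```c
#include <assert.h>

/* Decode an SPI mode number (0-3) into CPOL/CPHA, and report on which
   clock edge data is latched, per the table and rules above. */
typedef struct { int cpol; int cpha; } spi_mode_t;

spi_mode_t spi_mode(int mode)
{
    spi_mode_t m = { (mode >> 1) & 1, mode & 1 };
    return m;
}

/* Returns 1 if data is latched on the rising edge, 0 if on the falling edge. */
int latches_on_rising_edge(spi_mode_t m)
{
    return m.cpol == m.cpha;   /* modes 0 and 3 latch on the rising edge */
}
```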

Microcontrollers allow the polarity and the phase of the clock to be adjusted. With a positive
polarity, data is latched at the rising edge of the clock; the data is, however, placed on the data
line already at the preceding falling edge so that it has time to stabilize. Most peripherals that
can only act as slaves work with this configuration. If it becomes necessary to use the other
polarity, the transitions are simply reversed.

Different types of configurations:

Suppose a master-microcontroller needs to talk to multiple SPI Peripherals. There are 2 ways
to set things up:

1. Cascaded slaves or daisy-chained slaves


2. Independent slaves or parallel configuration

Daisy-chained slave configuration:

In the cascaded slave configuration, all the clock lines (SCLK) are connected together, and all
the chip-select (CS) pins are connected together. The data flows out of the microcontroller,
through each peripheral in turn, and back to the microcontroller. The data output of each slave
device is tied to the data input of the next, forming one wide shift register. The cascaded slave
devices are therefore effectively one larger device and receive the same chip-select signal.
This means only a single SS line is required from the master, rather than a separate SS line for
each slave.
But remember that a daisy chain will not work with devices that require multi-byte operations.
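
The daisy chain can also be modeled in software to see where each byte ends up. In this sketch (names are ours, for illustration only), each slave is an 8-bit shift register; after shifting two bytes into a two-slave chain, the byte sent first sits in the slave farthest from the master.

```c
#include <stdint.h>
#include <assert.h>

/* Software model of a daisy chain: the master's MOSI feeds slave 0, whose
   output feeds slave 1, and so on. One call shifts a single bit in. */
void chain_clock(uint8_t *slaves, int n, int mosi_bit)
{
    int carry = mosi_bit;
    for (int i = 0; i < n; i++) {
        int out = (slaves[i] >> 7) & 1;              /* bit leaving this slave */
        slaves[i] = (uint8_t)((slaves[i] << 1) | carry);
        carry = out;                                 /* ...enters the next one */
    }
}

/* Shift a sequence of bytes into the chain, MSB first. */
void chain_send(uint8_t *slaves, int n, const uint8_t *bytes, int nbytes)
{
    for (int b = 0; b < nbytes; b++)
        for (int i = 7; i >= 0; i--)
            chain_clock(slaves, n, (bytes[b] >> i) & 1);
}
```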

Independent slave configuration:

This is the typical SPI-bus configuration with one SPI-master and multiple slaves/peripherals. In
this independent or parallel slave configuration,

1. All the clock lines (SCLK) are connected together.


2. All the MISO data lines are connected together.
3. All the MOSI data lines are connected together.
4. But the Chip Select (CS) pin from each peripheral must be connected to a separate Slave
Select (SS) pin on the master-microcontroller.
Queued Serial Peripheral Interface (QSPI)

The queued serial peripheral interface (QSPI) is another type of SPI controller, not another bus
type; in other words, it is just an extension of the SPI bus.
The difference is that it uses a data queue with programmable queue pointers that allow some
data transfers to take place without CPU intervention. It also has a wrap-around mode that
allows continuous transfers to and from the queue with no CPU intervention. As a result, the
peripherals or slaves appear to the CPU as memory-mapped parallel devices. This feature is
useful in applications such as control of an analog-to-digital converter.
The QSPI also has additional programmable features such as chip selects and transfer
length/delay.

Advantages of SPI

1. Full duplex communication


2. Higher throughput than the I2C protocol
3. Not limited to 8-bit words
4. Arbitrary choice of message size, contents, and purpose
5. Simple hardware interfacing
6. Typically lower power requirements than I2C due to less circuitry
7. No arbitration or associated failure modes
8. Slaves use the master's clock, and don't need precision oscillators
9. Transceivers are not needed
10. At most one "unique" bus signal per device (CS); all others are shared

Disadvantages of SPI

1. Requires more pins on IC packages than I2C


2. No in-band addressing. Out-of-band chip select signals are required on shared busses.
3. No hardware flow control
4. No slave acknowledgment
5. Multi-master busses are rare and awkward, and are usually limited to a single slave.
6. Without a formal standard, validating conformance is not possible
7. Only handles short distances compared to RS-232, RS-485, or CAN.

Click on the text below to enter the next module

USB interface tutorial covering basic fundamentals

Introduction:

Universal Serial Bus (USB) is a set of interface specifications for high-speed wired
communication between electronic systems, peripherals and devices, with or without a
PC/computer. USB was originally developed in 1995 by several industry-leading companies,
including Intel, Compaq, Microsoft, Digital, IBM, and Northern Telecom.

The major goal of USB was to define an external expansion bus that makes adding peripherals
to a PC easy and simple. The new external expansion architecture highlights:

1. PC host controller hardware and software


2. Robust connectors and cable assemblies
3. Peripheral friendly master-slave protocols
4. Expandable through multi-port hubs.

USB offers users simple connectivity. It eliminates the mix of different connectors for different
devices like printers, keyboards, mice, and other peripherals. That means USB-bus allows
many peripherals to be connected using a single standardized interface socket. Another main
advantage is that, in USB environment, DIP-switches are not necessary for setting peripheral
addresses and IRQs. It supports all kinds of data, from slow mouse inputs to digitized audio
and compressed video.

USB also allows hot swapping: devices can be plugged in and unplugged without rebooting the
computer or turning off the device. When plugged in, everything configures automatically, so the
user need not worry about terminations, IRQs, port addresses, or rebooting. Once finished, the
user simply unplugs the cable; the host detects its absence and automatically unloads the
driver. This makes USB a plug-and-play interface between a computer and add-on devices.

The loading of the appropriate driver is done using a PID/VID (Product ID/Vendor ID)
combination. The VID is supplied by the USB Implementers Forum.

Fig 1: The USB "trident" logo

The USB has already replaced the RS232 and other old parallel communications in many
applications. USB is now the most used interface to connect devices like mouse, keyboards,
PDAs, game-pads and joysticks, scanners, digital cameras, printers, personal media players,
and flash drives to personal computers. Generally speaking, USB is the most successful
interconnect in the history of personal computing and has migrated into consumer electronics
and mobile products.

USB sends data in serial mode, i.e. the parallel data is serialized before sending and
de-serialized after receiving.

The benefits of USB are low cost, expandability, auto-configuration, hot-plugging and
outstanding performance. It also provides power to the bus, enabling many peripherals to
operate without the added need for an AC power adapter.

Various versions of USB:

As USB technology advanced, new versions of USB were unveiled over time. Let us now try to
understand more about the different versions of USB.

USB1.0: Version 0.7 of the USB interface definition was released in November 1994. USB 1.0
is the original release of USB, capable of transferring 12 Mbps and supporting up to 127
devices. As we know, it was a combined effort of some large players in the market to define a
new general device interface for computers. The USB 1.0 specification was introduced in
January 1996. The data transfer rate of this version can accommodate a wide range of devices,
including MPEG video devices, data gloves, and digitizers. This version of USB is known as
full-speed USB.

Since October 1996, Windows operating systems have been equipped with USB drivers or
special software designed to work with specific I/O device types. USB got integrated into
Windows 98 and later versions. Today, most new computers and peripheral devices are
equipped with USB.

USB1.1: USB 1.1 came out in September 1998 to help rectify the adoption problems that
occurred with earlier versions, mostly those relating to hubs.

USB 1.1 is also known as full-speed USB. This version is similar to the original release of USB;
however, there are minor modifications to the hardware and the specifications. USB version 1.1
supported two speeds: a full-speed mode of 12 Mbit/s and a low-speed mode of 1.5 Mbit/s. The
1.5 Mbit/s mode is slower but less susceptible to EMI, which reduces the cost of ferrite beads
and quality components.
USB2.0: Hewlett-Packard, Intel, LSI Corporation, Microsoft, NEC, and Philips jointly led the
initiative to develop a higher data transfer rate than the 1.1 specifications. The USB 2.0
specification was released in April 2000 and was standardized at the end of 2001. This
standardization of the new device-specification made backward compatibility possible, meaning
it is also capable of supporting USB 1.0 and 1.1 devices and cables.

Supporting three speed modes (1.5, 12 and 480 megabits per second), USB 2.0 supports low-
bandwidth devices such as keyboards and mice, as well as high-bandwidth ones like high-
resolution Web-cams, scanners, printers and high-capacity storage systems.

USB 2.0 is also known as hi-speed USB. Hi-speed USB supports a transfer rate of up to 480
Mbps, compared to the 12 Mbps of USB 1.1: about 40 times as fast.

USB3.0: USB 3.0 is the latest USB release. It is also called SuperSpeed USB, with a data
transfer rate of 4.8 Gbit/s (600 MB/s). That means it can deliver over 10 times the speed of
today's Hi-Speed USB connections.

The USB 3.0 specification was released by Intel and its partners in August 2008. Products
using the 3.0 specifications are likely to arrive in 2009 or 2010. The technology targets fast PC
sync-and-go transfer of applications, to meet the demands of Consumer Electronics and mobile
segments focused on high-density digital content and media.

USB 3.0 is also a backward-compatible standard with the same plug and play and other
capabilities of previous USB technologies. The technology draws from the same architecture of
wired USB. In addition, the USB 3.0 specification will be optimized for low power and improved
protocol efficiency.

USB system overview:

The USB system is made up of a host, multiple numbers of USB ports, and multiple peripheral
devices connected in a tiered-star topology. To expand the number of USB ports, the USB hubs
can be included in the tiers, allowing branching into a tree structure with up to five tier levels.

The tiered star topology has some benefits. First, power to each device can be monitored, and
even switched off if an overcurrent condition occurs, without disrupting other USB devices.
Second, high, full and low speed devices can all be supported, with the hub filtering out
high-speed and full-speed transactions so that lower-speed devices do not receive them.

The USB is actually an addressable bus system with a seven-bit address code, so it can
support up to 127 different devices or nodes at once (the "all zeroes" code is not a valid device
address). However, it can have only one host: the PC itself. A PC with its peripherals connected
via USB therefore forms a star local area network (LAN).

On the other hand any device connected to the USB can have a number of other nodes
connected to it in daisy-chain fashion, so it can also form the hub for a mini-star sub-network.
Similarly it is possible to have a device, which purely functions as a hub for other node devices,
with no separate function of its own. This expansion via hubs is possible because the USB
supports a tiered star topology. Each USB hub acts as a kind of traffic cop for its part of the
network, routing data from the host to its correct address and preventing bus contention
clashes between devices trying to send data at the same time.

On a USB hub device, the single port used to connect to the host PC either directly or via
another hub is known as the upstream port, while the ports used for connecting other devices to
the USB are known as the downstream ports. USB hubs work transparently as far as the host
PC and its operating system are concerned. Most hubs provide either four or seven
downstream ports, or fewer if they already include a USB device of their own.
The host is the USB system's master, and as such, controls and schedules all communications
activities. Peripherals, the devices controlled by USB, are slaves responding to commands from
the host. USB devices are linked in series through hubs. There always exists one hub known as
the root hub, which is built in to the host controller.

A physical USB device may consist of several logical sub-devices that are referred to as device
functions. A single device may provide several functions, for example, a web-cam (video device
function) with a built-in microphone (audio device function). In short, the USB specification
recognizes two kinds of peripherals: stand-alone (single function units, like a mouse) or
compound devices like video camera with separate audio processor.
The logical channels connecting the host to endpoints on a peripheral are called pipes in USB.
A USB device can have 16 pipes coming into the host controller and 16 going out of it.

The pipes are unidirectional. Each interface is associated with a single device function and is
formed by grouping endpoints.

Fig 2: The USB "tiered star" topology

The hubs are bridges. They expand the logical and physical fan-out of the network. A hub has a
single upstream connection (going to the root hub, or to the next hub closer to the root), and
one to many downstream connections.

Hubs themselves are considered as USB devices, and may incorporate some amount of
intelligence. We know that in USB users may connect and remove peripherals without powering
the entire system down. Hubs detect these topology changes. They also source power to the
USB network. The power can come from the hub itself (if it has a built-in power supply), or can
be passed through from an upstream hub.

USB connectors & the power supply:


Connecting a USB device to a computer is very simple -- you find the USB connector on the
back of your machine and plug the USB connector into it. If it is a new device, the operating
system auto-detects it and asks for the driver disk. If the device has already been installed, the
computer activates it and starts talking to it.

The USB standard specifies two kinds of cables and connectors. The USB cable will usually
have an "A" connector on one end and a "B" connector on the other. A USB device either has a
captive cable ending in an "A" connector, or a socket on it that accepts a USB "B" connector.

Fig 3: USB Type A & B Connectors

The USB standard uses "A" and "B" connectors mainly to avoid confusion:

1. "A" connectors head "upstream" toward the computer.


2. "B" connectors head "downstream" and connect to individual devices.

By using different connectors on the upstream and downstream end, it is impossible to install a
cable incorrectly, because the two types are physically different.

Individual USB cables can run as long as 5 meters for 12 Mbps connections and 3 meters for
1.5 Mbps. With hubs, devices can be up to 30 meters (six cables' worth) away from the host.
The high-speed cables for 12 Mbps communication are better shielded than their less
expensive 1.5 Mbps counterparts. The USB 2.0 specification requires the cable delay to be less
than 5.2 ns per meter.

Inside the USB cable there are two wires that supply power to the peripherals, +5 volts (red)
and ground (brown), and a twisted pair (yellow and blue) to carry the data. On the power wires,
the computer can supply up to 500 mA at 5 volts. A peripheral that draws up to 100 mA can
extract all of its power from the bus wiring all of the time. If the device needs more than half an
amp, it must have its own power supply. That means low-power devices such as mice can draw
their power directly from the bus, while high-power devices such as printers have their own
power supplies and draw minimal power from the bus. Hubs can have their own power supplies
to provide power to devices connected to the hub.

Pin No   Signal      Cable color

  1      +5V power   Red
  2      -Data       White / Yellow
  3      +Data       Green / Blue
  4      Ground      Black / Brown

Table - 1: USB pin connections

USB hosts and hubs manage power by enabling and disabling power to individual devices to
electrically remove ill-behaved peripherals from the system. Further, they can instruct devices
to enter the suspend state, which reduces maximum power consumption to 500 microamps (for
low-power, 1.5 Mbps peripherals) or 2.5 mA for 12 Mbps devices.


In short, the USB is a serial protocol and physical link, which transmits all data differentially on
a single pair of wires. Another pair provides power to downstream peripherals.

Note that although USB cables with a Type A plug at each end are available, they should never
be used to connect two PCs together via their USB ports. This is because a USB network can
have only one host, and both PCs would try to claim that role. In any case, the cable would also
short their 5 V power rails together, which could cause a damaging current to flow. USB is not
designed for direct data transfer between PCs.
However, the "sharing hub" technique allows multiple computers to access the same peripheral
device(s); it works by switching access between the PCs, either automatically or manually.

USB Electrical signaling

The serial data is sent along the USB in differential or push-pull mode, with opposite polarities
on the two signal lines. This improves the signal-to-noise ratio by doubling the effective signal
amplitude and by allowing the cancellation of any common-mode noise induced into the cable.
The data is sent in non-return-to-zero inverted (NRZI) format. To ensure a minimum density of
signal transitions, USB uses bit stuffing: an extra 0 bit is inserted into the data stream after any
appearance of six consecutive 1 bits. Seven consecutive 1 bits is always an error.
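
The bit-stuffing rule is easy to express in code. The sketch below (the function name is ours) works on a '0'/'1' character string rather than raw wire bits, purely for illustration:

```c
#include <stddef.h>
#include <assert.h>

/* USB bit stuffing on a '0'/'1' string: after six consecutive 1 bits an
   extra 0 is inserted. `out` must have room for the stuffed string plus
   the terminator (worst case about 7/6 of the input length). Returns the
   stuffed length. */
size_t usb_bit_stuff(const char *in, char *out)
{
    size_t o = 0;
    int ones = 0;
    for (size_t i = 0; in[i] != '\0'; i++) {
        out[o++] = in[i];
        if (in[i] == '1') {
            if (++ones == 6) {        /* six 1s seen: force a transition */
                out[o++] = '0';
                ones = 0;
            }
        } else {
            ones = 0;
        }
    }
    out[o] = '\0';
    return o;
}
```

Eight consecutive 1 bits, for instance, become "111111011": a 0 is forced in after the sixth 1.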

The low speed/full speed USB bus (twisted pair data cable) has characteristic impedance of 90
ohms +/- 15%. The data cable signal lines are labeled as D+ and D-. Transmitted signal levels
are as follows.
1. 0.0V to 0.3V for low level and 2.8V to 3.6V for high level in Full Speed (FS) and Low Speed
(LS) modes
2. -10mV to 10 mV for low level and 360mV to 440 mV for high level in High Speed (HS) mode.

In FS mode the cable wires are not terminated, but HS mode has a termination of 45 Ω to
ground, or 90 Ω differential, to match the data cable impedance.

As we already discussed, the USB connection is always between a host/hub at the "A"
connector end and a device or hub's upstream port at the other end. The host includes 15 kΩ
pull-down resistors on each data line. When no device is connected, these pull both data lines
low into the so-called "single-ended zero" state (SE0), which indicates a reset or a
disconnected connection.

A USB device pulls one of the data lines high with a 1.5 kΩ resistor. This overpowers one of
the pull-down resistors in the host and leaves the data lines in an idle state called "J". The
choice of data line indicates the device's speed support: full-speed devices pull D+ high, while
low-speed devices pull D- high. Data is then transmitted by toggling the data lines between the
J state and the opposite K state.
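
The J/K naming can be confusing because which line is "J" depends on the device speed. A small decoder makes the rule explicit; this is an illustrative sketch (the enum and function names are ours):

```c
#include <assert.h>

/* Decode raw D+/D- levels into USB bus states. Full speed idles with
   D+ high (J), low speed with D- high; both lines low is SE0. */
typedef enum { USB_FULL_SPEED, USB_LOW_SPEED } usb_speed_t;
typedef enum { USB_SE0, USB_SE1, USB_J, USB_K } usb_line_state_t;

usb_line_state_t usb_line_state(usb_speed_t speed, int dplus, int dminus)
{
    if (!dplus && !dminus) return USB_SE0;   /* reset / disconnected */
    if (dplus && dminus)   return USB_SE1;   /* illegal state        */
    int j = (speed == USB_FULL_SPEED) ? dplus : dminus;
    return j ? USB_J : USB_K;
}
```

Note how the same physical condition, D+ high with D- low, is J for a full-speed device but K for a low-speed one.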

A USB bus is reset using a prolonged (10 to 20 ms) SE0 signal. USB 2.0 devices use a special
protocol during reset, called "chirping", to negotiate High-Speed mode with the host/hub. A
device that is HS-capable first connects as an FS device (D+ pulled high), but upon receiving a
USB reset (both D+ and D- driven low by the host for 10 to 20 ms) it pulls the D- line high. If the
host/hub is also HS-capable, it chirps (returns alternating J and K states on the D- and D+
lines), letting the device know that the hub will operate at High Speed.

How do they communicate?

When a USB peripheral device is first attached to the network, a process called enumeration
starts. This is the way the host communicates with the device to learn its identity and to
discover which device driver is required. Enumeration starts with the host sending a reset
signal to the newly connected USB device; the speed of the device is determined during this
reset signaling. After reset, the host reads the USB device's information, and the device is then
assigned a unique 7-bit address (discussed in the next section). This avoids the DIP-switch
and IRQ headaches of older device-communication methods. If the device is supported by the
host, the device drivers needed for communicating with it are loaded and the device is set to a
configured state. Once a hub detects a new peripheral (or the removal of one), it reports the
new information to the host, which then enables communication with the peripheral. If the USB
host is restarted, the enumeration process is repeated for all connected devices.

In other words, the enumeration process is initiated both when the host is powered up and
when a device is connected to or removed from the network.

Technically speaking, USB communication takes place between the host and endpoints
located in the peripherals. An endpoint is a uniquely addressable portion of the peripheral that
is the source or receiver of data. Four bits define the device's endpoint address; codes also
indicate the transfer direction and whether the transaction is a "control" transfer (discussed
later in detail). Endpoint 0 is reserved for control transfers, leaving up to 15 bi-directional
destinations or sources of data within each device. All devices must support endpoint zero,
because this is the endpoint that receives all of the device's control and status requests during
enumeration and throughout the time the device is operational on the bus.
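
In actual USB descriptors this endpoint information is packed into a single byte, bEndpointAddress: bits 3:0 carry the endpoint number and bit 7 the direction (1 = IN, toward the host; 0 = OUT). A sketch of that convention (the helper names are ours):

```c
#include <stdint.h>
#include <assert.h>

/* bEndpointAddress layout: bit 7 = direction (1 = IN), bits 3:0 = endpoint
   number. Endpoint 0 is the control endpoint in both directions. */
static inline uint8_t ep_in(uint8_t num)      { return (uint8_t)(0x80 | (num & 0x0F)); }
static inline uint8_t ep_out(uint8_t num)     { return (uint8_t)(num & 0x0F); }
static inline int     ep_is_in(uint8_t addr)  { return (addr & 0x80) != 0; }
static inline uint8_t ep_number(uint8_t addr) { return (uint8_t)(addr & 0x0F); }
```

So IN endpoint 1 is addressed as 0x81, while OUT endpoint 1 is simply 0x01.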

All the transfers in USB occur through virtual pipes that connect the peripheral's endpoints with
the host. When establishing communications with the peripheral, each endpoint returns a
descriptor, a data structure that tells the host about the endpoint's configuration and
expectations. Descriptors include transfer type, max size of data packets, perhaps the interval
for data transfers, and in some cases, the bandwidth needed. Given this data, the host
establishes connections to the endpoints through virtual pipes.
Though physically configured as a tiered star, logically (to the application code) a direct
connection exists between the host and each device.

The host controller polls the bus for traffic, usually in a round-robin fashion, so no USB device
can transfer any data on the bus without an explicit request from the host controller.

USB can support four data transfer types or transfer mode, which are listed below.

1. Control
2. Isochronous
3. Bulk
4. Interrupt

Control transfers exchange configuration, setup and command information between the device
and the host. The host can also send commands or query parameters with control packets.

Isochronous transfer is used by time-critical, streaming devices such as speakers and video
cameras. The information is time-sensitive, so, within limitations, it has guaranteed access to
the USB bus. Data streams between the device and the host in real time, and there is no error
correction.

Bulk transfer is used by devices like printers and scanners, which receive data in one big
packet. Here timely delivery is not critical. Bulk transfers are fillers, claiming unused USB
bandwidth when nothing more important is going on. Error correction protects these packets.

Interrupt transfers are used by peripherals exchanging small amounts of data that need
immediate attention. They are used by devices to request servicing from the PC/host. Devices
like a mouse or a keyboard come into this category. Error checking validates the data.

As devices are enumerated, the host keeps track of the total bandwidth that all of the
isochronous and interrupt devices are requesting. Together they can consume up to 90 percent
of the 480 Mbps of bandwidth that is available. Once 90 percent is used up, the host denies
access to any other isochronous or interrupt devices. Control packets and packets for bulk
transfers use any bandwidth left over (at least 10 percent).

The USB divides the available bandwidth into frames, and the host controls the frames. Frames
contain 1,500 bytes, and a new frame starts every millisecond. During a frame, isochronous
and interrupt devices get a slot so they are guaranteed the bandwidth they need. Bulk and
control transfers use whatever space is left.
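
The frame arithmetic is easy to verify: at the full-speed rate of 12 Mbps, a 1 ms frame carries 12,000,000 × 0.001 ÷ 8 = 1,500 bytes, and the 10 percent left after reservation is 150 bytes. A tiny helper (ours, for illustration; it assumes the rate is a whole number of Mbps):

```c
#include <assert.h>

/* Bytes available in one USB frame, given the bit rate in bits/s and the
   frame period in microseconds. Full speed: 12 Mbps, 1000 us frames. */
unsigned long frame_bytes(unsigned long bits_per_sec, unsigned long frame_us)
{
    return bits_per_sec / 1000000UL * frame_us / 8UL;
}
```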

USB packets & formats

All USB data is sent serially, of course, and least significant bit (LSB) first. USB data transfer is
essentially in the form of packets of data, sent back and forth between the host and peripheral
devices. Initially, all packets are sent from the host, via the root hub and possibly more hubs, to
devices. Some of those packets direct a device to send some packets in reply.

Each USB data transfer consists of a

1. Token Packet (Header defining what it expects to follow)


2. Optional Data Packet, (Containing the payload)
3. Status Packet (Used to acknowledge transactions and to provide a means of error
correction)

As we have already discussed, the host initiates all transactions. The first packet, also called a
token, is generated by the host to describe what is to follow: whether the data transfer will be a
read or a write, and what the device's address and designated endpoint are. The next packet is
generally a data packet carrying the payload, and it is followed by a handshaking packet
reporting whether the data or token was received successfully, or whether the endpoint is
stalled or not available to accept data.

USB packets may consist of the following fields:

1. Sync field: All packets start with a sync field. The sync field is 8 bits long at low and full
speed, or 32 bits long at high speed, and is used to synchronize the clock of the receiver with
that of the transmitter. The last two bits indicate where the PID field starts.

2. PID field: This field (Packet ID) is used to identify the type of packet that is being sent. The
PID is actually 4 bits; the byte consists of the 4-bit PID followed by its bit-wise complement,
making an 8-bit PID in total. This redundancy helps detect errors.

3. ADDR field: The address field specifies which device the packet is intended for. Being 7
bits in length, it allows 127 devices to be supported (address 0 is reserved as the default
address used during enumeration).

4. ENDP field: This field is made up of 4 bits, allowing 16 possible endpoints. Low speed
devices however can only have 2 additional endpoints on top of the default pipe.

5. CRC field: Cyclic Redundancy Checks are performed on the data within the packet payload.
All token packets have a 5-bit CRC while data packets have a 16-bit CRC.

6. EOP field: This indicates End of packet. Signaled by a Single Ended Zero (SE0) for
approximately 2 bit times followed by a J for 1 bit time.
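Two of these fields lend themselves to a software check: the PID byte's complement nibble and the 5-bit token CRC. The sketch below in C assumes the standard CRC-5/USB parameters (polynomial x^5 + x^2 + 1, initial value 0x1F, bits processed LSB-first, result complemented); the function names are illustrative, not from any real stack.

```c
#include <stdint.h>
#include <stdbool.h>

/* Build the 8-bit PID byte: the 4-bit PID in the low nibble followed by
 * its bit-wise complement in the high nibble (field 2 above). */
uint8_t usb_pid_byte(uint8_t pid4)
{
    pid4 &= 0x0F;
    return (uint8_t)(pid4 | ((~pid4 & 0x0F) << 4));
}

/* A received PID byte is valid only if the two nibbles are complementary. */
bool usb_pid_valid(uint8_t byte)
{
    return ((byte & 0x0F) ^ (byte >> 4)) == 0x0F;
}

/* Token CRC5 over `nbits` bits of `data`, taken LSB-first (field 5 above).
 * 0x14 is the reflected form of the polynomial x^5 + x^2 + 1. */
uint8_t usb_crc5(uint16_t data, unsigned nbits)
{
    uint8_t crc = 0x1F;                      /* shift register seeded with ones */
    for (unsigned i = 0; i < nbits; i++) {
        uint8_t bit = (data >> i) & 1;       /* next input bit, LSB first */
        if ((crc ^ bit) & 1)
            crc = (uint8_t)((crc >> 1) ^ 0x14);
        else
            crc >>= 1;
    }
    return crc ^ 0x1F;                       /* complemented result */
}
```

With the spec's IN PID value (0b1001), usb_pid_byte(0x9) yields 0x69. For a token addressed to device 0x15, endpoint 0xE, packed LSB-first as an 11-bit value (address in bits 0-6, endpoint in bits 7-10, i.e. 0x715), usb_crc5(0x715, 11) works out to 0x1D under these parameters.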

The USB packets come in five basic types, each with a different format and CRC field:

1. Handshake packets
2. Token packets
3. Data packets
4. PRE packets
5. Start of Frame packets

Handshake packets:

Handshake packets consist of a PID byte, and are generally sent in response to data packets.
The three basic types of handshake packets are:

1. ACK, indicating that data was successfully received,
2. NAK, indicating that the data cannot be received at this time and should be retried,
3. STALL, indicating that the device has an error and will never be able to successfully transfer
data until some corrective action is performed.

Fig 4: Handshake packet format


USB 2.0 added two additional handshake packets:

1. NYET, which indicates that a split transaction is not yet complete,
2. ERR, which indicates that a split transaction failed.

The only handshake packet the USB host may generate is ACK; if it is not ready to receive
data, it should not instruct a device to send any.

Token packets:

Token packets consist of a PID byte followed by 11 bits of address and a 5-bit CRC. Tokens are
only sent by the host, not by a device.

There are three types of token packets:

1. IN token - informs the USB device that the host wishes to read information.
2. OUT token - informs the USB device that the host wishes to send information.
3. SETUP token - used to begin control transfers.

IN and OUT tokens contain a 7-bit device address and a 4-bit endpoint number, and command
the device to transmit DATA packets, or to receive the following DATA packets, respectively.

An IN token expects a response from a device. The response may be a NAK or STALL
response, or a DATA frame. In the latter case, the host issues an ACK handshake if
appropriate. An OUT token is followed immediately by a DATA frame. The device responds with
ACK, NAK, or STALL, as appropriate.

SETUP operates much like an OUT token, but is used for initial device setup.

Fig 5: Token packet format

USB 2.0 added a PING token, which asks a device if it is ready to receive an OUT/DATA packet
pair. The device responds with ACK, NAK, or STALL, as appropriate. This avoids the need to
send the DATA packet if the device knows that it will just respond with NAK.

USB 2.0 also added a larger SPLIT token with a 7-bit hub number, 12 bits of control flags, and
a 5-bit CRC. This is used to perform split transactions. Rather than tie up the high-speed USB
bus sending data to a slower USB device, the nearest high-speed capable hub receives a
SPLIT token followed by one or two USB packets at high speed, performs the data transfer at
full or low speed, and provides the response at high speed when prompted by a second SPLIT
token.

Data packets:

There are two basic data packets, DATA0 and DATA1. Both consist of a DATA PID field, a data
payload of up to 1024 bytes, and a 16-bit CRC. They must always be preceded by an address
token, and are usually followed by a handshake token from the receiver back to the transmitter.
1. Maximum data payload size for low-speed devices is 8 bytes.
2. Maximum data payload size for full-speed devices is 1023 bytes.
3. Maximum data payload size for high-speed devices is 1024 bytes.
4. Data must be sent in multiples of bytes

Fig 6: Data packet format

USB 2.0 added DATA2 and MDATA packet types as well. They are used only by high-speed
devices doing high-bandwidth isochronous transfers, which need to transfer more than 1024
bytes per 125 µs "micro-frame" (8192 kB/s).
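The per-speed payload ceilings listed above can be captured in a small lookup, useful when sizing endpoint buffers. A sketch — the enum and helper names are illustrative:

```c
#include <stdint.h>

typedef enum { USB_LOW_SPEED, USB_FULL_SPEED, USB_HIGH_SPEED } usb_speed_t;

/* Maximum data-packet payload per bus speed, from the limits above. */
uint16_t usb_max_payload(usb_speed_t speed)
{
    switch (speed) {
    case USB_LOW_SPEED:  return 8;    /* low-speed devices */
    case USB_FULL_SPEED: return 1023; /* full-speed devices */
    case USB_HIGH_SPEED: return 1024; /* high-speed devices */
    }
    return 0; /* unreachable for valid input */
}
```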

PRE packet:

Low-speed devices are supported with a special PID value, PRE. This marks the beginning of a
low-speed packet, and is used by hubs, which normally do not send full-speed packets to low-
speed devices.
Since all PID bytes include four 0 bits, they leave the bus in the full-speed K state, which is the
same as the low-speed J state. It is followed by a brief pause during which hubs enable their
low-speed outputs, already idling in the J state, then a low-speed packet follows, beginning with
a sync sequence and PID byte, and ending with a brief period of SE0. Full-speed devices other
than hubs can simply ignore the PRE packet and its low-speed contents, until the final SE0
indicates that a new packet follows.

Start of Frame Packets:

Every 1 ms (12,000 full-speed bit times), the USB host transmits a special SOF (start of frame)
token, containing an 11-bit incrementing frame number in place of a device address. This is
used to synchronize isochronous data flows. High-speed USB 2.0 devices receive 7 additional
duplicate SOF tokens per frame, each introducing a 125 µs "micro-frame".

Fig 7: Start of Frame packet format
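Since the frame number is only 11 bits wide, it wraps modulo 2048, and firmware that timestamps isochronous data has to account for both the wrap and the 8 micro-frames per frame at high speed. A minimal sketch (helper names are illustrative):

```c
#include <stdint.h>

#define USB_FRAME_MASK 0x7FF  /* 11-bit SOF frame number field */

/* Next value of the incrementing frame number (wraps at 2048). */
uint16_t usb_next_frame(uint16_t frame)
{
    return (uint16_t)((frame + 1) & USB_FRAME_MASK);
}

/* At high speed there are 8 micro-frames (125 µs each) per 1 ms frame;
 * a combined timestamp is often kept as frame * 8 + microframe. */
uint32_t usb_uframe_index(uint16_t frame, uint8_t uframe)
{
    return ((uint32_t)(frame & USB_FRAME_MASK) << 3) | (uframe & 0x7);
}
```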


The Host controllers

As we know, the host controller and the root hub are part of the computer hardware. The
interface between the programmer and the host controller is called the Host Controller Device
(HCD), and it is defined by the hardware implementer.

In the USB 1.x era, there were two competing HCD implementations: Open Host Controller
Interface (OHCI) and Universal Host Controller Interface (UHCI). OHCI was developed by
Compaq, Microsoft and National Semiconductor; UHCI and its open software stack were
developed by Intel. VIA Technologies licensed the UHCI standard from Intel; all other chipset
implementers use OHCI. UHCI is more software-driven, making it slightly more processor-
intensive than OHCI, but cheaper to implement.

With the introduction of USB 2.0, a new host controller interface specification was needed to
describe the register-level details specific to USB 2.0. The USB 2.0 HCD implementation is
called the Enhanced Host Controller Interface (EHCI). Only EHCI can support hi-speed (480
Mbit/s) transfers. Most PCI-based EHCI controllers contain additional HCD implementations,
called 'companion host controllers', to support Full Speed (12 Mbit/s) and Low Speed
(1.5 Mbit/s) devices. A class driver may be used for any device that claims to be a member of a
certain class; an operating system is expected to implement the standard device classes so as
to provide generic drivers for any USB device.

But remember, the USB specification does not specify any HCD interface. USB defines the
format of data transfer through the port, but not the system by which the USB hardware
communicates with the computer it sits in.

Device classes

USB defines class codes used to identify a device's functionality and to load a device driver
based on that functionality. This enables a device driver writer to support devices from different
manufacturers that comply with a given class code.

There are two places on a device where class code information can be placed. One place is in
the Device Descriptor, and the other is in Interface Descriptors. Some defined class codes are
allowed to be used only in a Device Descriptor, others can be used in both Device and Interface
Descriptors, and some can only be used in Interface Descriptors.
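To make the two locations concrete, here is the layout of the standard 18-byte USB device descriptor, whose bDeviceClass byte is the device-level home for a class code (a value of 0x00 there defers classing to the interface descriptors). A sketch assuming a compiler that honours #pragma pack:

```c
#include <stdint.h>

/* Standard USB device descriptor: 18 bytes, little-endian multi-byte fields. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  bLength;            /* size of this descriptor: 18 */
    uint8_t  bDescriptorType;    /* 0x01 = DEVICE */
    uint16_t bcdUSB;             /* e.g. 0x0200 for USB 2.0 */
    uint8_t  bDeviceClass;       /* class code; 0x00 = defined per interface */
    uint8_t  bDeviceSubClass;
    uint8_t  bDeviceProtocol;
    uint8_t  bMaxPacketSize0;    /* endpoint 0 max packet: 8, 16, 32 or 64 */
    uint16_t idVendor;
    uint16_t idProduct;
    uint16_t bcdDevice;          /* device release number */
    uint8_t  iManufacturer;      /* string descriptor indices */
    uint8_t  iProduct;
    uint8_t  iSerialNumber;
    uint8_t  bNumConfigurations;
} usb_device_descriptor_t;
#pragma pack(pop)
```

Interface descriptors carry the analogous bInterfaceClass/bInterfaceSubClass/bInterfaceProtocol triple, which is what a composite device uses to expose several functions at once.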

Further developments in USB

USB OTG:

One of the biggest problems with USB is that it is host-controlled. If the USB host is switched
off, nothing else works. USB also does not support peer-to-peer communication. For example,
many USB digital cameras can download data to a PC, but they cannot connect directly to a
USB printer or CD burner, something that is possible with other communication media.

To combat these problems, USB On-The-Go (OTG) was created in 2002 as a supplement to
the USB 2.0 specification. USB OTG defines a dual-role device, which can act as either a host
or a peripheral, and can connect to a PC or to other portable devices through the same
connector. The OTG specification details this "dual-role device", which can function as a device
controller (DC), a host controller (HC), or both.

The OTG host can have a targeted peripheral list. This means the embedded device does not
need to have a list of every product and vendor ID or class driver. It can target only one type of
peripheral if needed.

Mini, Micro USBs


The OTG specification introduced two additional connectors. One is the Mini-A/B connector. A
dual-role device is required to be able to detect whether a Mini-A or Mini-B plug is
inserted by determining if the ID pin (an extra pin introduced by OTG) is connected to ground.
The Standard-A plug is approximately 4 x 12 mm, the Standard-B approximately 7 x 8 mm, and
the Mini-A and Mini-B plugs approximately 2 x 7 mm. These connectors are used for smaller
devices such as PDAs, mobile phones or digital cameras.

The Micro-USB connector was introduced in January 2007. It was mainly intended to replace
the Mini-USB plugs used in many newer smart-phones and PDAs. The Micro-USB plug is rated
for approximately 10,000 connect-disconnect cycles. It is about half the height of the Mini-USB
connector, but has a similar width.

Pin No | Name | Description                                     | Color
-------|------|-------------------------------------------------|------
1      | VCC  | +5 V                                            | Red
2      | D-   | Data-                                           | White
3      | D+   | Data+                                           | Green
4      | ID   | Type A: connected to GND; Type B: not connected | None
5      | GND  | Ground                                          | Black

Table-2: Mini/Micro plug connection

Next module - 15 (SRAM memory interface)