
Wireless Sensor Networks

Clock Synchronization
Professor Jack Stankovic
University of Virginia

Why is clock sync important?

Spatial-temporal system: where and when events occur

To distinguish multiple events
Compute the lifetime of an event
Coordinate sensing
Coordinate control actions (for wake-up, for actuators, ...)
Signal processing (a local clock may be enough)
Compute velocity

Could we have a WSN without clock sync?
If all operations are logical

Or use GPS!
Power hungry
Expensive per node

Outline
NTP (Network Time Protocol) for the Internet
RBS (Reference Broadcast Sync)
TPSN (Timing-sync Protocol for Sensor Networks)
FTSP (Flooding Time Sync Protocol)

What is Clock Sync?

The clocks of each node in the WSN should read the same time within epsilon and remain that way

(Figure: two local clocks drifting apart over time)

Sync clocks
Handle clock drift (up to 40 microsec per second on the Mica platform; 7.37 MHz clock)
Solution: Re-sync (expensive)
Solution: Estimate clock drift and compensate; re-sync infrequently

How bad is 40 microsec/sec?

40 x 60 = 2,400 microsec/min
2,400 x 60 = 144,000 microsec/hr
144,000 x 24 = 3,456,000 microsec/day (about 3.5 seconds per day)
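The same arithmetic as a small C sketch, plus the re-sync period implied by a 40 microsec error budget (the figure used on the later Clock Drift slide). Variable names are illustrative.

/* Drift accumulation at 40 us/s and the re-sync period for a 40 us error budget. */
#include <stdio.h>

int main(void) {
    const double drift_us_per_s = 40.0;

    printf("per minute: %.0f microsec\n", drift_us_per_s * 60);            /* 2,400     */
    printf("per hour:   %.0f microsec\n", drift_us_per_s * 60 * 60);       /* 144,000   */
    printf("per day:    %.0f microsec\n", drift_us_per_s * 60 * 60 * 24);  /* 3,456,000 */

    const double error_budget_us = 40.0;   /* desired accuracy */
    printf("re-sync every %.1f s to stay within %.0f microsec\n",
           error_budget_us / drift_us_per_s, error_budget_us);             /* every 1 s */
    return 0;
}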

Goals for Clock Sync in WSN
Meet required precision
Low communication BW
Robust against node and link failures
Multi-hop (large scale)

NTP
Network Time Protocol (NTP) on the Internet

Included in OS code
Simple NTP (SNTP) exists for PCs
Runs as a background process
Gets time from the best servers (finds those with the lowest jitter/latency)
Depends on wired and fast connections
64-bit timestamps sent (about 200 picosec resolution)
Uses statistical analysis of round-trip times to clock servers
Clock servers get time from GPS
Millisec (ms) accuracy (per hop)

NTP
Not (directly) usable for WSN

Complex (see the 50-page specification)
Continuous cost
Large code size
Expensive in messages/energy
Non-determinism at the MAC layer (per hop): if NTP were implemented in a WSN, it could add 100s of ms of delay

Clock Sync Delays


Uncertainties of radio message delivery

Clock Sync - Delays

Send Time: assemble the message and issue a request to the MAC layer
System call overhead
OS may delay the call (depends on how busy the node is)
Access Time: waiting for access to the channel
Transmission Time: time to transmit; a function of radio speed and message length

Clock Sync - Delays

Propagation Time: time to get from the sender to the receiver; less than 1 microsec for ranges up to 300 m
Reception Time: time for the receiver to collect the message from the airwaves; same as the transmission time
Receive Time: time to process the incoming message
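A small C sketch that just totals the six per-hop delay components described above. The numbers are placeholders, not measurements; they only show the structure of the per-hop delay.

/* Sum of the per-hop delay components (illustrative values). */
#include <stdio.h>

int main(void) {
    /* all values in microseconds (illustrative only) */
    double send_t = 300, access_t = 2000, transmission_t = 15000;
    double propagation_t = 1, reception_t = 15000, receive_t = 300;

    double total = send_t + access_t + transmission_t +
                   propagation_t + reception_t + receive_t;
    printf("one-way per-hop message delay: %.0f us\n", total);
    return 0;
}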

Ideal
(Figure: one node reads its clock, the value propagates, and the other node sets its clock; both then show the same real clock value; all costs/delays are zero)

What if Deterministic Delays?
(Figure: the sender reads its clock at 5; known delays of .1, .01, and .1 accumulate as the value propagates; the receiver adds them and sets its clock to 5.21, matching the sender's real clock)
Minimize these times
Make them deterministic

RBS
A reference message is broadcast
Receivers record their local time when the message is received
Timestamps are ONLY on the receiver side
This eliminates access and send times
Nodes exchange their recorded times with each other
29.1 microsec accuracy for the 1-hop case
No transmitter-side non-determinism
Not extended to large multi-hop networks
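A minimal C sketch of the receiver-side exchange, using the local reception times 5, 6, and 7 from the figure on the next slide. The array and loop are illustrative, not the RBS implementation.

/* Pairwise offsets computed from receiver-side timestamps only. */
#include <stdio.h>

int main(void) {
    /* local reception times of the same reference broadcast at 3 receivers */
    double local_rx[3] = {5.0, 6.0, 7.0};

    /* Offset from node i to node j: what node i adds to its own reading
       to express it in node j's timescale. */
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (i == j) continue;
            printf("node %d -> node %d offset: %+.1f\n",
                   i, j, local_rx[j] - local_rx[i]);
        }
    }
    /* e.g., the node that read 5 adds +2 to align with the node that read 7,
       and the node that read 6 adds +1, matching the figure. */
    return 0;
}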

RBS
(Figure: a sender broadcasts the reference message without using any clock; propagation time = 0. Three receivers record local times 5, 6, and 7, then exchange them: the node that read 5 receives {6, 7} and adjusts by plus 2, the node that read 6 receives {5, 7} and adjusts by plus 1, and the node that read 7 receives {5, 6} and is already OK)

What can cause the ideal case of all three nodes getting the clock message at the same time to be false?
Node turns off interrupts
Code explicitly turns off interrupts
nesC atomic statement
Node turns off radio (sleep mode)

TPSN
Creates a spanning tree
Performs pairwise sync along the edges of the tree
Links must be symmetric (recall the radio irregularity paper)
Considers the entire system (RBS looks at one hop)
Exchanges two sync messages with the parent node

Synchronization Phase
Delta = clock offset (drift), P = propagation delay
(Figure: Node A sends a sync message containing T1 at its local time T1; Node B receives it at its local time T2 and replies at T3 with (T1, T2, T3); Node A receives the reply at its local time T4)

T2 = T1 + P + Delta
T4 = T3 + P - Delta
P = ((T2 - T1) + (T4 - T3)) / 2
Delta = ((T2 - T1) - (T4 - T3)) / 2

Node A corrects its clock by Delta
Note: Sender A corrects to the clock of receiver B

Example
(Figure: Node A sends at its local time T1 = 5.0, when Node B's clock reads 5.3; after a propagation delay of .05, B receives at its local time T2 = 5.35; B replies at its local time T3 = 6.0, when A's clock reads 5.7; after another .05, A receives at its local time T4 = 5.75)

P = ((T2 - T1) + (T4 - T3)) / 2
P = ((5.35 - 5.0) + (5.75 - 6.0)) / 2
P = ((.35) + (-.25)) / 2
P = .1 / 2 = .05

Delta = ((T2 - T1) - (T4 - T3)) / 2
Delta = ((.35) - (-.25)) / 2
Delta = .6 / 2 = .3

So A adds .3 to 5.75 to get 6.05


Only Delta is needed to adjust the clocks
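A short C sketch of the two-way exchange above, plugging in the example's timestamps. The helper function names are illustrative, not part of TPSN itself.

/* A sent at T1 (A's clock), B received at T2 (B's clock),
   B replied at T3 (B's clock), A received at T4 (A's clock). */
#include <stdio.h>

static double prop_delay(double t1, double t2, double t3, double t4) {
    return ((t2 - t1) + (t4 - t3)) / 2.0;   /* P */
}

static double clock_offset(double t1, double t2, double t3, double t4) {
    return ((t2 - t1) - (t4 - t3)) / 2.0;   /* Delta = B's clock minus A's clock */
}

int main(void) {
    double t1 = 5.0, t2 = 5.35, t3 = 6.0, t4 = 5.75;
    double p     = prop_delay(t1, t2, t3, t4);    /* 0.05 */
    double delta = clock_offset(t1, t2, t3, t4);  /* 0.30 */

    printf("P = %.2f, Delta = %.2f\n", p, delta);
    printf("A corrects its clock: %.2f + %.2f = %.2f\n", t4, delta, t4 + delta);
    return 0;
}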

TPSN
No broadcasting, so TPSN is expensive
Timestamps the message in the MAC layer multiple times and averages those times (improves accuracy over RBS by about 2x)
16.9 microsec accuracy for 1 hop
No clock drift corrections

Read Clock in MAC Layer


Uncertainties of radio message delivery

TPSN
Assumes the topology does not change
Not robust to the failure of a node
If a node fails, the spanning tree must be rebuilt

Flooding Time Sync Protocol (FTSP)
Implemented on the Mica platform
~1 microsec accuracy
MAC-layer timestamps
Skew compensation with linear regression (accounts for drift)
Periodic flooding: robust to failures and topology changes
Handles large-scale networks

Remove Uncertainties
Eliminate Send Uncertainty
Get the time in the MAC layer
Eliminate Access Time
Get the time after the message has access to the channel
Eliminate Receive Time
Record the local time the message is received at the MAC layer

Remaining (Mostly) Deterministic Times
Transmit
Propagation
Reception

Basic Idea
When to timestamp the message
(Figure: the sender gets its timestamp as the message goes out; the receiver sets its timestamp on reception)
Mica2 uncertainties:
Interrupt: 5us to 30us (<2%)
Encode + decode: 110us to 112us
Byte align: 0us to 365us
Propagation: 1us

Basic Idea
When to timestamp the message
In the radio layer, after the second SYNC byte is sent out, take 6 timestamps in a row, average them, and send only 1 timestamp
Normalize and then take the average of these timestamps for 6 bytes of data

Why take 6 samples?

Because of all the uncertainties described on a previous slide
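A minimal C sketch of the normalize-and-average step. The nominal per-byte transmit time (417 us) and the jitter in the raw values are assumptions for illustration, not measurements from the Mica2 stack.

/* Normalize 6 per-byte timestamps and average them into one timestamp. */
#include <stdio.h>

#define SAMPLES 6

int main(void) {
    const double byte_time_us = 417.0;   /* assumed nominal per-byte transmit time */
    /* raw local timestamps recorded as each of 6 consecutive bytes goes out,
       each perturbed by a few microseconds of interrupt/alignment jitter */
    double raw[SAMPLES] = {10003.0, 10424.0, 10838.0, 11260.0, 11672.0, 12091.0};

    double sum = 0.0;
    for (int i = 0; i < SAMPLES; i++) {
        double normalized = raw[i] - i * byte_time_us;  /* remove the i-th byte's nominal offset */
        sum += normalized;
    }
    double timestamp = sum / SAMPLES;    /* the single averaged timestamp that gets sent */
    printf("averaged timestamp: %.1f us\n", timestamp);
    return 0;
}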

FTSP
The root maintains the global time for the system
All others sync to the root
Nodes form an ad hoc structure rather than a spanning tree
The root broadcasts a timestamp for the transmission time of a certain byte
Every receiver timestamps the reception of that byte
Account for the deterministic times
The differences are the clock offsets

Summary of FTSP So Far
MAC-layer (and below) timestamping
Correct the sender timestamp to account for known delays
Compute the final offset error

Result: 1.48 microsec accuracy for 1 hop

Clock Drift
If all local clocks had exactly the same frequency, then no clock drift corrections would be needed, and no re-syncs either
MICA2 motes: the crystals used have drifts of up to 40 microseconds per second
Implication: re-sync every second to maintain 40 microsec accuracy
Too costly - estimate the clock drift and compensate instead!

Basic Idea
8-entry linear regression table to estimate clock skew (each entry derived from 1 clock sync protocol execution)
Example (10-second re-sync period, 1-second compensation granularity):

Entry  Offset
1      5 us
2      5 us
3      7 us
4      4 us
5      4 us
6      5 us
7      7 us
8      5 us

Roughly 5 us of drift accumulates over each 10-sec re-sync period, so compensate about .5 us each sec (see the sketch below)
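A minimal C sketch of the skew estimation, assuming the 8-entry table holds (local time, measured offset) pairs. The data below is one way of reading the example above (offsets accumulating by roughly 5 us per 10-second round) and is illustrative only.

/* Least-squares estimate of skew from the 8-entry regression table. */
#include <stdio.h>

#define TABLE_SIZE 8

int main(void) {
    double local_s[TABLE_SIZE]   = {10, 20, 30, 40, 50, 60, 70, 80};   /* local time at each sync */
    double offset_us[TABLE_SIZE] = { 5, 10, 17, 21, 25, 30, 37, 42};   /* measured global - local */

    /* Least-squares slope = estimated skew (microseconds of offset per second). */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < TABLE_SIZE; i++) {
        sx  += local_s[i];
        sy  += offset_us[i];
        sxx += local_s[i] * local_s[i];
        sxy += local_s[i] * offset_us[i];
    }
    double n = TABLE_SIZE;
    double skew      = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* ~0.52 us per second */
    double intercept = (sy - skew * sx) / n;

    /* Between re-syncs, the node compensates by the predicted offset. */
    double local_now = 95.0;   /* some local time after the last sync */
    printf("estimated skew: %.2f us/s\n", skew);
    printf("predicted offset at local time %.0f s: %.1f us\n",
           local_now, intercept + skew * local_now);
    return 0;
}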

Performance
Single hop, stop sync after 33 minutes (where sync was run every 30 secs)

Example: if clock sync stops at time 33 and an error of 10 microsec is OK, then a 7-minute re-sync cycle is OK

Multi-Hop Time Sync

Create a fixed spanning tree for clock sync dissemination like in TPSN (not robust)
Or use a more ad hoc technique
Assume every node has a unique ID
One root at a time

(Figure: the root/reference point floods sync messages outward. Step 1: nodes near the root collect 8 sync msgs to perform linear regression. Step 2: those nodes rebroadcast and the flood reaches nodes further out. Question: how to handle messages from multiple nodes?)

Sync Message Format

Timestamp: the transmitter's global time estimate at the time of broadcast
RootID
seqNum: incremented by the root when a new sync round is initiated (e.g., every 30 seconds)
Other nodes insert the largest received seqNum into the messages they broadcast
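A sketch of these fields as a C struct. The field widths are assumptions for illustration and may not match the actual FTSP message layout.

/* Sync message fields from the slide, as a plain C struct. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t timestamp;   /* transmitter's global time estimate at broadcast */
    uint16_t rootID;      /* ID of the node currently acting as root         */
    uint16_t seqNum;      /* incremented by the root each sync round         */
} SyncMsg;

int main(void) {
    SyncMsg msg = {123456u, 1, 7};   /* example values only */
    printf("root %u, seq %u, timestamp %lu\n",
           (unsigned)msg.rootID, (unsigned)msg.seqNum, (unsigned long)msg.timestamp);
    return 0;
}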

Processing Redundant Messages
Place only the first neighbor message for each sequence number in the linear regression table
Other redundant messages (with the same sequence number) are ignored
Once 8 messages have been obtained, linear regression can be performed
Note: the 8 messages may come from different neighbors
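A minimal C sketch of this filtering rule, reusing the sync message fields from the format slide. The table handling and names are illustrative assumptions, not the FTSP code.

/* Keep only the first message per sequence number in the 8-entry table. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 8

typedef struct { uint32_t timestamp; uint16_t rootID; uint16_t seqNum; } SyncMsg;

static uint32_t table_local[TABLE_SIZE];   /* local reception time of accepted msgs  */
static uint32_t table_global[TABLE_SIZE];  /* global timestamp carried in those msgs */
static int entries = 0, next_slot = 0;
static int highest_seq = -1;               /* highest seqNum accepted so far */

/* Returns 1 if the message was placed in the table, 0 if ignored as redundant. */
int process_sync_msg(const SyncMsg *msg, uint32_t local_rx_time) {
    if ((int)msg->seqNum <= highest_seq)
        return 0;                          /* this sync round is already recorded */
    highest_seq = (int)msg->seqNum;

    table_local[next_slot]  = local_rx_time;
    table_global[next_slot] = msg->timestamp;
    next_slot = (next_slot + 1) % TABLE_SIZE;
    if (entries < TABLE_SIZE && ++entries == TABLE_SIZE)
        printf("table full: linear regression can now be run\n");
    return 1;
}

int main(void) {
    SyncMsg a = {100000, 1, 7}, b = {100020, 1, 7}, c = {130000, 1, 8};
    int ra = process_sync_msg(&a, 100005);   /* accepted                      */
    int rb = process_sync_msg(&b, 100030);   /* ignored: same sequence number */
    int rc = process_sync_msg(&c, 130008);   /* accepted                      */
    printf("accepted? %d %d %d\n", ra, rb, rc);
    return 0;
}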

Root Election Process

Option: the base station is the root - done, but not fault tolerant
Multiple base stations?
Any node can serve as root?
If no sync message is heard for time T, a node declares itself to be the root
There may be multiple roots
Smallest ID wins; a root gives up its root status when it eventually gets a sync message from another root with a lower ID
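A minimal C sketch of this election rule: declare yourself root if no sync message is heard for a timeout, and step down on hearing a sync message whose rootID is smaller than your own ID. Names and the timer handling are assumptions, not the FTSP code.

/* Self-election on silence; smallest root ID wins. */
#include <stdint.h>
#include <stdio.h>

#define ROOT_TIMEOUT 3   /* sync rounds with no message before self-electing */

typedef struct {
    uint16_t my_id;
    int      am_root;
    int      silent_rounds;   /* rounds since the last sync message was heard */
} NodeState;

/* Called once per sync period in which no sync message arrived. */
void on_timeout(NodeState *n) {
    if (!n->am_root && ++n->silent_rounds >= ROOT_TIMEOUT) {
        n->am_root = 1;
        printf("node %u: declaring myself root\n", (unsigned)n->my_id);
    }
}

/* Called whenever a sync message carrying the given rootID is received. */
void on_sync_msg(NodeState *n, uint16_t rootID) {
    n->silent_rounds = 0;
    if (n->am_root && rootID < n->my_id) {
        n->am_root = 0;   /* smallest ID wins: give up root status */
        printf("node %u: yielding to root %u\n", (unsigned)n->my_id, (unsigned)rootID);
    }
}

int main(void) {
    NodeState n = {5, 0, 0};
    on_timeout(&n); on_timeout(&n); on_timeout(&n);  /* hears nothing: self-elects */
    on_sync_msg(&n, 2);                              /* hears root 2: steps down   */
    return 0;
}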

Performance/Example
Multiple-hop, sync every 30 seconds
A) at 0:00 all motes were turned on;
B) at 0:41 the root with ID1 (the initial root elected) was switched off;
C) from 1:12 until 1:42 randomly selected motes were switched off and back on, one per 30s;
D) at 1:47 the motes with odd node IDs were switched off (half of the nodes are removed);
E) at 2:02 the motes with odd node IDs were switched back on;
F) at 2:13 the second root, with ID2, was switched off;

Performance (not real data)


6 minutes to elect root, 10 minutes complete sync, 1.2ms per hop

Questions
How long does it take to clock sync the entire system (of, say, 1000 or 10,000 nodes)?
60 nodes: 10 minutes to elect a leader; 14 minutes for sync
Collisions

What happens during the election process?

Questions
What if some nodes do not have synchronized clocks?
Lost messages
Down nodes (that later come back up)

Summary
Very accurate clock sync is possible (in the microsec range)
Time to perform clock sync is important
The initial delay at system init time may be important
Consider a system of 10,000 nodes
Multiple base station issues

Solving clock drift:
Frequent sync messages, or
Infrequent sync messages with local adjustment

Summary
Consider costs
Energy
Congestion
Time

Consider the accuracy requirement

FTSP is available as a nesC component
