
Efficient Queue Based Dynamic Bandwidth Allocation Scheme for Ethernet PONs

Pallab K. Choudhury and Poompat Saengudomlert
Asian Institute of Technology, Thailand

Abstract—We propose an Optical Line Terminal (OLT) centric Dynamic Bandwidth Allocation (DBA) scheme based on individual requests from service queues in Optical Network Units (ONUs) for a Quality-of-Service (QoS) aware Ethernet Passive Optical Network (EPON). The goal is to provide fairness in allocating bandwidth among different service classes based on their Service Level Agreements (SLAs). The proposed DBA scheme makes use of the excess bandwidth of lightly loaded queues to meet the bandwidth demand of heavily loaded queues. To implement this scheme, an effective polling mechanism is incorporated to solve the idle period problem. In addition, to reduce the scheduling overhead, the DBA scheme utilizes a novel different cycle policy that involves selective bandwidth allocations to different service classes based on their delay bounds. We conduct detailed simulation experiments and compare the results with the well-known Interleaved Polling with Adaptive Cycle Time (IPACT) algorithm that utilizes a strict priority policy.

Index Terms—Bandwidth allocation, Ethernet passive optical network.

I. INTRODUCTION

While backbone networks have experienced substantial growth due to the emergence of Internet traffic, access networks have experienced fewer changes in recent years. Service providers are challenged with deploying access solutions that are inexpensive, simple, scalable, and capable of delivering integrated voice, data, and video services to the subscribers.

Passive Optical Networks (PONs) bring high-speed broadband access via fiber to businesses, curbs, and homes. Ethernet PONs (EPONs) have gained tremendous attention since they combine low-cost Ethernet equipment and passive optical components. The architecture of an EPON consists of an Optical Line Terminal (OLT), a 1:N passive optical coupler (or splitter/combiner), and multiple Optical Network Units (ONUs). In the upstream direction, an EPON is a multipoint-to-point network. Since all ONUs share the same upstream transmission medium, an EPON needs to employ some arbitration mechanism to avoid data collisions and fairly share the fiber-channel capacity [1]. EPONs are expected to support diverse applications, such as voice communications, standard and high-definition video (STV and HDTV), video conferencing, real-time and near-real-time transactions, and data traffic. However, to support these applications, EPONs need to have Class-of-Service (COS) mechanisms built in. Hence, bandwidth management of the upstream channel is essential for successful implementation of EPONs.

The IEEE 802.3ah task force has developed the Multi-Point Control Protocol (MPCP) to facilitate implementation of various bandwidth allocation algorithms in EPONs. This protocol relies on two control messages, REPORT (an upstream message from the ONU) and GATE (a downstream message from the OLT), to request and assign transmission opportunities on the PON. The entity that receives a GATE and responds with a REPORT is called a logical link and is identified by a Logical Link IDentifier (LLID). The number of LLIDs instantiated on an ONU can have a profound impact on performance, and in fact is one of the most important design choices made in designing an EPON system.

Current MPCP systems allow for two LLID assignment policies, namely 1 LLID per ONU and 1 LLID per queue. In the per-ONU approach, each ONU is allocated a single LLID, which is associated with all supported packet queues. In effect, this produces a hierarchical scheduling structure, where the central scheduler (inter-scheduling) running in the OLT assigns a single, aggregated slot to the given LLID, and the ONU must distribute this slot among the different COSs by using a low-level scheduler (intra-scheduling). In the per-queue policy, a single LLID is assigned to each packet queue, which eliminates the intra-scheduler, as shown in Fig. 1. This approach employs centralized control of the OLT over each user's access and service level agreement (SLA), and makes the ONU much simpler [2].

This paper proposes an efficient dynamic bandwidth allocation (DBA) algorithm for the per-queue approach in the EPON system. The proposed DBA scheme uses the excess bandwidth of lightly loaded queues to help meet the bandwidth demand of heavily loaded queues. To implement this scheme, an effective polling mechanism is incorporated to solve the idle period problem. In addition, to reduce the overhead on bandwidth utilization, the scheme uses a novel different cycle policy.

II. RELATED RESEARCH

In [3], the authors combined the basic IPACT (Interleaved Polling with Adaptive Cycle Time) limited service scheme with strict priority queuing in order to support differentiated services. They noticed an interesting phenomenon called the light-load penalty. This is caused by the fact that during the time lag between an ONU's reporting of its queue states and the arrival

1930-529X/07/$25.00 © 2007 IEEE
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE GLOBECOM 2007 proceedings.
of the corresponding grant, more packets arrive at the queues. Newly arriving packets may have higher priority and preempt the lower priority packets that are already reported. This leads to some lower priority packets being left in the queue and delayed for multiple cycle times. Since Ethernet frames cannot be fragmented and the new packets were not reported to the OLT, there are also unused slot remainders.

[Fig. 1. Scheduling with one logical link per queue.]

In [4], the authors proposed a DBA algorithm using a non-strict priority policy to overcome the light-load penalty. The authors presented a scheme that allocates the excess bandwidth resulting from lightly loaded ONUs to other highly loaded ONUs. To implement this scheme, the OLT needs to collect the REPORT messages from all the ONUs before it performs DBA. However, this results in idle periods in which the channel is not utilized. The authors pointed out this flaw and presented a DBA which schedules a lightly loaded ONU instantaneously, whereas the heavily loaded ONUs are scheduled after collecting all the REPORT messages. However, the DBA scheme presented in [4] may not be able to make sufficient use of the idle period at high traffic load, because under this load all the ONUs may have a bandwidth demand larger than the minimum guaranteed bandwidth. In this case, the idle period is still wasted and yields low link utilization.

In [5], the authors address the issue of fairness at the global level and propose an algorithm for the per-ONU approach. To achieve adaptive fairness, the authors of [6] proposed intra-scheduling based on Deficit Weighted Round Robin. In [7], the authors proposed a DBA scheme that can ensure fairness based on the Service Level Agreement (SLA) between the OLT and the ONUs.

Most of the DBA schemes based on the per-ONU approach employ inter- and intra-scheduling separately. Since ONUs need to perform scheduling, their complexities are high and they are thus relatively expensive.

In [8], the authors presented an OLT centric DBA which employs a credit pooling technique combined with a weighted-share policy to partition the upstream bandwidth among different classes of services.

In designing our proposed scheme, we consider the following issues relevant to most of the DBA schemes based on the per-ONU approach.

1) An EPON scheduler should have good control over a misbehaving user so that it can effectively isolate or limit a particular user. With per-ONU scheduling, it is difficult to limit misbehaving high-priority classes without completely starving all the lower priority classes. Thus, we focus on per-queue scheduling.

2) As each service queue has its own SLA, its bandwidth should be controlled by the OLT. In the existing DBA schemes, fairness is mainly considered among the queues in each ONU. As a result, two queues with identical SLAs placed in different ONUs may get different services.

In the next section, we present the proposed DBA which overcomes the weaknesses mentioned above.

III. PROPOSED DYNAMIC BANDWIDTH ALLOCATION

Consider an EPON network with an OLT and M ONUs. Each ONU contains N queues, each serving a class of service. The total number of queues is M × N. For this paper, we consider three priority classes in every ONU: P0 can be mapped to Expedited Forwarding (EF), P1 to Assured Forwarding (AF), and P2 to Best Effort (BE).

Assume that the transmission rates of both upstream and downstream links are R bps. The maximum transmission cycle is T_max. The guard time between two consecutive timeslots is T_g. Accordingly, the maximum number of data bytes that can be distributed among all the queues in a cycle is calculated as [4]

    B_max = (T_max × R) / 8 − H                                    (1)

where H is the overhead in bytes associated with every transmission cycle, and can be expressed as [8]

    H = (M × N) × L_req + ((M × N) × T_g × R) / 8                  (2)

where L_req is the size of the REPORT message in bytes (64 bytes).

Among these traffic classes, P0 traffic has a Constant Bit Rate (CBR). P1 and P2 traffic are highly bursty. As the P0 traffic represents delay-sensitive services, after serving its entire request based on the SLA, the remaining bandwidth is first distributed among all the P1 traffic classes with equal weight. This allocation can be expressed as

    B_excess^total = max(0, B_max − Σ_{i=0}^{M−1} B_rP0^i)         (3)

    B_minP1^i = B_excess^total / M                                 (4)

where i = 0, ..., M − 1 is the index over the P1 traffic queues, and B_rP0^i is the requested bandwidth from each P0 traffic queue.
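As a concrete illustration of Eqs. (1)-(4), the per-cycle byte budget and the P1 minimum guarantee can be sketched in Python. This is a minimal sketch under our own naming; the function names are not from the authors' simulator.

```python
def max_cycle_bytes(t_max, rate_bps, m_onus, n_queues, t_guard, l_req=64):
    """Eqs. (1)-(2): usable payload bytes per cycle after per-queue overhead."""
    q = m_onus * n_queues                                  # total number of queues
    overhead = q * l_req + q * t_guard * rate_bps / 8      # H, Eq. (2)
    return t_max * rate_bps / 8 - overhead                 # B_max, Eq. (1)

def p1_min_guarantee(b_max, p0_requests):
    """Eqs. (3)-(4): split the bandwidth left after P0 equally among P1 queues."""
    excess = max(0.0, b_max - sum(p0_requests))            # Eq. (3)
    return excess / len(p0_requests)                       # Eq. (4), M queues
```

For example, with T_max = 2 ms, R = 1 Gbps, T_g = 1 µs, and 48 queues (assuming M = 16 ONUs with N = 3 classes, consistent with the Q = 48 used in Section IV), `max_cycle_bytes(2e-3, 1e9, 16, 3, 1e-6)` gives 240928 bytes per cycle.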

B_minP1^i is the minimum guaranteed bandwidth for each P1 traffic queue.

A. DBA1

In the first bandwidth assignment policy, referred to as DBA1, P1 traffic queues are allocated bandwidth as follows:

    B_assignP1^i = min(B_rP1^i, B_minP1^i)                         (5)

where B_assignP1^i is the bandwidth assigned to the i-th P1 traffic queue and B_rP1^i is the requested bandwidth of the i-th P1 traffic queue. This approach is similar to the limited service scheme in IPACT [3]. We implement this policy in the per-queue EPON to evaluate its performance.

B. DBA2

Due to the bursty nature of P1 traffic, some P1 queues may have less traffic to transmit and thus need less bandwidth than the minimum guaranteed bandwidth (called lightly loaded), while other queues may have more traffic to transmit and need more bandwidth (called heavily loaded). If B_rP1^i ≤ B_minP1^i, then B_assignP1^i = B_rP1^i. If B_rP1^i > B_minP1^i, the OLT computes the excess bandwidth resulting from the lightly loaded P1 traffic queues. The total excess bandwidth available after serving all the lightly loaded P1 queues can be expressed as

    B_excessP1 = Σ_{i∈L} (B_minP1^i − B_rP1^i)                     (6)

where L is the set of lightly loaded P1 traffic queues in a transmission cycle. Let H be the set of heavily loaded P1 traffic queues; then the total excess bandwidth requested by the heavily loaded P1 traffic queues can be expressed as

    R_excessP1 = Σ_{i∈H} r_P1^i                                    (7)

where r_P1^i = B_rP1^i − B_minP1^i is the excess bandwidth requested by the i-th heavily loaded P1 queue.

If B_excessP1 ≥ R_excessP1, then

    B_assignP1^i = B_rP1^i                                         (8)

for each heavily loaded P1 traffic queue. If B_excessP1 < R_excessP1, the OLT arranges the P1 queues in ascending order of their r_P1^i values and keeps track of two quantities: the unscheduled P1 queue with the minimum excess request, denoted by r̃_P1, and the number of unscheduled P1 queues, denoted by |H|. If r̃_P1 × |H| ≤ B_excessP1, the queue corresponding to r̃_P1 is allocated the bandwidth B_minP1^i + r̃_P1. The OLT then updates r̃_P1, |H|, and B_excessP1, and repeats the process until the condition r̃_P1 × |H| > B_excessP1 is reached. At this point, it allocates the bandwidth B_minP1^i + B_excessP1/|H| to each remaining queue in H.

For P2 traffic, no bandwidth is explicitly reserved, but P2 packets can still get the bandwidth that remains after serving the P1 traffic queues. The excess bandwidth is distributed among all the P2 traffic queues in the same fashion as for the P1 traffic queues.

To implement both schemes, we first allocate bandwidth to all the P0 traffic classes. As this class of traffic is delay sensitive, we use an early prediction method to reduce the grant delay of P0 traffic queues. The idea is to predict the number of P0 packets that arrive at a P0 queue during the time lag between the sending of the REPORT message and the arrival of the GATE message corresponding to that request. Fig. 2 illustrates a schematic diagram of the prediction process. In a particular cycle, when the OLT receives a REPORT message from a queue, it sets its counter according to the timestamp field in the received message. The OLT produces a GATE message for the next cycle which contains the start time field indicating when the queue will start its transmission. As the P0 traffic queue has CBR packets and the OLT also knows the time of the next grant message, the OLT can calculate the credit as

    C = ⌈r_CBR × T_diff⌉ × (L_CBR + L_IFG + L_preamble)            (9)

where ⌈x⌉ denotes the smallest integer larger than or equal to x, r_CBR is the average arrival rate of CBR packets, T_diff is the difference in time between the GATE start time and the REPORT timestamp value, L_CBR is the length of a CBR packet, L_IFG is the length of the inter-frame gap, and L_preamble is the preamble overhead. In [8], the authors used a similar prediction mechanism on the ONU side. However, the new scheme utilizes it on the OLT side to control all service classes, which is good for fairness and can also limit any misbehaving traffic class.

[Fig. 2. Credit estimation of a P0 traffic class.]

The granted bandwidth for every P0 traffic queue is

    B_assignP0^i = B_rP0^i + C                                     (10)

where i = 0, ..., M − 1.
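The credit computation of Eqs. (9)-(10) can be sketched as follows. This is an illustrative sketch; the default values L_IFG = 12 bytes and L_preamble = 8 bytes are the standard Ethernet figures and are our assumption, as the paper does not state them.

```python
import math

def p0_credit(r_cbr, t_diff, l_cbr, l_ifg=12, l_preamble=8):
    """Eq. (9): bytes of CBR traffic expected to arrive between the REPORT
    timestamp and the GATE start time, rounded up to whole packets."""
    n_packets = math.ceil(r_cbr * t_diff)            # ceiling, as in Eq. (9)
    return n_packets * (l_cbr + l_ifg + l_preamble)

def p0_grant(b_requested, credit):
    """Eq. (10): grant the reported P0 backlog plus the predicted credit."""
    return b_requested + credit
```

With the CBR source of Section IV (8000 packet/s, 70-byte packets) and a hypothetical T_diff of 1.2 ms, the OLT would credit 10 packets, i.e. 900 bytes, on top of the reported backlog.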

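The iterative DBA2 redistribution for P1 queues described in Section III-B can be sketched as follows. This is a minimal Python sketch under our own naming; it covers the lightly/heavily loaded split, the excess pool of Eq. (6), and the ascending-order allocation step.

```python
def dba2_allocate(requests, b_min):
    """Sketch of the DBA2 excess-bandwidth redistribution among P1 queues.
    requests[i]: reported bytes of queue i; b_min: per-queue guarantee, Eq. (4)."""
    grants = {}
    light = [i for i, r in enumerate(requests) if r <= b_min]
    heavy = [i for i, r in enumerate(requests) if r > b_min]
    for i in light:                                  # lightly loaded: grant request
        grants[i] = requests[i]
    pool = sum(b_min - requests[i] for i in light)   # B_excessP1, Eq. (6)
    excess = {i: requests[i] - b_min for i in heavy} # r_P1^i, Eq. (7)
    if pool >= sum(excess.values()):                 # enough excess: Eq. (8)
        for i in heavy:
            grants[i] = requests[i]
        return grants
    heavy.sort(key=lambda i: excess[i])              # ascending excess request
    while heavy and excess[heavy[0]] * len(heavy) <= pool:
        i = heavy.pop(0)                             # smallest excess is affordable
        grants[i] = b_min + excess[i]                # serve its full request
        pool -= excess[i]
    for i in heavy:                                  # rest share the pool equally
        grants[i] = b_min + pool / len(heavy)
    return grants
```

For example, with three queues reporting 100, 500, and 900 bytes and a 400-byte guarantee, the 300-byte pool from the lightly loaded queue lets queue 1 receive its full 500 bytes, and queue 2 receives 400 + 200 = 600 bytes; the total granted equals 3 × 400.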
After serving all the P0 traffic, the scheduler gives grants to the P1 traffic queues. Finally, if there is any bandwidth left, P2 traffic is served.

For DBA2, to allocate bandwidth efficiently among P1 and P2 traffic queues, the OLT gives grants instantly when REPORT messages arrive from lightly loaded queues. For heavily loaded queues, the OLT waits to collect all the REPORT messages, and then produces grants based on the complete information about the excess and requested bandwidth. Fig. 3 illustrates the scheduler operation. The benefit of this new scheme is that it has no idle period problem because, at the start of a new cycle while this computation is going on, the channel is still utilized by the P0 traffic queues. This ensures good bandwidth utilization at any load.

[Fig. 3. Scheduler's operation during a transmission cycle.]

C. DBA3

To overcome the effect of REPORT message overhead, we propose a different cycle policy. Fig. 4 illustrates this operation. P0 traffic is served in every cycle, but P1 traffic only once per two cycles and P2 traffic once per four cycles. In the per-queue approach, as the traffic classes are controlled individually, it is possible to serve some traffic more often and other traffic less often. P0 traffic, like voice traffic, has low volume and is delay sensitive. P1 and P2 traffic are not delay sensitive and are highly bursty. According to their delay bounds, we can control their cycle times. According to ITU-T G.114, the round-trip delay for voice traffic in the access network should not exceed 3 ms. Delay bounds of P1 and P2 can be based on their service requirements.

In this paper, we serve P0 traffic in every cycle to ensure its delay bound. For P1 and P2, we select which queues are served in each cycle based on their cycle times as in Fig. 4. Hence, in a particular cycle, not all the queues are scheduled, which significantly reduces the REPORT overhead.

[Fig. 4. Different cycle policy to reduce the REPORT overhead.]

To increase the bandwidth utilization, we modify DBA2 by applying the different cycle policy at high loads. At high traffic load, the queues are full and bandwidth demands are higher than the minimum guaranteed bandwidth. When the OLT observes that all the traffic queues are overloaded, it applies the different cycle policy by employing selective operations among the P1 and P2 service classes.

IV. SIMULATION SETUP AND RESULTS

We have developed an event-driven packet-based simulation model using C++. To evaluate the performance of the proposed algorithm, we compare it with the IPACT limited service scheme proposed in [3] through simulation results. The maximum distance between the OLT and a queue in an ONU is 20 km. The line rate of the user-to-queue link is 100 Mbps. The EPON line rate is 1 Gbps. The guard band between adjacent slots is 1 µs. The maximum cycle time is 2 ms. For the different cycle policy of DBA3, the maximum cycle times are 2 ms for P0, 4 ms for P1, and 8 ms for P2.

For traffic modeling, the P0 traffic is simulated as a CBR stream with a rate of 8000 packet/s and a packet length of 70 bytes. For our simulation, we always keep the P0 load constant and split the remaining load between P1 and P2 equally. The P1 and P2 traffic classes exhibit properties of self-similarity and long-range dependence (LRD). To generate these two traffic classes, we use ON-OFF sources with Pareto distributions. The shape parameters for the ON and OFF intervals are set to 1.4 and 1.2, respectively. We use a tri-modal packet-size distribution similar to that observed in backbone networks [9]. The three main modes, corresponding to the most frequent packet sizes, are 64, 582/594, and 1518 bytes. Every point on the plots corresponds to a sample of 500 million packets.

Consider first the average delay of the P0 traffic class for all the schemes. From Figs. 5, 6, and 7 we can see that the IPACT, DBA1, DBA2, and DBA3 schemes perform almost equally well for the P0 traffic class. For the P1 traffic class, the packet delay of the DBA1 scheme is higher than that of IPACT. However, the DBA2 and DBA3 schemes perform better than IPACT. This result is due to the proper allocation of the excess bandwidth that remains after serving all the P0 traffic queues, and also to the application of the different cycle policy in DBA3. In DBA3, at higher loads the delay of P1 is slightly higher than that of DBA2 but still lower than that of the IPACT scheme. This is due to the different cycle policy, which improves bandwidth utilization at high load at the cost of some additional delay for P1. For all the schemes, the average delay of P2 traffic at light loads is significantly lower than that of the IPACT scheme. This improvement is due to the complete elimination of the light-load penalty problem, as the OLT now gives grants based on the requests from individual queues. In DBA1 and DBA2, at high loads, the average delay of P2 traffic is higher than that of the IPACT scheme because of the scheduling overhead effect. This delay can be minimized by using DBA3.
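The different cycle policy of DBA3 (Fig. 4) can be sketched as follows. This is an illustrative sketch: the exact rotation order of the P1 halves and P2 quarters is one possible reading of Fig. 4, M is assumed to be a multiple of four, and the function name is ours.

```python
def queues_served(cycle, m):
    """DBA3 different cycle policy (Fig. 4): every P0 queue is granted in every
    cycle, each P1 queue every second cycle (alternating halves), and each P2
    queue every fourth cycle (rotating quarters)."""
    served = [("P0", i) for i in range(m)]        # all P0 queues, every cycle
    half, quarter = m // 2, m // 4
    p1_start = (cycle % 2) * half                 # P1_{0..M/2-1}, then P1_{M/2..M-1}
    served += [("P1", i) for i in range(p1_start, p1_start + half)]
    p2_start = (cycle % 4) * quarter              # four P2 groups in rotation
    served += [("P2", i) for i in range(p2_start, p2_start + quarter)]
    return served
```

With M = 16 ONUs, each cycle grants 16 + 8 + 4 = 28 queues, which matches the Q = 28 reported for DBA3 in Section IV.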
[Fig. 5. Average packet delay for DBA1 and IPACT schemes.]

[Fig. 6. Average packet delay for DBA2 and IPACT schemes.]

[Fig. 7. Average packet delay for DBA3 and IPACT schemes.]

From Fig. 8, we can see the bandwidth utilization of all the schemes. The following formula is used to compute the utilization U [10]:

    U = 1 − Q(T_g × R/8 + R_avg + H_REPORT) / (T_avg × R/8)        (11)

where Q is the number of queues served in each cycle (Q = 48 for DBA1 and DBA2, and Q = 28 for DBA3 with the different cycle policy), T_g is the guard time, R is the EPON line rate, R_avg is the average slot remainder (R_avg = 0 for the proposed schemes and R_avg = 595 bytes for IPACT [10]), T_avg is the average cycle time, and H_REPORT is the length of the REPORT message.

From Fig. 8, all the proposed schemes utilize the channel bandwidth efficiently compared to IPACT. For DBA1, the performance at high load is low due to the overhead. We can overcome this by employing DBA2 and DBA3, as shown in Fig. 8.

[Fig. 8. Bandwidth utilization.]

V. CONCLUSION

We introduced an OLT centric DBA scheme that effectively controls all the service classes in a QoS-aware EPON. As the OLT has a global view, it is possible to provide SLA guarantees among the differentiated services. The proposed DBA scheme contains mechanisms to control the scheduling overhead and to improve the bandwidth utilization. The simulation results show the effectiveness of the proposed scheme in terms of average packet delay and utilization.

REFERENCES

[1] J. Zheng and H. T. Mouftah, "Media access control for Ethernet passive optical networks," IEEE Communications Magazine, vol. 43, no. 2, Feb. 2005, pp. 145-150.
[2] G. Kramer, "On configuring logical links in EPON," 2005.
[3] G. Kramer, B. Mukherjee, S. Dixit, Y. Ye, and R. Hirth, "On supporting differentiated classes of service in EPON-based access networks," Journal of Optical Networking, vol. 1, no. 8/9, Aug. 2002, pp. 280-298.
[4] C. M. Assi, Y. Ye, S. Dixit, and M. A. Ali, "Dynamic bandwidth allocation for quality-of-service over Ethernet PONs," IEEE Journal on Selected Areas in Communications, vol. 21, no. 9, Nov. 2003, pp. 1467-1477.
[5] G. Kramer, A. Banerjee, N. K. Singhal, B. Mukherjee, S. Dixit, and Y. Ye, "Fair queueing with service envelopes (FQSE): A cousin-fair hierarchical scheduler for subscriber access networks," IEEE Journal on Selected Areas in Communications, vol. 22, no. 8, Oct. 2004, pp. 1497-1513.
[6] A. R. Dhaini, C. M. Assi, A. Shami, and N. Ghani, "Adaptive fairness through intra-ONU scheduling for EPONs," in Proc. IEEE ICC'06, Istanbul, 2006.
[7] X. Bai, C. M. Assi, and A. Shami, "On the fairness of dynamic bandwidth allocation schemes in Ethernet passive optical networks," Computer Communications, 2006.
[8] H. Naser and H. T. Mouftah, "A joint-ONU interval-based dynamic scheduling algorithm for Ethernet passive optical networks," IEEE/ACM Transactions on Networking, vol. 14, no. 4, Aug. 2006, pp. 889-899.
[9] D. Sala and A. Gummalla, "PON functional requirements: services and performance," in IEEE 802.3ah Meeting, Portland, OR, Jul. 2001 [Online]. Available: http://grouper.ieee.org/groups/802/3/efm/public/jul01/presentations/sala_1_0701.pdf
[10] G. Kramer, Ethernet Passive Optical Networks, McGraw-Hill, 2005.

