
*E-mail: aworku@gibbinternational.com
Journal of EEA, Vol. 28, 2011

RECENT DEVELOPMENTS IN THE DEFINITION OF DESIGN EARTHQUAKE GROUND MOTIONS
CALLING FOR A REVISION OF THE CURRENT ETHIOPIAN SEISMIC CODE - EBCS 8: 1995

Asrat Worku*
Department of Civil Engineering
Addis Ababa Institute of Technology, Addis Ababa University


ABSTRACT

Recent developments in the definition of design
ground motions for seismic analysis of structures
are presented. A summary of the results of empirical and analytical site-effect studies is provided, and recent findings from empirical studies of instrumental records are compared against similar results from earlier studies.

Pertinent changes introduced in recent editions of international codes as a result of this evidence are presented. Comparisons of relevant provisions of EBCS 8: 1995 with those in contemporary American, European and South African codes are made.

The paper presents compelling evidence showing that the amplification potential of site soils can in general be significantly larger at sites of low-amplitude rock-surface acceleration (up to 0.1g) than at sites of larger accelerations.

Noting the practical significance of this fact on the
seismic design of structures in low to moderate
seismic regions, to which many cities and towns of
Ethiopia belong, changes to selected provisions of
the local code are proposed.

KEY WORDS: Earthquake ground motion, return
period, response spectra, seismic hazard, site
amplification.

INTRODUCTION

The latest edition of the Ethiopian standard code
for building design was issued in 1995 by the
Ministry of Works and Urban Development. This
document known by the name of the Ethiopian
Building Code Standard (EBCS) has a separate
volume, EBCS 8, specifically dedicated to the
design and construction of buildings in seismic
regions [1].

EBCS 8 covers a wide range of issues ranging from
basic definitions to detailed requirements. The
document stands out as an important reference, with the purpose of ensuring safety to human lives and limiting damage to buildings during earthquakes. It is widely referred to by design engineers not only in Ethiopia, but also in the wider earthquake-prone region of East Africa.

Nevertheless, as rightly stated in its Foreword, such standards are technical documents that require periodic updating through the incorporation of new knowledge and practice as they emerge. This is
especially true in seismic design of structures for
the obvious reason that the discipline is still
growing and gets refined with further acquisition of
data as new earthquakes occur.

EBCS 8 has been in use for the past 16 years
without being updated. Meanwhile, a number of
devastating earthquakes have rattled many places
all over the world. In the past decade alone, several earthquakes of magnitudes up to 7.3 on the Richter scale have surprised Africa, a continent once regarded as an earthquake-free zone. Owing to the enlarged database, knowledge of earthquakes and their effects on human life has improved tremendously. As a result, the requirements of many design codes have been significantly refined: some basic provisions of older editions have been discarded, existing design approaches have been modified, and new ones have been introduced.

This paper attempts to address the basic issue of the definition of design ground motion in EBCS 8 vis-à-vis those in recent editions of selected major
international codes. Two major aspects of design
ground-motion are dealt with: seismic-hazard
definition and consideration of site effect. The
documents selected for comparison include the
post-1994 editions of the National Earthquake
Hazard Reduction Program (NEHRP) of USA [2-
6], the 1994 and 2004 editions of the European
Norm [7-9] and the 2010 edition of the South
African National Standard [10,11].


The paper starts by briefly reviewing the historical
development of empirical studies of ground motion
records with emphasis on site-soil effects [12,13].
Obvious differences in the results of studies before and after the 1989 Loma Prieta earthquake are summarized [12-23]. This is supplemented by basic theoretical evidence [16,17,24]. Developments in
pertinent provisions of recent seismic codes are
summarized. Specifically, new definitions and
methods of characterization of site soils are
introduced. Significantly improved amplification
factors incorporated in contemporary seismic codes
are presented [2-11].

Basic design spectra of EBCS 8 for different site-
soil conditions are compared with corresponding
spectra specified by the selected codes [1-
4,7,8,10,25]. It is demonstrated that the EBCS 8
spectra fail to ensure adequate safety for the
majority of common buildings ranging from multi-
story residential houses through condominiums and
school buildings to multi-purpose buildings with
fundamental periods up to around 1 second,
especially when the structures are founded on
softer formations.

Moreover, it is pointed out that the 100-year return period adopted by EBCS 8 to define design ground motions is incompatible with the 475-year return period accepted worldwide and can significantly compromise safety [1,26]. This led to the
recommendation that appropriate provisions of
EBCS 8 need revision.

BRIEF HISTORICAL DEVELOPMENT OF
SITE EFFECT STUDIES

Results of Early Instrumental Studies

Even though the potential of site soils to amplify earthquake ground motions had been recognized since around the 1950s, it was only in the early 1970s that notable results of empirical studies on the subject started to emerge. The pioneering works were performed in Japan and the USA.

Hayashi et al. (1971), as cited by Seed et al. [12], were probably the first to present site-dependent average spectra, based on 61 accelerograms from 38 earthquakes in Japan. However, owing to the limited size and quality of their data, the authors themselves suggested that their spectral curves be regarded with caution.

A more detailed study was reported by Seed et al at
a later time [12]. Based on a total of 104 ground
motion records from sites of fairly known
geotechnical conditions, the study considered four
site groups. Most of the records in the first three site groups were obtained from sites in the western United States dominated by the 1971 San Fernando earthquake, whereas many records for the softest site-soil group were from the Japanese earthquakes of 1964 Niigata and 1968 Higashi-Matsuyama.

The average spectra for the four site conditions are
given in Fig. 1(a) for 5% damping. Significant
differences are observed in the spectral shapes of
the various site classes. For periods greater than 0.4
to 0.5 s, spectral amplifications are much higher for
deep cohesionless soil deposits and soft to medium
clay deposits than for stiff site conditions and rock
over a wide range of periods. Seed et al. [12] pointed out the inadequacy of their records for the fourth soil class and advised against the use of this particular spectral curve until further studies shed more light on it.

Concurrently with Seed et al. [12], Mohraz [13] conducted an independent study on almost the same ground-motion database and came up with similar results.

Based on these results, the Applied Technology
Council Project (ATC-3) came up in 1978 with the
simplified site-dependent design spectra shown in
Fig. 1(b) for three site soil groups: S1 (rock or
shallow stiff soils), S2 (deep firm soils) and S3 (7
to 14 m deep soft soils). The spectral curves of S2 and S3 are obtained in such a way that their respective ratios with respect to S1, normally known as ratios of response spectra (RRS), are 1.5 and 2.2 in the velocity-sensitive region, respectively. The less reliable fourth soil class was excluded, apparently heeding the advice of the researchers [12].

In general, the ATC-3 spectra are characterized by an ascending straight line in the very short-period range up to around 0.2 s, a constant-acceleration plateau in the acceleration-sensitive short-period range, and a descending curve in the velocity-sensitive intermediate-period range.

The ATC-3 spectra were integrated in the series of
editions of the National Earthquake Hazard
Reduction Program (NEHRP) up to 1994 and in the
Uniform Building Code (UBC) series up until
1997. In 1988, a fourth soil type, S4, for deep soft clays was included to address the rather high amplification potential of soft soils evidenced by the 1985 Mexico City earthquake [5,16,17].






Figure 1 (a) Response spectra for different site conditions, after Seed et al. [12]; (b) the design spectra proposed by the ATC-3:1978 project [16,17]

It is important to note that the site-dependent
spectral values in Fig. 1 are normalized with
respect to the peak-ground acceleration, and thus
the spectral curves are all anchored to unity at T=0.
This has the effect of concealing inherent
amplifications in the short-period range so that only
amplifications in the intermediate velocity-
sensitive range are observed. This will be clearer in
the subsequent sections.

Results of Recent Empirical Studies

During the magnitude-7.1 Loma Prieta earthquake of 1989, most of the damage linked to site-soil amplification and liquefaction took place in the bay area of San Francisco and Oakland, located about 100 km NW of the epicenter. Much of the recorded evidence was also obtained from this area [14-18].
This, together with evidence from laboratory and analytical studies, encouraged a critical review of the single-factor amplification concept described above, which had endured until that time. A number of studies conducted on the enlarged database shed more light on site effects than ever before.

Idriss [14,15] studied the amplification of rock-surface accelerations using records from this and the 1985 Mexico earthquake, both of which are associated with small rock-level accelerations. His main finding was that soil sites have the ability to amplify rock-surface accelerations of up to around 0.4g.

A more important outcome of post-Loma-Prieta
studies, especially for engineers, is the rather high
amplification of response spectra by soft soil sites.
Average spectral accelerations of ground motion
records due to the Loma-Prieta earthquake from
thick soil sites near the San Francisco bay area and
Oakland are provided in Fig. 2 for a damping of
5% in comparison with the corresponding average
spectra of adjoining rock sites.


Figure 2 Comparison of average soil-site
spectra in Oakland and San Francisco
areas with average rock-site spectra in
the region during 1989 Loma-Prieta
earthquake [16,17]

The figure shows that the rock-surface acceleration (the spectral ordinate as T→0) is 0.08 to 0.1g and is amplified two- to threefold by the soil. A similar degree of amplification is seen for periods up to about 0.2 s.
The response spectra in the period range of 0.2 to
1.5 s are amplified to a much larger degree. Similar
trends, but with lesser degree of amplification,
were observed for stiff soil sites, though not
presented here [16,17].


Comparison of the spectral curves in Fig. 2 with those in Fig. 1 shows that the short-period amplifications were not revealed in the early studies. Also, the amplifications in the velocity-sensitive region were underestimated. For these reasons, the single-factor approach is no longer considered adequate to account for site-soil effects and has long been abandoned. This led to the introduction of new site-dependent design spectra in US seismic codes starting in 1994 and subsequently in other national and regional design codes.


BASIC THEORETICAL EVIDENCE IN
DYNAMIC SITE RESPONSE

A simple one-dimensional model of a homogenous
soil layer overlying a rock formation subjected to a
vertically propagating sinusoidal shear wave can be
used to provide a basic understanding of the
amplification potential of soft-soil sites [16,17,24].
Roesset [24] showed that the ratio of the amplitude of the sinusoidal accelerogram at the soil surface, a_A, to that at the rock, a_B, is a function of the soil shear-wave velocity, v_s, and the soil material damping ratio, β_s. Plots of this maximum amplification ratio as a function of v_s for selected values of β_s are presented in Fig. 3.


Figure 3 Ground amplification ratio at resonance
versus shear-wave velocity for different
damping ratios

Approximating RRS_max by this amplitude ratio, an assumption that has been found to be fairly reasonable for a preliminary estimate, this rudimentary model fairly accurately predicts RRS_max = 9 for the Mexico City soft-soil site, for which a shear-wave velocity of 80 m/s and β_s = 3% are employed. This compares well with the RRS_max of 8 to 20 actually recorded at the soil sites in Mexico City during the 1985 earthquake. Similarly, the model predicts RRS_max = 4 for the San Francisco Bay area, corresponding to representative values of v_s = 150 m/s and β_s = 8% for the area. The model once again predicts well the RRS_max observed at this site during the Loma Prieta earthquake, which ranged from 3 to 6 [16,17].
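Such preliminary estimates can be sketched numerically. A common closed-form approximation for the resonance amplification of a uniform damped layer over elastic rock is 1/(1/I + πβ_s/2), where I is the rock-to-soil impedance ratio; the rock properties assumed below (v_r = 1050 m/s and the two densities) are illustrative assumptions of this sketch, not values given in the paper:

```python
import math

def peak_amplification(vs_soil, beta_soil, vr_rock=1050.0,
                       rho_soil=1400.0, rho_rock=2400.0):
    """Approximate peak amplification of a uniform damped soil layer
    over elastic rock at its fundamental resonance (one-dimensional
    shear-wave model): |F|_max ~ 1 / (1/I + pi*beta/2), with I the
    rock/soil impedance ratio. Default rock and density values are
    illustrative assumptions only."""
    impedance_ratio = (rho_rock * vr_rock) / (rho_soil * vs_soil)
    return 1.0 / (1.0 / impedance_ratio + math.pi * beta_soil / 2.0)

# Mexico City soft clay: vs ~ 80 m/s, beta_s ~ 3%
print(round(peak_amplification(80.0, 0.03), 1))   # ~ 10.9 (recorded: 8-20)
# San Francisco Bay mud: vs ~ 150 m/s, beta_s ~ 8%
print(round(peak_amplification(150.0, 0.08), 1))  # ~ 4.8 (recorded: 3-6)
```

Both estimates fall inside the recorded RRS_max ranges quoted in the text, illustrating why a site's shear-wave velocity and damping alone already give a useful first idea of its amplification potential.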

These results indicate that, for places where
previous seismic records are not available, prior
knowledge of the representative shear-wave
velocity of the site and its damping behavior can
provide a good idea of its amplification potential.
Such an exercise is particularly useful for Ethiopia, where few, if any, recorded strong ground motions are available for conducting statistical studies.

According to laboratory evidence, the material damping ratio, β_s, is a nonlinear function of the plasticity and the strain level of the soil. Highly plastic soils (PI > 50%) exhibit small damping and behave nearly linearly over a wide range of strains. For highly plastic clays, β_s can be less than 3% for strains up to 0.1%.

Soils of high PI are not uncommon in urbanized
seismic regions of Ethiopia, a typical example
being the dark and light grey expansive soils
covering a big part of Addis Ababa and its
environs. At some locations, this formation can be
several tens of meters thick and in a rather soft
state over a significant depth. The damping
potential of such soils can be quite low and their
amplification potential very high.

THE NEW APPROACH TO ACCOUNT FOR
SITE EFFECT

Evaluation of Improved Site Coefficients

A more practical approach for evaluating site amplification factors in regions where sufficient earthquake records and geotechnical data are available is the calculation of statistical averages of RRS for site soils grouped according to their dynamic behavior. A number of empirical studies conducted after the Loma Prieta earthquake suggested that the average amplification factors of soil sites are proportional to the mean shear-wave velocity, v_S, of the upper 30 m thickness raised to a certain negative exponent, which depends on the period band and the intensity of the rock

acceleration [16-23]. It was thus found important that site soils be classified on the basis of this important parameter.

The empirical study of Borcherdt [18] in particular suggested the following generic best-fit relations for the two amplification factors, denoted F_a and F_v, as functions of v_S and the rock-surface shaking intensity:

F_a = (1050/v_S)^(m_a) ;  F_v = (1050/v_S)^(m_v)    (1)

The factor F_a is applicable in the acceleration-sensitive short-period region (about 0.1 to 0.5 s) and F_v in the velocity-sensitive intermediate-period region (about 0.4 to 2 s). The values of the exponents, m_a and m_v, are provided in Table 1.

Table 1: Values of the exponents in Borcherdt's regression relations of Eq. 1 [18]

Rock acceleration (g) | m_a  | m_v
0.1                   | 0.35 | 0.65
0.2                   | 0.25 | 0.60
0.3                   | 0.10 | 0.53
0.4                   | 0.05 | 0.45

The plots of Eq. 1 are given in Fig. 4, which shows that F_v is consistently larger than F_a for v_S up to around 1000 m/s, the v_S of the reference rock site. Both factors tend to unity as v_S approaches 1000 m/s and decrease with increasing intensity of rock shaking. Note the similarity of these curves to the theoretical curves of Fig. 3, demonstrating that increasing rock-shaking intensity is associated with increased damping.




Figure 4 Variation of the spectral amplification factors versus v_S for the short- and long-period ranges and for a range of rock-shaking intensities (re-plotted after Borcherdt [18])
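Eq. 1 and Table 1 together are easily evaluated. The sketch below tabulates the exponents and computes both factors for a hypothetical soft site; restricting the lookup to the tabulated rock accelerations (rather than interpolating) is a simplification of this sketch:

```python
# Exponents of Borcherdt's best-fit relations (Eq. 1), from Table 1 [18].
EXPONENTS = {  # rock acceleration (g): (m_a, m_v)
    0.1: (0.35, 0.65),
    0.2: (0.25, 0.60),
    0.3: (0.10, 0.53),
    0.4: (0.05, 0.45),
}

def borcherdt_factors(vs30, rock_accel):
    """Short- and intermediate-period amplification factors:
    Fa = (1050/vs30)**ma and Fv = (1050/vs30)**mv."""
    ma, mv = EXPONENTS[rock_accel]
    return (1050.0 / vs30) ** ma, (1050.0 / vs30) ** mv

# A soft site (vs30 = 150 m/s) under weak rock shaking (0.1g):
fa, fv = borcherdt_factors(150.0, 0.1)
print(round(fa, 2), round(fv, 2))  # -> 1.98 3.54
```

The numbers reproduce the trend discussed above: for soft soil under weak shaking, F_v approaches 3.5 while F_a stays near 2, and both factors collapse to unity at the 1050 m/s reference velocity.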

The New System of Soil Classification

For a generally stratified formation of n layers, each having a thickness h_i and a shear-wave velocity v_Si within the upper 30 m thickness, v_S can be established using the following relationship [2-4, 8, 10, 16, 17]:

v_S = 30/t_30 = 30 / Σ_{i=1}^{n} (h_i/v_Si)    (2)

The terms in the summation represent the time taken by the shear wave to travel through each individual layer. The shear-wave velocity computed in this manner is based on the time, t_30, taken by the shear wave to travel from a depth of 30 m to the ground surface, and is thus not a mere arithmetic average.
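Eq. 2 can be sketched in a few lines; the three-layer profile used below is an assumed illustrative example, not a site from the paper:

```python
def vs30(thicknesses_m, velocities_mps):
    """Travel-time-averaged shear-wave velocity of the upper 30 m (Eq. 2):
    vs30 = 30 / sum(h_i / v_i). Layer thicknesses must total 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-9
    t30 = sum(h / v for h, v in zip(thicknesses_m, velocities_mps))
    return 30.0 / t30

# Assumed profile: 5 m of soft clay (120 m/s), 10 m of stiff clay
# (250 m/s), 15 m of dense sand (400 m/s).
v = vs30([5.0, 10.0, 15.0], [120.0, 250.0, 400.0])
print(round(v, 1))  # -> 251.7
```

Note that the travel-time average (about 252 m/s here) is lower than the thickness-weighted arithmetic mean of about 303 m/s, because the slow surficial layers dominate the travel time; this is precisely why Eq. 2 avoids the arithmetic average.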

This approach also allows the use of more readily measurable quantities, such as the standard penetration test blow count, N, for granular deposits or the undrained shear strength, S_u, for saturated cohesive soils, though these are less reliable owing to the inherent double correlation. The representative values are determined in a manner similar to Eq. 2.

Based on a landmark consensus reached by
geotechnical engineers and earth scientists in the
USA in the early 1990s, five distinct soil and rock
classes, A to E, are identified in accordance with
this approach and provided in Table 2.
Corresponding approximate soil classes as per
older methods are also provided in the first column
for comparison purposes. A sixth much softer site
class, F, is also defined that requires site-specific
studies. It is described in detail in NEHRP
documents [2, 3, and 4].


Table 2: Site soil classes as per the recent NEHRP editions [2-4]

Pre-1994 class (approx.) | NEHRP class | Description                 | v_S (m/s) | SPT blow count, N | S_u (kPa)
S1                       | A           | Hard rock                   | >1500     | -                 | -
                         | B           | Rock                        | 760-1500  | -                 | -
S1 and S2                | C           | Soft rock/very dense soil   | 360-760   | >50               | >100
                         | D           | Stiff soil                  | 180-360   | 15-50             | 50-100
S3 and S4                | E           | Soft soil                   | <180      | <15               | <50
                         | F           | Soils requiring site-specific study


While this method of site classification is not entirely correct from a theoretical perspective, the general consensus is that the stiffness of the shallow soil, as measured by v_S, is the most reliable single site parameter for characterizing site amplification potential [16-20]. In addition, v_S is readily measured in the field.
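The velocity column of Table 2 amounts to a simple lookup. The sketch below maps a v_S value to a NEHRP class; the handling of values falling exactly on a class boundary is an assumption of this sketch:

```python
def nehrp_site_class(vs30_mps):
    """Map the travel-time-averaged vs30 (m/s) to the NEHRP site
    classes of Table 2. Class F cannot be assigned from vs30 alone,
    as it requires a site-specific study."""
    if vs30_mps > 1500.0:
        return "A"   # hard rock
    if vs30_mps > 760.0:
        return "B"   # rock
    if vs30_mps > 360.0:
        return "C"   # soft rock / very dense soil
    if vs30_mps > 180.0:
        return "D"   # stiff soil
    return "E"       # soft soil

print(nehrp_site_class(252.0))  # -> D (stiff soil)
```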

The New Site Amplification Factors

Using the average v_S of each soil class given in Table 2, the site amplification factors can now be established by reading from Fig. 4 at the representative value of rock-motion intensity considered. The discrete values so obtained according to Borcherdt [18] and adopted by NEHRP [2-4] are given in Table 3. The effective peak acceleration, A_a, and the effective velocity-related acceleration, A_v, are rock-level seismic-hazard parameters employed to characterize the seismicity of US sites for a 90% probability of not being exceeded in 50 years (475-year return period) [2].


Table 3: Values of the site coefficients F_a and F_v according to NEHRP 1994 [2]

Soil profile | F_a for shaking intensity, A_a | F_v for shaking intensity, A_v
type         | 0.1  0.2  0.3  0.4  0.5        | 0.1  0.2  0.3  0.4  0.5
A            | 0.8  0.8  0.8  0.8  0.8        | 0.8  0.8  0.8  0.8  0.8
B            | 1.0  1.0  1.0  1.0  1.0        | 1.0  1.0  1.0  1.0  1.0
C            | 1.2  1.2  1.1  1.0  1.0        | 1.7  1.6  1.5  1.4  1.3
D            | 1.6  1.4  1.2  1.1  1.0        | 2.4  2.0  1.8  1.6  1.5
E            | 2.5  1.7  1.2  0.9  SR         | 3.5  3.2  2.8  2.4  SR
F            | SR: site-specific geotechnical studies and dynamic site-response analysis required

The new amplification factors exhibit the following
main features [5, 6, and 25]:

1. The three (later four) site categories of earlier codes are replaced by six new categories, A to F. Soil classes C to E amplify the rock motion significantly, especially when the rock-shaking intensity is small.

2. Two seismicity-dependent site coefficients, F_a and F_v, replace the single site coefficient, S, of older codes. F_a applies to the acceleration-sensitive region and F_v to the velocity-sensitive region. Both factors decrease with increasing seismicity owing to increased damping, and F_v is almost always larger than F_a for all sites.

3. While the old factor, S, assumed values of up to 1.5 (or 2.2), the new factors, F_a and F_v, take values of up to 2.5 and 3.5 for the short-period and intermediate-period bands, respectively. This results in much larger seismic design forces for many classes of structures on soft formations, especially in less seismic regions. For this reason, the seismic design of structures in less seismic regions has become much more stringent than ever before.

4. The older qualitative site-classification method is replaced by a new, unambiguous and more rational classification method using representative shear-wave velocities of the upper 30 m of the geological formation. Alternatively, though less preferably, average SPT blow counts and/or undrained shear strengths can be used to classify sites (see Table 2).

It is important to note that the results of later studies on an enlarged database, including records from more recent earthquakes such as Northridge 1994, have not suggested significant changes to the values of the above site amplification factors [4, 19, 20].

DESIGN SPECTRA IN SEISMIC CODES

The Design Spectra of NEHRP

As noted earlier, the ATC-3:1978 spectra were first replaced by new design spectra in a 1994 document issued through a long-term federal project of the US Government known as the National Earthquake Hazard Reduction Program (NEHRP), which was initiated in 1985 to take over the mission of ATC. NEHRP incorporated the new results based on the 1989 Loma Prieta earthquake which, as presented above, clearly demonstrated that the site-dependent design spectra in use up to that time were inadequate [5,6,16,17]. Furthermore, NEHRP has since its inception consistently employed a 475-year return period in defining the design ground motion [2-4].

The basic elastic design spectrum of NEHRP 1994, which for the first time made use of the above values of the amplification factors, is given by the following relationship [2]:

C_se = 1.2 C_v / T^(2/3) ≤ 2.5 C_a ;  C_v = F_v A_v ;  C_a = F_a A_a    (3)



Note that F_a is applied to the constant part of the spectrum, whereas F_v is applied to the descending segment. A plot of Eq. 3 normalized with respect to C_a is given in Fig. 5(a) for C_v/C_a = 1. This plot shows the shape of the basic elastic design spectral curve.



Figure 5 Elastic design spectra according to NEHRP 1994: (a) basic spectrum [2]; (b) spectra for A_a = A_v = 0.1

Spectral curves corresponding to the five possible soil classes A to E can be plotted from Eq. 3 for a given earthquake shaking intensity. Such design spectra for a seismic region characterized by A_a = A_v = 0.1 are given in Fig. 5(b). Similar curves can be prepared for other seismic regions. This is to be compared with the three spectral curves of ATC-3 given in Fig. 1, where amplification occurs in the declining section only.

The basic design spectrum in NEHRP 1997 [3] shows substantial changes, as seen in Fig. 6(a), in which two key spectral ordinates, S_DS and S_D1, are introduced as given by


S_DS = (2/3) S_MS = (2/3) F_a S_S ;  S_D1 = (2/3) S_M1 = (2/3) F_v S_1    (4)






Figure 6 The basic design spectral curves according to (a) NEHRP 1997 [3]; (b) NEHRP 2003 [4]

The transition periods in the figure are obtained from

T_0 = 0.2 S_D1/S_DS ;  T_S = S_D1/S_DS    (5)
S_S and S_1 are mapped spectral accelerations, in fractions of g, for the short- and intermediate-period regions represented by 0.2 s and 1 s, respectively. These spectra correspond to the Maximum Considered Earthquake (MCE) and replace the effective accelerations, A_a and A_v, of the 1994 version in characterizing the seismic hazard. The MCE corresponds to a 2% probability of being exceeded in 50 years (about a 2500-year return period), later adjusted to the 475-year return period by multiplying by 2/3. S_MS and S_M1 are the corresponding spectra that account for site-soil effect [3]. The coefficients F_a and F_v are the same site-soil amplification factors of NEHRP 1994 given in Table 3. Note also that the descending right part varies according to T^(-1), no longer according to T^(-2/3). Similar to Fig. 5(b), a set of five curves can be plotted from Eqs. 4 and 5 for the five soil groups in a given seismic region. The basic design spectral curve in Fig. 6(a) has remained the same in the subsequent editions of NEHRP since 2000, except for the introduction of a flatter curve varying according to T^(-2) for the displacement-sensitive long-period region beyond T_L, as shown in Fig. 6(b) [4].
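Eqs. 4 and 5, together with the four-branch shape of Fig. 6(b), combine into a compact routine. In the sketch below, the hazard values, the site factors (taken as the Class D entries of Table 3 at an intensity of 0.1) and T_L = 4 s are all illustrative assumptions:

```python
def nehrp2003_spectrum(ss, s1, fa, fv, t_long=4.0):
    """Return Sa(T) built from Eqs. 4-5 and the NEHRP 2003 shape:
    S_DS = (2/3)Fa*Ss, S_D1 = (2/3)Fv*S1, T0 = 0.2*S_D1/S_DS,
    Ts = S_D1/S_DS, with a T**-2 branch beyond the assumed T_L."""
    sds = (2.0 / 3.0) * fa * ss
    sd1 = (2.0 / 3.0) * fv * s1
    t0, ts = 0.2 * sd1 / sds, sd1 / sds

    def sa(t):
        if t <= t0:
            return sds * (0.4 + 0.6 * t / t0)  # rising branch
        if t <= ts:
            return sds                          # constant plateau
        if t <= t_long:
            return sd1 / t                      # velocity-sensitive, T**-1
        return sd1 * t_long / t ** 2            # long-period, T**-2

    return sa

# Illustrative hazard (Ss = 0.25g, S1 = 0.1g) on site class D
# (Fa = 1.6, Fv = 2.4 at an intensity of 0.1, per Table 3):
sa = nehrp2003_spectrum(0.25, 0.10, 1.6, 2.4)
print(round(sa(0.0), 3), round(sa(0.5), 3), round(sa(1.0), 3))
```

Evaluating the function across the period axis for each of the five soil classes reproduces the family of curves sketched in Fig. 6(b).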

The Eurocode Design Spectra

The 1994 edition of the European seismic code (EC 8) employed three site classes, A, B and C, similar to those in ATC-3:1978 [7]. However, while the ATC-3 spectra shown in Fig. 1 have a plateau common to all site classes, EC 8: 1994 paradoxically specifies a smaller maximum value and a smaller amplification factor over the entire period range for the softest site class, C, as shown in Fig. 7(a), in which the spectra are normalized with respect to the design ground acceleration. In light of the background material given above, such a representation of the dynamic behavior of soft formations is obviously faulty. Similar views have recently been expressed by Rey et al. [9], who attribute this pitfall to a lack of sufficient ad hoc studies prior to publication. These spectra are no longer in use in Europe.





Figure 7 Normalized elastic response spectra: (a) EC 8: 1994 [7]; (b) EC 8: 2004 Type 1 (for M_S > 5.5; re-plotted after [8])

The more recent edition of EC 8, issued in 2004, has not only rectified this problem but also introduced the new soil classes of NEHRP with some modifications [2-4,8] (see Fig. 7(b)). According to the new EC 8, all rock and rock-like geological formations with v_s > 800 m/s are categorized under Ground Type A, unlike the two distinct rock site classes, A and B, of the recent NEHRP editions. Each soil class in EC 8: 2004 is assigned a constant amplification factor for the entire period range; in general, these factors are lower than the corresponding NEHRP factors.

Two types of spectra, Type 1 and Type 2, are proposed by EC 8: 2004 for regions whose predominant earthquakes have surface-wave magnitudes larger than 5.5 and smaller than 5.5, respectively. Fig. 7(b) presents the Type 1 spectra for the five soil classes. A segment descending according to T^(-2) is included for periods longer than 2 s. The Type 2 spectra proposed for less seismic regions are similar in shape to the Type 1 spectra, but with larger amplification factors and reduced control periods.
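For reference, the EC 8: 2004 elastic spectrum has a familiar four-branch form. The sketch below implements it for 5% damping; the soil factor S and control periods T_B, T_C, T_D used as defaults are the Type 1 Ground Type C values as quoted here from EN 1998-1 Table 3.2, and should be verified against the code before any real use:

```python
def ec8_type1_elastic(ag, soil=(1.15, 0.20, 0.60, 2.0)):
    """EC 8: 2004 Type 1 elastic spectrum Se(T) for 5% damping
    (damping correction eta = 1). soil = (S, TB, TC, TD); the
    defaults are the Ground Type C values quoted from EN 1998-1
    Table 3.2 -- verify before use."""
    s, tb, tc, td = soil

    def se(t):
        if t <= tb:
            return ag * s * (1.0 + (t / tb) * 1.5)  # 1 + (T/TB)(2.5 - 1)
        if t <= tc:
            return 2.5 * ag * s                      # plateau
        if t <= td:
            return 2.5 * ag * s * tc / t             # T**-1 branch
        return 2.5 * ag * s * tc * td / t ** 2       # T**-2 branch

    return se

se = ec8_type1_elastic(0.1)  # ag = 0.1g (illustrative)
peak = se(0.4)               # on the plateau: 2.5 * S * ag
```

Swapping in the S, T_B, T_C, T_D quadruple of another ground type reproduces the other curves of Fig. 7(b).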

The Design Spectra of SANS 10160-4: 2010

This standard for seismic actions was published very recently (June 2010) and makes up one of the eight parts of the South African National Standard SANS 10160 series, "Basis of structural design and actions for buildings and industrial structures" [10]. It supersedes the older version, SABS 0160:1989 [10,11].

The code adapted the Type 1 basic spectrum of EC 8: 2004 with a slight modification of the left linear part. It has also directly adopted Ground Types A to D of EC 8: 2004 and the corresponding amplification factors and control periods, omitting Ground Types E and F. Since the plots of the response spectra are similar to those in Fig. 7(b), they are not presented here.

The seismic hazard is represented in terms of the reference peak ground acceleration, a_g, for Ground Type 1 (rock site), given in the form of a seismic hazard map based on a 475-year return period. Noteworthy is that this return period was also used in the superseded 1989 edition [11]. Two major zones are distinguished: Zone I, of natural seismic activity, and Zone II, of mining-induced and natural seismic activity. The majority of Zone I is assigned a_g = 0.1g, with sites of a_g values less than 0.05g being rare.

Given the relatively stable seismic nature of South
Africa, the attention given to seismic design in the
country is quite instructive to the more seismic
nations in East Africa. This provides an additional
perspective to critically evaluate the rather liberal
seismic hazard definition of EBCS 8 and its
provisions for site effects.

The Design Spectra of EBCS 8, 1995

The normalized elastic design spectra, S_d, of the Ethiopian Building Code Standard, EBCS 8 (1995), proposed for dynamic analysis are given in Fig. 8(a).




Figure 8 The design spectra of EBCS 8 (1995): (a) for dynamic analysis; (b) for static analysis [1,25]

Except for some minor differences, the EBCS 8 spectra are practically identical to the already obsolete ATC-3 (1978) spectra given in Fig. 1. The design spectra proposed for pseudo-static analysis are also given in Fig. 8(b) for comparison purposes. The left linear part is omitted in this case, the right side descends according to T^(-2/3) instead of T^(-1), and the amplification factors are reduced.

COMPARISON OF EBCS 8 DESIGN
SPECTRA WITH THE REST

In this section, a comparative study of EBCS 8
spectra against those specified by NEHRP 2003,
EC8: 2004 and SANS 2010 is presented.

EBCS 8 Versus NEHRP 2003

The basic design spectrum of NEHRP has remained almost unchanged in all editions since 2003. This spectrum, as it appears in NEHRP 2003 [4], is given in Fig. 6(b) and can be expressed as

S_a = S_DS (0.4 + 0.6 T/T_0)   for 0 ≤ T ≤ T_0
S_a = S_DS                     for T_0 < T ≤ T_S
S_a = S_D1 / T                 for T_S < T ≤ T_L
S_a = S_D1 T_L / T^2           for T > T_L          (6)

For T = 0, Eq. 6 yields the design spectral ordinate for an ideally rigid structure undergoing the same motion as its foundation, which we denote by S_a0. With this and the introduction of Eq. 4 into Eq. 6, we obtain

S_a0 = 0.4 S_DS ≈ 0.267 F_a S_S    (7)

For a rigid structure on the reference ground type, Class B, F_a takes the value of unity (see Table 3), and the design spectral ordinate S_a0 should be the same as the peak ground acceleration (PGA) of the site for the design earthquake. This enables us to estimate the value of S_S from Eq. 7 for a known PGA of a site.

As per the existing seismic hazard map of Ethiopia, which is based on a return period of 100 years, the capital, Addis Ababa, located in Seismic Zone 2, is assigned a PGA of 0.05g. With this value inserted in Eq. 7 for S_a0, the corresponding short-period spectral acceleration according to NEHRP 2003 is obtained as S_S = 0.188g. The corresponding one-second spectral acceleration, S_1, can be extrapolated from Table 3 as 0.072g. With these inserted in Eq. 4, the design spectral values S_DS and S_D1 for Zone 2 are obtained as

S_DS = (2/3)(0.188) F_a = 0.125 F_a ;  S_D1 = (2/3)(0.072) F_v = 0.048 F_v    (8)

Similarly, the transition periods can be computed by substituting Eq. 8 back into Eq. 5. The values of S_DS, S_D1, T_0 and T_S computed in this manner are substituted in Eq. 6 and the resulting expressions plotted for the different site soils. These are given in Fig. 9 together with the EBCS 8 spectra for a PGA of 0.05g specified for Zone 2. Comparisons for other seismic zones can be made in a similar way.
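The piecewise spectrum of Eq. 6 with the Zone 2 values of Eq. 8 can be sketched numerically as follows. This is a minimal illustration only, assuming unit site factors F_a = F_v = 1 (the reference Class B) and a hypothetical long-period transition T_L = 4 s, which is not specified in the text above:

```python
def design_spectrum(T, SDS, SD1, TL=4.0):
    """NEHRP 2003 design spectral acceleration Sa(T) per Eq. 6."""
    T0 = 0.2 * SD1 / SDS  # transition periods computed from SDS and SD1 (Eq. 5)
    TS = SD1 / SDS
    if T <= T0:
        return SDS * (0.4 + 0.6 * T / T0)  # rising branch
    if T <= TS:
        return SDS                         # constant-acceleration plateau
    if T <= TL:
        return SD1 / T                     # constant-velocity branch
    return SD1 * TL / T ** 2               # long-period branch

# Zone 2 values from Eq. 8 with Fa = Fv = 1 (Site Class B):
SDS, SD1 = 0.125, 0.048
print(design_spectrum(0.0, SDS, SD1))  # 0.05, recovering the Zone 2 PGA of 0.05g
```

Evaluating the function over a range of periods for each site class reproduces the curves compared in Fig. 9.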



Figure 9 Comparison of EBCS 8 spectra and
NEHRP spectra adapted to a return period
of 100 years for Zone 2 (including Addis
Ababa)

The plots show that the introduction of the NEHRP 2003 site factors demands design forces of up to 150 % in excess of what is currently required by EBCS 8. The largest spectral discrepancies occur in a very important period range encompassing buildings of small to moderate height, of up to around 12 stories with a fundamental period of up to around 1 s, built on NEHRP Site Classes D and E. Such buildings are the most frequently built structures, including residential houses, condominiums, apartments, office flats, public offices, hotels, hospitals and many others. The implications of the above results are thus not difficult to figure out.

EBCS 8 Versus EC 8 and SANS 2010

Comparison of the EBCS spectra with the European and South African spectra is more straightforward, as all of these documents use rock-level PGA to characterize seismicity.

Type I spectra of EC 8: 2004 are compared in Fig. 10 with the EBCS spectra, which shows that buildings in the short-period region designed in accordance with EBCS 8 could be underdesigned by up to 40 %. A similar comparison with Type II spectra of EC 8: 2004 indicates larger differences of up to 80 %. These discrepancies are smaller than those observed with the NEHRP spectra, because the NEHRP site amplification factors are consistently larger than the EC 8 amplification factors. Comparison of the EBCS 8 spectra with the SANS spectra gives results identical to those in Fig. 10 with Site Class E omitted.

Figure 10 Comparison of EBCS 8 spectra with
Type I EC 8: 2004 spectra adapted to
100-year return period

Note that the comparisons in Figs. 9 and 10 are conducted without considering the difference in the definition of the return period. This issue is treated in the next section.

Influence of Seismic Hazard definition

Seismic Hazard Maps of Ethiopia

The seismic hazard map of Ethiopia as provided in EBCS 8: 1995 is presented in Fig. 11 [1]. This map is based on a 100-year return period, or approximately a 50 % probability of being exceeded in 50 years. According to this map, each of Seismic Zones 1 to 4 is assigned a constant bedrock acceleration ratio, α_0, of 0.03, 0.05, 0.07 or 0.1, whereas Zone 0 is considered seismic free. Addis Ababa belongs to Zone 2 with α_0 = 0.05. Many cities and big towns like Mekele, Dese, Semera, Adama, Awasa and Arba Minch, of which some are capitals of federal states, all belong to Zone 4 with α_0 = 0.1.






Figure 11 Seismic hazard map of Ethiopia for 100-year return period as per EBCS 8: 1995 [23]


A recent helpful compilation of worldwide
seismicity is provided by the Global Seismic
Hazard Assessment Program (GSHAP), which was
launched by the International Lithosphere Program
(ILP) with the support of the International Council
of Scientific Unions (ICSU), and endorsed as a
demonstration program in the framework of the
United Nations International Decade for Natural
Disaster Reduction (UN/IDNDR). It had the
objective of mitigating the risk associated with the
recurrence of earthquakes by promoting a
regionally coordinated, homogeneous approach to
seismic hazard evaluation. The project was
operational from 1992 to 1999 [26].






The major output of the GSHAP is the global
seismic hazard map for a 475-year return period.
As noted above, this level of hazard has been
widely accepted all over the world as a design-level
earthquake and incorporated in US codes for more
than three decades now. In contrast, the EBCS 8,
1995 employs a return period of just 100 years.
Reference documents could not be found providing
a rational explanation for taking such a bold
decision involving risks on the safety of life and
property.
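The relation between the two hazard levels can be made explicit. Under the Poisson occurrence model commonly assumed in hazard studies, the return period T and the probability P of exceedance in t years are related by T = -t / ln(1 - P). The following sketch is illustrative only and is not part of either code text:

```python
import math

def return_period(p_exceed, t_years):
    """Return period T = -t / ln(1 - P) under a Poisson occurrence model."""
    return -t_years / math.log(1 - p_exceed)

print(round(return_period(0.10, 50)))  # 475: 10 % in 50 years, the GSHAP design level
print(round(return_period(0.50, 50)))  # 72: 50 % in 50 years
# exceedance probability in 50 years implied by a 100-year return period:
print(round(100 * (1 - math.exp(-50 / 100))))  # 39 (%), often loosely rounded to 50 %
```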

The database of GSHAP is accessible to users [26]. A seismic hazard map for Ethiopia prepared by the author using the appropriate data is given in Fig. 12, in which five distinct seismic regions are identified with different ranges of PGA values as shown in the legend. Note that the ratio of the PGA to the gravitational acceleration, g, corresponds to α_0, the bedrock acceleration ratio in EBCS 8.




Figure 12 The seismic hazard map of Ethiopia
based on the GSHAP data for a return
period of 475 years

Comparison of Fig. 11 with Fig. 12 shows that not only are corresponding seismic regions assigned much higher values of PGA in the GSHAP map, but the entire size and extent of the individual seismic zones have also changed. According to the new map, the most seismic area of the country is concentrated in and around the Afar region, characterized by a PGA of 0.16g to 0.24g. This alone entails an increase in seismic force demand of 60 to 140 % in this region without including site effects. The capital, Addis Ababa, belongs to the second most seismic zone, with PGA in the range of 0.1g to 0.16g. This again implies an increase of 100 to 220 % in seismic hazard level, with an average increase of 160 %. Several rapidly growing towns including Mekele, Dese, Debre Berhan, Ziway, Hawasa, Arbaminch and Dire Dawa belong to this seismic zone, while Semera, the current capital of the Afar Region, is in the heart of the most seismic zone.

Returning to the spectral comparison, the combined influence of the new site classification system and the new seismic hazard definition is studied next. Considering 0.1g as the lowest-estimate PGA of the region to which Addis Ababa belongs, Eq. 7 yields a corresponding short-period spectral acceleration, S_S, of 0.45g. The one-second spectral acceleration, S_1, can be interpolated as 0.18g.

With these values inserted in Eq. 4, the design spectral values S_DS and S_D1 are obtained and the transition periods easily computed as before. These quantities are substituted in Eq. 6 and the resulting expressions plotted for the different site soils. These are presented in Fig. 13 together with the EBCS 8 spectra for a PGA of 0.05g specified for Zone 2, including Addis Ababa.
Figure 13 Comparison of EBCS 8 spectra with NEHRP spectra adapted to GSHAP zoning of Ethiopia: spectra for the second most seismic region (including Addis Ababa)

The plots show a very significant difference
between the two sets of design spectra. Design
base shear computed in accordance with EBCS 8
spectra fulfills only a fraction of the base shear
demanded by NEHRP requirements, in some cases
being as low as 24 %. All ranges of buildings on
any soil formation are affected by the inadequate
provisions of EBCS 8. Similar comparisons made
with the European and South African spectra
confirm these discrepancies.

CONCLUSIONS AND RECOMMENDATIONS

Recent changes in the definition of design ground
motions have been presented. Results of empirical
site-effect studies together with basic analytical
evidences on site response are provided.
Differences in results of empirical studies on recent
instrumental records against results from earlier
studies are highlighted. Changes introduced in
recent editions of international codes as a result of
such evidences are presented.

Comparisons of relevant provisions of EBCS 8,
1995 with those in contemporary American,
European and South African codes demonstrate
that seismic loads of most buildings designed in
accordance with EBCS 8 are significantly
underestimated. This is especially the case when
the site soil overlying the bedrock is medium stiff to soft and is relatively thick. Most vulnerable
buildings are those with fundamental periods up to
around 1 second that encompass most commonly
constructed buildings.

The two main culprits in EBCS 8 for these pitfalls
are the rather old and inadequate provisions for site
amplification effects and the 100-year return period
of the design-level earthquake.

The outcomes of the study strongly suggest that there is an urgent need to revise EBCS 8: 1995 with the objective of accounting for the above two major issues, among others. The post-Loma-Prieta studies on site effects provided sufficient evidence supporting the use of higher site amplification factors in all period ranges. This has already been addressed in contemporary major seismic codes worldwide, including in Africa. EBCS 8 should follow suit, especially with the current construction boom and the relaxed quality control in sight.

Furthermore, it is proposed that a new nation-wide seismic-hazard study be conducted based on an updated earthquake catalogue, employing state-of-the-art hazard analysis methods and appropriate attenuation relationships. The GSHAP study results can serve as a good benchmark for this purpose.

It is also strongly recommended that the rather risky 100-year return period currently in use be critically revisited in consultation with policy makers, property owners, financiers, insurers and other stakeholders.

ACKNOWLEDGMENT

This study was inspired by the significant recent
developments in the discipline and by
encouragements from my colleagues Dr. Samuel
Kinde (San Diego State University), Mr Samson
Engida (formerly of CALTRANS) and Dr. Atalay
Ayele (Addis Ababa University) with an
anticipated effect to initiate the revision of EBCS 8.
The Author is thankful to all of them.

REFERENCES
[1] Ministry of Works and Urban
Development, Design of Structures for
Earthquake Resistance, Ethiopian
Building Code Standard (EBCS 8),
Addis Ababa, 1995.

[2] Building Seismic Safety Council
(BSSC), 1994 Edition NEHRP
Recommended Provisions for Seismic
Regulations for New Buildings, FEMA
222A and 223A (Provisions and
Commentary), Washington DC, 1995.

[3] Building Seismic Safety Council
(BSSC), 1997 Edition NEHRP
Recommended Provisions for Seismic
Regulations for New Buildings and
Other Structures, FEMA 302 and 303
(Provisions and Commentary),
Washington DC, 1998.

[4] Building Seismic Safety Council
(BSSC), 2003 Edition NEHRP
Recommended Provisions for Seismic
Regulations for New Buildings and
Other Structures, FEMA 450
(Provisions and Commentary),
Washington DC, 2004.

[5] Ghosh, S., Trends in the Seismic
Design Provisions of US Building
Codes, PCI Journal, Sept.-Oct. 2001,
pp. 98-102.

[6] Ghosh, S., Update on the NEHRP
Provisions: The Resource Document for
Seismic Design, PCI Journal, May-
June, 2004, pp. 96-102.

[7] European Committee for
Standardization, Eurocode 8, Design
Provisions for Earthquake Resistance
for Structures (ENV 1998), Brussels,
May 1994.

[8] European Committee for
Standardization, Eurocode 8, Design of
Structures for Earthquake Resistance
(EN 1998-1: 2004), Brussels, 2004.

[9] Rey, J., Faccioli, E. and Bommer, J.,
Derivation of Design Soil Coefficients
(S) and Response Spectral Shapes for
Eurocode 8 Using the European Strong-
Motion Database, Journal of
Seismology, Vol. 6, 2002, pp. 547-555.

[10] South African National Standard, SANS
10160-1, Basis of Structural Design
and actions for Buildings and Industrial
structures, SABS Standards Division,
Pretoria, 2010.
[11] Wium, J., Background to SANS 10160 (2009): Part 4 Seismic Loading,
Journal of the South African Institution
of Civil Engineering, Vol. 52, No. 1,
2010, pp. 20-27.

[12] Seed, H., Ugas, C. and Lysmer, J., Site
Dependent Spectra for Earthquake-
Resistant Design, Bulletin of the
Seismological Society of America, Vol.
66, No. 1, 1976, pp. 221-244.

[13] Mohraz, B., A study of Earthquake
Response Spectra for Different
Geological Conditions, Bulletin of
Seismological Society of America, Vol.
66, No. 3, 1976, pp. 915-935.

[14] Idriss, I.M., "Response of soft soil sites
during earthquakes", Proceedings of
the Symposium to Honor Professor H.
B. Seed, Berkeley, May, 1990, pp. 273-
289.

[15] Idriss, I.M., "Earthquake ground
motions at soft soil sites", Proceedings
of the Second International Conference
on Recent Advances in Geotechnical
Earthquake Engineering and Soil
Dynamics, St. Louis, Missouri, Vol. III,
1991. pp. 2265-2273.

[16] Dobry, R., Borcherdt, R., Crouse, B.,
Idriss, I., Joyner, W., Martin G., Power,
M., Rinne, E. and Seed, R., New Site
Coefficients and Site Classification
System Used in Recent Building Seismic
Code Provisions, Earthquake Spectra,
Vol. 16, No. 1, February 2000, pp. 41-67.

[17] Dobry, R. and Susumu, I., Recent
Development in The Understanding of
Earthquake Site Response and
Associated Seismic Code
Implementation, In Proc GeoEng2000,
An International Conference on
Geotechnical & Geological
Engineering: Melbourne, Australia,
2000, pp. 186-129.

[18] Borcherdt, R., Estimates of Site-
Dependent Spectra for Design
(Methodology and Justification),
Earthquake Spectra, Vol. 10, No. 4,
1994, pp. 617-653.

[19] Borcherdt, R. and Fumal, T., Empirical
Evidence from the Northridge
Earthquake for Site-Specific
Amplification Factors Used in US
Building Codes in Proc. 12 World
Conference on Earthquake Engineering,
Auckland, NZ, 2000, pp. 1-6.

[20] Borcherdt, R., Empirical Evidence for
site Coefficients in Building Code
Provisions, Earthquake Spectra, Vol.
18, No. 2, 2002, pp. 189-217.

[21] Crouse, C. and McGuire, J., Site
Response Studies for the Purpose of
Revising NEHRP Seismic Provisions,
Earthquake Spectra, Vol. 12, No. 2,
2002, pp. 407-439.

[22] Rodriguez-Marek, A., Bray, J. and
Abrahamson, N., Characterization of
Site Response General Categories,
PEER Report 1999/03, Pacific
Earthquake Engineering Research
Center, Berkeley, California, 1999.

[23] Stewart, J., Liu, A. and Choi, Y.,
"Amplification factors for spectral
acceleration in tectonically active
regions" Bull. Seism. Soc. Am., 93 (1),
2003, pp. 332-352.

[24] Roesset, J., Soil Amplification in
Earthquakes, in Numerical Methods in
Geotechnical Engineering, ed. pp. 639-
682, McGraw Hill, New-York, 1977.

[25] Worku, A., Comparison of Seismic
Provisions of EBCS 8 and Current
Major Building Codes Pertinent to the
Equivalent Static Force analysis, Zede,
Journal of the Ethiopian Engineers and
Architects, Vol. 18, 2001, pp. 11-25.

[26] http://www.seismo.ethz.ch/static/gshap/
Gshap98-stc.html

*E-mail: alemh29@gmail.com
Journal of EEA, Vol. 28, 2011

PERFORMANCE ANALYSIS OF CHAOTIC ENCRYPTION USING A SHARED IMAGE AS A KEY

Alem Haddush Fitwi* and Sayed Nouh
Department of Electrical and Computer Engineering
Addis Ababa Institute of Technology, Addis Ababa University

ABSTRACT

Most of the secret key encryption algorithms in use
today are designed based on either the feistel
structure or the substitution-permutation structure.
This paper focuses on data encryption technique
using multi-scroll chaotic natures and a publicly
shared image as a key.

A key is generated from the shared image using a
full period pseudo random multiplicative LCG.
Then, multi-scroll chaotic attractors are generated
using a hysteresis switched, second order linear
system. The bits of the image of the chaotic
attractors are mixed with a plaintext to obtain a
ciphertext. The plaintext can be recovered from the
ciphertext during the deciphering process only by
mixing the cipher with a chaos generated using the
same secret key. As validated by a functional, NIST
randomness, and Monte Carlo simulation tests, the
cipher is very much diffused and not prone to
statistical or selected cipher attacks.

In addition, the performance is measured and
analyzed using such metrics as encryption time,
encryption throughput, power consumption and
compared with such existing encryption algorithms
as AES and RSA. Then, the performance analysis
and simulation results verify that the chaotic based
data encryption algorithm is valid.


Key Words: Secret key encryption, shared image,
hysteresis switched second order system,
multiplicative LCG, chaotic attractors,
randomness.

INTRODUCTION

At present when the Internet provides essential
communication for tens of millions of people and is
being increasingly used as a tool for commerce,
security becomes a tremendously important issue to
deal with. There are many aspects to security and
many applications, ranging from secure commerce
and payments to private communications and
protecting passwords. The fast expansion of
computer connectivity necessitates protecting data
and messages from unauthorized tampering or
reading. Even the US courts have ruled that there
exists no legal expectation of privacy for email. It
is thus up to the user to ensure that communications
which are expected to remain private actually do
so. One of the techniques for ensuring privacy of
files and communications is Cryptography [1].

In general, there are three types of cryptographic
schemes: secret key (or symmetric) cryptography,
public-key (or asymmetric) cryptography, and hash
functions. In all cases, the initial unencrypted data
is referred to as plaintext. It is encrypted into
cipher-text, which will in turn be decrypted into
usable plaintext [1-3].

The paper is organized as follows: Firstly, related
works and progresses in the areas of cryptography
and chaos generation and applications are
examined. This is followed by the design, analysis
and testing of the chaotic encryption algorithm.
Performance measurements of the design and the
corresponding results are then presented. Finally
the conclusions that are drawn from the
investigation are given.

RELATED WORKS

Pertinent works and progresses in the areas of
cryptography and chaos are surveyed as follows:
Data Encryption Standard (DES) is a feistel
structure, block cipher that was selected by the
National Bureau of Standards as an official Federal
Information Processing Standard (FIPS) for the
United States in 1976 and which had subsequently
enjoyed widespread use internationally. DES is
now considered to be insecure for many
applications chiefly due to the 56-bit key size being
too small. In January, 1999, Distributed.net and
the Electronic Frontier Foundation collaborated to
publicly break a DES key in 22 hours and 15
minutes. Consequently, DES has been
withdrawn as a standard by the National
Institute of Standards and Technology and was
finally superseded by the Advanced Encryption
Standard (AES) on 26 May 2002 [1, 4 - 9].

Advanced Encryption Standard (AES) is an
encryption standard adopted by the US
Government. It was announced by National
Institute of Standards and Technology (NIST) as
U.S. FIPS PUB 197 (FIPS 197) on November 26,
2001 after a 5-year standardization process. The
AES ciphers have been analyzed extensively and
are now used worldwide, as was the case with its
predecessor, DES. Until May 2009, the only
successful published attacks against the full AES
were side-channel attacks on some specific
implementations. The input and output for the AES
algorithm each consist of sequences of 128 bits.
The Cipher Key for the AES algorithm is a
sequence of 128, 192 or 256 bits. Other input,
output and Cipher Key lengths are not permitted by
this standard [1, 4, 10, 11].

RSA (which stands for Rivest, Shamir and
Adleman who first publicly described it) is an
algorithm for public-key cryptography. It is
believed to be secure given sufficiently long keys
and the use of up-to-date implementations. As of
2010, the largest (known) number factored by a
general-purpose factoring algorithm was 768
bits long, using a state-of-the-art distributed
implementation. RSA keys are typically 1024 to 2048 bits long. Some experts believe that 1024-bit keys may become breakable in the near term (though this is disputed); few see any way that 4096-bit keys could be broken in the foreseeable future. Therefore, it is generally presumed that RSA is secure if n, called the modulus, which is the product of two large random prime numbers, is sufficiently large. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer using software already freely available. As the key size increases, encryption becomes computationally more expensive [12, 13].

Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. An ECC with a key length greater than 112 bits is said to be secure, but slow when used for bulky data encryption. As the key size increases, encryption using ECC becomes computationally more expensive [12-14].

"Chaos" means "a state of disorder", but the adjective "chaotic" is defined more precisely in chaos theory. For a dynamical system to be classified as chaotic, it must be sensitive to initial conditions and topologically mixing. Over the last two decades, chaotic oscillators have been found to be useful, with great potential, in many technological disciplines such as information and computer sciences, biomedical engineering, power systems protection, encryption and communications. Recently, there has been increasing interest in exploiting chaotic dynamics for real-world engineering applications, in which much attention has been focused on effectively generating chaos from simple systems by using simple controllers. A number of techniques developed for generating chaotic attractors, and their applications, are surveyed in [15-19].

The motivation to design and evaluate a chaotic-based encryption algorithm stems from the fact that cryptographic algorithms play a central role in information security systems. In recent years, as the importance and value of data exchanged over the Internet and other media have been increasing, there has been a search for the best solution to offer the necessary protection against attacks by data thieves. On the other hand, cryptographic algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. As a consequence, there has been great interest in designing cryptographic algorithms which are secure (or reliable), fast, efficient and with no known method of attack.

DESIGN, ANALYSIS, AND TEST OF THE
CHAOTIC ENCRYPTION ALGORITHM


Design overview

In the abstract, the design of a chaotic based
crypto-system comprises five major tasks as
delineated in Fig. 1. The tasks include image
processing, key generation, generation of chaotic
attractors, enciphering process, and deciphering
process. In addition, the design is tested using a
sample plaintext to verify if it can function as
designed and required, and it is validated using
statistical randomness and Monte Carlo simulation
tests. Eventually, the type of techniques used to
manage the secret key of the designed chaotic
crypto-system, and to provide a digital finger print
of the shared image to check its integrity are
presented.

Performance Analysis of Chaotic Encryption using a Shared Image as a Key


Journal of EEA, Vol. 28, 2011 19



Figure 1 Chaotic crypto-system

Shared Image

In this crypto-system, the same image, in lieu of the secret key itself, is shared amongst all communicating (sending and receiving) parties, and the secret key is extracted from it. The image is shared publicly, just like a public key in public-key encryption; only the information required to extract the key from the image is communicated secretly.



Figure 2 Grayscale image.

Keys having lengths less than the image size (width*height) are extracted from this shared image. The shared image used in this paper, from which a secret key is extracted, is the one portrayed in Fig. 2. It is also possible to use any other image which is not completely black or white as a key. The minimum key length allowed is 128 bits, as this is the minimum secure key length used in today's popular secret key encryption algorithms. Above that, the key can be of any length as long as it is less than the size of the shared image.

The shared image is then processed to make it
convenient to extract the secret key from its pixel
values. The image processing here comprises such
processes as image reading, converting to
grayscale, and grabbing the pixel values of the
grayscale image. If the image is RGB, it is first
converted to a grayscale, as portrayed in Fig. 2,
using the method convertTogray() from which
pixel values, ranging from 0 to 255, are grabbed
into a two dimensional array. Then, such important
attributes as width (w), height (h), and pixel values
(image Pixels) are accessed from the grayscale
image in Fig. 2 as follows:

    w = image.getWidth()                               (1)
    h = image.getHeight()                              (2)
    imagePixel[w][h] = readGrayImagePixel(grayImage)   (3)

Key Generation

Any secret key of length less than the size of the
shared image can be extracted from the two
dimensional pixel values of the shared image stored
in the 2D array, ImagePixel[w][h], in Eq. 3.

    key.length <= w*h        (4)

where the values of w and h are obtained in Eqs. 1 and 2, respectively.

In this paper, the key is extracted from the 2D pixel
values of the grayscale image using a full period
pseudo random generator called linear congruential
generator, LCG, constructed using defined values
in GF (m) with a period of m-1. Then, the extracted
key, keyExtract, is converted to binary values, and
finally substituted using a seven-bit input and five-
bit output S-Boxes to obtain the final enciphering
and deciphering key, keyFinal.

The pseudo-random generator used to extract a key from the grayscale image is given in Eq. 5, where 69,621 is the multiplier and 2^31 - 1 is the modulus. It is called a multiplicative LCG.

    X_n = (69,621 X_{n-1}) mod (2^31 - 1)        (5)

The random numbers generated using the above
algorithm [20] are used as indices of the 2D array
of pixels, ImagePixel[][], to extract a key from the 2D pixel values of the grayscale image as follows, where X_0 and X_1 are seed values.

    For i = 1 : key.length do
        idx1 = ((69,621 X_0) mod (2^31 - 1)) mod w;
        idx2 = ((69,621 X_1) mod (2^31 - 1)) mod h;
        keyExtract[i] = ImgPixel[idx1][idx2];
    end
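A minimal runnable sketch of this extraction step, with a small synthetic pixel array standing in for the grayscale ImagePixel[w][h] of Eq. 3 (the seed values and array contents here are illustrative assumptions, not the paper's):

```python
M = 2 ** 31 - 1  # modulus of the multiplicative LCG (Eq. 5)
A = 69621        # multiplier

def lcg(seed):
    """Multiplicative LCG: X_n = (69,621 * X_{n-1}) mod (2^31 - 1)."""
    x = seed
    while True:
        x = (A * x) % M
        yield x

def extract_key(pixels, w, h, length, seed_x, seed_y):
    """Grab `length` pixel values at pseudo-randomly chosen (row, col) indices."""
    gx, gy = lcg(seed_x), lcg(seed_y)
    return [pixels[next(gx) % w][next(gy) % h] for _ in range(length)]

# toy 4x4 grayscale "image" with pixel values in 0..255
pix = [[(16 * i + 7 * j) % 256 for j in range(4)] for i in range(4)]
key = extract_key(pix, 4, 4, 16, seed_x=12345, seed_y=54321)
print(len(key), all(0 <= v <= 255 for v in key))  # 16 True
```

Because the generator is deterministic, both parties reproduce the same key from the shared image once the seeds are agreed secretly.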

Then, keyBinary[] is divided into blocks of 49 bits each; in turn, each block is divided into seven 7-bit pieces before being processed by the substitution boxes. Each of the seven S-boxes replaces its seven input bits with five output bits according to a non-linear transformation, provided in the form of a lookup table. The S-boxes strengthen the security of the key; i.e., substituted bits are used instead of the actual bits randomly extracted from the shared image, thereby increasing the effort required of cryptanalysts who try to infer the key using brute-force analysis or selected-cipher attacks.

The S-Boxes in this algorithm serve more or less the same purpose as the S-Boxes used in DES and AES; they are, however, different from those used in DES and AES. Here seven S-Boxes are used. Each of them is constructed using a defined transformation of values in GF(2^5), comprising 4 unique rows and 32 columns. Each row comprises 32 elements from 0 to 31 in a thoroughly random sequence, and the rows are numbered from 0 through 3.

The input bits are used as addresses in the tables of the S-boxes. Each group of seven bits gives an address in a different S-box. The first and last bits of the 7-bit input indicate the row number, and the other 5 bits give the column number. Located at that address is a 5-bit number, which replaces the original 7 bits. The net result is that the seven groups of 7 bits are transformed by the seven S-Boxes into seven groups of 5 bits, for 35 bits in total, to obtain keyFinal[] = S-Boxes(keyExtract).
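The 7-bit-to-5-bit addressing described above can be sketched as follows. The S-box table used here is a hypothetical stand-in (a simple cyclic shift per row), since the paper's actual GF(2^5) tables are not reproduced:

```python
def sbox_lookup(bits7, sbox):
    """Replace a 7-bit string by a 5-bit one: the first and last bits select
    the row (0..3), the middle five bits select the column (0..31)."""
    row = int(bits7[0] + bits7[6], 2)
    col = int(bits7[1:6], 2)
    return format(sbox[row][col], "05b")

# hypothetical S-box: 4 rows, each a permutation of 0..31 (here a cyclic shift)
toy_sbox = [[(c + r) % 32 for c in range(32)] for r in range(4)]
print(sbox_lookup("1000101", toy_sbox))  # 00101 (row 3, column 2 -> value 5)
```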

Generation of Chaotic Attractors

In this paper, the required chaotic attractors are
generated using a hysteresis switched second order
linear system. The generation process comprises
calculation of initial conditions from keys
generated earlier, and solving the second order
linear system using the concept of second order
homogeneous differential equations.
Hysteresis Switched Second Order Linear
System

There are many techniques for generating chaos; in this paper a system called a hysteresis switched second order linear system is used. It is a chaotic oscillator triggered only by initial conditions. It has no inputs except the initial conditions, X_0 and Y_0.

Then, once triggered by the initial values, it keeps on oscillating and generating chaotic attractors for a time t, and moves from one scroll to another depending on the value of n (the number of scrolls) provided, due to the feedback hysteresis series as depicted in Fig. 3.


Figure 3 Chaotic oscillator [15].

The mathematical description of the hysteresis switched system in Fig. 3 is given by:

    dx/dt = y
    dy/dt = -x + 2αy + H(x, n)        (6)

where X_0 and Y_0 are the initial conditions, α is a positive constant, x and y are state variables, H(x, n) is a hysteresis series described in Eqs. 7 and 8, and n is the number of scrolls.

    H(x, n) = Σ_{i=1}^{n} b_i(x)        (7)

and

    b_i(x) = 1 for x > i - 1
    b_i(x) = 0 for x < i        (8)
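Because the two conditions in Eq. 8 overlap on the band i - 1 < x < i, each b_i behaves as a latch: inside the band it keeps its previous value, which is what produces the hysteresis. One plausible reading of Eqs. 7 and 8 in code (an interpretation for illustration, not the authors' implementation):

```python
def b_i(x, i, prev):
    """Hysteresis element of Eq. 8: switches to 1 once x rises to i, back to 0
    once x falls to i - 1; inside the overlap band the previous state is kept."""
    if x >= i:
        return 1
    if x <= i - 1:
        return 0
    return prev  # i - 1 < x < i: retain the last value (hysteresis)

def H(x, n, prev_states):
    """Hysteresis series of Eq. 7: sum of the n latched elements b_1..b_n."""
    states = [b_i(x, i, prev_states[i - 1]) for i in range(1, n + 1)]
    return sum(states), states

b = 0
b = b_i(0.5, 1, b)   # rising through the band: stays 0
b = b_i(1.2, 1, b)   # past x = i: latches to 1
b = b_i(0.5, 1, b)   # falling back into the band: stays 1
print(b)  # 1
```

Sweeping x up and down through the band thus yields different values of H for the same x, which is what pushes the trajectory from one scroll to the next.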

Solution of Second Order Linear System

Differentiating both sides of the first equation of system 6 produces the following:

    d²x/dt² = dy/dt        (9)

Then dy/dt in Eq. 9 is substituted by the equivalent expression given in the second equation of system 6. Hence, Eq. 9 becomes:

    d²x/dt² = dy/dt = -x + 2α(dx/dt) + H(x, n)        (10)

Rearranging Eq. 10 gives a homogeneous equation of the form

    a(d²x/dt²) + b(dx/dt) + cx = f(x)        (11)

where f(x) = 0. Eq. 11 is solved by letting x = e^{mt}, which gives

    e^{mt}(am² + bm + c) = 0        (12)

Then, if e^{mt} is to be a solution,

    am² + bm + c = 0
    m = (-b ± √(b² - 4ac)) / (2a)        (13)

Using the values of m_1 and m_2 obtained from Eq. 13, the solutions are x_1 = e^{m_1 t} and x_2 = e^{m_2 t}; combining both solutions gives the general solution

    x = c_1 x_1 + c_2 x_2 = c_1 e^{m_1 t} + c_2 e^{m_2 t}        (14)

If the system is to generate chaos, its solution must be complex; i.e., for a = 1, b = -2α and c = 1 (obtained from Eq. 10), m = α ± iβ. Then,

    m = α ± √(α² - 1) = α ± iβ        (15)

Hence, β = √(1 - α²), obtained by solving Eq. 13.

Euler's formulae are

    e^{iθ} = cos θ + i sin θ
    e^{-iθ} = cos θ - i sin θ        (16)

If m_1 = α + iβ and m_2 = α - iβ, then using Euler's formulae in Eq. 16, the general solution in Eq. 14 becomes:

    x(t) = e^{αt}[A cos βt + B sin βt]        (17)

The solution for the state variable y is therefore obtained as follows:

    y(t) = dx/dt = d(e^{αt}[A cos βt + B sin βt])/dt        (18)

Solving for x and y in Eq. 18 with the initial conditions X_0 and Y_0 produces

    x(t) = e^{αt}[X_0 cos βt + ((Y_0 - αX_0)/β) sin βt]
    y(t) = e^{αt}[Y_0 cos βt + ((αY_0 - X_0)/β) sin βt]        (19)
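As a quick consistency check, the closed-form trajectory of Eq. 19 can be verified numerically against system 6 within a single scroll, taking H(x, n) = 0 for simplicity. The chosen evaluation time and initial conditions below are arbitrary; α = 0.0049 is the value quoted for Fig. 4:

```python
import math

ALPHA = 0.0049                    # the positive constant of system 6
BETA = math.sqrt(1 - ALPHA ** 2)  # beta from Eq. 15

def state(t, x0, y0, a=ALPHA, b=BETA):
    """x(t), y(t) of Eq. 19 for motion within one scroll (H = 0)."""
    e = math.exp(a * t)
    x = e * (x0 * math.cos(b * t) + (y0 - a * x0) / b * math.sin(b * t))
    y = e * (y0 * math.cos(b * t) + (a * y0 - x0) / b * math.sin(b * t))
    return x, y

# Central differences should reproduce dx/dt = y and dy/dt = -x + 2*alpha*y.
t, h = 3.0, 1e-6
x, y = state(t, 0.4, 0.0)
xm, ym = state(t - h, 0.4, 0.0)
xp, yp = state(t + h, 0.4, 0.0)
print(abs((xp - xm) / (2 * h) - y) < 1e-6,
      abs((yp - ym) / (2 * h) - (-x + 2 * ALPHA * y)) < 1e-6)  # True True
```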

Calculation of Initial Conditions

In this paper, the initial conditions of system 6 are calculated using the bit values of the keyFinal array, which is the output of the S-Boxes, as follows.

    X_0 = var + keyFinalX
    Y_0 = var + keyFinalY        (20)

where var is a user-defined value, kept secret as part of the key, and keyFinalX and keyFinalY are two different keys generated using different seed values for the LCG.



Figure 4 Five-scroll chaotic attractors


Figure 4 portrays the 5-scroll chaotic attractors generated using the solutions of system 6 given in Eq. 19. The chaos is generated using a key of length 136 bits, α = 0.0049, and β² = 1 - α² = 0.999976. The chaos is sensitive to the α values, and the secure range of α values is determined using an algorithm that makes use of the monobit test, described later.

Enciphering Process

During the enciphering process, the plaintext is
mixed with the generated chaos using a logical
bitwise XOR operator as depicted in Fig 5.



Figure 5 Enciphering process

During chaos processing, three equal sections, namely upper, middle and lower, are cropped from the overall balanced chaos shown in Fig. 4, converted to grayscale as depicted in Figs. 6 (a), (b) and (c), and XORed together to further increase the probability of a balance of 1s and 0s by avoiding localized imbalances. Then, the combination of the three crops is resized as per the size of the input plaintext, as portrayed in Fig. 7. Through this process, all the security properties, which include one-way encryption, semantic security, and indistinguishability, are achieved.


(a) Upper crop

(b) Middle crop


(c) Lower crop

Performance Analysis of Chaotic Encryption using a Shared Image as a Key





(d) Mixed=a XOR b XOR c

Figure 6 Chaos cropping and mixing.


Figure 7 Resized balanced chaos.
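The crop-and-XOR balancing step can be sketched as follows; random byte strings stand in for the three grayscale crops of Figs. 6 (a)–(c), since the actual attractor image is not reproduced here.

```python
import os

# Stand-ins for the upper, middle and lower grayscale crops of Fig. 6.
upper, middle, lower = os.urandom(1024), os.urandom(1024), os.urandom(1024)

# XOR the three crops together (Fig. 6 (d)) to smooth out any
# localized imbalance in the proportions of 1s and 0s.
mixed = bytes(a ^ b ^ c for a, b, c in zip(upper, middle, lower))

def ones_fraction(data):
    """Fraction of 1-bits in a byte string: a crude balance check."""
    return sum(bin(byte).count("1") for byte in data) / (8 * len(data))
```

A well-balanced keystream keeps ones_fraction(mixed) close to 0.5.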

Deciphering Process

The process of deciphering in this chaotic
algorithm is essentially the same as the enciphering
process. The rule is as follows: the same key used
during the enciphering process is used in the
deciphering process, but the cipher text, in lieu of
the plaintext, is used as input to the chaotic
algorithm. The ciphertext, which is the output of
the enciphering process, is mixed with the
appropriately sized chaos generated using the same
key and the XOR operator.
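Because enciphering and deciphering are the same XOR operation, a round trip can be sketched as below; the SHA-256 counter-mode keystream is only an illustrative stand-in for the chaos-derived keystream, and the key bytes are hypothetical.

```python
import hashlib

def keystream(key, length):
    """Stand-in keystream (SHA-256 in counter mode); the scheme in the
    paper derives this stream from the chaotic sequence instead."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def crypt(data, key):
    """Encipher and decipher are the same bitwise-XOR mixing (Fig. 5)."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

plaintext = b"How do we protect our most valuable assets?"
key = b"shared-image-derived-key"       # hypothetical key material
ciphertext = crypt(plaintext, key)
recovered = crypt(ciphertext, key)      # same key, ciphertext as input
```

Note that the ciphertext is exactly as long as the plaintext, which is the size property discussed under "Cipher Size" below.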

Design Test

To verify that the designed chaotic algorithm
functions as required, a sample plaintext was
enciphered and then deciphered. A plaintext is
browsed using the GUI depicted in Fig. 8 and
enciphered into an unintelligible form. Eventually,
the ciphertext is deciphered to check whether the
chaotic deciphering process can fully recover the
clear text (plaintext). As copied from the text areas
of the GUI in Fig. 8, portions of the plaintext,
ciphertext, and decrypted text are displayed below,
verifying that the chaotic algorithm works as
designed and required, i.e., the plaintext and the
recovered (decrypted) text are the same.

Plaintext=How do we protect our most valuable
assets?

Ciphertext=:"u1:u"0u%':!06!u:'u8:&!u#494790u4&
&0!&ju;

Deciphered text=How do we protect our most
valuable assets?

What is more, a number of NIST statistical tests
and a Monte Carlo simulation test were performed
on the chaotic sequence to compare and evaluate it
against a truly random sequence, as depicted in
Table 1. The results validate the algorithm.




Figure 8 Chaotic crypto system



Table 1: Summary of test results (decision rule: pass if P-value ≥ 0.01)

NIST Test           Objective                  P-value
Monobit             Proportion of 1s and 0s    0.9491
Block Frequency     Proportion of 1s and 0s    0.9998
Run                 Oscillation b/n 1 and 0    0.9542
Spectral            Reveal periodicities       0.0695
Linear Complexity   Length of LFSR*            0.8030

*LFSR = Linear Feedback Shift Register
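The monobit (frequency) test used in Table 1 and for screening the α values can be sketched as follows, after the NIST SP 800-22 formulation:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: maps bits to +/-1,
    sums them, and derives a p-value via the complementary error function."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly balanced stream passes; a constant stream fails decisively.
balanced = [0, 1] * 5000
biased = [1] * 10000
p_balanced = monobit_p_value(balanced)   # erfc(0) = 1.0
p_biased = monobit_p_value(biased)       # far below the 0.01 threshold
```

A sequence is judged random with respect to this test when the p-value is at least 0.01.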

Key Exchange

In general, in secret-key encryption, as the
number of communicating parties increases, the
exchange of the secret keys becomes insecure:
n users who want to communicate in pairs need
n(n − 1)/2 keys [2]. The number of keys needed
grows with the square of the number of users, so a
property of symmetric encryption systems is that
they require a means of key distribution.

In this paper, all the information required for the
extraction of the key from the publicly shared
image, together with the other constants, is
encrypted using public-key RSA and then sent to
the recipient. Let INFO = Seeds + n + var + α +
HKey + LenMul; then, it is sent as follows:

E(K_PUB-R, E(K_PRIV-S, INFO))

where E stands for RSA encryption, K_PUB is the
public key, K_PRIV is the private key, R stands for
receiver, S stands for sender, n is the number of
scrolls, and LenMul is the number of digits
contained in the multiplier of the PRNG used for
key extraction. Besides, an HMAC that serves two
purposes, namely shared-image integrity checking
and origin authentication, is generated using keyed
SHA-1 as HMAC = H(HKey, Shared image).
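The integrity/authentication tag can be sketched with Python's standard hmac module; the key and image bytes below are hypothetical placeholders for HKey and the shared image.

```python
import hashlib
import hmac

# Hypothetical stand-ins for the secret HKey and the shared image bytes.
h_key = b"secret-hmac-key"
shared_image = b"\x89PNG...image bytes..."

# HMAC = H(HKey, Shared image), here keyed SHA-1 as in the paper.
tag = hmac.new(h_key, shared_image, hashlib.sha1).hexdigest()

def verify(image, key, expected_tag):
    """Receiver side: recompute the tag and compare in constant time."""
    candidate = hmac.new(key, image, hashlib.sha1).hexdigest()
    return hmac.compare_digest(candidate, expected_tag)
```

Any change to the image (or use of the wrong key) makes the recomputed tag mismatch, which covers both the integrity check and origin authentication.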

PERFORMANCE ANALYSIS

In this paper, relevant metrics are identified, the
performance of the chaotic algorithm is measured,
and eventually the chaotic encryption algorithm is
compared with two existing algorithms: one
public-key (RSA) and one secret-key (AES).

Metrics and Performance

In this paper, five pertinent metrics are used to
evaluate the performance of the chaotic encryption:
encryption/decryption time, power consumption,
encryption throughput, CPU time, and cipher size.
The experiments were conducted and performance
results collected using a laptop with an Intel(R)
Pentium(R) Dual CPU T2370 @ 1.73 GHz and
1 GB of RAM.

Encryption Time

In most encryption algorithms, the encryption time
depends on the computational complexity of the
algorithm, the key length, and the size of the
plaintext to be encrypted. In this paper, encryption
times for various plaintext sizes and key lengths are
collected and analyzed as follows. The encryption
time and text size are measured using a timer and a
bit-length reader built into the crypto system.

Effect of Key Length on the Encryption Time

Unlike other encryption algorithms, the key length
does not significantly affect the computation time
of the chaotic algorithm designed in this paper.
This is because the key is used only to calculate the
initial conditions of the system used to generate the
chaos, in the way described in Eq. 20.

Effect of Plaintext Size on the Encryption Time

Various data sizes ranging from 7.84 Kb to 500 Kb
are enciphered, and their respective encryption
times are collected.



Figure 9: Data size versus encryption time

Figure 9 depicts that the enciphering time increases
as the data size to be enciphered is increased. The
data size and the enciphering time are linearly
related.

Encryption Throughput, X_e

The encryption throughput is the number of bytes
of ciphertext completed (enciphered) during an
observation period (the enciphering time).
Mathematically,

X_e = (number of completions in bytes) / (encryption time in s)    (21)
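Eq. 21 is a simple ratio; below is a minimal sketch with an illustrative measurement (500 KB enciphered in 2.5 s — hypothetical numbers, not figures from the experiment).

```python
def encryption_throughput(cipher_bytes, encryption_time_s):
    """Eq. 21: bytes of ciphertext produced per second of enciphering time."""
    if encryption_time_s <= 0:
        raise ValueError("encryption time must be positive")
    return cipher_bytes / encryption_time_s

# e.g. 500 KB enciphered in 2.5 s (illustrative values)
throughput = encryption_throughput(500 * 1024, 2.5)  # bytes per second
```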


The Effect of Changing the plaintext size on the
Encryption Throughput

The graph in Fig. 10 shows that for small data
sizes the throughput increases with the data size.
However, once the data size gets large enough,
further increases in data size keep diminishing
the encryption throughput.

Figure 10 Encryption throughput versus data sizes

Power Consumption

Technologies such as CPUs and memory are
growing fast, and so is their need for power.
However, battery technology is improving at a
much slower rate, creating a battery gap. Because
of this, battery capacity plays a major role in the
usability of devices and algorithms [21]. Hence, it
is worthwhile to analyze the power consumption of
the designed chaotic algorithm.

For computation of the energy cost of encryption,
we use the same techniques as described in [21].
The basic cost of encryption is the product of the
total number of clock cycles taken by the
encryption and the average current drawn by each
CPU clock cycle, in units of ampere-cycles. To
obtain the total energy cost, we divide the
ampere-cycles by the clock frequency of the
processor in cycles/second, giving the energy cost
in ampere-seconds; multiplying the ampere-seconds
by the processor's operating voltage then gives the
energy cost in joules. That is, from the cycle count,
the operating voltage of the CPU, and the average
current drawn per cycle, we can calculate the
energy consumption of cryptographic functions.
The amount of energy consumed by the chaotic
algorithm C to achieve its goal (encryption or
decryption) is given by:

E = Vcc · I · T joules    (22)

where, for a given hardware, Vcc is fixed. The
encryption time, T, is the time that an encryption
algorithm takes to produce a ciphertext
from a plaintext, and I is the average current
consumed per CPU cycle.

The experiments were conducted on a laptop with a
Pentium Dual 1.73 GHz CPU, for which the
approximate average current consumed when busy
is 100 mA and the CPU voltage is Vcc = 1.25 V
(both taken from the Intel manual). The
power-consumption results for various data sizes
are then collected based on these current and
voltage ratings.
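Eq. 22 can be applied directly with the ratings quoted above (Vcc = 1.25 V, I = 100 mA); the 2 s encryption time is an illustrative value, not a measured result.

```python
def encryption_energy_joules(vcc_volts, avg_current_amps, time_seconds):
    """Eq. 22: E = Vcc * I * T, the energy cost of one encryption run."""
    return vcc_volts * avg_current_amps * time_seconds

# Ratings quoted for the test laptop: Vcc = 1.25 V, I = 100 mA.
energy = encryption_energy_joules(1.25, 0.100, 2.0)  # a hypothetical 2 s run
```

Because T is the only quantity that varies between runs on the same hardware, the energy curve necessarily has the same shape as the encryption-time curve.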

Effect of Plaintext Size on Power Consumption

Figure 11 clearly shows the variation of energy
consumption with different data sizes. It can also
be inferred from the graph that it is similar to the
data size versus encryption time graph in Fig. 9
which implies that the energy consumed during an
enciphering process is directly proportional to the
encryption time.

Figure 11 Data size versus energy consumption

CPU Time

The CPU process time is the time during which the
CPU is committed only to the particular process of
calculation. It reflects the load on the CPU: the
more CPU time used in the encryption process, the
higher the load of the CPU.

In this paper, the CPU time is calculated using the
technique described in [20] as follows:

CPU busy time, T_cpu = CPU utilization × observation period    (23)

where the observation period is equal to the
enciphering time, and the CPU utilization of the
encryption process is obtained from the task
manager.

It is found that the longer the encryption time, the
busier the CPU becomes; that is, if the time
required to encipher a certain text is longer, the
CPU load (busy time) is proportionally higher.

Cipher Size

One of Shannon's characteristics of "good" ciphers
is that the size of the enciphered text should be no
larger than that of the original message [2].

As is the case with other secret-key encryption
algorithms, the plaintext and ciphertext in this work
are found to be of the same size, fulfilling
Shannon's size characteristic of "good" ciphers.
The two merged pop-up message boxes portrayed
in Fig. 12 verify this.



Figure 12 Length measures of the clear and cipher texts

Comparison with AES and RSA

The performance of the designed algorithm is
compared with the popular in-use secret-key
encryption algorithm, AES, and with a public-key
algorithm, RSA. The data sets used in the chaotic
encryption are encrypted using both AES and RSA,
and their performance is evaluated for the same
metrics used above. While analyzing the
performance of AES and RSA, only secure key
lengths are used: 128 bits for AES and 1024 bits
for RSA.

Encryption Times

Encryption times for the same set of data sizes used
to analyze the performance of the chaotic
enciphering are collected in the way depicted in
Fig. 13, and are then graphed as

delineated in Fig. 14. It is found that the decryption
times of the RSA algorithm are higher than its
encryption times. This is due to the fact that
enciphering comprises the modular computation
(plaintext)^PubKey mod n, whereas deciphering
involves the modular computation
(ciphertext)^PrivKey mod n; the private exponent is
much larger than the public exponent, and hence
requires more computational time.

Likewise, encryption times for AES are collected
for the various data sizes enciphered. The
deciphering times for AES are more or less close
to its enciphering times.



Figure 13: RSA crypto timers


Figure 14: Encryption times of chaotic, RSA and
AES

Figure 14 shows that the chaotic encryption is
much faster than the RSA algorithm for any data
size. It also has better time performance than AES,
but only for smaller data sizes (< 125 Kb).

Encryption Throughput

Figure 15 illustrates that the throughput of the
chaotic encryption is very high for small data sizes
and keeps decreasing as the data size increases. It
has higher throughput than AES for smaller data
sizes, and is far superior to RSA for any data size.

Figure 15 Chaotic, RSA, and AES throughputs

Encryption Power Consumption

The power consumptions of the Chaotic, RSA and
AES are depicted in Fig. 16.

The figure demonstrates that the designed
algorithm consumes less power than AES for
relatively small data sizes, and performs much
better than RSA for any data size. The graphs in
this figure are similar to those in Fig. 14,
confirming that the energy consumed by an
enciphering process is proportional to the time
consumed by that process.

Figure 16 Power consumption of chaotic, RSA,
and AES algorithms


Memory Cost

In secret-key encryption, the plaintext and
ciphertext are of the same size, and so it is with the
designed algorithm. With RSA, however, the cipher
is larger than the input plaintext; as a result, an
RSA cipher occupies more memory than those of
the chaotic algorithm and AES.

Security

The security of the chaotic encryption scheme lies
in the difficulty of obtaining the exact key from
among the very large set of possible key
combinations, given the limitations in the
computational power of today's computers.

Table 2: Summary of security comparison [5]

Algorithm   Number of operations required    Example
Chaotic     2^128 (2^(k+1) − 1)              Key = 136 bits → 2^145 operations
AES         2^n                              Key = 128 bits → 2^128 operations
RSA         ≈ e^√(2·ln n·ln ln n)            n = 1024 bits → 2^90 operations

Table 2 summarizes the security comparison of
the multi-scroll chaotic, AES and RSA encryption
algorithms. The chaotic algorithm is superior to
both. It meets such security properties as one-way
encryption, semantic security, and
indistinguishability. Besides, it resists the three
models of attack, namely total break, CCA1, and
CCA2.


CONCLUSION

At present, Internet and network applications are
growing very fast, so the need to protect such
applications has increased. Encryption algorithms
play an immense role in information security
systems. This paper has presented the optimization
of multi-scroll chaotic attractors for text encryption
and its performance analysis. All essential and
collateral parameters are systematically determined
to satisfy the specific cryptographic security
requirements. The algorithm addresses at least some
of the drawbacks suffered by existing cryptographic
algorithms such as AES and RSA.
In this paper, a multi-scroll chaotic enciphering
algorithm is fully described and validated via
functional and randomness tests. Appropriate
metrics for performance measurement are
identified, and the performance of the algorithm is
measured and compared with that of existing
cryptographic algorithms, namely RSA and AES.
The test results show that the designed algorithm
works as required, i.e., the data enciphered by the
enciphering process is fully recovered by the
deciphering process. The tests and security analysis
indicate that the cipher is not prone to cipher or
statistical attacks (including total break, CCA1, and
CCA2), and that the key is secure. It meets the
three properties of security, namely one-way
encryption, semantic security, and
indistinguishability.

The performance of the chaotic encryption
algorithm is far better than that of RSA: it has a
shorter encryption time, lower power consumption,
and higher throughput. The plaintext and ciphertext
are also of the same size in the chaotic algorithm,
which is not the case in RSA; an RSA cipher is
longer than the plaintext, causing more resource
consumption and congestion in memory and
bandwidth. Compared with AES, the chaotic
algorithm performs better only for relatively small
data sizes. The chaotic encryption has another
advantage over RSA and AES in that it is
key-length independent: the key length can be
made longer without significantly affecting the
computational time. In addition, there are no
known methods of attack so far for chaotic
encryption.

REFERENCES

[1] Smart, N., Cryptography: An Introduction,
CRC press, pp.11-387, June 2010.

[2] Pfleeger, C. P. and Pfleeger, S. L., Security in
Computing, Prentice Hall, October 2006.

[3] Stallings, W., Data and computer
Communications, Prentice Hall, New Jersey,
1996.


[4] Menezes, A., van Oorschot, P. and Vanstone, S.,
Handbook of Applied Cryptography, CRC
Press, 1996.

[5] Stallings, W., Lecture Notes for use with
Cryptography and Network Security, May
2010. (Online)

[6] FIPS PUB 46-3 Federal Information
Processing Standards Publication October 25,
1999.

[7] FIPS PUB 46-2 Federal Information
Processing Standard Publication December 30,
1993.

[8] FIPS PUB 46 Federal Information Processing
Standard Publication January 15, 1977.

[9] FIPS PUB 46-1 Federal Information
Processing Standard Publication January 22,
1988.

[10] Federal Information Processing Standards
Publication 197, Announcing the Advanced
Encryption Standard (AES), November
26,2001.

[11] Rhee, M.Y., Internet Security,
Cryptographic Principles, Algorithms
and Protocols, John Wiley & Sons
2003.

[12] Gura, N., Patel, A., Wander, A.,Eberle, H.
and Shantz, S.C. Comparing Elliptic Curve
Cryptography and RSA on 8-Bit CPUs,
International Association for Cryptologic
Research, 2004.

[13] Mao, W., Modern Cryptography: Theory and
Practice, John Wiley & Sons, July 2003.

[14] Anoop,M.S., Elliptic Curve Cryptography
an Implementation Guide, 2006.(Online)

[15] Han, F., Lu, J., Yu, X., Chen,G. and Feng,
Y.,Generating Multi-Scroll Chaotic
Attractors Via a Linear Second Order
Hysteresis System, Watam Press, 2005.

[16] Lü, J., Han, F., Yu, X. and Chen, G.,
Generating 3-D multi-scroll chaotic
attractors: A hysteresis series switching
method, Elsevier, May 2004.

[17] Han, F., Hu, J., Yu, X. and Wang, Y.,
Fingerprint images encryption via multi-
scroll chaotic attractors, Elsevier, 2007.

[18] Lü, J., Murali, K., Sinha, S., Leung, H. and
Aziz-Alaoui, M. A., Generating multi-
scroll chaotic attractors by thresholding,
Elsevier, January 2008.


[19] Han, F., Yu, X. and Hu, J., A New Way of
Generating Grid-Scroll Chaos and its
Application to Biometric Authentication,
Elsevier, 2001.

[20] Jain, R., The Art of Computer Systems
Performance Analysis: Techniques for
Experimental Design, Measurement,
Simulation, and Modeling, 2004. (Online)

[21] Naik, K. and Wei, D. S. L. Software
implementation strategies for power-
conscious systems, Mobile Networks and
Applications, Vol. 6, pp. 291-305, 2001.

*E-mail: abdotuko@yahoo.com
Journal of EEA, Vol. 28, 2011

INVESTIGATION OF ADAPTIVE BEAMFORMING ALGORITHMS FOR
COGNITIVE RADIO TECHNOLOGY

Mulugeta Atlabachew, Institute of Technology Jimma University
Mohammed Abdo Tuko*, Department of Electrical & Computer Engineering
Addis Ababa Institute of Technology, Addis Ababa University

ABSTRACT

Frequency spectrum is one of the biggest natural
resources, with a significant impact on the
development of wireless communication
technologies. Utilizing this natural resource
efficiently therefore accelerates technological
advancement. The spectrum allotment strategy that
has long served the wireless communication family
is fixed spectrum allocation. However, the
increasing demand for wireless technologies has
intensified the competition for spectrum; as a
result, there is no usable frequency spectrum left
unoccupied. In spite of this spectrum scarcity,
various studies show that most spectrum bands are
not in use most of the time. The proposed solution
to this problem is cognitive radio technology.

Cognitive radio is a wireless communication
technology which adds intelligence to the existing
wireless communication scenario. As every
wireless communication requires antenna, in this
paper the feasibility of smart antenna to this
intelligence system is studied and the performance
(based on computational complexity, convergence
rate and radiation pattern characteristics) of
different adaptive beamforming algorithms are
investigated. The investigation results show that
the Sample Matrix Inversion (SMI) algorithm,
besides having the best convergence rate, also
produces the radiation pattern that best suits the
behavior of cognitive radio technology.

Key words: Cognitive Radio, smart antenna, and
Adaptive beamforming algorithms.

GENERAL BACKGROUND

Communication, in general, is the transmission of a
signal (information) from one point (the source) to
another (the destination). Communication is an
inherent behavior of all living matter; human
beings in particular have used it as a tool to change
the world in all dimensions, so the history of
communication is closely linked to the history of
living matter. Different disciplines classify types of
communication differently, but from a
communication engineering point of view it can be
broadly classified as either wired or wireless. This
paper is confined to the latter.

Because of its convenient features, wireless
communication is leading the communication
technology market. The world's ever-increasing
demand for wireless technology has motivated
both researchers and the business community to
come up with new services and ideas. However,
this motivation is restricted by the scarcity of
spectrum bands, since all spectrum bands for
wireless communication are already occupied [1].
This is so because of the fixed spectrum allocation
strategy in use. To alleviate this problem, cognitive
radio (CR) technology has been proposed [2, 3].

Cognitive radio is a wireless technology that senses
the external environment, learns from experience,
plans based on knowledge, and decides based on
reasoning. In general, it adds intelligence to the
existing wireless communication.

PROBLEM DESCRIPTION

Though cognitive radio technology has many
advantages, it has limitations related to complexity,
interference and detection [4, 5]. This work tries to
give a solution to the interference and detection
problems.

Since cognitive radio is a wireless technology, it
requires an antenna to establish the wireless link,
so it is possible to mitigate the interference
problem by using an appropriate antenna. For this
technology we propose the smart antenna (adaptive
array antenna), because beyond interference
reduction it also increases spectrum utilization
efficiency and capacity, extends the coverage area,
reduces the power requirement, and reduces the
amount of electromagnetic radiation into the
environment [6-8]. An omnidirectional antenna is
excluded because of its obvious power dissipation
and its being a source of interference to others.

The adaptive array antenna gets its smartness
from the digital signal processing incorporated
within the adaptive antenna array system. The main
purpose of the digital signal processing unit is to
make the array antenna adaptive, so that it
dynamically produces a narrow, stronger beam in
the direction of the intended user and nulls in the
directions of interferers by tracking the changing
locations of both the user and the interferers. This
process is known as beamforming.

SPECIFIC OBJECTIVES

This work focuses on the following objectives:

To study the fundamental behaviors of smart
antenna and propose it for the cognitive radio
technology.

To investigate the performances of different
adaptive beamforming algorithms such as
sample matrix inversion (SMI), least mean
square (LMS), recursive least square (RLS),
constant modulus (CM), and least square
constant modulus (LS-CM) and choose the one
that best suits the cognitive radio architecture.

COGNITIVE RADIO (CR)

Cognitive radio was first coined by Joseph Mitola
III [3], and he defined it as follows: The term
cognitive radio identifies the point at which
wireless personal digital assistants (PDAs) and the
related networks are sufficiently computationally
intelligent about radio resources and related
computer-to-computer communications to:

detect user communications needs as a
function of use context, and

Provide radio resources and wireless services
most appropriate to those needs.

Since then the concept has got popularity and
different groups are working on it for its feasibility
[2, 3, 5, 9-13], and. [20]. Those working groups
have developed their own working definition but
the central ideas can be summarized as follows:
Cognitive radio refers to an intelligence wireless
system that

senses and is aware of its operational
environment.
does not operate in a fixed assigned band but
it rather searches an appropriate band to
operate without any user intervention.

can be trained to dynamically and
autonomously adjust its radio operating
parameters accordingly.
learns from experience, plans based on
knowledge, and decides based on reasoning.

Therefore, CR improves spectrum utilization by
making it possible for a secondary (unlicensed)
user to access a spectrum hole unoccupied by the
primary (licensed) user at the right location and
time. A spectrum hole is a band of frequency
assigned to a primary user which, at a particular
time and specific geographic location, is not being
utilized by that user.

The spectrum freedom obtained from CR will
increase the number of wireless operators, which
will undoubtedly increase the interference level.
Therefore, to reduce interference, overcome the
detection problem, and increase the coverage area,
appropriate technologies must be chosen for this
new technology.

SMART ANTENNAS

Generally speaking, all types of antennas exhibit
directivity except the isotropic antenna, which does
not exist in the real world. Though the level of
directivity varies from one type to another,
directive antennas have many areas of application
in wireless communication. Array antenna
technology is a more practical way of producing a
highly directive radiation pattern than using a
single large antenna. Besides, it has the following
advantages [14, 15]:

It produces narrow, electronically steerable and
more directive beams.
It tracks multiple targets
It produces low side lobes

The above mentioned advantages have been used
by deploying array antennas into the wireless
communication technologies.

Adding some intelligence to the array antenna
helps in tracking the dynamic wireless environment
and the user location. It is because of this
intelligence that the array antenna system is
referred to as a Smart Antenna / Adaptive Array
Antenna / Adaptive Beamformer. Fig. 1 shows the
block diagram of an adaptive array antenna.

A smart antenna may be considered a marriage of
array antenna and digital signal processing
technology that improves the performance of
wireless communication by changing its radiation
pattern dynamically to suppress noise and
interference and to reject multipath. In general,
deploying smart antennas in wireless technology
has the following benefits [6, 8, 16-18]:

i. Reduction in co-channel interference
ii. Range improvement / range extension
iii. Increase in capacity
iv. Reduction in transmitted power
v. Reduction in handoff
vi. Mitigation of multipath effects
vii. Compatibility with TDMA, FDMA, CDMA,
SDMA




In vector form,

y(t) = W^H X(t)    (2)

where W = [w₁, w₂, …, w_N]^T,
X(t) = [x₁(t), x₂(t), …, x_N(t)]^T, and
(.)^H signifies the Hermitian transpose.


































Figure 1 Block diagram of an adaptive array antenna.

Because of the above-mentioned benefits, smart
antennas are proposed for use in cognitive radio
technology mainly to combat interference and
reduce false-alarm detection; in conjunction with
this, the other benefits can also be enjoyed [5, 19].

The output of any beamformer is given by the
following relation [7, 17],

y(t) = Σ_{n=1}^{N} w_n* x_n(t)    (1)

where w_n is the complex weight applied to the
n-th element, x_n(t) is the signal received by the
n-th element at time t, and (.)* signifies the
complex conjugate.

For a digital beamformer (adaptive array), the
inputs to the beamformer are fed in digital form as
shown in Fig. 1. Therefore, the output of the
beamformer at the k-th sample is given by [7, 8]:

y(k) = Σ_{n=1}^{N} w_n* x_n(k)    (3)

y(k) = W^H X(k)    (4)
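Eqs. 3 and 4 can be sketched in a few lines of Python; the half-wavelength uniform linear array and the phase-conjugate weights are illustrative assumptions, not details taken from the paper.

```python
import cmath
import math

def steering_vector(theta_deg, n_elements, d_over_lambda=0.5):
    """Array response of an assumed uniform linear array with element
    spacing d (in wavelengths) for a plane wave arriving from theta."""
    theta = math.radians(theta_deg)
    return [cmath.exp(2j * math.pi * d_over_lambda * n * math.cos(theta))
            for n in range(n_elements)]

def beamformer_output(weights, snapshot):
    """Eqs. 3 and 4: y = W^H X, the conjugate-weighted sum of the
    element signals at one sample instant."""
    return sum(w.conjugate() * x for w, x in zip(weights, snapshot))

# Phase-conjugate weights steered to 50 degrees:
N = 4
w = steering_vector(50, N)
# A unit signal arriving exactly from 50 degrees adds coherently to N:
y = beamformer_output(w, steering_vector(50, N))
```

A signal from the steered direction sums coherently to magnitude N, while signals from other directions combine with mismatched phases and are attenuated, which is the basis of the beam/null behavior described next.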




The objective of the adaptive element in the smart
antenna system is to find the weight vector W such
that the radiation pattern formed by the antenna
array acquires the following characteristics [7]:

a very strong beam in the direction of the
intended user;

nulls in the directions of unintended
users/interferers.



Consider M users with signals impinging upon the
array and let X_k(t) denote the received signal
vector corresponding to the k-th user.

For LOS communication, X_k(t) may be
expressed as [8]

X_k(t) = α_k a(θ_k) s_k(t)    (5)

where α_k is the scalar complex path amplitude,
a(θ_k) is the array response vector in the direction
of arrival θ_k of the k-th user, and s_k(t) is the
complex baseband signal impinging upon the array
from user k.

Then the total received signal vector at the array
becomes

X(t) = Σ_{k=1}^{M} X_k(t) + n(t)    (6)

where n(t) accounts for receiver noise as well as
background channel noise, which can be taken as
Additive White Gaussian Noise (AWGN). In
matrix form, Eq. 6 can be rewritten as

X(t) = A S(t) + n(t)    (7)

where

A = [a(θ₁) a(θ₂) … a(θ_M)]

S(t) = [s₁(t) s₂(t) … s_M(t)]^T

Information about the directions (locations) of the
users is contained in the steering matrix, A.
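Eq. 7 can be sketched for the scenario of Fig. 2 (a user at 50° and interferers at 80°, 120°, and 170°); the half-wavelength uniform linear array and the unit/half-unit path amplitudes are illustrative assumptions.

```python
import cmath
import math

def steering_vector(theta_deg, n_elements, d_over_lambda=0.5):
    """a(theta) for an assumed half-wavelength uniform linear array."""
    theta = math.radians(theta_deg)
    return [cmath.exp(2j * math.pi * d_over_lambda * n * math.cos(theta))
            for n in range(n_elements)]

# Steering matrix A: one steering vector per source direction.
angles = [50, 80, 120, 170]   # user at 50 deg, interferers at the rest
N = 4                         # number of array elements
A = [steering_vector(th, N) for th in angles]

def received_snapshot(s, noise):
    """Eq. 7 at one time instant: X = A S + n."""
    return [sum(A[k][n] * s[k] for k in range(len(s))) + noise[n]
            for n in range(N)]

# Illustrative complex baseband amplitudes and a noise-free snapshot:
X = received_snapshot([1.0, 0.5, 0.5, 0.5], [0.0] * N)
```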

To elaborate the idea discussed above, simulations
were carried out; Fig. 2 shows the radiation pattern
simulated for a user located at 50° and interferers
located at 80°, 120°, and 170°. In the rectangular
plot of the simulation results, the asterisks (*)
correspond to the locations of the interferers and
the solid line at 50° corresponds to the location of
the desired user. This simulation simply shows the
capability of digital (adaptive) beamforming in
producing a strong main beam in the direction of
the intended user and effectively placing nulls in
the directions of the undesired interferers. In this
way the interference problems associated with
cognitive radio technology can be alleviated by the
use of smart antenna technology.


a) Polar plot of radiation pattern.




b) Rectangular plot of normalized array factor
Investigation of Adaptive Beamforming Algorithms for Cognitive Radio Technology


c) Plot of array factor in dB

Figure 2 Simulated radiation pattern for an array of 4 elements and 4 users.

To fully exploit the advantages of smart antennas in a time-varying environment and extend their use to the emerging CR technology, we have to examine the beamforming techniques used by existing wireless communication systems and select the one that best suits CR technology. To do this, there must be an adaptive algorithm that can track the changes and update the system with the information required to form a strong beam in the directions of intended users, nulls in the directions of interferers, and equal side lobes (with minimum detectable energy) to overcome the hidden node problem and ease detection. The next section is therefore devoted to a performance study of different beamforming algorithms.

ADAPTIVE BEAMFORMING ALGORITHMS

There are two types of beamforming: conventional and adaptive. Conventional beamforming includes all the beam-shaping techniques used in conventional arrays, whereas adaptive beamforming dynamically changes the array weights in response to the changing environment so as to form an optimum beam in the direction of the intended user and place nulls in the directions of interferers/noise. This is accomplished by using adaptive beamforming algorithms. The theme of this section is therefore to theoretically investigate and compare the different adaptive beamforming algorithms.

Basically, there are two major classes of adaptive
beamforming algorithms based on their
requirements for training signal sequence: Non-
Blind and Blind Adaptive Algorithms [7, 17, 19].

Non-blind adaptive algorithms require statistical knowledge of the transmitted signal in order to optimize the array weights. In other words, to extract the desired user(s) from the received signals, training signal sequences known at both the receiver and the transmitter are transmitted. Based on the information the received signal provides about the channel, the array weights are then optimized (adjusted) to reduce the error between the received signal sequences and the known transmitted signal sequences at the receiver.

Unlike non-blind adaptive algorithms, blind algorithms do not require training signal sequences; rather, they try to estimate the required information from the received signal itself.

In this section, the performance of non-blind and blind adaptive beamforming algorithms is examined based on different weight optimization criteria: Sample Matrix Inversion (SMI), Least Mean Square (LMS) and Recursive Least Square (RLS) from the non-blind category, and Constant Modulus (CM) and Least Square Constant Modulus (LSCM) from the blind category.

SAMPLE MATRIX INVERSION (SMI)
ALGORITHM

The SMI algorithm uses the Minimum Mean Squared Error (MMSE) criterion to obtain the optimal array weight vector. Since the true auto-correlation matrix and cross-correlation vector are not available, the algorithm replaces both by their corresponding estimates (time averages) to obtain the Wiener-Hopf solution. The estimates are given by [7]:

$$\hat{R}_{xx} = \frac{1}{N}\sum_{n=1}^{N} X(n)\, X^H(n) \qquad (8)$$

$$\hat{r}_{xd} = \frac{1}{N}\sum_{n=1}^{N} X(n)\, d^*(n) \qquad (9)$$

where N is the block size, and

$$R_{xx} = E\{X X^H\}$$

is the auto-correlation matrix,



$$r_{xd} = E\{X d^*\}$$

is the cross-correlation vector, and $d(\cdot)$ is the reference signal.

In terms of the above estimates, the Wiener-Hopf solution becomes [7]

$$\hat{W} = \hat{R}_{xx}^{-1}\, \hat{r}_{xd} \qquad (10)$$

The estimation error, also known as the residual error, is given by

$$e_{est} = \hat{R}_{xx}\, \hat{W} - \hat{r}_{xd} \qquad (11)$$

As the block size increases, the time-averaged values better approximate the ensemble averages, minimizing the estimation error so that a solution much closer to the Wiener-Hopf solution is obtained. Because of the time-varying nature of the wireless channel, the block adaptation is performed periodically.

This algorithm is well suited to applications of a bursty nature (discontinuous transmission), because its adaptation is made in block form. The stability of the SMI algorithm depends on the ability to invert the large covariance matrix. To avoid a singularity of the auto-correlation matrix, zero-mean white Gaussian noise is added to the array response vector; it creates a strong additive component on the diagonal of the matrix. In the absence of noise in the system, a singularity occurs when the number of signals to be resolved is less than the number of elements in the array. The main limitation of the SMI algorithm is its computational complexity, since it uses direct matrix inversion.
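A minimal sketch of Eqs. 8-10 for a 2-element array, whose 2x2 auto-correlation estimate can be inverted in closed form; the small diagonal loading term stands in for the additive noise component discussed above. Names and constants are illustrative, not the authors' implementation.

```python
def smi_weights(snapshots, ref):
    # Sample Matrix Inversion for a 2-element array (Eqs. 8-10).
    # snapshots: list of length-2 complex vectors X(n); ref: d(n) samples.
    N = len(snapshots)
    # Block-averaged estimates of R_xx (Eq. 8) and r_xd (Eq. 9)
    R = [[0j, 0j], [0j, 0j]]
    r = [0j, 0j]
    for X, d in zip(snapshots, ref):
        for i in range(2):
            r[i] += X[i] * complex(d).conjugate() / N
            for j in range(2):
                R[i][j] += X[i] * X[j].conjugate() / N
    # Diagonal loading guards against a singular estimate
    R[0][0] += 1e-6
    R[1][1] += 1e-6
    # Closed-form inverse of the 2x2 matrix gives the
    # Wiener-Hopf solution W = R^-1 r (Eq. 10)
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    w0 = ( R[1][1] * r[0] - R[0][1] * r[1]) / det
    w1 = (-R[1][0] * r[0] + R[0][0] * r[1]) / det
    return [w0, w1]
```

With noiseless single-user data, the resulting weights recover the reference signal almost exactly from the array output y = W^H X.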

Least Mean Square (LMS) Algorithm

This is the second type of beamforming algorithm that uses the MMSE criterion; it searches for the optimal weight that makes the array output equal, or as close as possible, to the reference signal, i.e. it minimizes the Mean Square Error (MSE). Unlike SMI, LMS is well suited to continuous transmission, since its optimization is based on the instantaneously received data. The optimization in LMS is done by employing the Steepest Descent Method, which is a recursive way of optimizing the array weights.

The Steepest Descent Method is recursive in the sense that its formulation is represented by a feedback system whereby the computation of the filter takes place iteratively, in a step-by-step manner. When the method is applied to the Wiener filter, it provides an algorithmic solution that allows the tracking of time variations in the signal statistics without having to solve the Wiener-Hopf equations each time the statistics change.
The Steepest Descent Method is given by [20]

$$W(k+1) = W(k) - \tfrac{1}{2}\,\mu\, \nabla(MSE) \qquad (12)$$

where $\mu$ is the step size parameter (commonly a positive constant) that controls the convergence characteristics of the algorithm.

The difference between the reference signal d(k) and the array output signal y(k) is universally taken as the error of the adaptive system at that sample, and is defined as:

$$e(k) = d(k) - y(k)$$
The mean squared error (MSE) is given by [21]:

$$MSE = E\{\,|e(k)|^2\,\}$$

and

$$\nabla(MSE) = \frac{\partial\, MSE}{\partial W} = 2\,E\{X X^H\}\, W - 2\,E\{X d^*\}$$

By substituting the gradient of the cost function, i.e. of the mean squared error (MSE), into Eq. 12, we arrive at

$$W(k+1) = W(k) - \mu\,\bigl(R_{xx}\, W(k) - r_{xd}\bigr) \qquad (13)$$

The computation of the correlation matrices required by the Steepest Descent Method is another drawback of this method. To overcome this difficulty, the LMS algorithm replaces the auto-correlation and cross-correlation by their instantaneous values instead of their actual values. Eq. 13 can therefore be rewritten as [20]:

$$W(k+1) = W(k) - \mu\,\bigl(X(k)\, X^H(k)\, W(k) - X(k)\, d^*(k)\bigr)$$

$$W(k+1) = W(k) + \mu\, X(k)\, e^*(k) \qquad (14)$$
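One LMS iteration per Eq. 14 can be sketched in a few lines (a hedged illustration; names are ours):

```python
def lms_update(W, X, d, mu):
    # One LMS iteration (Eq. 14): W(k+1) = W(k) + mu * X(k) * e*(k)
    y = sum(w.conjugate() * x for w, x in zip(W, X))   # array output y(k) = W^H X(k)
    e = d - y                                           # error e(k) = d(k) - y(k)
    W_next = [w + mu * x * e.conjugate() for w, x in zip(W, X)]
    return W_next, e
```

Iterating on noiseless single-user snapshots, the error magnitude decays geometrically toward zero.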

Recursive Least Square (RLS) Algorithm

RLS is a non-blind adaptive beamforming algorithm that uses the Least Squares (LS) method as its optimization criterion. To make the estimation problem well-posed, as well as to track time-varying systems, the cost function is defined as [7, 17, 20]:

$$\varepsilon(k) = \sum_{i=1}^{k} \lambda^{\,k-i}\, |e(i)|^2 \;+\; \delta\, \lambda^{k}\, \|W(k)\|^2 \qquad (15)$$

in which the first term is the sum of weighted error squares and the second is the regularization term.

Where: k is the variable length of the observable data,
$e(i)$ is the error function,
$\delta$ is a positive real number called the regularization parameter, and
$\lambda$ is the forgetting factor, a positive constant close to, but less than, one. It weights recent data more heavily in a non-stationary environment so that the statistical variations of the data can be tracked rather than forgotten. In a stationary environment, $\lambda = 1$ corresponds to infinite memory.

The RLS algorithm can be summarized as follows [20]:

First, initialize the algorithm by setting

$$W(0) = 0, \qquad P(0) = \delta^{-1} I \qquad (16)$$

$$\delta = \begin{cases} \text{small positive constant} & \text{for high SNR} \\ \text{large positive constant} & \text{for low SNR} \end{cases}$$

where I is the N x N identity matrix. Then, for each snapshot k, compute

$$T(k) = \frac{\lambda^{-1}\, P(k-1)\, X(k)}{1 + \lambda^{-1}\, X^H(k)\, P(k-1)\, X(k)} \qquad (17)$$

$$e(k) = d(k) - W^H(k-1)\, X(k) \qquad (18)$$

$$W(k) = W(k-1) + T(k)\, e^*(k) \qquad (19)$$

$$P(k) = \lambda^{-1}\, P(k-1) - \lambda^{-1}\, T(k)\, X^H(k)\, P(k-1) \qquad (20)$$

where T(k) is the N x 1 gain vector and P(k) is the N x N inverse correlation matrix.
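Eqs. 17-20 can be sketched for a 2-element array as follows, keeping P as an explicit 2x2 list of lists (a sketch with illustrative names, not the authors' implementation):

```python
def rls_step(W, P, X, d, lam=1.0):
    # One RLS iteration (Eqs. 17-20) for a 2-element array.
    # Gain vector T(k) = (P X / lam) / (1 + X^H P X / lam), Eq. 17
    PX = [(P[i][0] * X[0] + P[i][1] * X[1]) / lam for i in range(2)]
    denom = 1 + X[0].conjugate() * PX[0] + X[1].conjugate() * PX[1]
    T = [PX[0] / denom, PX[1] / denom]
    # A-priori error e(k) = d(k) - W^H(k-1) X(k), Eq. 18
    e = d - (W[0].conjugate() * X[0] + W[1].conjugate() * X[1])
    # Weight update W(k) = W(k-1) + T(k) e*(k), Eq. 19
    W = [W[0] + T[0] * e.conjugate(), W[1] + T[1] * e.conjugate()]
    # Inverse-correlation update P(k) = (P - T X^H P) / lam, Eq. 20
    XhP = [(X[0].conjugate() * P[0][j] + X[1].conjugate() * P[1][j]) / lam
           for j in range(2)]
    P = [[P[i][j] / lam - T[i] * XhP[j] for j in range(2)] for i in range(2)]
    return W, P, e
```

Starting from W(0) = 0 and P(0) = δ⁻¹I, the a-priori error on noiseless data shrinks within a handful of snapshots, illustrating the fast convergence noted later in the simulations.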


Constant Modulus Algorithm (CMA)

CMA belongs to the blind adaptive beamforming family, which requires no training signal sequence to form an optimum beam in the intended direction; rather, it tries to restore an important property of the transmitted signal [8, 17, 22].

In most communication scenarios it is common to use modulation techniques with a constant envelope (amplitude), such as FM, FSK, PSK, MSK and the like. In transmitting baseband signals with these modulation techniques, the transmitted signal encounters channel fading, which may result in both amplitude and phase distortions. The constant envelope/modulus property of the above-mentioned modulation techniques opens a window for adaptive beamforming to exploit this property. The receiver restores the envelope of the transmitted signal by equating the received signal to some constant value corresponding to the envelope of the transmitted signal. This is made possible by continuously updating the weights of the beamformer until the output of the array has the same modulus as the original transmitted signal. The class of adaptive beamforming algorithms that uses this phenomenon is known as the Constant Modulus Algorithm (CMA).

The cost function used for CMA is given by [8, 17]:

$$J(k) = E\bigl\{\,\bigl|\,|y(k)|^{p} - \sigma\,\bigr|^{q}\,\bigr\} \qquad (21)$$

where p = 1, 2 or q = 1, 2 and $\sigma$ is the desired signal amplitude at the output of the array. Assuming $\sigma = 1$, Eq. 21 becomes:

$$J(k) = E\bigl\{\,\bigl|\,|y(k)|^{p} - 1\,\bigr|^{q}\,\bigr\} \qquad (22)$$

This so-called CMA(p, q) cost function is simply a positive measure of the average amount by which the beamformer output y(k) deviates from the unit modulus condition. The objective is then to choose the weight vector recursively so as to minimize J, which consequently makes y(k) as close to a constant-modulus signal as possible.

A closed-form solution of the above cost function is not available; rather, it is simpler to use an iterative method to obtain the optimal weight vector (the minimum of J), as in LMS, i.e. by


using the Steepest Descent Method. Then, in terms of the above cost function, Eq. 12 becomes [8]:

$$W(k+1) = W(k) - \tfrac{1}{2}\,\mu\,\nabla(J) \qquad (23)$$

where $\nabla(J)$ is the gradient of the CMA cost function. For p = 1, q = 2, $\nabla(J)$ becomes

$$\nabla(J(k)) = 2\,\frac{\partial J(k)}{\partial W^*(k)} = 2\,\frac{\partial}{\partial W^*(k)}\, E\bigl\{(|y(k)| - 1)^2\bigr\}$$

Further simplification of the above equation results in:

$$\nabla(J(k)) = 2\, E\Bigl\{ X(k)\Bigl( y(k) - \frac{y(k)}{|y(k)|} \Bigr)^{\!*} \Bigr\} \qquad (24)$$

As done for the LMS, we replace the statistical expectation with the instantaneous value, so that Eq. 24 becomes [8]:

$$\nabla(J(k)) = 2\, X(k)\Bigl( y(k) - \frac{y(k)}{|y(k)|} \Bigr)^{\!*} \qquad (25)$$

Substituting Eq. 25 into Eq. 23 results in the following weight update equations:

$$W(k+1) = W(k) - \mu\, X(k)\Bigl( y(k) - \frac{y(k)}{|y(k)|} \Bigr)^{\!*}$$

$$W(k+1) = W(k) - \mu\, X(k)\, e^*(k) \qquad (26)$$

where $\mu$ serves the same function as in LMS, but we choose $\mu \ll 1$ for better stability.

As in the LMS algorithm, the convergence rate can be controlled by varying $\mu$. However, to obtain much better convergence behavior, a non-linear least squares method needs to be used.

Least Square Constant Modulus Algorithm (LS-
CMA)

The constant modulus algorithm was first used by Gooch [23] for the beamforming problem. Since then, many CMA-type algorithms have been proposed for use in adaptive arrays. Among them, B. G. Agee [24] developed the LS-CMA by using an extension of the method of nonlinear least squares (Gauss's method). The extension of Gauss's method states that if a cost function can be expressed in the form:

$$F(W) = \|g(W)\|_2^2 = \sum_{k=1}^{K} |g_k(W)|^2 \qquad (27)$$

where

$$g(W) = [\,g_1(W),\, g_2(W),\, \ldots,\, g_K(W)\,]^T$$

then the cost function has a partial Taylor-series expansion with sum-of-squares form [8]:

$$F(W + d) \approx \|\, g(W) + D^H(W)\, d \,\|_2^2 \qquad (28)$$

where d is an offset vector, and

$$D(W) = [\,\nabla(g_1(W)),\, \nabla(g_2(W)),\, \ldots,\, \nabla(g_K(W))\,] \qquad (29)$$

It can be shown that the gradient of $F(W+d)$ with respect to d is given by [8]:

$$\nabla_d\bigl(F(W+d)\bigr) = 2\,\frac{\partial F(W+d)}{\partial d^*} = 2\,\bigl\{ D(W)\, g(W) + D(W)\, D^H(W)\, d \bigr\} \qquad (30)$$

Setting $\nabla_d(F(W+d))$ equal to zero, the offset that minimizes the cost function $F(W+d)$ is

$$d = -\bigl[\, D(W)\, D^H(W) \,\bigr]^{-1} D(W)\, g(W) \qquad (31)$$

Adding d to W results in a new weight vector that minimizes the cost function. Therefore the weight update equation becomes [8, 24]:

$$W(l+1) = W(l) - \bigl[\, D(W(l))\, D^H(W(l)) \,\bigr]^{-1} D(W(l))\, g(W(l)) \qquad (32)$$

where l denotes the iteration number. LS-CMA is derived by applying Eq. 32 to the constant modulus cost function

$$F(W) = \sum_{k=1}^{K} \bigl(\, |y(k)| - 1 \,\bigr)^2 = \sum_{k=1}^{K} \bigl(\, |W^H X(k)| - 1 \,\bigr)^2 \qquad (33)$$

Comparing Eq. 27 with Eq. 33, we observe that

$$g_k(W) = |y(k)| - 1 = |W^H X(k)| - 1 \qquad (34)$$

Then $g(W)$ becomes

$$g(W) = \begin{bmatrix} g_1(W) \\ g_2(W) \\ \vdots \\ g_K(W) \end{bmatrix} = \begin{bmatrix} |y(1)| - 1 \\ |y(2)| - 1 \\ \vdots \\ |y(K)| - 1 \end{bmatrix} \qquad (35)$$

The gradient vector of $g_k(W)$ is given by [8]:

$$\nabla(g_k(W)) = 2\,\frac{\partial g_k(W)}{\partial W^*} = 2\, X(k)\Bigl( \frac{y(k)}{|y(k)|} \Bigr)^{\!*} \qquad (36)$$

Substituting Eq. 36 into Eq. 29 results in:

$$D(W) = \Bigl[\, 2 X(1)\Bigl(\tfrac{y(1)}{|y(1)|}\Bigr)^{\!*},\; 2 X(2)\Bigl(\tfrac{y(2)}{|y(2)|}\Bigr)^{\!*},\; \ldots,\; 2 X(K)\Bigl(\tfrac{y(K)}{|y(K)|}\Bigr)^{\!*} \,\Bigr]$$

or, dropping the common factor of 2 (which cancels in Eq. 32),

$$D(W) = X\, Y_{CM} \qquad (37)$$

where $X = [\,X(1),\, X(2),\, \ldots,\, X(K)\,]$ is the input data matrix, and

$$Y_{CM} = \begin{bmatrix} \bigl(\tfrac{y(1)}{|y(1)|}\bigr)^{\!*} & 0 & \cdots & 0 \\ 0 & \bigl(\tfrac{y(2)}{|y(2)|}\bigr)^{\!*} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \bigl(\tfrac{y(K)}{|y(K)|}\bigr)^{\!*} \end{bmatrix} \qquad (38)$$

is the output data matrix. Using Eq. 34 and Eq. 37 we have:

$$D(W)\, D^H(W) = X\, Y_{CM}\, Y_{CM}^H\, X^H = X X^H \qquad (39)$$

and

$$D(W)\, g(W) = X\, Y_{CM} \begin{bmatrix} |y(1)| - 1 \\ |y(2)| - 1 \\ \vdots \\ |y(K)| - 1 \end{bmatrix} = X \begin{bmatrix} y^*(1) - \tfrac{y^*(1)}{|y(1)|} \\ y^*(2) - \tfrac{y^*(2)}{|y(2)|} \\ \vdots \\ y^*(K) - \tfrac{y^*(K)}{|y(K)|} \end{bmatrix} = X\,(Y - P)^* \qquad (40)$$


where

$$Y = [\, y(1)\; y(2)\; \cdots\; y(K) \,]^T$$

$$P = \Bigl[\, \tfrac{y(1)}{|y(1)|}\; \tfrac{y(2)}{|y(2)|}\; \cdots\; \tfrac{y(K)}{|y(K)|} \,\Bigr]^T$$

The vectors $Y(l)$ and $P(l)$ are called the output data vector and the complex-limited output data vector, respectively. Substituting Eq. 39 and Eq. 40 into Eq. 32 we obtain [8]:

$$W(l+1) = W(l) - \bigl[ X X^H \bigr]^{-1} X \bigl( Y(l) - P(l) \bigr)^*$$

$$= W(l) - \bigl[ X X^H \bigr]^{-1} X\, Y^*(l) + \bigl[ X X^H \bigr]^{-1} X\, P^*(l)$$

$$= W(l) - \bigl[ X X^H \bigr]^{-1} X X^H\, W(l) + \bigl[ X X^H \bigr]^{-1} X\, P^*(l)$$

$$= \bigl[ X X^H \bigr]^{-1} X\, P^*(l) \qquad (41)$$

where $Y(l) = [\, W^H(l)\, X \,]^T$, so that $P(l) = L\{Y(l)\}$.

Note that $L\{Y(l)\}$ places a hard limit on $Y(l)$. Since the algorithm iterates using a single block of K data vectors, [X(k)], it is called static LS-CMA. The LS-CMA can be implemented both statically and dynamically.
The LS-CMA can be implemented both statically
and dynamically.

The static LS-CMA repeatedly uses one data block X, which contains K snapshots of the input data vectors, in updating the weight vector W. In the static LS-CMA, after a new weight vector W(l+1) is calculated using Eq. 41, this new weight vector is used with the same input data block X that was used in the last iteration to generate the new output data vector Y(l+1) and the complex-limited output data vector P(l+1). The new complex-limited output data vector is then substituted into Eq. 41 to generate the next weight vector.

In dynamic LS-CMA, however, different input data blocks are used during the updating of the weight vector. Let X(l) denote the input data block used in the l-th iteration. X(l) can be expressed as [8]:

$$X(l) = [\, X(lK+1),\, X(lK+2),\, \ldots,\, X((l+1)K) \,] \qquad (42)$$

for $l = 0, 1, 2, \ldots, L$, where L is the number of iterations required for the algorithm to converge. Using X(l) we can describe the dynamic LS-CMA by the following equations:

$$Y(l) = \bigl[\, W^H(l)\, X(l) \,\bigr]^T = [\, y(lK+1),\, y(lK+2),\, \ldots,\, y((l+1)K) \,]^T \qquad (43)$$

$$P(l) = \Bigl[\, \tfrac{y(lK+1)}{|y(lK+1)|},\; \tfrac{y(lK+2)}{|y(lK+2)|},\; \ldots,\; \tfrac{y((l+1)K)}{|y((l+1)K)|} \,\Bigr]^T \qquad (44)$$

$$W(l+1) = \bigl[\, X(l)\, X^H(l) \,\bigr]^{-1} X(l)\, P^*(l) \qquad (45)$$

From the above equations we see that while the steepest descent CMA updates the weight vector on a sample-by-sample basis, the dynamic LS-CMA adjusts the weight vector on a block-by-block basis.

Finally, the sample mean estimates of the correlation matrix of the input data and of the cross-correlation between the input data and the output, for the block of data available at the l-th iteration, can be constructed as [8]:

$$\hat{R}_{xx} = \frac{1}{K}\, X(l)\, X^H(l) \qquad (46)$$

$$\hat{r}_{xd} = \frac{1}{K}\, X(l)\, P^*(l) \qquad (47)$$

where K is the block size. Then Eq. 45 becomes:

$$\hat{W} = \hat{R}_{xx}^{-1}\, \hat{r}_{xd} \qquad (48)$$

COMPUTATIONAL COMPLEXITY
ANALYSIS

Computational complexity can be expressed in terms of time and space complexity, but analysis in terms of these two parameters is involved. It is more practical to discuss the computational complexity of the above adaptive beamforming algorithms in terms of the two fundamental mathematical operations (additions and multiplications) performed per iteration. Using this measure, the computational complexity of the adaptive beamforming algorithms studied in this work is summarized in the following tables.

Table 1: Computational complexity of SMI algorithm

| Procedure | Multiplications per iteration | Additions per iteration |
| R_xx = (1/K) Σ_{k=1}^{K} X(k) X^H(k) | KN + 1 | K + N |
| r_xd = (1/K) Σ_{k=1}^{K} X(k) d*(k) | K^2 N + 1 | K |
| W = R_xx^{-1} r_xd | N^2 | N^2 |
| Total | K^2 N + KN + N^2 + 2 + matrix inversion | N^2 + 2K + N |

Table 2: Computational complexity of LMS algorithm

| Procedure | Multiplications per iteration | Additions per iteration |
| y(k) = W^H(k) X(k) | N | N |
| e(k) = d(k) - y(k) | - | 1 |
| W(k+1) = W(k) + μ X(k) e*(k) | N + 1 | N + 1 |
| Total | 2N + 1 | 2N + 2 |

Where: K is the length of the observable data and N is the number of array elements.



Table 3: Computational complexity of RLS algorithm

| Procedure | Multiplications per iteration | Additions per iteration |
| T(k) = λ^{-1} P(k-1) X(k) / (1 + λ^{-1} X^H(k) P(k-1) X(k)) | 2N^2 + 3N + 1 | 2N^2 + 2N + 1 |
| e(k) = d(k) - W^H(k-1) X(k) | N | N + 1 |
| W(k) = W(k-1) + T(k) e*(k) | N | N |
| P(k) = λ^{-1} P(k-1) - λ^{-1} T(k) X^H(k) P(k-1) | N^2 + 2N + 1 | N^2 + N + 1 |
| Total | 3N^2 + 7N + 2 | 3N^2 + 5N + 3 |

Table 4: Computational complexity of CMA

| Procedure | Multiplications per iteration | Additions per iteration |
| y(k) = W^H(k) X(k) | N | N |
| e(k) = y(k) - y(k)/|y(k)| | 1 | 1 |
| W(k+1) = W(k) - μ X(k) e*(k) | N + 1 | N |
| Total | 2N + 2 | 2N + 1 |












































Table 5: Computational complexity of LS-CMA

| Procedure | Multiplications per iteration | Additions per iteration |
| Y(l) = [W^H(l) X(l)]^T | NK | NK |
| P(l) = [y(lK+1)/|y(lK+1)|, ..., y((l+1)K)/|y((l+1)K)|]^T | K | - |
| W(l+1) = [X(l) X^H(l)]^{-1} X(l) P*(l) | N^2 K + N^2 + NK + inversion | N^2 K + N^2 + NK |
| Total | N^2 K + N^2 + 2NK + K + inversion | N^2 K + N^2 + 2NK |

From the computational complexity tables above, the algorithms can be arranged in order of decreasing computational complexity as follows: SMI, LS-CMA, RLS, LMS and CMA.
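The ranking above follows directly from the multiplication counts in Tables 1-5; a small helper makes the comparison concrete (the matrix-inversion cost of SMI and LS-CMA is not counted here, which only widens their lead):

```python
def ops_per_iteration(N, K):
    # Multiplication counts per iteration from Tables 1-5
    # (N array elements, K block size / observable-data length).
    return {
        "SMI":    K * K * N + K * N + N * N + 2,    # Table 1 (plus inversion)
        "LS-CMA": N * N * K + N * N + 2 * N * K + K,  # Table 5 (plus inversion)
        "RLS":    3 * N * N + 7 * N + 2,            # Table 3
        "LMS":    2 * N + 1,                        # Table 2
        "CMA":    2 * N + 2,                        # Table 4
    }
```

For the simulation settings used later (8 elements, block size 40), SMI needs thousands of multiplications per iteration while LMS and CMA need fewer than twenty.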

To see the performance in terms of convergence rate, we need to simulate the corresponding beamforming algorithms. This is presented in the next section, with brief discussion where required.


SIMULATION RESULTS



In this section, simulation results of the different adaptive beamforming algorithms used in this work are presented. All the adaptive beamforming simulations are done for 8 array elements and 5 users, where the intended user is located at 80° and the remaining four users are interferers located at 40°, 120°, 160°, and 175°. Moreover, in the simulation output, the solid line at 80° indicates the position of the intended user, and the asterisks (*) correspond to the locations of the interferers.

SMI Algorithm Simulation Results

a) Radiation pattern rectangular plot
b) Radiation pattern rectangular plot in dB
c) Plot of mean squared error (MSE)
Figure 3 Radiation pattern and MSE synthesized using the SMI algorithm for SNR = 30 dB and block size = 40.




The simulation for the SMI algorithm is carried out for SNR = 30 dB, for 10 blocks with a block size of 40 snapshots (samples). As can be seen from the simulation, the SMI algorithm converges quickly.

LMS Algorithm Simulation Results


a) Radiation pattern rectangular plot


b) Radiation pattern rectangular plot in dB

c) Plot of mean squared error (MSE)

Figure 4 Radiation pattern and MSE synthesized using the LMS algorithm for μ = 0.0110, SNR = 30 dB, and 3000 iterations.


As can be seen from the simulation results, the LMS beamforming algorithm has a lower convergence rate than the SMI beamforming algorithm, but it forms much stronger beams only in the direction of the intended user. Besides this ability, it also has lower computational complexity. For these reasons, the LMS family of beamforming algorithms is preferred over the SMI beamforming algorithm for implementation in existing wireless communication scenarios.

RLS Algorithm Simulation Results
a) Radiation pattern rectangular plot


b) Radiation pattern rectangular plot in dB

c) Plot of least squared error (LSE)

Figure 5 Radiation pattern synthesized using the RLS algorithm for SNR = 30 dB, δ = 0.068, λ = 1, and 3000 iterations.

We observe that the convergence of the RLS algorithm is better than that of the LMS algorithm, but this increase in convergence rate comes at the cost of increased computational complexity. In addition to its good convergence rate, RLS retains information about the input data vector from the very beginning. Another important feature of the RLS algorithm is its ability to replace the inversion of the covariance matrix in the Wiener solution with a simple scalar division.




CMA Simulation Results


a) Radiation pattern rectangular plot


b) Radiation pattern rectangular plot in dB


c) Plot of mean square error (MSE)

Figure 6 Radiation pattern and MSE synthesized using the CMA for an MSK signal with μ = 0.01, SNR = 30 dB, and 4000 iterations.

We observe that the CMA has similar behavior to that of SMI, but with a slow convergence rate. The convergence rate can be improved by increasing μ; however, care must be taken not to use a value of μ so large that it renders the algorithm unstable.
LS-CMA Simulation Results

a) Radiation pattern rectangular plot

b) Radiation pattern rectangular plot in dB


c) Plot of least square error (LSE)

d) Partially enlarged view of the error graph
shown in (c) above.

Figure 7 Radiation pattern synthesized using LS-CMA for SNR = 30 dB, block size = 120, and 7000 iterations.

As we can see from the above simulations, the LS-CMA improves on the performance of the CMA. Since the adaptation is made in block form, increasing the block size improves the performance of the algorithm.

CONCLUSION

In comparison to the existing wireless communication concept, which essentially assigns fixed spectrum bands to different wireless technologies, the concept of cognitive radio technology points toward a new era of wireless technology. In this work, the performance of different beamforming techniques has been investigated. Although smart antenna technology has been used in third-generation communications, the way it is proposed for cognitive radio technology differs slightly from the way it has been used earlier, mainly in the side lobe requirements. Existing wireless communication systems prefer no side lobes at all, if possible, whereas cognitive radio technology takes advantage of generating equal side lobes in all directions except toward the interferers, so as to simplify the spectrum detection capability of the system. The investigations in this work are therefore made from this point of view, and in accordance with the aforementioned ideas, the following conclusions are drawn from this work.



The investigation of different adaptive beamforming algorithms for cognitive radio technology, from both the blind and non-blind families, has shown that Sample Matrix Inversion (SMI) from the non-blind beamforming family and the Constant Modulus Algorithm (CMA) from the blind beamforming family have radiation (beam) patterns that best suit cognitive radio technology. The others have low and dying side lobes, and using them for detection in a CR application could result in the scanned RF band giving wrong information (false alarms) about vacant and occupied spectrum holes. Comparing overall performance, the SMI is preferred for CR applications over the other adaptive beamforming algorithms. In fact, it has the fastest convergence rate of all the adaptive beamforming algorithms studied in this work, which is a big advantage for a cognitive radio system.

In general, it has been shown that smart antenna technology has the potential to be used in next-generation communications, i.e. in cognitive radio.

REFERENCES

[1] Cabric, D., Mishra, S.M., Willkomm, D.,
Brodersen, R. and Wolisz, A. CORVUS:
A Cognitive Radio Approach for Usage of
Virtual Unlicensed Spectrum
www.eecs.berkeley.edu/~smm/IST_paper.
pdf, July 29, 2007.

[2] FCC report and order, ET Docket No. 03-
108 Facilitating Opportunities for
Flexible, Efficient, and Reliable Spectrum
Use Employing Cognitive Radio
Technologies, March 11, 2005.

[3] Mitola, J. III Cognitive Radio: An
Integrated Agent Architecture for
Software Defined Radio, PhD
dissertation, Royal Institute of
Technology, Stockholm, Sweden, May
2000.

[4] Fette, BA. Editor, Cognitive Radio
Technology, UK: Elsevier Inc., 2006.


[5] Haykin,S. Life Fellow IEEE, Cognitive
Radio: Brain-Empowered Wireless
Communications IEEE Journal on
Selected Areas in Communications, Vol.
23, No. 2, February 2005.


[6] Zooghby, A. El, Smart Antenna
Engineering, Boston: Artech House Inc.,
2005.

[7] Litva, J. and Lo, T.K.Y., Digital
Beamforming in Wireless
Communications, Boston: Artech House
Inc., 1996.

[8] Liberti, JC. JR., Rappaport, TS., Smart
Antenna for Wireless Communication: IS-
95 and Third Generation CDMA
Application, New Jersey: Prentice Hall
PTR, 1999.

[9] Fette B., Senior Scientist, General
Dynamics C4 Systems, Cognitive Radio
Shows Great Promise, COTS Journal,
http://www.cotsjournalonline.com/home/p
rintthis.php?id=100206, January 19,
2005.

[10] IEEEUSA, Improving Spectrum Usage
through Cognitive Radio Technology,
IEEE USA Position, Nov 13, 2003.

[11] IEEE1900.1,
http://grouper.ieee.org/groups/emc/emc/ie
ee_emcs__sdcom /P1900-1/ieee_emcs_-
_p1900-1_main.htm, July 28, 2007.


[12] ITU Wp8A Working document towards a
preliminary draft new report: software
defined radio in the land mobile service,
August 23, 2005.

[13] Neel, JOD., Analysis and Design of
Cognitive Radio Networks and Distributed
Radio Resource Management
Algorithms, PhD dissertation, September
6, 2006.

[14] Balanis, C.A., Antenna Theory: Analysis
and Design, 3rd edition, New Jersey: John
Wiley & Sons, Inc., 2005.

[15] Stutzman, W.L. and Thiele, G.A., Antenna
Theory and Design, 2nd edition, New
York: John Wiley & Sons, Inc., 1998.

[16] Kim, BK., Smart Base Station Antenna
Performance for Several Scenarios an
Experimental and Modeling
Investigation, PhD Dissertation, Virginia
Polytechnic Institute and State University,
May 7, 2002, Blacksburg, Virginia.

Investigation of Adaptive Beamforming Algorithms for Cognitive Radio Technology

Journal of EEA, Vol. 28, 2011 47
[17] Li, J. and Stoica,P. Editors, Robust
Adaptive beamforming, New Jersey: John
Wiley & Sons, Inc., 2005.

[18 ]John G. Proakis, Editor, Wiley
Encyclopedia of Telecommunications
volume 1, New Jersey: John Wiley &
Sons, Inc., 2003.


[19] Chen, J.C., Yao, K. and Hudson, R.E., Source Localization and Beamforming, IEEE Signal Processing Magazine, March 2002.


[20] Haykin, S., Adaptive Filter Theory (T. Kailath, Series Editor), 4th edition, India: Pearson Education Inc., 2002.


[21] Shared Spectrum Company, http://www.sharedspectrum.com/?section=nsf_measurements, July 28, 2007.


[22] Cerquides, J.R. and Fernandez-Rubio, J.A., Algorithms and Structures for Source Separation Based on the Constant Modulus Property, Signal Theory and Communications Department, Polytechnic University of Catalonia, Módulo D5, Campus Nord UPC, C/Gran Capitán s/n, 08034 Barcelona, Spain, 2007.


[23] Gooch, R. and Lundell, J., The CM
Array: An Adaptive Beamformer for
Constant Modulus Signals, IEEE Intl.
Conf. on Acoustics, Speech, and Signal
Processing, Tokyo, Japan, 1986.


[24] Agee, B.G., The Least-Squares CMA: A New Technique for Rapid Correction of Constant Modulus Signals, Proc. of the IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 1986, pp. 19.2.1-19.2.4.


[25] Biedka, T.E., Tranter, W.H., and Reed, J.H., Convergence Analysis of the Least Squares Constant Modulus Algorithm in Interference Cancellation Applications, IEEE Transactions on Communications, Vol. 48, No. 3, March 2000.

*E-mail: derege.hailemariam@AAiT.edu.et
Journal of EEA, Vol. 28, 2011
ENERGY AWARE GPSR ROUTING PROTOCOL IN A WIRELESS SENSOR
NETWORK

Sayed Nouh and Zewdu Geta
Department of Electrical and Computer Engineering
Addis Ababa Institute of Technology, Addis Ababa University*

ABSTRACT

Energy is a scarce resource in wireless sensor networks (WSNs), and it determines the lifetime of a WSN. For this reason, WSN algorithms and routing protocols should be selected in a manner that conserves energy.

This paper presents a solution to increase the
lifetime of WSNs by decreasing their energy
consumption. The proposed solution is based on
incorporating energy information into Greedy
Perimeter Stateless Routing (GPSR) Protocol.

The proposed solution performs better in energy consumption, network lifetime and packet delivery ratio, with a network lifetime gain of 45.9% to 78.69%. However, its performance in average delay is comparatively low because of computational complexity.

Key Words: Wireless sensor networks, GPSR protocol, Geographical routing protocol, Energy aware routing protocol

INTRODUCTION

Wireless sensor networks [1] have inspired tremendous research interest since the mid-1990s. Advancements in wireless communication and micro-electro-mechanical systems (MEMS) have enabled the development of low-cost, low-power, multifunctional, tiny sensor nodes that can sense the environment, perform data processing, and communicate with each other over short distances.

The era of WSNs is highly anticipated in the near future. In September 1999, Business Week identified WSNs as one of the most important and impactful technologies for the 21st century. In January 2003, MIT's Technology Review listed WSNs among the top ten emerging technologies [2].

WSNs are composed of sensor nodes that must cooperate to perform specific functions. Because nodes can sense, process data, and communicate, WSNs are well suited to event detection, which is clearly an important application of wireless sensor networks. On the other hand, energy efficiency has always been a key issue for sensor networks, as sensor nodes must rely on small, nonrenewable batteries.

WSNs present tradeoffs in system design [3]. On
the one hand, the low cost of the nodes facilitates
massive scale and highly parallel computation. On
the other hand, each node is likely to have limited
power, limited reliability, and only local
communication with a modest number of
neighbors. These limitations make it unrealistic for WSNs to rely on careful placement or uniform arrangement of sensors.

Rather than using an expensive global positioning system (GPS) to localize each sensor, a beaconing protocol is used to enable sensors to learn their neighbors' positions on demand. The beaconing protocol measures received radio signal strength and uses this information to compute ranges. The neighbor whose reply is received with the lowest signal strength is taken to be closest to the destination and is selected to forward data.

For example, as shown in Fig. 1, suppose node X has a packet to send to node D. First, node X sends a beaconing signal to its neighbors (N1, N2, N3, and N4). These neighbors in turn reply to node X. The path whose reply is received with the lowest signal strength (path X to N4) is selected to forward the packet towards destination D.
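The beacon-reply selection described above can be sketched as follows; this is a minimal illustration (the neighbor identifiers and signal-strength values in dBm are hypothetical), assuming the reply received with the lowest signal strength marks the forwarder:

```python
def select_forwarder(replies):
    """Pick the neighbor whose beacon reply arrived with the lowest
    received signal strength. 'replies' maps neighbor id to received
    signal strength in dBm (hypothetical values)."""
    return min(replies, key=replies.get)

# Node X hears replies from its four neighbors; N4's reply is the
# weakest, so N4 is chosen to forward the packet toward destination D.
replies = {"N1": -40.0, "N2": -52.5, "N3": -47.0, "N4": -60.0}
assert select_forwarder(replies) == "N4"
```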



Figure 1 Beacons working principle


The rest of the paper is organized as follows:
Section 2 presents the different routing protocols in
WSNs. Greedy Perimeter Stateless Routing
Protocol is explained in Section 3. Section 4
presents related work on energy-efficient routing.
The proposed algorithm and its implementation are described in Section 5. Simulation set-up and
performance metrics are presented in Section 6.
Section 7 discusses the results obtained, while
Section 8 concludes the paper.

ROUTING PROTOCOLS IN WSNS

Routing in WSNs is a very challenging task due to
the inherent characteristics that distinguish these
networks from other wireless networks like cellular
or mobile ad hoc networks. Traditional IP-based
protocols may not be applied to WSN, due to the
large number of sensor nodes and because getting
the data is often more important than knowing the
specific identity of the source sending it.
Furthermore, almost all applications of sensor
networks require the flow of sensed data from
multiple sources to a particular base station, sink.
Sensor nodes are constrained in terms of energy,
processing, and storage capacities, thus requiring
careful resource management. Sensor networks are
strictly dependent on their applications, and the
design requirements of a sensor network change
with the applications. Furthermore, position
awareness of sensor nodes is important since data
collection is normally based on their location.
Finally, since data collected by many sensors in a WSN are typically based on common phenomena, they are often highly correlated and contain a lot of redundancy. Such redundancy needs to be exploited by the routing protocols to improve energy and bandwidth utilization [4].

Flat Routing

In flat networks, sensor nodes typically play the
same role and collaborate together to perform the
sensing task [5]. The lack of global identification, due to the large number of nodes present in the network and their random placement, typical of many WSN applications, makes it hard to select a specific set of sensors to be queried.

Hierarchical Routing

In a hierarchical architecture, higher-energy nodes can be used to process and send the information, while low-energy nodes can be used to monitor the area of interest and gather data [5]. This means creating clusters and assigning special tasks to cluster heads, such as data fusion and data forwarding, in order to achieve system scalability, a longer network lifetime and energy efficiency.

Geographical Routing

Geographical routing protocols exploit information about the location of the sensors in order to forward data through the network in an energy-efficient way [5]. The location of nodes may be available directly from a GPS receiver or by implementing a localization protocol.

The possible advantage is a much simplified routing protocol with significantly smaller, or even nonexistent, routing tables, as physical location carries implicit information about which neighbor to forward a packet to.

GREEDY PERIMETER STATELESS
ROUTING (GPSR)

Greedy Forwarding Rule: In GPSR, packets are marked by their originator with their destinations' locations. As a result, a forwarding node can make a locally optimal, greedy choice in choosing a packet's next hop. Specifically, if a node knows its radio neighbors' positions, the locally optimal choice of next hop is the neighbor geographically closest to the packet's destination. Forwarding in this regime follows successively closer geographic hops until the destination is reached. An example of greedy next-hop choice appears in Fig. 2.



Figure 2 Greedy forwarding example. Y is X's closest neighbor to D.

Here, X receives a packet destined for D. X's radio range is denoted by the dotted circle about X, and the arc with radius equal to the distance between Y and D is shown as the dashed arc about D. X forwards the packet to Y, as the distance between Y and D is less than that between D and any of X's other neighbors. This greedy forwarding process repeats until the packet reaches D.

The advantage of greedy forwarding is its reliance only on knowledge of the forwarding node's immediate neighbors. The state required is negligible and depends on the density of nodes in the wireless network, not the total number of destinations in the network. For more details on GPSR's advantages and limitations, refer to [6].
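The greedy next-hop rule can be sketched as follows; the node positions and identifiers are hypothetical, and the fall-back to perimeter mode is only signalled (by returning None), not implemented:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_next_hop(self_pos, neighbors, dest):
    """Greedy forwarding: among the radio neighbors, choose the one
    geographically closest to the packet's destination, but only if it
    makes progress (is closer than the current node itself); otherwise
    GPSR would fall back to perimeter forwarding."""
    best = min(neighbors, key=lambda nid: distance(neighbors[nid], dest))
    if distance(neighbors[best], dest) < distance(self_pos, dest):
        return best
    return None  # no progress possible: perimeter mode would take over

# X forwards to Y because Y is X's neighbor closest to D (cf. Fig. 2).
neighbors = {"Y": (6.0, 2.0), "N1": (1.0, 5.0)}
assert greedy_next_hop((0.0, 0.0), neighbors, (10.0, 0.0)) == "Y"
```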

RELATED WORK ON ENERGY-EFFICIENT
ROUTING

The current work on energy-efficient routing
assumes that all the nodes in the network are
always available to route all packets. In reality,
since nodes consume power even in idle mode,
significant overall energy savings can be achieved
by turning off an appropriate subset of the nodes
without losing connectivity or network capacity.
There has been much work on topology control
algorithms [7, 8] based on the notion of connected
dominating sets that reduce energy consumption
precisely by periodically putting some nodes into
sleep mode.

Geographic and Energy Aware Routing (GEAR)
exploits geographic information while propagating
queries only to appropriate regions [9]. It can be
classified as a data-centric algorithm with
geographic information knowledge. The process of
forwarding a packet to all the nodes in the target
region consists of two steps. The first one aims at
forwarding the packets towards the target region
and the second step is concerned with
disseminating the packet within the region.
However, the GEAR protocol has limitations: it is not scalable, and all nodes remain active even though only a part of the network is queried.

Geographic Adaptive Fidelity (GAF) is an energy-
aware location-based routing algorithm [8]. The
network area is divided into fixed zones to form a
virtual grid, as shown in Fig. 3. GAF uses equal
areas of square zones, whose size is dependent on
the required transmitting power and the
communication direction. GAF exploits the
equivalence of all nodes inside the same zone by
keeping at least one node per zone awake for a
certain period of time and turning all the others in
that zone into the sleep state during that time. With high node mobility there is a high packet loss, since nodes may leave the grid without replacing an active node; this is the disadvantage of GAF.
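The virtual-grid idea behind GAF can be illustrated with a short sketch; the cell size and node coordinates are hypothetical, and a real implementation would also rotate the awake role among cell members to balance energy:

```python
def grid_cell(pos, cell_size):
    """Map a node's (x, y) position to its virtual-grid cell. All nodes
    in the same cell are treated as equivalent for routing, so one node
    per cell suffices. cell_size is a hypothetical value tied to the
    radio range."""
    return (int(pos[0] // cell_size), int(pos[1] // cell_size))

def pick_awake(nodes, cell_size):
    """Keep one node per occupied cell awake; the rest may sleep."""
    awake = {}
    for nid, pos in nodes.items():
        awake.setdefault(grid_cell(pos, cell_size), nid)
    return set(awake.values())

# Nodes 1 and 2 share a cell, so only one of them needs to stay awake.
nodes = {1: (0.5, 0.5), 2: (0.9, 0.2), 3: (2.5, 0.5)}
assert pick_awake(nodes, cell_size=1.0) == {1, 3}
```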



Figure 3 Virtual grid formation in GAF

Adaptive Self-Configuring Sensor Networks
Topologies (ASCENT) adaptively elects active
nodes from all nodes in the network [8]. Active
nodes stay awake all the time and perform multi-
hop packet routing while the rest of the nodes
remain passive and periodically check if they
should become active. To do this, ASCENT has four state transitions: Test, Active, Passive and Sleep. ASCENT depends on the routing protocol to quickly re-route traffic. This may cause some packet loss; an improvement that has not been implemented is to inform the routing protocol of ASCENT's state changes so that traffic could be re-routed in advance. ASCENT does not work for low node densities and behaves differently with different routing protocols, which are the limitations of this work.

PROPOSED ALGORITHM AND ITS
IMPLEMENTATION

The GPSR routing protocol considers only the shortest distance to the destination during path selection. However, in a wireless sensor network energy is a scarce resource, so we additionally consider the remaining energy of nodes and the energy for transmission and reception, and we make nodes that are not participating in communication go into sleep mode. Since nodes in sleep mode use the least amount of energy, this reduces energy wastage.

Assumptions

This section presents the basic design of the
proposed protocol, which works with the following
network setting:

A vast field is covered by a large number of homogeneous sensor nodes which communicate with each other over wireless links. The wireless communication channels are bidirectional. Each sensor node has constrained battery energy.

After having been deployed, sensor and sink
nodes remain stationary at their initial locations.

Target (source) node moves randomly.

Proposed Solution

The GPSR routing protocol uses greedy forwarding to route data to neighboring nodes, considering neither the remaining energy of nodes nor the transmission energy, so that delivery of a packet to the destination is in question. The proposed solution consists of two steps. The first step makes nodes which are not participating in either sending or receiving go into sleep mode. The second step considers the remaining energy of nodes, in addition to the shortest path, during path selection. A node in a WSN can then be in one of three states:

Active, Sleep and Idle, as shown in Fig. 4.

An active node consumes the most energy, an idle node consumes less, and a sleeping node consumes the least. Hence a good power-saving algorithm should keep the number of active nodes as small as possible [7].

Figure 4 Proposed scheme
Step I

If a node is farther from the sink node than from the target (source) node, it enters sleep mode to save energy and stays there until the next communication cycle. All other nodes are in active mode and participate in sending and receiving packets. During a communication cycle we set a timer; at the end of the cycle, the timer is reset and all the nodes in the grid are set to active mode.
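Step I's sleep rule amounts to a single distance comparison per node; the sketch below assumes 2-D coordinates and hypothetical node positions:

```python
import math

def euclid(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def set_modes(nodes, sink, target):
    """Step I sketch: a node whose distance to the sink exceeds its
    distance to the target, d(i,s) > d(i,t), sleeps for the current
    communication cycle; all other nodes stay active."""
    return {i: ("sleep" if euclid(p, sink) > euclid(p, target) else "active")
            for i, p in nodes.items()}

# Node 1 sits near the target, node 2 near the sink (hypothetical layout).
nodes = {1: (1.0, 1.0), 2: (9.0, 9.0)}
modes = set_modes(nodes, sink=(10.0, 10.0), target=(0.0, 0.0))
assert modes == {1: "sleep", 2: "active"}
```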

Step II

The minimum weight function is the key that a source node uses to make the routing decision towards a destination. In this section we formally define how to calculate the minimum weight function and use this weight to evaluate the proposed protocol.

The minimum weight function contains two factors: the distance from a source node to the destination and the remaining energy level of the neighbor nodes. The minimum weight function W_i of neighbor node x(i) is defined as follows:

    W_i = d(x(i), y) / E_rfi            (1)

Where:

    W_i is the minimum weight value among the N neighbors of a source node
    x(i) is the position of the i-th neighbor node of a source node
    d(x(i), y) is the Euclidean distance between the i-th neighbor node and the destination y
    E_rfi is the remaining energy factor,

    E_rfi = (E_oi - E_Ci) / E_oi        (2)

Where:

    E_oi is the initial energy of node i and
    E_Ci is the consumed energy of node i [10].
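A sketch of the weight computation, assuming the weight in Eq. (1) is the Euclidean distance d(x(i), y) divided by the remaining energy factor E_rfi = (E_oi - E_Ci)/E_oi of Eq. (2); the exact printed form may differ, and the node data below are hypothetical:

```python
import math

def remaining_energy_factor(e_initial, e_consumed):
    """E_rfi sketch: fraction of a node's initial energy still left,
    assuming E_rfi = (E_oi - E_Ci) / E_oi."""
    return (e_initial - e_consumed) / e_initial

def min_weight_neighbor(neighbors, dest):
    """Weight each neighbor by its distance to the destination divided
    by its remaining energy factor, and pick the minimum W_i.
    'neighbors' maps id -> (position, E_oi, E_Ci)."""
    def weight(nid):
        (x, y), e0, ec = neighbors[nid]
        d = math.hypot(x - dest[0], y - dest[1])
        return d / remaining_energy_factor(e0, ec)
    return min(neighbors, key=weight)

# A nearer but nearly drained node loses to a slightly farther, fresher one.
neighbors = {"a": ((4.0, 0.0), 100.0, 90.0),   # d=6, E_rf=0.1 -> W=60
             "b": ((3.0, 0.0), 100.0, 10.0)}   # d=7, E_rf=0.9 -> W~7.8
assert min_weight_neighbor(neighbors, (10.0, 0.0)) == "b"
```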


Figure 5 Flow chart

The flow chart in Fig. 5 represents the two-step solution. It first sets a timer and makes each node either sleep or stay active by comparing the distances d(i,s) and d(i,t): if d(i,s) > d(i,t), node i goes into sleep mode; otherwise it stays active. Here,

d(i,s) is the Euclidean distance from node i to the sink node s, and

d(i,t) is the Euclidean distance from node i to the target (source) node t.

The original GPSR routing protocol uses greedy forwarding when a closer neighbor exists and perimeter forwarding when the forwarding node has no neighbor closer to the destination than itself. Our proposed algorithm instead uses minimum weight forwarding (shortest distance plus residual energy) in place of greedy forwarding, with perimeter forwarding (applying the right-hand rule) to escape the no-neighbor problem, until the packet reaches the destination. If the forwarding node is the destination, the wakeup timer (T) is reset, all nodes become active, and the process repeats.

SIMULATION SETUP AND PERFORMANCE
METRICS

The proposed algorithm is implemented in the J-Sim simulator. J-Sim has the following features:

Being implemented in Java makes J-Sim a truly platform-independent and reusable environment.
It is a dual-language simulation environment
like NS-2 in which classes are written in Java
and scripts using Tcl/Java (Jacl).



Only the public classes/methods/fields in Java can be accessed in the Tcl environment, instead of explicitly exporting all classes/methods/fields as in other simulators, e.g. NS-2.

J-Sim exhibits good scalability in memory allocation for carrying out simulations of length 1000; its memory footprint is at least two orders of magnitude lower than that of NS-2 [11].

The simulation is done on different performance
metrics, to compare the performance of the
proposed algorithm against the original GPSR
routing protocol.

The implementation has the following assumptions:

The sensor nodes are deployed in a random
manner.

Node density, target (source) speed (which may represent a moving tank), and the percentage of node failures are varied during the simulation.

Simulation Setup

To explore the results, we conduct a detailed simulation using the J-Sim simulator. In our simulation, up to 450 sensors are scattered over a 350 × 350 m² sensor field. Other simulation parameters are listed in Table 1, most of which are taken from white papers of commercial product vendors.

Table 1: Simulation parameters

Variables                  Values
Communication Range        15 m
Simulation Time            200 sec
Simulation Area            350 × 350 m²
Target Node Speed          10 m/s and 15 m/s
Number of Nodes            450
Node receiving power       14.88 mW
Node transmitting power    12.50 mW
Node idle mode power       12.36 mW
Node sleep mode power      0.016 mW

Performance Metrics

Although different researchers propose different
performance metrics to evaluate the performance of
routing protocols, we use the following metrics for
evaluating the efficiency of the proposed routing
protocol.

Average Energy Consumption: The average energy consumption is calculated across the entire topology. It measures the average difference between the initial energy level and the final energy level left in each node. Let E_i and E_f be the initial and final energy levels of a node, respectively, and N the total number of nodes in the network. Then

    AEC = (1/N) * Σ (E_i - E_f)         (3)

Average Data Delivery Ratio: This represents the ratio of the number of data packets received by the sink to the number of data packets sent by the source.


Average Delay: This is defined as the average time difference between the moment a data packet is received by the sink node and the moment it was transmitted by the source node. This metric defines the freshness of the data packets:

    AD = Σ_i (Time packet i received - Time packet i sent) / N

    where N is the total number of received packets.

Network Lifetime (NL): This is one of the most important metrics for evaluating the energy efficiency of routing protocols with respect to network partition. In wireless sensor networks, especially those with densely distributed nodes, the death of the first node seldom leads to total failure of the network, but as the number of dead nodes increases, the network becomes partitioned. Network lifetime can be defined in the following ways:

It may be defined as the time taken for K% of the nodes in a network to die.
It can also be the time for all nodes in the network to die.
The lifetime of the network under a given flow can be the time until the first battery drains out (dies) [12].

We adopt the third definition for the analysis in this work. Here, a node with less than 20% of its full battery capacity is considered a dead node, based on the definition in [2].
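The four metrics above can be computed directly from per-node energy logs and per-packet timestamps; a minimal sketch with hypothetical sample values:

```python
def average_energy_consumption(initial, final):
    """Eq. (3): mean of (E_i - E_f) over all N nodes."""
    return sum(i - f for i, f in zip(initial, final)) / len(initial)

def delivery_ratio(sent, received):
    """Packets received by the sink over packets sent by the source."""
    return received / sent

def average_delay(send_times, recv_times):
    """Mean of (receive time - send time) over all received packets."""
    return sum(r - s for s, r in zip(send_times, recv_times)) / len(recv_times)

def is_dead(residual, full_capacity):
    """Per the adopted definition, a node below 20% of its full battery
    capacity counts as dead; network lifetime ends at the first death."""
    return residual < 0.2 * full_capacity

assert average_energy_consumption([10.0, 10.0], [6.0, 8.0]) == 3.0
assert delivery_ratio(200, 180) == 0.9
assert average_delay([0.0, 1.0], [1.0, 2.0]) == 1.0
assert is_dead(1.9, 10.0) and not is_dead(2.1, 10.0)
```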

SIMULATION RESULTS AND DISCUSSIONS

We deploy the nodes in a region of size 350 × 350 m². Sensor nodes are deployed randomly; the sink node is fixed at the lower right corner of the grid, and the target (source) node is deployed at the center of the grid and moves with a speed of 10 m/s. There are one sink node, one target node and 450 sensor nodes in our simulation environment.

The target node generates stimuli every 1.5 seconds, and the sensing radius is 15 m. The number of nodes in the region is controlled by increasing the node count from 50 to 450 in steps of 100. The simulation time is 200 seconds, and the measured parameters are affected by the number of nodes used in the simulation, the simulation time and node failures. We consider two scenario designs. All the experiments are conducted on a dual-processor Intel 2.66 GHz machine running Windows XP Professional with 2 GB RAM. Each data point reported below is an average of 20 simulation runs [13].

Scenario-1:

This Scenario follows parameters shown in
Table 2.

Table 2: Scenario 1 parameters

Variables Values
Target Speed 10 m/s
Number of Nodes 450
Sink Location (350,0)
Target Location (150,150) and moves
Sensor Location Randomized and stay static
Random Node
Failure
With no Failure

Scenario-1 Results



Figure 6 Average energy consumption (no node
failure, speed 10m/s)



Figure 7 Average packet delivery ratio (no node
failure, speed 10m/s)



Figure 8 Average delay (no node failure, speed
10m/s)


Figure 9 Network lifetime (no node failure, speed
10m/s)

Scenario-1: Discussion of Results

Figures 6, 7 and 9 show that the proposed solution performs better in energy consumption and packet delivery ratio than the original GPSR protocol, and hence the network lifetime is improved significantly. Since our aim is to increase the lifetime of the network, the goal is achieved by considering residual energy in the proposed solution, which reduces individual node failures and network partition. Moreover, putting nodes that are not participating in transmission or reception into sleep mode reduces overall node failures. Hence the number of node failures and the energy wastage decrease, i.e. the lifetime of the network increases.

Fig. 8, on the other hand, shows that the average delay of the proposed solution is larger than that of the original GPSR protocol, because the proposed solution checks not only the shortest distance but also the residual energy, and performs the distance calculations needed to put nodes into sleep or active mode. The proposed algorithm uses more parameters to select a route than the original GPSR protocol; the extra delay is due to this computational complexity.

Scenario-2
This Scenario follows parameters shown in
Table 3.

Table 3: Scenario 2 parameters

Variables Values
Target Speed 15 m/s
Number of Nodes 450
Sink Location (350,0)
Target Location (150,150) and moves
Sensor Location Randomized and stay
static
Random Node Failure 15% Failure

Scenario-2 Results



Figure 10 Average energy consumption (15%
node failure, speed 15m/s)



Figure 11 Average packet delivery ratio (15%
node failure, speed 15m/s)



Figure 12 Network lifetime (15% node failure,
speed 15m/s)



Figure 13 Average delay (15% node failure,
speed 15m/s)

Scenario-2: Discussion of Results

Figures 10, 11, and 12 show that the proposed
solution performs better in energy consumption and
packet delivery ratio than the original GPSR
protocol and hence there is an improvement in
Network Lifetime. As Figure 13 shows, the average delay is low too.


In scenario 2, compared with scenario 1, the average energy consumption, average packet delivery ratio and network lifetime are comparatively low. This is because the target speed is higher in scenario 2, which incurs routing overhead. Further, due to node failures, fewer nodes are available for routing, i.e. there is more energy consumption.

CONCLUSIONS

In this paper, we have studied the GPSR routing protocol, a geographical routing protocol that uses greedy forwarding whenever possible and perimeter forwarding otherwise. It considers only distance during packet routing. In order to increase the lifetime of a network, we added energy information to the routing decision and put nodes which are not participating in sending or receiving packets into sleep mode.

To show the performance gained, the proposed solution was compared with the original GPSR routing protocol using the J-Sim simulation software. The simulation output indicates a performance gain in average energy consumption, average packet delivery ratio and network lifetime of 45.9% to 78.69%. However, the proposed solution increases the average delay due to its higher computational complexity.

REFERENCES

[1] Shorey, R. and Ananda, A., Mobile, Wireless, and Sensor Networks: John Wiley & Sons, Inc., 2006.

[2] Rappaport, T.S., Wireless Communications: Principles and Practice: Upper Saddle River, NJ: Prentice Hall, 1996.

[3] Jonathan Bachrach and Christopher Taylor,
Localization in Sensor Networks,
Massachusetts Institute of Technology, 2005.

[4] Kemal Akkaya and Mohamed Younis, A
Survey on Routing Protocols for Wireless
Sensor Networks: Department of Computer
science and Electrical engineering University
of Maryland, Baltimore County, 2005.

[5] Holger Karl and Andreas Willig, Protocols
and Architectures in Wireless Sensor
Networks: John Willey & Sons, Ltd, 2005.

[6] Anna Hac, Wireless Sensor Network
Designs: John Wiley & Sons Ltd, 2003.

[7] Zhimin He, SPAN: An Energy-Efficient
Coordination Algorithm for Topology
Maintenance in Ad Hoc Wireless Networks:
Oct 1st, 2003.

[8] Xu, Y., Heidemann, J. and Estrin, D., Adaptive Topology Control for Ad-hoc Sensor Networks: July 2001.

[9] Yu Y, Estrin D, and Govindan R., GEAR: A
Recursive Data Dissemination Protocol for
Wireless Sensor Networks: by UCLA
Computer Science Department Technical
Report, 2001.

[10] Kewei Sha and Weisong Shi, Modeling the
Lifetime of Wireless Sensor Networks: April
2005.

[11] Comparing ns-2 with j-sim,
http://www.jsim.org/comparision.html
(access date April 23, 2009).

[12] Q. Li, J. Aslam and D. Rus, Online Power-
aware Routing in Wireless Ad-Hoc Networks:
Proceedings of MOBICOM, July 2001.

[13] Ahmed Sobeih, Wei-Peng Chen, Jennifer C.
Hou, Lu-Chuan Kung, Ning Li, Hyuk Lim,
Hung-Ying Tyan, and Honghai Zhang, J-
Sim: A Simulation and Emulation
Environment for Wireless Sensor Networks,
http://www.jsim.org/, (access date April 2,
2009).

[14] Jacobson, Metrics in Wireless Networks, ACM MOBIHOC, Lausanne, Switzerland, June 2002, pp. 194-205.





*E-mail:Edessa_dribssa@yahoo.com
Journal of EEA, Vol. 28, 2011

FLOW SIMULATION AND PERFORMANCE PREDICTION


OF CENTRIFUGAL PUMPS USING CFD-TOOL

Abdulkadir Aman, Sileshi Kore and Edessa Dribssa*
Department of Mechanical Engineering
Addis Ababa Institute of Technology, Addis Ababa University

ABSTRACT

With the aid of computational fluid dynamics, the
complex internal flows in water pump impellers
can be well predicted, thus facilitating the product
development process of pumps. In this paper a
commercial CFD code was used to solve the
governing equations of the flow field. A 2-D
simulation of turbulent fluid flow is presented to
visualize the flow in a centrifugal pump, including
the pressure and velocity distributions. The
standard k-ε turbulence model and the SIMPLEC algorithm were chosen for the turbulence model and pressure-velocity coupling, respectively. The simulation was steady, and a moving reference frame was used to account for the impeller-volute interaction. The head and efficiency at different flow rates are predicted, and they agree well with those available in the literature for a similar pump.
From the simulation results it was observed that
the flow change has an important effect on the
location and area of low pressure region behind
the blade inlet and the direction of velocity at
impeller inlet. From the study it was observed that
FLUENT simulation results give good prediction of
performance of centrifugal pump and may help to
reduce the required experimental work for the
study of centrifugal pump performance.


Keywords: Centrifugal Pump, FLUENT,
Performance prediction, CFD



INTRODUCTION

A centrifugal pump is one of the machines
commonly used in industrial plants to raise the
energy content of a liquid flowing through it. It
does so by converting energy of a prime mover (an
electric motor or turbine) first into velocity or
kinetic energy, and then into pressure energy of a
liquid that is being pumped. The energy changes
occur by virtue of two main parts of the pump, the
impeller and the volute or diffuser. The impeller is
the rotating part that converts driver energy into the
kinetic energy. The volute or diffuser is the
stationary part that transforms the kinetic energy of
the liquid into pressure energy.

Centrifugal pumps are prevalent for many different
applications in the industrial and other sectors.
Nevertheless, their design and performance
prediction process is still a difficult task, mainly
due to the great number of free geometric
parameters involved. On the other hand the
significant cost and time of the trial-and-error
process by constructing and testing physical
prototypes reduces the profit margins of the pump
manufacturers. For this reason, CFD analysis is
currently being used in hydrodynamic design for
many different pump types [1].

Over the past few years, with the rapid
development of the computer technology and
computational fluid dynamics (CFD), numerical
simulation, like academic analysis and
experimental research, has become an important
tool to study flow field in pumps and predict pump
performance [2]. Numerical simulation makes it
possible to visualize the flow condition inside a
centrifugal pump, and provides valuable
information to centrifugal pumps hydraulic design
[3].

Despite the great progress in recent years, even
CFD analysis remains rather expensive for the
industry, and the need for faster mesh generators
and solvers is imperative [1].

GOVERNING EQUATIONS

Since the fluid surrounding the impeller rotates
around the axis of the pump the fundamental
equations of fluid dynamics must be organized in
two reference frames, stationary and rotating
reference frames. To accomplish this, the Multiple
Reference Frame (MRF) model has been used. The
basic idea of the model is to simplify the flow
inside the pump into an instantaneous flow at one
position, to solve unsteady-state problem with
steady-state method [3]. In this approach, the
governing equations are set in a rotating reference
frame, and Coriolis and centrifugal forces are added


as source terms. The mass conservation and
momentum equations for a rotating reference frame
are as follows.

Mass conservation:

∇·(ρ v_r) = 0                                        (1)

Conservation of momentum:

∇·(ρ v_r v_r) + ρ(2ω × v_r + ω × ω × r)
        = −∇p + ∇·τ + F                              (2)

Where:

v_r is the relative velocity,
v is the absolute velocity, and
ω is the angular velocity.

The fluid velocities can be transformed from the
stationary frame to the rotating frame using the
following relations:

v_r = v − u_r                                        (3)

u_r = ω × r                                          (4)

Where:

u_r is the whirl velocity (the velocity viewed
due to the moving frame) and
r is the position vector from the origin of
the rotating frame.
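As a concrete illustration of Eqs. (3) and (4), the sketch below transforms an absolute velocity into the rotating (impeller) frame. Only the pump's rated speed of 2900 rpm is taken from the paper; the sample point and velocity vector are assumed values chosen for demonstration.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# 2900 rpm about the z-axis, converted to rad/s
omega = (0.0, 0.0, 2900.0 * 2.0 * math.pi / 60.0)

r = (0.05, 0.0, 0.0)    # sample position vector in the rotating frame (m) - assumed
v = (2.0, 16.0, 0.0)    # sample absolute velocity at that point (m/s) - assumed

u_r = cross(omega, r)                            # whirl velocity, Eq. (4)
v_r = tuple(vi - ui for vi, ui in zip(v, u_r))   # relative velocity, Eq. (3)
```

At this radius the whirl velocity is purely tangential (about 15.2 m/s), so the relative velocity seen by an observer rotating with the impeller retains only a small tangential component.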


DESCRIPTION OF THE MODEL

The model geometry is complex and asymmetric
due to the blade and volute shape. The GAMBIT
package was used to build the geometry, to
generate meshes and set up boundary zones of the
centrifugal pump model for the CFD simulation
analysis. The model contains six impeller blades
spaced 60° apart. A triangular mesh was
selected for meshing the flow domain. The impeller
has an outlet diameter of 124 mm and an inlet
diameter of 52 mm. The eye diameter is 20 mm.
The model is divided into three boundary zone types
(inlet, impeller and volute). A view of the
generated grid of the centrifugal pump considered
for this study is shown in Fig. 1.






















Figure 1 2-D model of centrifugal pump after
meshing.

The mesh file contains the coordinates of all the
nodes, connectivity information that tells how the
nodes are connected to one another to form faces
and cells, and zone types and number of all the
faces. The grid file does not contain any boundary
conditions, flow parameters, or solution
parameters. It is an intermediate step in the overall
process of creating a usable model which is
exported, as a mesh file, to be read in FLUENT.

COMPUTATIONAL METHOD AND
BOUNDARY CONDITIONS

In order to calculate the flow field in the vane and
channel of the casing a commercial CFD code,
FLUENT, was used. The governing integral
equations for the conservation of mass and
momentum were discretized using finite volume
method. Then, the standard k-ε model was adopted
for the turbulence calculation, from the three known
k-ε models (standard k-ε, RNG k-ε and realizable
k-ε). The standard k-ε model is a semi-empirical
model based on model transport equations for the
turbulence kinetic energy (k) and its dissipation
rate (ε). The model transport equation for k is
derived from the exact equation, while the model
transport equation for ε was obtained using
physical reasoning and bears little resemblance to
its mathematically exact counterpart [4].
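The standard k-ε model closes the momentum equations through an eddy viscosity, ν_t = C_μ k²/ε, with the standard constant C_μ = 0.09 [4]. The snippet below only illustrates this relation; the k and ε values are assumed sample numbers, not the paper's solver output.

```python
# Eddy viscosity of the standard k-epsilon model: nu_t = C_mu * k^2 / eps.
C_mu = 0.09    # standard model constant
k = 0.5        # turbulence kinetic energy (m^2/s^2) - assumed sample value
eps = 2.0      # dissipation rate (m^2/s^3) - assumed sample value

nu_t = C_mu * k ** 2 / eps   # turbulent (eddy) kinematic viscosity (m^2/s)
```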


Flow Simulation and Performance Prediction

Two numerical solvers, segregated and coupled,
employ a similar discretization process, but the
approach used for linearizing and solving the
discretized equations is different. The segregated
solver solves the governing equations sequentially,
while the coupled solver solves them
simultaneously [1]. In the present analysis, the
segregated solver was used, since the coupled
solver is usually applied to highly compressible
flows, in which the flow and energy equations are
coupled.

The pressure-velocity coupling methods
recommended for steady-state calculations are
SIMPLE and SIMPLEC [1, 5, 6]. For the present
simulation, the SIMPLEC algorithm was preferred
due to its higher convergence rate. A second-order
upwind scheme was employed to discretize the
momentum, turbulent kinetic energy and turbulent
dissipation rate equations.

A velocity-inlet boundary condition was imposed at
the pump inlet. The velocity was specified to be
normal to the boundary and is defined with respect
to the absolute frame. The turbulence intensity for
all conditions was taken as 1%. An outflow
boundary condition was imposed at the outlet with
a flow rate weighting of 1.

The outer walls were stationary while the inner
walls were rotational, with interfaces between the
stationary and rotating regions. No-slip boundary
conditions were imposed on the impeller blades and
walls, the volute casing and the inlet wall. A
constant angular velocity of 2900 rpm was imposed
for the rotating fluid.
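As a quick sanity check of the rotating-zone setup, the blade speeds implied by the 2900 rpm rotational speed and the stated impeller diameters (124 mm outlet, 52 mm inlet) can be computed directly:

```python
import math

# Peripheral (blade) speeds from the rotational speed and impeller geometry.
rpm = 2900.0
omega = rpm * 2.0 * math.pi / 60.0   # angular velocity (rad/s)

r2 = 0.124 / 2.0   # outer radius (m)
r1 = 0.052 / 2.0   # inner radius (m)

u2 = omega * r2    # peripheral speed at the impeller outlet (m/s)
u1 = omega * r1    # peripheral speed at the impeller inlet (m/s)

print(round(omega, 1), round(u2, 2), round(u1, 3))   # → 303.7 18.83 7.896
```

These are the same peripheral velocities (u2 = 18.83 m/s, u1 = 7.896 m/s) used later for the theoretical head check of Eq. (9).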

RESULTS AND DISCUSSION

Pressure Distribution

The contour plot of static pressure is shown in
Fig. 2. It can be seen from the figure that the static
pressure inside the impeller and volute is
asymmetrically distributed. The maximum static
pressure appears at the volute tongue and outlet
regions, and the minimum at the back of the blade
in the impeller inlet region.

























Figure 2 Contours of static pressure (Pascal)

It can also be observed from the figure that the
pressure increases gradually from the impeller inlet
to the outlet. The static pressure on the pressure
side is evidently larger than that on the suction side
at the same impeller radius. The lowest static
pressure inside the pump (−89,000 Pa) appears on
the suction surface at the impeller inlet, the position
where cavitation often occurs.

The variation of static pressure with flow rate is
also shown in Fig. 3. As the flow rate increases, the
pressure gradually decreases. As can be seen from
Fig. 3, there is an obvious low-pressure area at the
suction side of the blade inlet at small flow rates; as
the flow increases, this area moves toward the
middle of the blade suction side.















Figure 3 Variation of static pressure at different
flow rates




Velocity Distribution

The contour plot of the absolute velocity
distribution is shown in Fig. 4. As shown in the
figure, the velocity increases from the impeller inlet
to the outlet and reaches a peak value of 21.1 m/s at
the impeller outlet. After entering the volute, the
velocity begins to fall, reaching its lowest value at
the outlet region of the volute.















Figure 4 Contours of absolute velocity magnitude
(m/s)

The contour plot of the tangential component of the
absolute velocity is shown in Fig. 5. As expected,
the tangential velocity reaches its peak at the
impeller outlet. It starts to fall after entering the
volute and reaches its lowest value at the outlet
region of the volute.




























Figure 6 Absolute velocity vectors colored by
velocity magnitude

Validation

In order to validate the analysis, the simulated flow
rates at the outer circumference of the impeller are
compared with the analytical formula for the
volume flow rate at the impeller outlet. The volume
flow rate is given by:

Q = 2π r2 b2 C2r                                     (5)

Where:

r2 is the outer radius of the impeller,
b2 is the thickness of the flow passage at the
outer circumference, and
C2r is the radial component of the absolute
velocity at the outer circumference of
the impeller.

The outer radius and the thickness of the flow
passage are 0.062 m and unit thickness,
respectively, for each design flow rate.

The comparison between the design flow rate used
for simulation and the flow rate obtained using the
analytical formula is summarized in Table 1.
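The comparison of Table 1 can be reproduced directly from Eq. (5). The short script below uses the radial velocities listed in Table 1, with r2 = 0.062 m and unit passage thickness as stated in the text:

```python
import math

# Validation check of Eq. (5): Q = 2*pi*r2*b2*C2r.
r2, b2 = 0.062, 1.0

# (design flow rate, C2r from the CFD result), as reported in Table 1
cases = [(0.1, 0.2535), (0.2, 0.5123), (0.3, 0.76511)]

errors = []
for q_design, c2r in cases:
    q_formula = 2.0 * math.pi * r2 * b2 * c2r   # Eq. (5)
    errors.append(abs(q_design - q_formula) / q_design * 100.0)

print([round(e, 2) for e in errors])   # percentage errors, close to Table 1
```

The computed flow rates (0.09875, 0.19957, 0.29805 m³/s) and percentage errors agree with the tabulated values to the precision shown.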

Figure 5 Contours of tangential component of the
absolute velocity (m/s)

Figure 6 shows velocity vectors colored by velocity
magnitude. As can be seen from the figure, the
velocity is highest at the impeller outlet and lower
at the impeller inlet and volute outlet. It can be seen
clearly how the velocity decreases from the volute
inlet to the outlet.











Table 1: Comparison of results of volume flow rate obtained from simulation and
analytical formula.


As can be seen from the above table, the flow rates
computed with the analytical formula agree closely
with the design flow rates, with a percentage error
of less than 1.25%. This supports the accuracy of
both the analytical formula and the CFD tool
FLUENT.

Generally, the variations of velocity and pressure
obtained in this analysis are consistent with
theoretical concepts and experimental values in
pump analysis. Besides, the contour plots are
similar to those obtained by different authors for
similar pump analyses. Shujia and Baolin [3]
obtained similar results in their virtual performance
experiment of a centrifugal pump using CFD.

Performance Curves of Centrifugal Pumps

The design parameters, i.e., head coefficient, flow
coefficient, power coefficient and hydraulic
efficiency, are evaluated from the numerical output
of FLUENT. These parameters are used to compare
the performance characteristics of different pump
models [7]. The equations used to compute the
head, power and efficiency are given below.

The head H is calculated by the following formula:

H = (p_out − p_in) / (ρ g)                           (6)

Where p_out is the total pressure at the pump outlet,
p_in is the total pressure at the pump inlet, ρ is the
density of the liquid, and g is the gravitational
acceleration.

The hydraulic efficiency η_h is calculated as:

η_h = ρ g H Q / (M ω)                                (7)

Where M is the impeller torque and ω is the angular
velocity.

The water power is determined from the
relationship [8]:

P = ρ g H Q                                          (8)

The operating characteristics are plotted after
processing the numerical results from Fluent using
Matlab.
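A minimal sketch of how Eqs. (6)-(8) would be evaluated from solver output is shown below. The pressure, torque and flow values are assumed sample numbers chosen for illustration, not the paper's FLUENT results.

```python
# Performance formulas: H = (p_out - p_in)/(rho*g),
# P = rho*g*H*Q, eta_h = rho*g*H*Q/(M*omega).
rho = 998.2      # water density (kg/m^3)
g = 9.81         # gravitational acceleration (m/s^2)

p_out, p_in = 180000.0, 0.0   # total pressures at outlet/inlet (Pa) - assumed
Q = 0.25                      # volume flow rate (m^3/s)
M = 230.0                     # impeller torque (N*m) - assumed
omega = 303.7                 # angular velocity (rad/s), i.e. 2900 rpm

H = (p_out - p_in) / (rho * g)   # pump head, Eq. (6)
P = rho * g * H * Q              # water (hydraulic) power, Eq. (8)
eta_h = P / (M * omega)          # hydraulic efficiency, Eq. (7)
```

Note that P reduces exactly to (p_out − p_in)·Q, so the head and power computations are consistent by construction; only the efficiency depends on the torque.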

Figure 7 shows the variation of head with flow rate.
Theoretically, the head is expected to decrease with
increasing flow rate for backward-curved blades.
Indeed, it can be seen from Fig. 7 that the head
decreases as the flow rate increases. The profile is
similar to experimental results obtained for similar
pump models by different authors [1, 3].

















Figure 7 Pump head vs flow rate



Table 1 data:

Design flow rate (m3/s) | Radial velocity at impeller outlet, C2r (m/s) | Flow rate from Eq. (5) (m3/s) | Percentage error (%)
0.1 | 0.2535  | 0.09876 | 1.24
0.2 | 0.5123  | 0.19957 | 0.215
0.3 | 0.76511 | 0.29805 | 0.650

(Fitted trendline shown on Fig. 7:
H = −264.8508Q^3 + 38.4572Q^2 − 9.2674Q + 22.2289)

Theoretically, for an ideal case, the head at the
design point is given by:

H_th = (u2 C2u − u1 C1u) / g                         (9)

Where:

u2 and u1 are the peripheral velocities of the rotor
at the outer and inner radius, respectively.
C2u and C1u are the whirl components of the
absolute velocity at the outer and inner radius,
respectively.

At the design point of the pump (the point of
maximum efficiency), which is 0.25 m3/s, the
magnitudes of the velocity components are:

C2u = 9.665 m/s and C1u = 0.988 m/s (obtained
from the simulation results)
u2 = 18.83 m/s and u1 = 7.896 m/s (calculated
from u = ωr)

Table 2 presents a comparison of the theoretical
head and the CFD result at the design point.
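The Euler-head check of Eq. (9) can be reproduced directly from the quoted velocity components:

```python
# Theoretical (Euler) head at the design point, Eq. (9).
g = 9.81
u2, u1 = 18.83, 7.896      # peripheral speeds, u = omega*r (m/s)
C2u, C1u = 9.665, 0.988    # whirl components from the simulation (m/s)

H_th = (u2 * C2u - u1 * C1u) / g         # theoretical head (m)
err = abs(19.05 - H_th) / 19.05 * 100.0  # deviation from the 19.05 m CFD head

print(round(H_th, 2))   # → 17.76, the value in Table 2
```

The resulting deviation of roughly 6.8% matches the percentage error reported in Table 2 to within rounding.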
Figure 9 shows the variation of hydraulic efficiency
with flow rate. As can be seen from the figure, the
point of maximum efficiency is at 0.25 m3/s.















Figure 9 Efficiency Vs. flow rate curve for
centrifugal pump

Table 2: Comparison of the theoretical head and CFD result at design point

From Table 2, it can be seen that the theoretical
head obtained using the analytical formula is close
to the result obtained by simulation, with a
percentage error of only 6.77%. Hence, it can be
said that the CFD tool predicts the pump head well,
and the analytical formula also gives a reasonably
good result.

Figure 8 shows the variation of the fluid power
with discharge. As shown in the figure, the power
increases with flow rate up to a maximum and then
decreases. The output power is maximum at a flow
rate of 0.3 m3/s.












Figure 8 Output power vs. flow rate

The optimum design flow rate for the pump
considered in this study is 0.25 m3/s. The variation
of the pump efficiency at design and off-design
conditions is similar to experimental results for
pump efficiency, where the maximum efficiency is
attained at the design flow rate. Minggao and
Shouqi [2] investigated a six-bladed pump
experimentally and numerically and obtained a
similar profile for the variation of efficiency with
flow rate.

The pump head and efficiency curves are also
shown in Fig. 10. The operating point of the pump
can be selected by combining these curves with the
pump system curve. From the graph it can be
observed that the flow rate and head combination of
0.25 m3/s and 18 m gives the maximum efficiency.
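The 0.25 m³/s and 18 m operating point can be cross-checked by evaluating the fitted trendlines shown on the head (Fig. 7) and efficiency (Fig. 9) plots at the design flow rate:

```python
# Fitted trendlines read off the performance plots.
def head(Q):
    """Pump head in m as a function of flow rate Q in m^3/s (Fig. 7 fit)."""
    return -264.8508*Q**3 + 38.4572*Q**2 - 9.2674*Q + 22.2289

def efficiency(Q):
    """Hydraulic efficiency as a fraction (Fig. 9 fit)."""
    return -6.3591*Q**3 - 8.0537*Q**2 + 5.1204*Q + 0.0245

print(round(head(0.25), 1), round(efficiency(0.25)*100, 1))   # → 18.2 70.2
```

Evaluating the fits at Q = 0.25 m³/s gives a head of about 18.2 m and an efficiency of about 70%, consistent with the quoted operating point.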










Table 2 data:

Design flow rate (m3/s) | Pump head from simulation (m) | Theoretical head from Eq. (9) (m) | Percentage error (%)
0.25 | 19.05 | 17.76 | 6.77

(Fitted trendlines shown on the plots:
Fig. 8: P (kW) = (−2.0925Q^3 + 0.7183Q^2 + 0.1123Q + 0.0045) × 10^3
Fig. 9: η = −6.3591Q^3 − 8.0537Q^2 + 5.1204Q + 0.0245)













Figure 10 Pump head and efficiency vs. flow rate

CONCLUSIONS

In this study, a steady-state CFD analysis of a 2-D
model of a backward-curved, six-bladed centrifugal
pump is carried out. The contour and vector plots of
the pressure and velocity distributions in the flow
passage are displayed. Besides, the operating
characteristics of the pump are computed from the
FLUENT numerical results.

Although specific experimental results are not
available for the pump considered in this study, the
results agree well with most of the available results
obtained by different authors for similar pumps.
From the study it was observed that there is a low-
pressure area at the suction side of the blade inlet at
small flow rates; as the flow increases, this area
moves toward the middle of the blade suction side.
The static pressure also increases markedly in the
diffusion section at the volute outlet at small flow
rates, while at higher flow rates the static pressure
at the same location decreases.

The simulation results for flow rate and head are
also compared with analytical formulae used to
predict the flow rate and theoretical head. There is
a good agreement between the results and this also
shows the accuracy of the analysis.

From the analysis it can be concluded that the flow
pattern of a centrifugal pump can be described
quite well with the multiple reference frame (MRF)
approach and the k-ε turbulence model. Moreover,
analysis of the numerical results from FLUENT can
provide valuable information for pump performance
optimization, which further improves CFD-based
centrifugal pump design.





REFERENCES

[1] Jafarzadeh, B., Hajari, A., Alishahi, M.M.
and Akbari, M.H., The Flow Simulation of a
Low-Specific-Speed High-Speed Centrifugal
Pump, Applied Mathematical Modeling, vol.
35, 2011, pp. 242-249.

[2] Minggao, T., Shouqi, Y., Houlin, L., Yong, W.
and Kai, W., Numerical Research on
Performance Prediction for Centrifugal
Pumps, Chinese Journal of Mechanical
Engineering, Vol. 23, No. 1, 2010, pp. 16.

[3] Shujia, Z., Baolin, Z., Qingbo, H. and
Xianhua, L., Virtual Performance
Experiment of a Centrifugal Pump,
Proceedings of the 16th International
Conference on Artificial Reality and
Telexistence--Workshops (ICAT'O6), 2006.

[4] Fluent Inc., FLUENT 6.3.2 Documentation,
User's Guide, 2006.

[5] Derakhshan, S. and Nourbakhsh, A.,
Theoretical, Numerical and Experimental
Investigation of Centrifugal Pumps in
Reverse Operation, Experimental Thermal
and Fluid Science, Vol. 32, 2008, pp. 1620-1627.

[6] Cheah, K.W., Lee, T. S., Winoto, S. H. and
Zhao, Z.M., Numerical Flow Simulation in
a Centrifugal Pump at Design and Off-
Design Conditions, International Journal
of Rotating Machinery, Hindawi Publishing
Corporation, Article ID 83641, 2007.

[7] Derakhshan, S. and Nourbakhsh, A.,
Experimental Study of Characteristic
Curves of Centrifugal Pumps Working as
Turbines in Different Specific Speeds,
Experimental Thermal and Fluid Science,
Vol. 32, 2008, pp. 800-807.

[8] Thin, K.H., Khaing, M.M. and Aye, K.M.,
Design and Performance Analysis of
Centrifugal Pump, World Academy of
Science, Engineering and Technology,
Vol. 46, 2008, pp. 422-429.
*E-mail: danielkitaw@yahoo.com

SUPPLY CHAIN NETWORK DESIGN-OPPORTUNITIES FOR COST REDUCTION
AS APPLIED TO EAST AFRICA BOTTLING SHARE COMPANY IN ETHIOPIA

Daniel Kitaw*, Temesgen Garoma and Desalegn Hailemariam
Department of Mechanical Engineering
Addis Ababa Institute of Technology, Addis Ababa University

ABSTRACT

A mathematical model capturing many practical
aspects of network design problems and
optimization techniques is proposed. This model is
applied to East Africa Bottling Share Company. By
applying the proposed model, an annual cost saving
of 192,192 Birr may be achieved, and a further
saving of 405,000 Birr per year by alleviating the
vehicle scheduling problem. Moreover, by
reviewing the existing situation of the company, an
annual demand of 366,080 cases, equivalent to
8,785,680 Birr, is demonstrated to be realizable.

This research methodology and outcome may be
applied to other companies intending to emulate
the benefits in SC network design illustrated in this
study.

Keywords: Supply Chain Network Design,
Mathematical Model, Optimization

INTRODUCTION

Supply Chain (SC) is an integrated business model
for logistics management. It covers the flow of
goods from suppliers through manufacturing and
distribution chains to the end customers. Effective
SC is viewed as the driver of reductions in lead
times and costs, and improvements in product
quality and responsiveness. Despite its benefits,
structuring a supply chain network is a complex
decision-making process. The typical inputs to such
a process consist of a set of customer zones to
serve, products to be manufactured and distributed,
demand projections for the different customer
zones, information about future conditions, costs
and resources. Given the above inputs, companies
have to decide where to locate new facilities, how
to allocate resources to the facilities, and how to
manage the transportation of products through the
chain in order to satisfy customer demands. Supply
chain network design is therefore a complex
process that requires proper investigation of the
existing and future situations of the manufacturing
plant.

To verify this fact, a company was identified based
on its need to restructure its supply chain network:
one involving high transportation activity, large
distribution volumes, and high demand. On this
basis, the authors selected East Africa Bottling
Share Company for the case analysis.

The company has two processing plants and five
warehouses in the country. According to the
information obtained from the company, the
demand for its products is increasing by four
percent annually. However, the existing supply
chain network is not able to capture the available
demand in different regions, and the customer
service level is not satisfactory given the increased
demand. As a result, some segments of the market
are experiencing shortages. Currently, the company
has neither well-established means to sense
shortages in the market nor mechanisms to supply
the market as quickly as possible. The objective of this
market as soon as possible. The objective of this
paper is to design a supply chain network model,
which will sense and capture customers demand at
acceptable customer service level with minimum
cost. Here, a mathematical network model is
developed and verified with the solution technique.
This model and the optimization technique would
be helpful to other companies seeking to emulate
similar benefit.

BASICS OF SUPPLY CHAIN MANAGEMENT

As Christopher [1], Suhong et al. [2], and Nicholas
[3] have pointed out, effective supply chain
management (SCM) has become a potentially
valuable way of securing competitive advantage
and improving organizational performance, since
competition is no longer between organizations but
among supply chains. The phrase "Supply Chain
Management" came into use in the early 1990s [4].
The Global Supply Chain Forum defined SCM as
the integration of key business processes from the
end user through original suppliers that provide
products, services, and information that add value
for customers and other stakeholders [5].

The benefits of an effective SC include: cycle time
reduction, inventory cost reduction, optimized
transportation, increased order fill rate, early
prediction of downstream disturbances, increased
customer service, and increased returns on assets [4,
6]. To achieve these benefits, the decisions that are
to be taken should be strategic, tactical, and
operational. The principles of SCM that can ensure
the above benefits are: customer segmentation,


customizing SC networks, demand planning,
sourcing suppliers strategically, integration of
technology, and performance measurement [7, 8]. The
supply network must be optimized and must react to
supply uncertainties and demand variability to
serve customer demand [9].

In general, different authors agree that SCM
involves integrating three key flows across the
boundaries of the supply chain: product/material,
information, and financial flows [1, 4, 10, 11].
Fig. 1 shows a simplified supply chain
management system.







Figure 1 Simplified supply chain diagram

Successful integration of the three flows has
produced improved efficiency and effectiveness.
Hence, the initial step in implementing and
practicing SCM system is supply chain network
design. Therefore, the study is mainly focused on
SC network design.

Supply Chain Network Design

Many organizations today are forced to increase
their global market share in order to survive and
sustain growth. At the same time, organizations
must defend their domestic market share from
international competitors. The challenge is how to
expand the global logistics and distribution
networks in order to ship products to customers
who demand them through a dynamic and rapidly
changing set of channels. Strategic positioning of
inventories is essential, so that products are
available when the customer wants them.

Long-term competitiveness, therefore, depends on
how well the company meets customer preferences
in terms of service, cost, quality, and flexibility by
designing a SC that is more effective and efficient
than the competitors'. Optimizing this equilibrium
is a constant challenge for the companies that are
part of the supply chain network.

To be able to optimize this equilibrium, many
strategic decisions must be taken and many
activities need to be coordinated. This requires
careful management and design of the supply chain
in the first instance, followed by well-thought-out
execution.

The design of supply chains represents a distinct
means by which companies innovate, differentiate,
and create value [10, 12]. The challenge here is in
the capability to design and assemble assets,
organizations, skills, and competences. Depending
on the complexity of the supply network, three chain
categories can be defined [10, 12, 13]:

1. Direct supply chain: consists of a company, a
supplier, and a customer.

2. Extended supply chain: includes suppliers of
the immediate supplier, as well as customers of
the immediate customer.

3. Ultimate supply chain: includes all the
organizations involved in all the upstream and
downstream flows.

At the highest level, performance of a distribution
network should be evaluated along two dimensions:
customer needs that are met, and cost of meeting
customer needs [10, 12].

Thus, a firm must evaluate the impact on customer
service and cost as it compares different
distribution network options. Customer services
that are influenced by the structure of network
include response time, product variety and
availability, customer experience, order visibility,
and return-ability [10]. For this purpose three
distinct outbound distribution strategies are used:
direct shipment, warehousing, and cross-docking
[14].

After examining the basic concepts and principles,
SC network design approaches and input data
required for the modeling technique are presented.

Supply Chain Modeling


Numerous modeling approaches in SCM have been
proposed so far. These include supply chain
network design [6, 13, 15], mixed integer
programming [7, 15, 16, 17], stochastic programming
[14, 16], heuristic methods, and simulation-based
methods [4, 16]. In this study, special focus is given
to SC network design in which optimization
techniques are used, because this approach considers
the structure of the network and also incorporates
other optimization models to reach final
decisions.

(Figure 1 elements: Supplier, Manufacturer, Customer;
flows of material/product, information, and finance/funds)


Optimization Techniques of Supply Chain
Network

Network configuration may involve issues relating
to plant, warehouse, transportation, and retailer
location. These are strategic decisions, since they
have a long-lasting effect on the firm. To come up
with a better network design, the appropriate
number of warehouses, the location of each
warehouse, and the size and capacity of each
warehouse have to be identified and determined.

The objective is to design the SC network so as to
minimize annual system-wide costs while meeting
service level requirements, and thereby increase
market share. Increasing the number of warehouses
typically yields: an improvement in service level, an
increase in inventory costs, an increase in overhead
and set-up costs, a reduction in outbound
transportation costs, and an increase in inbound
transportation costs. In this setting, the tradeoffs are
clear. In essence, the firm must balance the costs of
opening new warehouses with the advantages of
being close to the customer. Thus, warehouse
location decisions are crucial determinants of the
efficiency of product distribution. The design
approach therefore requires the following three
major activities to produce a good optimized result
[10]:

1. Data collection and aggregation regarding
transportation rates, mileage estimation,
warehouse costs, and service level
requirements.

2. Modeling.

3. Use of solution techniques.
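The warehouse-count tradeoff described above can be illustrated with a toy calculation. All cost functions and coefficients below are made-up illustrative numbers, not company data; the point is only that outbound transport cost falls while fixed, inventory and inbound costs rise as warehouses are added, producing an interior optimum.

```python
# Toy illustration of the warehouse-count tradeoff (made-up cost functions).
def total_cost(n_warehouses):
    fixed = 120000 * n_warehouses            # overhead and set-up costs
    inventory = 45000 * n_warehouses ** 0.7  # safety stock grows sublinearly
    inbound = 15000 * n_warehouses           # more plant-to-warehouse legs
    outbound = 900000 / n_warehouses         # shorter last-mile distances
    return fixed + inventory + inbound + outbound

costs = {n: round(total_cost(n)) for n in range(1, 9)}
best = min(costs, key=costs.get)   # warehouse count with the lowest total cost
```

With these assumed coefficients the minimum total cost is reached at two warehouses; changing the coefficients shifts the optimum, which is exactly why the data collection step above matters.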

A general Mathematical Model (MM) of the
distribution network design is presented in Eq. (1).
The total cost function is the minimum of the sum
of the fixed plant and warehouse costs and the
transportation costs of raw material supply and
finished goods distribution. This model can
simultaneously identify the locations of multiple
plants and warehouses for a company. The
constraints on the objective function are given in
Eqs. (2) to (9).









Objective function:

Total cost = Min [ Σ_{i=1..n} F_i Y_i + Σ_{e=1..t} F_e Y_e
               + Σ_{h=1..l} Σ_{i=1..n} C_hi X_hi
               + Σ_{i=1..n} Σ_{e=1..t} C_ie X_ie
               + Σ_{e=1..t} Σ_{j=1..m} C_ej X_ej ]          (1)

Subject to the constraints:

The total amount shipped from a supplier cannot
exceed the supplier's capacity:

Σ_{i=1..n} X_hi ≤ S_h        for h = 1, 2, …, l      (2)

The amount shipped out of a factory cannot exceed
the quantity of raw material received:

Σ_{h=1..l} X_hi − Σ_{e=1..t} X_ie ≥ 0
                             for i = 1, 2, …, n      (3)

The units produced in a factory cannot exceed the
factory capacity:

Σ_{e=1..t} X_ie ≤ K_i Y_i    for i = 1, 2, …, n      (4)

The amount shipped out of a warehouse cannot
exceed the quantity received from the factories:

Σ_{i=1..n} X_ie − Σ_{j=1..m} X_ej ≥ 0
                             for e = 1, 2, …, t      (5)

The amount shipped through a warehouse cannot
exceed its capacity:

Σ_{j=1..m} X_ej ≤ W_e Y_e    for e = 1, 2, …, t      (6)

The amount shipped to a customer must equal the
customer demand:

Σ_{e=1..t} X_ej = D_j        for j = 1, 2, …, m      (7)

Each factory or warehouse is either open or closed:

Y_i, Y_e ∈ {0, 1}                                    (8)

X_hi, X_ie, X_ej ≥ 0                                 (9)



Where:

m = number of markets or demand points
n = number of potential factory locations
l = number of suppliers
t = number of potential warehouse locations
D_j = annual demand from customer j
K_i = potential capacity of the factory at site i
S_h = supply capacity at supplier h
W_e = potential warehouse capacity at site e
F_i = fixed cost of locating a plant at site i
F_e = fixed cost of locating a warehouse at site e
C_hi = cost of shipping one unit from supply source
h to factory i
C_ie = cost of shipping one unit from factory i to
warehouse e
C_ej = cost of shipping one unit from warehouse e to
customer j
Y_i = 1 if a factory is located at site i, 0 otherwise
Y_e = 1 if a warehouse is located at site e, 0 otherwise
X_ej = quantity transported from warehouse e to
market j
X_ie = quantity transported from factory i to
warehouse e
X_hi = quantity shipped from supplier h to the
factory at site i.
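A tiny, self-contained sketch of the network-design model of Eqs. (1)-(9) is shown below, solved by brute force. All numbers are illustrative, not East Africa Bottling data. To keep it short, one factory is assumed open and uncapacitated, so the decision reduces to which of three candidate warehouses to open, with each customer served entirely by its cheapest open warehouse (an uncapacitated simplification of constraints (5)-(7)).

```python
from itertools import product as cartesian

F_e = [100, 140, 80]                 # fixed cost of opening warehouse e - assumed
C_ej = [[4, 6, 9],                   # unit shipping cost, warehouse e to
        [5, 4, 7],                   # customer j - assumed
        [9, 8, 3]]
D_j = [30, 20, 40]                   # customer demands - assumed

best_cost, best_open = None, None
for Y in cartesian([0, 1], repeat=3):        # constraint (8): open/closed
    if not any(Y):
        continue                             # at least one warehouse must be open
    cost = sum(F_e[e] * Y[e] for e in range(3))      # fixed costs of Eq. (1)
    for j, d in enumerate(D_j):              # constraint (7): meet each demand
        cost += d * min(C_ej[e][j] for e in range(3) if Y[e])
    if best_cost is None or cost < best_cost:
        best_cost, best_open = cost, Y

print(best_open, best_cost)
```

For these numbers the optimum opens warehouses 1 and 3 at a total cost of 540, illustrating the fixed-cost versus transport-cost tradeoff; for realistic problem sizes an exact MILP solver would replace the enumeration, as discussed next.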

Once the model has been developed based on the
network configuration, the next step is to optimize
the configuration of the logistics network. In
practice, mathematical optimization techniques,
including exact algorithms that are guaranteed to
find optimal solutions, are used. A case study of
East Africa Bottling Share Company has been
undertaken to investigate the model developed for
the supply chain network design. In the study,
extensive data collection and analysis have been
carried out.

CASE STUDY: EAST AFRICA BOTTLING
SHARE COMPANY IN ETHIOPIA

Company's Background

The first Coca-Cola bottler in Ethiopia was
established in 1959 as the Ethiopian Bottling Share
Company in Addis Ababa. As the business
expanded, a branch was set up in Dire Dawa in
1965. After ten years of operation, the two plants
were nationalized in 1975, and for over two decades
they operated as a public company, until 1996. With
the introduction of the privatization program of the
Federal Democratic Republic of Ethiopia, South
Africa Bottling Share Company and Ethiopian
Bottling Share Company signed a joint venture
agreement on May 19, 1999. Finally, the company
became East Africa Bottling Share Company in 2003.

In the current business operations, some segments
of the market for the company's products
experience shortages. A shortage may be due either
to a supply shortfall from the company or to lack of
supply to a specific market segment while excess
supply is experienced in others. This means the
company is losing sales, because the customer may
cancel the order or shift to some other brand.
However, the company does not have well-
established means to sense shortages in the market,
nor mechanisms to supply the market accordingly.
Hence, a new supply chain network design will be
developed and evaluated with the MM for the
company.

Existing Supply Chain Network Diagram of the
Company

To clearly portray how the MM of SC network
design works, it is important to thoroughly
scrutinize the existing SC structure of the company.
The company directly supplies only a few
geographic regions (seven regional towns and
Addis Ababa), whereas the other regions are
supplied by agents whose numbers vary. A
simplified schematic diagram of the SC for the
company's existing operation is given in Fig. 2.




















































Key: USA = United States of America, AA = Addis Ababa, DD = Dire Dawa, MDC = Manual
Distribution Centers

Figure 2 Existing supply chain network of the case company
Data Collection and Aggregation

Raw material type and price analysis

The raw materials utilized for the production of the
company's product mix include concentrate, sugar,
bottles, crowns, carbon dioxide (CO2) and caustic
soda. Table 1 depicts the sources of the raw
materials.

Table 1: Raw material type, sources and their
respective prices (Source: East Africa
Bottling Share Company)


Production capacity of the company

The two plants of the company, located in Addis
Ababa and Dire Dawa, operate at about 80 percent
of maximum production capacity, as may be
observed in Tables 2 and 3. Therefore, for all
practical purposes a production capacity of 80
percent will be used. Table 2 shows the maximum
capacities of the two plants.

Table 2: Maximum production capacity of the
company per year (Source: East Africa
Bottling Share Company)

Warehouses and their supply regions

There are five warehouses that receive deliveries
directly from the manufacturing plant at Addis
Ababa. Below are the warehouses and their supply
regions.

Addis Ababa market is supplied by 265 Manual
Distribution Centers (MDCs).

No.  Raw Material   Source
1    Concentrate    USA
2    Sugar          Wonji/Metehara
3    Bottle         AA
4    Crown          AA
5    CO2            In-house production
6    Caustic Soda   Ziway

Plant         Capacity in cases per year
              Line 1    Line 2    Line 3      Total
Addis Ababa   660,000   816,000   1,440,000   2,916,000
DD Plant      468,000   -         -           468,000
Grand Total                                   3,384,000
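The 80 percent utilization stated above can be checked against the Table 2 maxima; the sketch below is illustrative only, with the dictionary keys used simply as plant labels.

```python
# Effective annual plant capacities at the assumed 80 % utilization,
# computed from the Table 2 maximum capacities (cases per year).
UTILIZATION = 0.8
max_capacity = {"AA": 2_916_000, "DD": 468_000}
effective = {plant: round(cap * UTILIZATION) for plant, cap in max_capacity.items()}
print(effective)  # {'AA': 2332800, 'DD': 374400}
```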
[Figure 2: suppliers (USA, Wonji, Metehara, Ziway) feed the AA and DD plants, which ship to the warehouses (AA, Adama, Shashemene, Bahir Dar, Jimma, Dessie, DD, Awash) and on through MDCs and retailers to customers.]
Daniel Kitaw, Temesgen Garoma and Desalegn Hailemariam
72 Journal of EEA, Vol. 28, 2011



Awash is mainly supplied by the plant at Dire Dawa.

Adama warehouse covers the Adama, Mojjo, Ziway, Arsi, and Wolenchiti areas.

Shashemene warehouse covers the demand regions of Hawassa, Bale, and Arsi-Negelle.

Dessie warehouse supplies areas within a radius of 20 km, excluding Kombolcha.

Bahir Dar warehouse covers regions up to Chagni and Addis-Zemen.

Jimma warehouse supplies Jimma-Agaro and Jimma-Sekoro.

The remaining geographic regions are supplied by agents who have a franchise from the company. Other basic data are presented later.

Basic Data Presentation and Related Costs

Described below are some of the issues related to
data collection and the calculation of costs required
for the optimization models.

(i) Annual demand at warehouses

In Addis Ababa, each retailer is supplied by a nearby manual distribution center (MDC), so demand can be assumed to be concentrated at the MDC locations. The MDCs can be further aggregated based on the total distance required to serve a specific market segment, which is determined by the customer service level set by the company: 12 hours a day.

In regions where there are warehouses, demand is taken to be fixed at the warehouse location. In fact, there are places that can be supplied from multiple warehouses. As there is no separate warehouse in Addis Ababa, the company directly ships and sells its products to agents at MDCs. In such cases, multiple MDCs are grouped based on their geographic proximity to represent demand at a specific location, and all MDCs in Addis Ababa are summed together to represent a single warehouse. As a result, there are demand locations in seven towns in addition to Addis Ababa. The average number of cases shipped annually to these destinations is given in Table 3.








Table 3: Annual demands at depots (warehouses) (Source: East Africa Bottling Share Company, sales report, 2009)


(ii) Transportation Rates

The cost of transporting products from a specific source to a specific destination is a function of the distance between the two points. The warehouses at AA and DD are integrated within the plants. The cost per case of soft drink per km can be calculated in two ways: first, assuming that third-party vehicles are rented, and second, using the company's own transport system. Considering the relevant carrier and operational costs, the average transportation cost is found to be 0.06 Birr per case per kilometer for a round trip.

A 4-pallet truck has a capacity of 300 cases, so a single pallet holds 300/4 = 75 cases. The capacities of other trucks can therefore be calculated by multiplying their pallet capacity by 75. The summary for all cases is presented in Tables 4-6.
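The two rules just stated, capacity = pallets x 75 cases and per-case cost = 0.06 Birr/km x distance, can be sketched as follows; the example values are taken from Tables 4 and 5.

```python
# Per-pallet capacity follows from the 4-pallet / 300-case truck; the per-case
# transport cost is the 0.06 Birr/case/km round-trip rate times distance.
RATE_BIRR_PER_CASE_KM = 0.06
CASES_PER_PALLET = 300 // 4  # 75 cases per pallet

def truck_capacity(pallets: int) -> int:
    return pallets * CASES_PER_PALLET

def cost_per_case(distance_km: float) -> float:
    return RATE_BIRR_PER_CASE_KM * distance_km

print(truck_capacity(8))                # 600 cases for an 8-pallet truck
print(round(cost_per_case(560), 2))     # 33.6 Birr/case, AA plant to Bahir Dar
```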



















Warehouse     Demand (cases/year)
AA            1,284,800
Bahir Dar     183,040
Dessie        183,040
Jimma         183,040
Adama         274,560
Shashemene    274,560
Awash         91,520
DD            183,040
Total         2,657,600







Table 4: Warehouses and their distances from the plants in kilometers (Source: East Africa Bottling Share Company)

Table 5: Types of trucks and their capacity (Source: East Africa Bottling Share Company)









The company uses vendor-managed inventory, and agents must fulfill minimum criteria to qualify for it. Agent-owned trucks and their capacities are given in Table 6.

Table 6: Types of third-party trucks and their capacity (Source: East Africa Bottling Share Company)

No. Type Location
AA WH Adama Shash Mekelle Jimma DD De BD Total
1 4 Pallet 16 - - - - - 1 1 1 18
2 6 Pallet - - - - 2 - - - - 11
3 8 Pallet 13 - 5 4 7 2 2 - - 4
4 Hauler Trailer - 16 - - - - 9 - 24
Key: AA = Addis Ababa, WH = Warehouse, Shash = Shashemene, DD = Dire Dawa, De = Dessie, BD = Bahir Dar
(iii) Potential Warehouse Locations

This factor is considered in order to use the excess production of 388,000 cases. Potential warehouse locations are identified based on factors such as potential markets, weather conditions, and population. Based on these considerations and discussions with the company's experts, Mekelle and Gonder were identified as warehouse locations in addition to the existing ones. The major reasons for establishing warehouses in these towns are:

a. demand is high in the two towns;
b. MDCs can easily be established;
c. agents can be cultivated in nearby towns; and
d. a competitor (Pepsi Cola) is present in Gonder town.


(iv) Warehouse Capacities

The capacity of a warehouse can be calculated by taking into consideration the total physical size of the demand units, together with allowances for storing, retrieving, and record keeping. Generally speaking, the capacity of a warehouse refers to the average amount of demand the warehouse serves. In this particular case, the warehouse capacity at each location is taken as the peak weekly demand derived from Table 3.

(v) Warehouse Costs

Of the related warehouse costs, only the fixed cost needs to be determined, because (1) it is this cost that differs widely from place to place, and (2) it is incurred regardless of the amount of material stored. The fixed warehouse costs at both Addis Ababa and Dire Dawa are integrated with the main plants. The fixed costs of warehouses at the other locations are given in Table 7.
From To
Warehouse Locations
AA Bahir Dar Dessie Jimma Adama Shashemene Awash DD
AA plant 0 560 400 335 100 250 240 515
DD plant 515 1075 915 830 415 695 275 0
S.no. Type Number of trucks Capacity in cases
AA DD Total AA DD
1. 4 Pallet truck 15 3 18 4500 900
2. 6 Pallet truck 2 9 11 900 4050
3. 8 Pallet truck 2 2 4 1200 1200
4. 10 Pallet truck 1 0 1 750 0
5. Hauler Trailer(22 pallet) 15 9 24 33000 19800
Total 40350 25950



Table 7: Annual warehouse fixed costs (Source: East Africa Bottling Share Company, financial report, 2009)

Warehouse     Fixed cost (Birr/year)
Bahir Dar     120,000
Dessie        100,000
Jimma         120,000
Adama         170,000
Shashemene    120,000
Awash         100,000

(vi) Service Level Requirements

Though not explicitly stated, the company aspires to meet demand within 12 hours. Assume that order processing, loading/unloading and waiting together take a total of 6 hours. The remaining 6 hours can be devoted to travel, which corresponds to a service radius of about 180 km at a loaded-truck speed of 30 km/h.
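The service-radius arithmetic above can be sketched as follows; the 6-hour overhead and the 30 km/h loaded-truck speed are the assumptions stated in the text.

```python
# Service radius: of the 12-hour service window, an assumed 6 hours go to
# order processing, loading/unloading and waiting; the rest is travel time
# at an assumed loaded-truck speed of 30 km/h.
SERVICE_WINDOW_HRS = 12
OVERHEAD_HRS = 6
TRUCK_SPEED_KMH = 30

service_radius_km = (SERVICE_WINDOW_HRS - OVERHEAD_HRS) * TRUCK_SPEED_KMH
print(service_radius_km)  # 180
```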

MODEL FORMULATION AND DATA
VALIDATION

Model formulation and data validation are typically done by reconstructing the existing network configuration from the collected data and comparing the output of the model with existing data. Here, to validate the model, the existing distribution costs to warehouses are calculated analytically and compared against the Excel optimization model. The cost of transport per km per case is 0.06 Birr, and the distances between the plants and warehouses are given in Table 4. Accordingly, the transportation costs per case are as depicted in Table 8.

Table 8: Average transportation cost in Birr/case between plants and warehouse locations (Source: East Africa Bottling Share Company, financial report, 2009)

To            From AA plant   From DD plant
AA            0               30.9
BD            33.6            64.5
Dessie        24              54.9
Jimma         18.9            49.8
Adama         6               24.9
Shashemene    15              41.7
Awash         14.4            16.5
DD            30.9            0

Based on the available data, the total transportation cost of the existing network can be calculated analytically and then compared with the result of the optimization technique. Finally, optimization with a renewed setting will be carried out.

(a) Analytical approach

Table 9 shows the resulting transportation costs. Analytically, the total annual transportation cost from plants to warehouses computed using Eq. 1 is 21,278,400 Birr. The actual transportation cost reported by the plants is almost equal to this value.
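The analytical figure can be reproduced by summing demand times per-case cost over the warehouses, with Awash served from Dire Dawa as in the existing assignment; this is a minimal sketch using the Table 3 demands and Table 8 rates.

```python
# Annual transportation cost of the existing assignment: each warehouse's
# annual demand (Table 3) times its per-case cost (Table 8), with Awash
# served from the Dire Dawa plant and every other warehouse from its
# nearer plant.
demand = {"AA": 1_284_800, "Bahir Dar": 183_040, "Dessie": 183_040,
          "Jimma": 183_040, "Adama": 274_560, "Shashemene": 274_560,
          "Awash": 91_520, "DD": 183_040}
cost_per_case = {"AA": 0.0, "Bahir Dar": 33.6, "Dessie": 24.0, "Jimma": 18.9,
                 "Adama": 6.0, "Shashemene": 15.0, "Awash": 16.5, "DD": 0.0}

total = sum(demand[w] * cost_per_case[w] for w in demand)
print(round(total))  # 21278400, matching the analytical figure in the text
```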












































Table 9: Annual Costs based on transportation cost in Birr between plants and warehouses


(b) Using Solution Techniques to Optimize
Distribution Costs

A general form of the mathematical model for the distribution network is given in Eq. 1, and the related constraints are given in Eqs. 2 to 9.

For the present case, the constraints indicated in Eqs. 2 and 3 are omitted, since there is only one source for each category of raw material consumed. The optimization formula can therefore be modified to suit the company, i.e.

Total cost = Min [ Σ(i=1..n) F_i Y_i  +  Σ(e=1..t) F_e Y_e  +  Σ(i=1..n) Σ(e=1..t) C_ie X_ie  +  Σ(e=1..t) Σ(j=1..m) C_ej X_ej ]          (10)
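To make the structure of this objective concrete, the toy sketch below evaluates it for a hypothetical configuration, reading the four terms as fixed costs of two facility tiers plus two tiers of shipping costs (F: fixed costs, Y: 0/1 open indicators, C: unit shipping costs, X: shipped quantities). All numbers are illustrative, not the company's data.

```python
# Hypothetical evaluation of the Eq. 10 objective: fixed costs of open
# facilities plus tier-1 and tier-2 shipping costs. Toy data only.
def total_cost(F_i, Y_i, F_e, Y_e, C_ie, X_ie, C_ej, X_ej):
    fixed = sum(f * y for f, y in zip(F_i, Y_i)) + sum(f * y for f, y in zip(F_e, Y_e))
    ship_1 = sum(C_ie[i][e] * X_ie[i][e]
                 for i in range(len(C_ie)) for e in range(len(C_ie[i])))
    ship_2 = sum(C_ej[e][j] * X_ej[e][j]
                 for e in range(len(C_ej)) for j in range(len(C_ej[e])))
    return fixed + ship_1 + ship_2

# Two facilities in tier 1, one in tier 2, one demand point (toy numbers).
cost = total_cost(F_i=[100, 200], Y_i=[1, 1], F_e=[50], Y_e=[1],
                  C_ie=[[2], [3]], X_ie=[[10], [0]],
                  C_ej=[[1]], X_ej=[[10]])
print(cost)  # 100 + 200 + 50 + 2*10 + 1*10 = 380
```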

To see the benefit of supply chain network design for the company, two sets of optimization are considered:

1. Optimization based on the existing set of operations of the company: in this case, the existing plant and warehouse locations are fixed.

2. Optimization with a renewed setting: in this option, all warehouse locations are allowed to change, and optimization techniques are used to arrive at a minimum-cost scenario.

Optimization Based on Existing Set of
Operation

Based on the existing network structure, the plant at Addis Ababa supplies all the warehouse locations except Dire Dawa, which is supplied by the Dire Dawa plant. In the optimization approach, the built-in MS-Excel tool SOLVER is utilized. In this scenario, the total annual cost is found to be 21,086,208 Birr, an annual saving of 192,192 Birr over the cost of 21,278,400 Birr obtained with the analytical method. All demand is met, and all warehouses supply demand within their proximity. The excess transportation capacities, 149,840 cases from Addis Ababa and 248,960 cases from Dire Dawa, are used to serve third-party distributors or agents who take shipments directly from the plants. The detailed analysis is shown in Appendix 1.
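The Solver computation described above can be reproduced with any LP solver. Below is a minimal sketch of the plant-to-warehouse transportation subproblem using SciPy's `linprog`; the costs are the Table 8 rates, the demands are the Table 9 figures, and the plant capacities are assumed to be the Table 2 maxima (an assumption, since the text elsewhere works with 80 percent utilization).

```python
import numpy as np
from scipy.optimize import linprog

# Unit costs (Birr/case) from Table 8: rows = [AA plant, DD plant],
# columns = the eight warehouse locations.
cost = np.array([
    [0.0, 33.6, 24.0, 18.9, 6.0, 15.0, 14.4, 30.9],   # from AA plant
    [30.9, 64.5, 54.9, 49.8, 24.9, 41.7, 16.5, 0.0],  # from DD plant
])
demand = np.array([1_284_800, 183_040, 183_040, 183_040,
                   274_560, 274_560, 91_520, 183_040])
capacity = np.array([2_916_000, 468_000])  # assumed Table 2 maxima

n_plants, n_wh = cost.shape
# Demand must be met exactly at each warehouse ...
A_eq = np.zeros((n_wh, n_plants * n_wh))
for j in range(n_wh):
    A_eq[j, j::n_wh] = 1.0
# ... and no plant may ship more than its capacity.
A_ub = np.zeros((n_plants, n_plants * n_wh))
for i in range(n_plants):
    A_ub[i, i * n_wh:(i + 1) * n_wh] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=capacity,
              A_eq=A_eq, b_eq=demand, bounds=(0, None), method="highs")
print(round(res.fun))  # 21086208, the Solver figure quoted in the text
```

Under these assumptions the optimum assigns every warehouse except Dire Dawa to the Addis Ababa plant, reproducing the 192,192 Birr saving over the analytical baseline.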

Optimization with Renewed Setting

The basic problem with agents is that they are unlikely to travel long distances to collect shipments. For instance, it is difficult to find distributors and MDCs in towns located far from the plants. The total cost they incur, coupled with their limited capacity to satisfy the market, hampers their performance. Moreover, any opportunity the company forgoes is taken up by competitors right away. It is therefore better for the company to reach as many markets as possible. Accordingly, warehouses at Mekelle and Gondar are identified as potential sites in addition to the existing ones. Other places in the country have relatively level topography and are near the AA and DD plants, so agents can easily be found there. After the potential sites are proposed, the optimal solution considering all potential market locations is formulated in MS-Excel Solver. The problem formulation and results are presented in Appendix 2.

In the renewed network optimization, the Dire Dawa plant, which previously supplied only Dire Dawa and its surroundings, now supplies Dire Dawa, Awash and half of Adama. In doing so, the company increases its responsiveness by fully utilizing its capacity. In this scenario, an additional market demand of 366,080 cases, equivalent to 8,785,680 Birr, is captured, and at the same time all demands are

To              AA         BD         Dessie     Jimma      Adama      Shashemene  Awash      DD
From AA plant   0          33.6       24         18.9       6          15          14.4       30.9
From DD plant   30.9       64.5       54.9       49.8       24.9       41.7        16.5       0
Demand (cases)  1,284,800  183,040    183,040    183,040    274,560    274,560     91,520     183,040
Cost (Birr)     0          6,150,144  4,392,960  3,459,456  1,647,360  4,118,400   1,510,080  0


met. The final SC network design is as given in Figs. 3 and 4.



















































Figure 3 Renewed supply chain network design of the case company

Figure 4 Renewed warehouse locations on the map of Ethiopia

Proposed Distribution Centers for the Company
in Addis Ababa

The capital city, Addis Ababa, and the surrounding area have a population of about 5 million people. As this is the major trading area of the country, it is essential to locate distribution centers in and around the city. Accordingly, demand for the company's product within the AA region is aggregated into clusters based on their relative distance from the plant. All available MDCs are included in the aggregation. The location of each demand cluster is determined from its center point and demand areas (Table 10).

[Figure 3: the renewed network adds Gonder and Mekelle to the warehouse tier; suppliers (USA, Wonji, Metehara, Ziway) feed the AA and DD plants, which ship to the warehouses (AA, Adama, Shashemene, Bahir Dar, Jimma, Dessie, DD, Awash, Gonder, Mekelle) and on through MDCs and retailers to customers.]


Table 10: Aggregated demands in Addis Ababa city


To simplify the SC operation, warehouses should be established away from the plant. For the company, the objectives are to relieve the queue at the AA plant, to respond to demand faster with a higher customer service level, to simplify information processing between the MDCs and the company, and to use different sets of trucks for different purposes.

There is an excess number of trailers at the Addis Ababa plant. With proper scheduling of shipments, cost savings and better resource utilization can be assured. Table 11 shows the proposed schedule for shipments to warehouses outside the city; only 12 trailers are required for this distribution, and the remainder can be used for distribution within the city.





















Number of Warehouses to be Established
To determine the number of warehouses to be established and find the right locations, the waiting time to load shipments from the plant to the MDCs and the lost market demand have to be considered. To respond to demand at any MDC, a vehicle spends an average of one hour in the queue and another hour collecting, inspecting and loading empty bottles. Moreover, an average of one hour is lost travelling to the destination, so a total of about three hours is spent per trip on average. An average of 1,284,800/365, or 3,520, cases has to leave the plant per day. As the company uses trucks with a capacity of 300 cases for Addis Ababa shipments, a total of 18 trucks have to wait to get their shipments done. This means 18 hours are wasted per day in the queue alone, which is equivalent to 6,570 hours per year. If we assume that























Center Point    Demand   Demand Areas
Sidist Kilo     187,200  Arat Kilo, Shiromeda, Ferensay, Bela, and Menelik
Abune Petros    171,600  Piazza, Merkato, Teklehaymanot, Semen Mazagaja and adjoining area
Ayer Tena       47,000   Ayer Tena and Alemgena
Bole            94,600   Bole and adjoining area
Gulele          47,000   Yohannes, Paster, Asko, Gulele
Kazanchis       45,700   St. Urael, Aware, Kebena
Kera            44,000   Kera, Gotera, Mekanisa
Kolfe           45,000   Kolfe area
Megenagna       182,700  22 Mazoria, Megenagna, and Kotebe
Meskel Square   134,500  Meskel Square, Ambassador, Lancha
Mexico          150,500  Mexico and adjoining area
Nefassilk       135,000  Nefassilk, Saris, Kaliti, Akaki
Table 11: Vehicle scheduling for out-of-AA shipments from the plant at Addis Ababa

Place        Distance (km)   Shipment days per week   Max no. of trucks required
Nazareth     100             3                        2
Awash        240             -                        -
Shashemene   250             3                        2
Jimma        335             2                        2
Dessie       400             2                        2
DD           515             -                        -
Bahir Dar    560             2                        2
Gonder       738             2                        2



the cost of one vehicle-hour is 60 Birr on average, a total of about 405,000 Birr is lost in queuing per year. This cost is enough to open a warehouse at another location within the city. In addition to the existing plant warehouse in Addis Ababa, it would therefore be advisable to establish one warehouse at Kazanchis with a capacity of 1,770 cases. This warehouse would supply the areas included under Kazanchis, Sidist Kilo, Megenagna, Bole and Nefassilk. The initial shipment is made directly from the plant to the warehouse at Kazanchis by a truck with a capacity of 1,760 cases per day, and the shipment is then further distributed by trucks of 300-case capacity. After assigning the 12 hauler trailers of 1,760-case capacity to distribution outside the city, the company is left with other trucks with a combined capacity of 12,630 cases per day.
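The queueing arithmetic above can be sketched as follows; the 18 waiting trucks and the one queue-hour per truck per day are the figures stated in the text.

```python
# Daily case volume implied by the Addis Ababa annual demand, and the annual
# queue time assuming 18 trucks each lose one hour per day in the queue.
AA_ANNUAL_DEMAND = 1_284_800
TRUCKS_WAITING = 18

cases_per_day = AA_ANNUAL_DEMAND / 365
queue_hours_per_year = TRUCKS_WAITING * 1 * 365  # one hour per truck per day

print(cases_per_day)          # 3520.0
print(queue_hours_per_year)   # 6570
```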

CONCLUSIONS

In this paper, an attempt has been made to model the SC network design and to justify it with a case analysis. The following are the results of the case analysis for East Africa Bottling Share Company:

1. Three new warehouses should be established at
Mekelle, Gonder, and Kazanchis.

2. The Plant at Addis Ababa, supplies
warehouses at Kazanchis, Bahir Dar, Dessie,
Shashemene, Jimma, Gonder, Mekelle,
Adama, and part of Awash.

3. The Plant at Dire Dawa supplies to Dire Dawa,
Awash and some part of Adama.

Furthermore, several interesting results are observed. First, by applying network-model optimization techniques, an annual cost saving of 192,192 Birr can be achieved. Second, by renewing the company's existing network design, an additional market share of 366,080 cases per year, equivalent to 8,785,680 Birr, can be realized. Moreover, by analyzing and revising the vehicle scheduling, a cost saving of 405,000 Birr per year can be achieved.

To sum up, SC network design offers a high economic benefit to a manufacturing company. The MM developed and the optimization techniques employed here provide good grounds for any similar bottling company that needs to design an appropriate SC network and thereby reduce its costs. The model can easily be adapted to other manufacturing plants based on their existing operating situations.

REFERENCES

[1] Christopher, M., Logistics and Supply Chain Management: Strategies for Reducing Cost and Improving Services, Pitman, 1992.

[2] Suhong, Li., Nathan, B., Nathan, T.S., and Rao, S., "The Impact of Supply Chain Management Practices on Competitive Advantage and Organizational Performance", The International Journal of Management Science, Omega, vol. 34, 2006, pp. 107-124.

[3] Nicholas, M., Competitive Manufacturing Management, McGraw-Hill, 2005.

[4] Change, Y. and Harris, M., "Supply Chain Modeling Using Simulation", Institute for Manufacturing, University of Cambridge, vol. 2, 2009.

[5] Peter, T. and Ale, G., "Measurement of Supply Chain Integration Benefits", Interdisciplinary Journal of Information, Knowledge, and Management, vol. 1, 2006.

[6] Gunasekaran, A. and Ngai, E., "Virtual Supply-Chain Management", Taylor & Francis Ltd., vol. 15, No. 6, 2004, pp. 584-595.

[7] Daniel, K. and Abraham, D., "Model Development of Supply Chain Management System: A Case Study on Meta Abo Brewery", ESME Journal, vol. V, No. 2, 2005.

[8] Anderson, D.L., Britt, F.F., and Favre, D.J., "The Seven Principles of Supply Chain Management", Reed Business Information, vol. 14, 1997, pp. 41-46.

[9] Melnyk, S.A., Supply Chain Management 2010 and Beyond: Mapping the Future of the Strategic Supply Chain, Michigan State University, 2006.

[10] Chopra, S. and Meindl, P., Supply Chain Management: Strategy, Planning and Operation, Prentice-Hall, New Delhi, 2nd edition, 2006.

[11] Handfield, R.B. and Nichols, E.L., Introduction to Supply Chain Management, Prentice-Hall, 2006.

[12] Klemencic, E., Management of the Supply Chain: Case of Danfoss District Heating Business Area, Faculty of Economics, Ljubljana University, 2006.

[13] Fawcett, S.E., Ellram, L.M., and Ogden, J.A., Supply Chain Management: From Vision to Implementation, Prentice-Hall, 2007.

[14] Thomas, E., Manufacturing Planning and Control for Supply Chain Management, McGraw-Hill, New Delhi, 5th edition, 2005.

[15] Bernard, W., Introduction to Management Science, Prentice-Hall, New Jersey, 5th edition, 1996.

[16] Benita, B.M., "Supply Chain Design and Analysis: Models and Methods", International Journal of Production Economics, vol. 55, No. 3, 1998, pp. 281-294.

[17] Vidal, C., "A Global Supply Chain Model with Transfer Pricing and Transportation Cost Allocation", European Journal of Operational Research, vol. 129, 2001, pp. 134-158.








Appendix 1. Optimized Minimum Cost for the Existing Network

























Appendix 2. Minimum Cost Scenario for the Renewed Network










*E-mail: Edessa_dribssa@yahoo.com
Journal of EEA, Vol. 28, 2011

PERFORMANCE EVALUATION OF A REACTIVE MUFFLER USING CFD

Sileshi Kore, Abudlkadir Aman and Eddesa Direbsa*
Department of Mechanical Engineering
Addis Ababa Institute of Technology, Addis Ababa University

ABSTRACT

The main objective of this study was to simulate and investigate the performance of a reactive muffler. The dimensions of the muffler of a 1999-model Alfa Romeo 145 vehicle were used in the simulation, which was carried out using the commercial software package FLUENT. Within this package, the program GAMBIT was used to create the mesh and to define the boundary conditions of the object, which was then read and analyzed by FLUENT. The results show that CFD can be used to evaluate both the mean-flow and the acoustic performance of a reactive muffler.

Keywords: Reactive muffler, Aero-acoustics, CFD,
Sound attenuation, FLUENT, Exhaust noise
reduction

INTRODUCTION

One pollutant of concern is the exhaust noise of the internal combustion engine. This noise can be reduced substantially by means of a well-designed muffler. A suitable design helps to reduce the noise level, but at the same time the performance of the engine should not be hampered by the back pressure the muffler causes [1, 2, 3].

A muffler (or silencer, or back box in British English) is a device fitted to internal combustion engines to reduce the amount of noise; the engine exhaust blows out through the muffler. The internal combustion engine muffler was originally invented by Milton O. Reeves. Fig. 1 shows a schematic view of the gas flow in a piston engine, with the items numbered 1 to 13:

1. ram air
2. air filter
3. mass flow sensor
4. butterfly valve
5. air box
6. intake runners
7. intake valve
8. piston
9. exhaust valve
10. extractor pipe
11. collector
12. catalytic converter
13. muffler (expansion chamber)



Figure 1 Schematic view of gas flow in a piston
engine

In general, sound waves propagating along a pipe
can be attenuated using either a dissipative or a
reactive muffler [4]. A dissipative muffler uses
sound absorbing material to take energy out of the
acoustic motion in the wave, as it propagates
through the muffler. Reactive silencers, which are
commonly used in automotive applications, reflect
the sound waves back towards the source and
prevent sound from being transmitted along the
pipe. Reactive silencer design is based either on the principle of a Helmholtz resonator or on that of an expansion chamber, and requires the use of acoustic transmission-line theory.

This study focuses on an expansion chamber. An expansion-chamber muffler consists of a sudden change in cross-sectional area that serves to reflect sound waves back toward the engine.

The main objective of this work is to simulate and evaluate the performance of the muffler of the Alfa Romeo 145 vehicle using FLUENT. The actual dimensions of the muffler were used in the simulation.

AERO-ACOUSTICS

With the ongoing advances in computational
resources and algorithms, CFD is being used more
and more to study acoustic phenomena. Through
detailed simulations of fluid flow, CFD has become
a viable means of gaining insight into noise sources
and basic sound production mechanisms. FLUENT
offers four approaches for simulating aero-acoustics [5], in order of decreasing computational effort:

1. Computational aero-acoustics (CAA),
2. The coupling of CFD and a wave-equation-
solver,
3. Integral acoustic models, and
4. Broadband noise source models.

Computational Aero-Acoustics

Computational aero-acoustics is the most
comprehensive way to simulate aero-acoustics [6].
It does not rely on any model, so is analogous to
direct numerical simulation (DNS) for turbulent
flow. CAA is a transient simulation of the entire
fluid region encompassing the sources, receivers
and entire sound transmission path in between [5].
By rigorously calculating time-varying flow
structures, pressure disturbances in the source
regions can be followed. Sound transmission is
simulated by resolving the pressure waves traveling
through the fluid. While CAA is the most general
and accurate theoretical approach for simulating
aero acoustics, it is unrealistic for most engineering
problems because of a number of practical
limitations, including:

- widely varying length and time scales characteristic of the sound generation and transmission phenomena, and
- widely varying flow and acoustic pressures.

While these constraints render CAA unsuitable for
most practical situations, there is a small class of
engineering problems to which it can be
successfully applied. This includes cases where:

- the frequency range of interest is fairly narrow,
- the sources and receivers are located close to each other, and
- the sound to be captured is fairly loud.

CFD Wave Equation Solver Coupling

The computational aero-acoustics approach is
prohibitively expensive for most practical problems
due to the large difference in time, length, and
pressure scales involved in sound generation and
transmission. Computational expense can be
greatly reduced by splitting the problem into two
parts:
1. Sound generation
2. Sound transmission.
With this approach, sound generation is modeled by a comprehensive transient CFD analysis, while a wave-equation solver is used for analyzing sound transmission.

Integral Acoustics Methods

The approach of splitting the flow and sound fields
from each other and solving for them separately
can be simplified further if the receiver has a
straight, unobstructed view of each individual point
that is a source of noise. Sound transmission from a
point source to a receiver can be computed by a
simple analytical formulation. The Lighthill
acoustic analogy provides the mathematical
foundation for such an integral approach [5]. The
Ffowcs-Williams and Hawkings (FW-H) method
extends the analogy to cases where solid,
permeable, or rotating surfaces are sound sources,
and is the most complete formulation of the
acoustic analogy to date. Both methods are
implemented in FLUENT. As an example, the FW-
H method has been applied to the prediction of
sound radiating from a backward facing elbow (a
simplified representation of an automotive A-pillar
rain gutter). Using the large eddy simulation (LES)
turbulence model, predictions of the sound pressure
level for this case were found to be in very good
agreement with experimental data taken from the
literature.

Broadband Noise Source Models

The three methods described so far require well-
resolved transient CFD simulations, since they aim
to determine the actual time-varying sound-
pressure signal at the receiver, and from that, the
sound spectrum. In several practical engineering
situations, only the locations and relative strengths
of sound sources, rather than the sound spectra at
the receivers, need to be determined. If the sound is
broadband the source strengths can be evaluated
with reasonable accuracy from the time-averaged
structure of the turbulent flow in the source
regions. Turbulence is the primary cause of sound
in aero-acoustics, so in a broad sense, regions of
the flow field where turbulence is strong produce
louder sources of sound. FLUENT 6.2 includes a
number of analytical models referred to as
broadband noise source models which synthesize
sound at points in the flow field from local flow
and turbulence quantities to estimate local sound
source strengths. The key advantage of these
models is that they require very modest
computational resources compared to the methods
described in the previous sections. Broadband noise
models only need a steady-state flow solution, whereas the other methods require well-resolved
transient flow solutions. One example recently studied involves the prediction of prominent sound sources around a simplified sedan, using Lilley's acoustic-source-strength broadband noise model.

In summary, FLUENT offers four ways of simulating aero-acoustics, ranging from highly accurate but expensive methods to quick, approximate approaches. In this work, the Ffowcs-Williams and Hawkings (FW-H) method, which is a less expensive transient approach [5], was used.

Empirical Relation

Theoretically, the transmission loss increases as the ratio of the expansion-chamber cross-sectional area to the inlet and outlet pipe cross-sectional areas increases.

Transmission Loss of a Muffler

The transmission loss of a muffler is given by the following equation:

TL = 10 log10 [ 1 + (1/4) (m − 1/m)² sin²(kl) ]          (1)

with

k = 2πf/c

where k is the sound wave number, m is the ratio of the chamber cross-sectional area to the pipe cross-sectional area, l is the chamber length, f is the frequency, and c is the speed of sound.
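As a quick check, Eq. 1 can be evaluated for the chamber dimensions given later in the text (0.2 m chamber diameter, 0.065 m pipe diameter, 0.44 m chamber length); the speed of sound of 343 m/s is an assumed value for air, not a figure from the text.

```python
import math

def transmission_loss(f_hz, m, l_m, c=343.0):
    """Transmission loss (dB) of a simple expansion chamber, Eq. 1.

    f_hz: frequency, m: area expansion ratio, l_m: chamber length (m),
    c: speed of sound (assumed 343 m/s for air).
    """
    k = 2 * math.pi * f_hz / c  # sound wave number
    return 10 * math.log10(1 + 0.25 * (m - 1 / m) ** 2 * math.sin(k * l_m) ** 2)

m = (0.2 / 0.065) ** 2  # area ratio = (diameter ratio) squared
print(round(transmission_loss(250, m, 0.44), 1))  # about 12.8 dB at 250 Hz
```

Note that the loss vanishes whenever kl is a multiple of π, the well-known pass frequencies of an expansion chamber.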

Governing Equation

The computation for the muffler is governed by the one-dimensional wave equation:

∂²φ/∂t² = u² ∂²φ/∂x²          (2)

where φ is the pressure and u is the speed of sound. Substituting the pressure p in Eq. 2:

∂²p/∂t² = u² ∂²p/∂x²          (3)


BASIC TERMINOLOGIES

Possibly the simplest form of reactive muffler is the so-called expansion-chamber muffler shown in Fig. 2.

Figure 2 Simple expansion chamber muffler


The acoustic plane wave of amplitude P_inc and angular frequency ω = 2πf propagates along the inlet pipe toward the muffler expansion. The expansion from the inlet pipe results in a reflected plane wave of amplitude P_ref propagating away from the muffler, as shown in Fig. 2. Plane waves propagate in the expansion section, and it is the destructive interference of these waves that makes the muffler effective. A plane wave of amplitude P_tra is transmitted along the outlet pipe from the muffler. The outlet pipe is assumed to be infinitely long or anechoically terminated, so that only the transmitted wave exists in the outlet pipe. The ratio of the acoustic power associated with the incident wave, W_inc, to that associated with the transmitted wave, W_tra, determines the frequency-dependent transmission loss:

TL = 10 log10 (W_inc / W_tra)          (4)

If the inlet and outlet pipes of the muffler have equal areas, the ratio of powers in Eq. 4 is the same as the ratio of the acoustic intensities. The intensity of a propagating harmonic plane acoustic wave of amplitude P is P²/(2ρc), where ρ is the density of the gas and c is the speed of sound in the gas. Thus, if the values of ρ and c at the inlet and outlet pipes of the muffler are identical, as are the pipe areas, the formula for the transmission loss given by Eq. 4 becomes:

TL = 10 log10 (P_inc² / P_trs²)          (5)

GAMBIT

GAMBIT is CAD-based meshing software that is compatible with the FLUENT solver for fluid flow and heat transfer simulations. GAMBIT allows the user to create a meshed surface and define the boundary conditions of the constructed object, which are then read and analyzed by FLUENT.

The geometry used in the simulation is as follows (see Fig. 3):



Figure 3 Muffler dimension for simulation

Inlet pipe length: 0.17 m
Outlet pipe length: 0.16 m
Expansion chamber length: 0.44 m
Inlet pipe diameter: 0.065 m
Outlet pipe diameter: 0.065 m
Expansion chamber diameter: 0.2 m

SOLVER AND BOUNDARY CONDITIONS

A commercial CFD package, Fluent 6.3, was used. The solver implemented was a 2-D, segregated implicit solver with 2nd-order implicit time stepping [5]. Second-order upwind discretization was used for the density, momentum, energy, turbulent kinetic energy and turbulent dissipation rate equations. PISO pressure-velocity coupling was used, and the k-ε turbulence model was used for closure [5].

The working fluid was air, with the density modeled assuming an ideal gas and the properties shown in Table 1. The boundary conditions consist of a velocity inlet, a pressure outlet and a series of walls.

Table 1: Solution variables

Variable            Value
Working fluid       Air
Mean pressure       101.325 kPa
Mean temperature    300 K
Dynamic viscosity   1.79 × 10⁻⁵ kg/(m·s)

The edges are meshed with a spacing of 2 mm in order to have better node spacing, which gives better results in the FLUENT analysis. The inlet is defined as the velocity-inlet boundary and the outlet is defined as the pressure-outlet boundary. The file is saved and exported to FLUENT for analysis.

In this case the inlet velocity is 80 m/s, and the outlet boundary condition has been set at atmospheric pressure.

Velocity and pressure data are recorded at a point in the inlet pipe and at a point in the outlet pipe at each time step. The model is marched forward in time steps of 60 μs.

Fourier transforms of the two pressure time histories are taken to evaluate the sound pressure level at receiver-1 and receiver-2.
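That post-processing step can be sketched as follows. This is an illustrative reconstruction, not the authors' actual script: the 500 Hz test tone, the 4000-sample record and the 50 μs sampling step are made-up values chosen only so the arithmetic can be verified against the expected SPL of a 1 Pa tone.

```python
import numpy as np

def spl_spectrum(p, dt, p_ref=20e-6):
    """Sound pressure level spectrum (dB re 20 uPa) of a pressure time
    history p (Pa) sampled every dt seconds."""
    n = len(p)
    # single-sided amplitude spectrum of the fluctuating pressure (Pa)
    amp = np.abs(np.fft.rfft(p - np.mean(p))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, dt)            # bin frequencies (Hz)
    # convert per-bin RMS pressure to SPL; floor avoids log10(0)
    spl = 20.0 * np.log10(np.maximum(amp / np.sqrt(2.0), 1e-30) / p_ref)
    return freqs, spl

# Check with a pure 500 Hz tone of 1 Pa amplitude (RMS 0.707 Pa -> ~91 dB):
dt = 5e-5                   # 50 us sampling step (made-up for the check)
t = np.arange(4000) * dt    # 0.2 s record; 500 Hz falls exactly on a bin
freqs, spl = spl_spectrum(np.sin(2 * np.pi * 500.0 * t), dt)
peak = int(np.argmax(spl))
print(freqs[peak], round(spl[peak]))  # 500.0 Hz, ~91 dB
```

The transmission loss at each frequency then follows directly as TL(f) = SPL_in(f) − SPL_out(f), which is how the Table 2 values are obtained from the two receiver spectra.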

MATLAB is a high-performance language for technical computing [7]. MATLAB is used to determine the transmission loss of the simplified model of the Alfa Romeo muffler.

RESULTS

In Figure 4, the static pressure distribution is indicated by colored contours, and in Fig. 5 the static pressure distribution along the muffler is shown at 4000 rpm. The pressure rise in the expansion chamber is due to the stacking up of the gas flow as the cross-sectional area changes between the chamber and the outlet pipe. At the outlet, however, the velocity of the exhaust gas is increased by the high pressure in the chamber, which pushes the exhaust gas out to the atmosphere.

Figures 6 and 7 show the variation of the velocity of the exhaust gas flow from the inlet pipe to the outlet pipe. Both figures indicate the inverse relationship between pressure and velocity, which obeys the Bernoulli principle.

Performance Evaluation of a Reactive Muffler Using CFD

Journal of EEA, Vol. 28, 2011 87

Figures 8 and 9 show the sound pressure level versus frequency for the expansion chamber at 4000 rpm at receiver-1 and receiver-2, i.e. at the inlet and outlet respectively.


Theoretically, the transmission loss increases with the ratio of the cross-sectional area of the expansion chamber to the inlet and outlet pipe cross-sectional areas. Since the dimensions of the simplified model of the Alfa Romeo's muffler are constant, the maximum transmission loss of the muffler is almost constant. Therefore, using Eq. 4, the transmission loss is calculated and the result is shown in Fig. 10.


Figure 4 Contours of static pressure

Figure 5 Static pressure versus position in muffler

Figure 6 Contours of velocity

Figure 7 Velocity versus position in muffler

Figure 8 Sound pressure level in dB at the inlet

Figure 9 Sound pressure level in dB at the outlet

Figure 10 Transmission loss for the plane wave, Eq. 1


DISCUSSION

The flow of fluid in ducts and pipes experiences changes in pressure and velocity. The fluid pressure decreases when the fluid speed increases (and vice versa). The air in the inlet and outlet pipes has a greater velocity than the air flow in the expansion chamber, as can be seen in the simulation results of Figs. 6 and 7. The color contours indicate the velocity of the air flowing into and out of the expansion chamber, as well as the velocity of the air within the chamber.

The overall simulation result for the air flow in the muffler indicates that the flow in the expansion chamber creates a stagnation pressure that contributes to the back pressure of the muffler. The stagnation pressure is the extra static pressure that increases the pressure of the air flow inside the muffler, which eventually affects the engine performance.

Back pressure represents the extra static pressure exerted by the muffler on the engine through restriction of the flow of exhaust gases, which for a four-stroke-cycle engine affects the brake power, the volumetric efficiency, and hence the specific fuel consumption rate.
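For a sense of scale, the dynamic (stagnation) pressure associated with the 80 m/s inlet velocity can be estimated with the incompressible Bernoulli relation. This is only a rough illustrative figure (the flow is at Mach ≈ 0.23, so compressibility is marginal); the ideal-gas density uses the Table 1 conditions.

```python
# Rough estimate of the dynamic pressure contributing to back pressure,
# assuming incompressible flow and ideal-gas air at the Table 1 conditions.
R_AIR = 287.0                        # J/(kg K), specific gas constant of dry air
p_atm, T, v = 101325.0, 300.0, 80.0  # Pa, K, m/s (inlet conditions)
rho = p_atm / (R_AIR * T)            # ideal-gas density, ~1.18 kg/m^3
q = 0.5 * rho * v ** 2               # dynamic pressure, ~3.8 kPa
print(round(rho, 2), round(q))
```

A few kilopascals is small against the 101.3 kPa mean pressure but is the right order of magnitude for the back-pressure penalty a restrictive muffler imposes on a four-stroke engine.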

The first five points from Figs. 8 and 9 are considered and tabulated in Table 2. The results show that the transmission loss oscillates about 14 dB.

The maximum transmission loss from Eq. 4 is 13.6 dB, as shown in Fig. 10. The deviation from the simulation is very small. Therefore the empirical equation is adequate for evaluating the transmission loss of this type of muffler, i.e. the simple reactive muffler of the Alfa Romeo 145 vehicle.

Table 2: Sound pressure level and transmission loss


CONCLUSION

This work indicates that CFD can be successfully used to evaluate both the mean flow and the acoustic performance of a reactive muffler. The muffler design requirements are based on adequate insertion loss, back pressure, size, durability, sound quality, breakout noise from the muffler shell and the flow-generated noise. The maximum transmission loss is around 14 dB for the developed model. The plot of Eq. 1 using MATLAB supports this result, as indicated in Fig. 10.










Inlet SPL_in (dB)   Outlet SPL_out (dB)   TL = SPL_in − SPL_out (dB)
94                  80.25                 13.75
112.5               99.25                 13.25
105                 91                    14
94                  82                    12
86.25               81.5                  4.75
[Figure 10 plot: "TL curve for the given diameter" - transmission loss in dB (0-16) versus frequency in cycles/second (0-2500).]


REFERENCES

[1] Middelberg, J., "Determining the Acoustic Performance of a Simple Reactive Muffler Using Computational Fluid Dynamics", Proc. 8th Western Pacific Acoustics Conference, University of New South Wales, Sydney, Australia, 2003.

[2] Selamet, A., "Acoustic attenuation performance of circular expansion chambers with extended inlet/outlet", Journal of Sound and Vibration, Vol. 223, No. 2, 1999, pp. 197-212.

[3] Munjal, M.L., Acoustics of Ducts and Mufflers, Wiley-Interscience, 1987.

[4] Wilson, C.E., Noise Control: Measurement, Analysis, and Control of Sound and Vibration, Krieger Publishing Company, 1993.

[5] Fluent Inc., Fluent 6.3 User's Guide, 2003.

[6] Mohiuddin, A.K.M., Rahman, A. and Gazali, Y.B., "Experimental Investigation and Simulation of Muffler Performance", Proceedings of the International Conference on Mechanical Engineering, 2007.

[7] FLUENT Tutorials, Cornell University, http://www.corneluniversity.edu.us (visited September 7, 2006).



























*E-mail: kassegne@mail.sdsu.edu
Journal of EEA, Vol. 28, 2011


TECHNICAL NOTES

NOTES AND PROPOSED GUIDELINES ON UPDATED SEISMIC CODES IN
ETHIOPIA
IMPLICATION FOR LARGE-SCALE INFRASTRUCTURES

Samuel Kinde*, Samson Engeda, Asnake Kebede and Eyob Tessema

ABSTRACT

In light of recent expansion in the planning and
construction of major building structures as well as
other infrastructures such as railways, mass-
housing, dams, bridges, etc, this paper reviews the
extent of seismic hazard in Ethiopia and proposes a
review and update of the current out-dated and - in
most cases - non-conservative seismic code. In
specific terms, the last three seismic codes are
reviewed and a comprehensive set of discussions
on seismic zoning and PGA (peak ground
acceleration), special provisions in concrete and
steel beams and columns design, and seismic
analysis are provided through a comparison with
major international building codes. Sets of
recommendations in updated and conservative
seismic zoning, need for separate seismic codes for
non-building type structures, a choice of 475 years
as return-period instead of the current 100 years,
and a revisit of the basic seismic design philosophy
to focus on performance basis are provided.

Key-words: seismic design, building code, seismic hazard, earthquake, infrastructure, codes and standards.

INTRODUCTION

The current economic expansion in Ethiopia which
seems to be driven by a number of enabling factors
has had substantial impact in the transportation,
energy, and water supply sectors with a growing
number of large-scale infrastructure projects such
as dams, power-plants, highway roads, water
reservoirs, and expansion of railways either coming
online or entering construction phase. Furthermore,
pressure from other natural developments - the
staggering population growth of the country being
a primary one - continue to force rapid
implementation of large-scale engineering
infrastructure works such as mass-housing, water-
supply reservoirs, power-plants, dams, new cities,
etc. As things stand, the country's population is projected to reach a staggering 120 million by 2025, positioning Ethiopia among the 10-15 most populous countries on the planet (see Fig. 1) [1-3].





Figure 1 Population Projection - (a) Ethiopia (b)
Addis Ababa [1-3].

In addition to a multitude of other threats that this
population growth could bring, the issue of housing
these additional 30-40 million Ethiopians in the
next few decades will pose a huge risk factor. In a
recent paper, it has been argued that 25 new cities
with size equivalent to present Dire Dawa are
needed or the current 10 cities such as Addis Ababa
and Dire Dawa will have to become mega cities of
10 million or more to accommodate this growth
[4]. While these projections regarding urbanization may be somewhat on the high side, there is no denying the need to house these additional millions of citizens in the next several decades.



Samuel Kinde, Samson Engeda, Asnake Kebede and Eyob Tessema

92 Journal of EEA, Vol. 28, 2011


Interestingly, however, a substantial amount of these large infrastructure works already lie, or will lie, in or in close proximity to some of the most seismically active regions of the country such as the Afar Triangle, the Main Ethiopian Rift (MER), and the Southern Most Rift (SMR), where well-documented damage-causing earthquakes are common. A review of the engineering reports associated with some of the largest and most expensive infrastructure projects in the country suggests that - despite the presence of a substantial amount of published literature on the significant seismicity of the region - the severity of the threats posed by seismic hazards to the safety and serviceability of these structures is not well understood by the main stake-holders such as policy-makers, insurance companies, real-estate developers, capital investors, building design-checkers and, not infrequently, the engineering community itself.

Against this background, therefore, the need for
preparing for this real and substantial threat of
seismic hazards in the country is pressing and
requires attention at all levels. It is relevant to
mention that, in this paper, the discussions on
seismic hazard pertain to both building-type
structures as well as other structures such as
railways, bridges, dams, power-plants and the like.
However, since the existing seismic code in the
country covers only building structures, the
discussions here are a bit biased towards buildings.
Historically, the country had adopted three
revisions of seismic codes (specifically for building
structures) since 1978 to address seismic hazards.
The enforcement of building code standards to
'determine the minimum national standard for
the construction or modification of buildings or
alteration of their use in order to ensure public
health and safety' was not legislated until 2009 [5].
This is certainly an encouraging progress with (a)
requirements and mechanisms for building
plan/design checks/reviews by building officials
outlined (Part Two - Administration), and (b)
requirements for ensuring 'safety for people, other
construction and properties'' by designing
buildings according to 'acceptable building design
codes' now legislated (Part Three - Land Use &
Designs). However, several fundamental problems
still remain before rational seismic design is
practiced well in the country. These are: (i) there is
growing evidence that the current building codes
themselves are inadequate, out-dated, and not
stringent enough when compared to the level of
seismic risks associated with the country [6], (ii)
ambiguities that exist in this first legislation
attempt that do not explicitly address the seismicity
of the country (i.e., Part Three-Design, Item 34 that
reads "buildings may not exhibit signs of
structural failure during their life span under
normal loading") may give a ground for stake-
holders to ignore seismic effects because 'normal
loadings' may arguably not include seismic loads,
and (iii) the mechanism for enforcing strict
adherence through design checks at the
municipality offices (as opposed to external peer-
review system) is inadequate because it relies on
design-checkers who are neither well-aware of the
seismicity of the country nor well-trained in
seismic design to start with. Further, the legally
mandated requirements and design review process
do not apply to public and government large-scale
infrastructures (dams, railway structures, electrical
transmission structures, etc) which actually are the
sources of some of the major concerns. Therefore,
ambiguities of the new building construction law
coupled with the lack of awareness and mechanism
for truly enforcing code requirements continue to
introduce a significant risk of endangering the
useful life of these expensive projects as well as
human life.

In this research report, therefore, the objective is to
(i) demonstrate that there is substantial amount of
literature on seismicity in Ethiopia that needs to be
disseminated to a wider audience, (ii) provide a
background and critical review of the last three
building codes of the country, (iii) provide a
background argument and facts that could serve as
starting points for the long-awaited complete
review of the current out-dated seismic code, and
(iv) propose guidelines for rational and
conservative seismic design in Ethiopia and
surrounding countries for large-scale projects with
particular emphasis on dams, highway structures,
as well as railways and railway structures.

SEISMIC HAZARD AND ITS HISTORICAL
RECORD IN ETHIOPIA

Review of Historical Records of Earthquake

It is well established now that, due to its location
right on some of the major tectonic plates in the
world, i.e., the African and Arabian plates,
earthquakes have been a fact of life in Ethiopia for
a very long time. The earliest record of such an
earthquake dates as far back as A.D. 1431 during
the reign of Emperor Zara Yaqob [7]. In the 20th
century alone, a study done by Pierre Gouin
suggests that as many as 15,000 tremors, strong
enough to be felt by humans, had occurred in
Ethiopia proper and the Horn of Africa [7]. A
similar study by Fekadu Kebede [8] indicated that

there were a total of 16 recorded earthquakes of
magnitude 6.5 and higher in some of Ethiopia's
seismic active areas in the 20
th
century alone. The
most significant earthquakes of the 20
th
and 21
st

centuries like the 1906 Langano earthquake, the
1961 Kara Kore earthquake, the 1983 Wondo
Genet earthquake, the 1985 Langano earthquake,
the 1989 Dobi graben earthquake in central Afar,
the 1993 Adama earthquake, and the 2011 Hosanna
earthquake were all felt in some of the major cities
in the country such as Addis Ababa, Jimma,
Adama and Hawassa. In addition to Gouin's book
that describes the earthquakes of 1906 and 1961
that shook Addis Ababa and caused widespread
panic, a recently published Amharic biography of
Blaten Geta Mersie Hazen Wolde Qirqos vividly
describes the effect of the 1906 Langano
earthquake in Addis Ababa and Intoto [9].

"In the afternoon of Nehase 19, 1898 (August 25,
1906), there was a very large earthquake. The
whole day was marked by a huge pouring of rain
mixed with lightning and thunder. The earthquake
in the middle of such rain and thunder caused many
to panic thinking doomsday had come. I was
studying oral traditional lessons leaning against the
pillar of the house when the earthquake struck. I
was thrown off-balance and fell to the ground. As
the roof rumbled, we thought a calamity had
befallen Intoto and fear gripped us. The people then
pleaded with the Almighty".

In addition to these well-documented seismic
events starting from the 15th century, a number of
earthquakes have shaken the Main Ethiopian Rift
(MER), and the Southern Rift Valley of the country
recently between 2005 and now bringing the
danger of seismic hazard to the forefront [10-15].
As built up environments and human development
activities increase in areas close and within the
MER, the Afar Triangle and the Southern Rift
Valley of the country, it is expected that the
damage on property and loss of human life due to
seismic hazard will increase very significantly.
Because this period coincides with noticeable infrastructure build-out through the major regions of the country, a review of these events and the damage that they caused is provided later. One of the important observations is that newer buildings are experiencing damage under these relatively moderate earthquakes of magnitude around 5.0.

Review of Seismic Mechanisms and Seismicity
in Ethiopia

There is a comprehensive amount of literature in
the area of seismology in the Afar Triangle and
Main Ethiopian Rift (MER) regions of Ethiopia [7,
16-25]. An extensive amount of earthquake records on Ethiopia, extending back to the 15th century, exists [7].
of the Horn of Africa, in general, and Ethiopia
proper, in particular, date as far back as 1954. The
seismicity of the Afar Triangle, specifically that of
the so-called Wonji Fault belt has been studied by
Gutenberg and Richter who located 23 earthquakes
in the area [17]. A further five events were located
by Sykes and Landisman [18] and Fairhead [19]
for the period January 1955 to December 1963.
These included an event north-east of Lake
Turkhana and an event close to Chabbi volcano
near Hawassa. Later notable publications include
that of Mohr [20-22], Mohr & Gouin [23], Gouin
[24], Gouin and Mohr [25].

The seismicity of the neighboring region of Kenya
which forms a natural extension of the southern
Ethiopian Main Rift (MER) for the period of 1880-
1979 is also well documented by Shah [26]. In
extension, the seismicity of the East African Rift
System has been studied by Gutenberg and Richter
[17] , Sykes and Landisman [18] and Fairhead [19].
More localized studies have been made by Sutton
and Berg [27], De Bremaecker [28], and
Wohlenberg [29]. Fig. 2.a gives a distribution of
seismic events in and around Ethiopia up until
1995 [30-31]. Fig. 2.b summarizes the number of
publications that had appeared over the past few
decades with the key word of 'Ethiopia earthquake'.























Figure 2 (a) Seismicity of Ethiopia, with particular emphasis on the Main Ethiopian Rift as well as the southern Rift Valley (Atalay Ayele 1995) [30-31]. The dots represent earthquake locations. (b) Distribution of published literature on earthquakes in Ethiopia.

In terms of the mechanism that gives rise to seismic hazard, the well-accepted theory suggests a simplified model that typically considers three distinct seismic zones in Ethiopia proper. These are: the Afar Triangle seismic zone (which further consists of the junction between the Red Sea, the Gulf of Aden, and the Main Ethiopian Rift), the Escarpment seismic zone (characterized by N-S running faults associated with some of the devastating earthquakes such as the 1961 magnitude 6.7 Kara Kore earthquake), and the Ethiopian Rift System seismic zone (which links the Red Sea and Gulf of Aden with the East African Rift System through the Afar Triangle) [32]. Kebede and Asfaw - based on recent data and recent developments in the understanding of the tectonics of the region - have proposed a more complicated model consisting of eight unique seismic zones in Ethiopia proper and its surroundings as the main contributors of damaging earthquakes [33]. These are (i) the Aswa shear zone in South Sudan (Zone 1), (ii) the southernmost rifts (SMR) of Ethiopia and the Main Ethiopian Rift (MER) (Zone 2), (iii) the western margin of the Afar depression (Zone 3), the Afar depression including Djibouti (Zone 4), the region connecting northern Afar with the axial trough of the Red Sea (Zone 6), the western Gulf of Aden (Zone 7) and the Yemen extensional tectonics region (Zone 8).

Newer studies regarding the seismicity of the region continue to contribute towards gaining a better understanding of seismic hazards in the country. For example, a multi-disciplinary program called the Ethiopia Afar Geoscientific Lithospheric Experiment (EAGLE) was carried out between 2001-2003 to explore the kinematics and dynamics of continental breakup using a broadband seismic array [34]. Another recent effort involves the Afar Rift Consortium, which is a project funded by the UK Natural Environment Research Council that is made up of scientists from the UK, with partners in Ethiopia, France and the US. Its aim is to conduct a major set of experiments in the Afar area to further understand the tectonic processes involved in shaping the surface of the Earth. The consortium also has a stated focus in combining the results from the above to determine the mechanisms of magma movement that allow the lithosphere to extend, the crust to evolve and grow as the plates split apart. These efforts will have definite dividends for the Ethiopian earthquake engineering discipline and community by providing better understanding and a more complete picture of both seismic and volcanic threats in the Afar Triangle and - by extension - the Main Ethiopian Rift. However, the significant gap that exists today in detailed site-specific conditions and fault zone catalogues for the whole country remains a largely unattained goal, with prohibitive cost being the main culprit.



Review of Response of Built-up Structures to
Seismic Events in Ethiopia

As discussed above, while an extensive amount of earthquake records on Ethiopia exists, the structural damage to infrastructure over most of this period was obviously very low due to the extreme limitation of built-up environments in the country. It is only, perhaps, starting from the 1950s and 1960s that one sees what could be characterized as noticeable building and infrastructure activity in the country, particularly in the seismic-prone areas. Therefore, this study concentrates exclusively on the period from 1978 to the present.

For the period between 1960 and 1978, Gouin's work [7] provides a wealth of information on the response of built-up structures like buildings and bridges to some of the large and damaging earthquakes such as Karakore (1961) and Serdo (1969).
from 1978 onward, there have been isolated reports
[35-45] of which some are unpublished [43].
Interestingly, this period coincides with a growth in
built-up areas and infrastructure in some of the
seismically active areas, particularly MER and the
Afar Triangle. Areas where there was no infrastructure damage even under strong ground motions - such as the magnitude 6.3 Chabbi Volcano earthquake of 1960 near present-day Hawassa - have now seen encroachment of built-up areas, which have suffered damage under recent but much weaker ground motions. Therefore, it has
increasingly become clear that structural damages
to buildings and infrastructure due to earthquakes
are on the rise in the country. A catalogue of these
damages presented in Table 1 - particularly for the
time period after 1978 - is a first attempt in
understanding the pattern of damages observed so
far and preparing the groundwork for predicting the
potential structural damages that could occur in the
years to come. Fig. 3 and Fig. 4 show the distribution of damage-causing earthquakes in Ethiopia, with damage defined as damage to property, injury, loss of human life, or all. Figs. 5-7 show photos of structural damage due to recent earthquakes.


Table 1: List of earthquakes and reported damages between 1979-2011.

Earthquake / Location | Magnitude (Intensity) | Year | Structural damage | Reference
Akaki, 8.85N 38.7E | 4.1 (Intensity VII near epicenter) | 1979 (28 July) | No damage to the then Aba Samuel HEP station a few kilometers away. Cracks in poorly built masonry structures. | [36]
8.9N 39.9E | 5.1 | 1981 (Feb 7) | Cracks in masonry buildings in Awara Melka town, north of the Fentale volcanic center. | [42]
7.03N 38.6E | 5.1 | 1983 | Rock slides and damage and destruction of masonry buildings in Wondo Genet, east of Lake Hawassa. Well-built single-story building cracked at the Forestry Institute. Large boulders dislodged, plaster fallen off walls, electric poles thrown down. | [42], [43]
Hawassa | 5.3 | 1983 | Damage to steel frames in Hawassa. Damage to Wetera Abo Church in Wondo Genet (masonry building with irregular vertical and horizontal stiffness; damage seems to occur where there is stiffness discontinuity). | [41]
11.37N 38.7E, near Lake Hayk | - | 1984 (Apr 10) | High-rise buildings shaken; Mortgage Bank Building in Kazanchis. | [39]
8.95N 39.95E | - | 1984 (Aug 24) | Concrete building in Piazza shaken. | [39]
8.3N 38.52E, Oitu Bay (Langano) | 5.1 | 1985 | Strongly felt in Lake Langano camp, central MER. Cracks in buildings in resort area hotels. | [37], [43]
9.47N 39.61E, Langano (4.8), 105 km away | - | 1985 (Oct) | Panic in high-rise buildings in Addis Ababa. | [38]
- | 5.4 | 1987 (Oct 28) | Already weakened blocket building collapsed; strongly felt in Arba Minch. Panic but no damage in Jimma. Students knocked against one another in classroom; poorly built house collapsed in Sawla. | [43]
Hamer and Gofa Earthquake Swarm | 5.3-6.2 | 1987 (Oct 7-28) | Details given separately for Hawassa, Jimma and Arba Minch. | [39]
- | 5.3 | 1987 (Oct 7) | Light sleepers woken; no structural damage in Hawassa. Poorly built structures cracked, many woken up, birds shaken off trees. | [43]
8.9N 40E | 4.9 | 1989 | Cracks in buildings in the town of Metehara, northern MER. Felt like a passing truck by many, shaking beds. | [42], [43]
Dobi Graben [Afar] | - | 1989 | Several bridges damaged. | -
Mekelle | 5.3 | 1989 (Apr 13) | Felt by many, causing some panic. | [41]
Dichotto | 5.8 | 1989 (Aug 20) | Dining people thrown off table; masonry house collapsed; landslides killed 4 people and 300 cattle; 6 bridges destroyed in Dichotto. | [43]
Soddo, 6.84N 37.88E | 5.0 | 1989 (June 8) | Widespread panic, broken windows and some injured in Soddo. | [43]
8.1N 38.7E | 5.1 | 1990 | Minor damage in towns at the western escarpment, i.e., at Silti and Butajira, west of Zway town. | [42]
8.3N 39.3E, Nazareth | 5.0 | 1993 | Collapse of several adobe buildings in Nazareth town, northern MER. Felt as far as Debre Zeit and Addis Ababa. | [42]
7.2N 38.4W | 5.0 | 1995 | Cracks in flour factory building at Hawassa town. | [42]
Mekelle | 5.2 | 2002 (Aug 10) | Buildings shaken in the city of Mekelle. | [44]
Afar Triangle | - | 2005 (Sept 26) | Fumes as hot as 400°C shoot up from some of the crevices; the sound of bubbling magma and the smell of sulfur rise from others. The larger crevices are dozens of meters deep and several hundred meters long. Traces of recent volcanic eruptions are also visible. This was followed by a week-long series of earthquakes. During the months that followed, hundreds of further crevices opened up in the ground, spreading across an area of 345 square miles. | [45]
Ankober | 5.0 | 2009 (Sep 19) | Earthquake struck near Ankober town and was widely felt in Addis Ababa, especially by residents of multistory buildings. | [10]
Hosanna | 5.3 | 2010 (Dec 20) | Damage sustained by reinforced concrete frame dormitory building with infilled walls at Jimma University, where as many as 26 students were injured; structural damage to slab-column joint. Damage to many buildings in Hosanna. | [12]
Ethio-Somali Border | 6.1 | 2011 (March 3) | Buildings shaken in Dire Dawa, Jijiga, and Somalian towns. | [13]
Abosto/Yirga Alem | 5.0 | 2011 (March 9) | Damage to unreinforced cinder-block-clad timber building. 100 houses were destroyed and 2 people were injured in this earthquake. | [14]
Figure 3 Distribution of damage-causing earthquakes between 1960 - 2011. These represent magnitudes 4.9 and above with reported damage to property or life or both. Note that in 2005 alone, the earthquake sequence in Dabbahu resulted in more than 200 earthquakes of magnitude mb > 4.5 in just about 10 days!

Figure 4 Cumulative distribution of damage-causing earthquakes between 1960 - 2011. These represent magnitudes 4.9 and above with reported damage to property or life or both. Inclusion of data from the 2005 earthquake sequence in Dabbahu could significantly alter this picture, further emphasizing the associated risks.

Figure 5 (a) Damage to Wetera Abo Church in Wondo Genet (1983 earthquake). (b) Damage to Arba Minch Kebele Warehouse (1987 Earthquake Swarm, masonry building; damage seems to be caused by biaxial bending at corner of building). (Photo source: Dr. Laike Mariam Asfaw - private communications.)


Figure 6 Photographs of structural damages due to the December 2010 Hossana earthquake. [a]-[c] show damages to reinforced concrete frame dormitory buildings at Jimma University, where 26 students were injured. [b] structural damage to slab and column joint. [c] debris from damaged frame. [d] damage to the masonry clad of a timber building. EBCS-8:1995 classifies Jimma as seismic zone 1. (Image source: Ethiopian TV.)

Figure 7 (a) Damage to unreinforced cinder-block cladded timber building due to the Abosto/Yirga Alem earthquake of March 19, 2011; 100 houses were destroyed and 2 people were injured [14]. (b) Damage to cinder-block building in Awara Melka due to the Awara Melka earthquake of April 1980 [35].

BACKGROUND AND CURRENT STATE OF CODE-REQUIRED SEISMIC DESIGN IN ETHIOPIA

Ethiopian Building Standard Codes

The first seismic code for buildings in Ethiopia was introduced in 1980 (CP1-78). This code defined four seismic code regions (i.e., 0, 1, 2, and 4) with a return-period of 100 years and 90% probability of not being exceeded [46,47]. To each zone, a danger rating was assigned with 'no', 'min', 'moderate', and 'major' corresponding to zones 0, 1, 2, and 4. Figure 8.a shows the seismic zoning adopted by this code. The zone numbers corresponded to the seismic factor 'R' used in SEAOC (Structural Engineers Association of California) and UBC's equivalent static load procedure [7,47]. The next revision was introduced in 1983 as ESCP1-83 [48].

Figure 8 (a) Seismic zoning of Ethiopia as per Gouin (1976), which was also used by CP1-78. (b) Seismic zoning of Ethiopia as per ESCP-1:1983. (c) Seismic zoning of Ethiopia as per EBCS-8:1995.



Both codes were influenced by the so-called
SEAOC Blue Book and UBC (Uniform Building
Code) [49,50]. The CP1-78 code dealt primarily
with seismic zoning and determination of
equivalent static loads on structures and left actual
aseismic design of structural members (beams,
columns, and shear walls) to the judgement of the
engineer with other established international
building codes, primarily UBC, serving as a basis
for aseismic design. ESCP1-83 has a separate code
(ESCP-2:1983 - Ethiopian Standard Code of
Practice for the Structural use of Concrete) for
guidelines for concrete design [51].

These were followed by a substantial change
introduced in 1995 as EBCS-1995 by the Ministry
of Works and Urban Development [52]. The
seismic zoning was an improvement over previous
codes based on additional data obtained from
newer earthquake records inside Ethiopia as well as
neighbouring countries. However, the whole
Ethiopian Building Code Standard (EBCS) that
consisted of 10 volumes was predominantly based
on the European Pre-Standard (experimental) code
(ENV 1998) which was drafted by CEN (European
Committee for Standardization). The seismic
provisions code, EBCS-8: 1995 (Design of
Structures for Earthquake Resistance), was also
predominantly based on ENV 1998:1994 Eurocode
8 - Design Provisions for Earthquake Resistance of
Structures - except the equivalent static load
procedure which still had the UBC influence [53].
The use of the draft Eurocode as a model was a
significant departure from earlier codes which used
UBC as a model to a large extent. It appears that
there was no overriding technical basis for this
departure. Further, the adaptation of this 'draft'
code before the Europeans themselves commented
on it and approved an improved version as a
standing code causes - as will be shown later - a
number of significant inconsistencies and
controversies [54-55]. It seems likely, therefore,
that, in the next code review cycle, the issue of
whether to continue in the traditions of UBC (and
hence IBC and ASCE-inspired codes) or follow
Eurocode will be in the forefront and deserves a
well-thought and unbiased discussion that
considers the long-term interest of the
building/construction industry in the country.

A commonality among all three codes introduced in the country over the past 30 years is the choice of a 100-year return period, in contrast with the 475-year return period adopted by most codes around the world. The main argument in favor of this choice has been the relatively economical construction of structures designed for a less powerful earthquake [53]. In general, PGA (peak ground acceleration) values corresponding to a return period of 475 years are about twice those of a 100-year return period [53].
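For reference, the relationship between return period and exceedance probability implied by the usual Poisson occurrence model can be sketched as follows (function names are illustrative, not from any code document):

```python
import math

def exceedance_probability(return_period_years: float, exposure_years: float) -> float:
    """Probability of at least one exceedance in `exposure_years`,
    assuming a Poisson (memoryless) earthquake-occurrence model."""
    annual_rate = 1.0 / return_period_years
    return 1.0 - math.exp(-annual_rate * exposure_years)

def return_period(prob: float, exposure_years: float) -> float:
    """Return period implied by probability `prob` of exceedance in `exposure_years`."""
    return -exposure_years / math.log(1.0 - prob)

# 475-year event: ~10% chance of exceedance in a 50-year design life
print(round(exceedance_probability(475, 50), 3))   # ~0.1
# 100-year event (EBCS-8:1995 basis): ~1% annual probability of exceedance
print(round(exceedance_probability(100, 1), 3))    # ~0.01
# Conversely, 10% in 50 years corresponds to a return period of ~475 years
print(round(return_period(0.10, 50)))              # ~475
```

This is the arithmetic behind the "10% probability of exceedance in 50 years" phrasing used for the 475-year return period.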

While the existence of three generations of seismic codes in the country is a commendable effort, their legal enforcement was never codified by the country's legal systems until 2009, when the Ethiopian Building Proclamation 624/2009 was introduced as a legal document that outlines the building regulations and requirements, for use by local authorities to ensure building standards are maintained in their jurisdictions [5].

Seismic Zoning

Gouin, who used a probabilistic approach, is credited with the initial attempts at producing the first seismic hazard map of Ethiopia, as shown in Figure 8.a [46]. Gouin's work also served as a basis for the seismic zoning adopted by the ESCP-1:1983 building code of Ethiopia (see Figure 8.b). Since the production of Gouin's seismic zoning maps,
quite a large number of destructive earthquakes
have occurred in the country causing damages both
to property and human life. Further, destructive
earthquakes that occurred in the neighboring
countries were not included in the production of the
first map in 1976. Subsequently, Kebede [56,57]
and Panza et al [58] produced a new seismic hazard
map of Ethiopia and its northern neighboring
countries to account for these additional earthquake
records. Unlike previous works, the seismic zoning
of Ethiopia and the Horn of Africa reported by
Kebede [56], Kebede and Asfaw [59] also account
for ground motion attenuation in addition to newer
data obtained from such sources as the US National
Earthquake Information Service (NEIS). The works
of Kebede [56-57] and Kebede and Asfaw [59]
served as a basis for the seismic zoning adopted by
the current Ethiopian building seismic code -
EBCS-8:1995 as shown in Fig. 8.c. Further, there
have been other attempts on seismic zoning of
some of the country's important economic regions
such as the city of Addis Ababa. The work of the
RADIUS project is a notable example [15]. There
have been additional studies that are continually
shaping understanding of seismicity in Ethiopia.

A summary of the seismic zonings corresponding to each of these three codes is given in Figure 8. The seismic zonings of Ethiopia as per CP1-78, ESCP1-83 and EBCS-8:1995 all considered 4 seismic
zones. The availability of relatively newer data was
credited for the changes in seismic zoning of
Ethiopia as per EBCS-8: 1995 which considers
some areas in MER to have the same zoning as the
severest of the Afar region. The nature and location of recent damage-causing earthquakes, such as the December 2010 Hosanna [12] and March 2011 Aboso/Yirga Alem [13] earthquakes, are expected to add further support to the need for improving the current seismic zoning to account for previously unknown and less-understood faults as well as local site conditions.


DEFICIENCIES IN CURRENT CODE AND
PROPOSED REVISIONS

A substantial amount of new data has accumulated from earthquakes that occurred in Ethiopia in the 1990s and the early part of the current century, suggesting that the seismic zonings adopted in the current codes are incomplete, inadequate, and non-cognizant of local site effects that could amplify earthquake effects. Further, the inherent weaknesses and flaws of basing the country's code on a 'draft' European code that had not yet been reviewed and critiqued by the Europeans themselves add considerable urgency to the call for a substantial review of the current building code, EBCS-1995. In fact, the European code has not been accepted 'as is' even by member states like Italy, which have added significant modifications for national use [60].

This section reviews some of the outstanding deficiencies of the current building code, along with suggested improvements that could serve as a basis for the proposed code review process. Particular emphasis is given to seismic zoning, structural design, and dynamic analysis issues. A summary of the discussion is presented in comparative form in Table 2.

Seismic Zoning and PGA

As stated earlier, the works of Fekade Kebede [56-
57] and F. Kebede and L.M. Asfaw [59] served as a
basis for the seismic zoning adopted by the current
Ethiopian building seismic code - EBCS-8:1995
(with a return-period of 100 years which
corresponds to 0.01 annual probability of
exceedance). Associated with this, there are at least
three areas that offer an opportunity to improve the
usefulness as well as address some of the
inadequacies of the current seismic zoning.

1. The effects of local site-conditions, such as local fault lines and soil conditions, need to be considered, at least for the major population areas. While preparing a detailed site-specific zoning for the whole country may be prohibitively expensive and beyond its means, doing so for major cities like Addis Ababa, Jimma, Adama, Hawassa, Mekelle, and Dire Dawa may be a reasonable approach. Even in
current practices, there have been isolated attempts
in performing such local site-effects for some
infrastructure projects around the country. The
inconsistencies of the current seismic zoning
devoid of local site-conditions becomes more
apparent when considering the case of Addis Ababa, where areas such as Nefas Silk, which is only 20-25 kilometers away from Debre Zeit (zone 4, α₀ = 0.10), have the same seismic zone 2 (α₀ = 0.05) classification as Intoto and its mountainous surroundings. Interestingly, Akaki, which is only 5 or so kilometers away from Nefas Silk and has no overriding geological dissimilarities with the latter, is classified as zone 3 with α₀ = 0.07. Against this
background, the work of L.M. Asfaw, who showed that significant geological and topographic variation in different parts of Addis Ababa resulted in variations in the felt intensities of past earthquakes, adds another dimension to the argument [36]. In general, L.M.
Asfaw's work suggests that the southwestern part of
Addis Ababa mainly consists of thick alluvium
deposits whereas the northern part of the city has
prominent topographies (mountains) with thin soil
cover. Both types of topographies are known to
increase felt intensities. Interestingly, L.M. Asfaw
shows that, due to local site effects, the felt
intensities in Intoto area (seismic zone 2 according
to EBCS-8:1995) were higher than those in the
southeast of the city towards Bole field (seismic
zone 3) [36]. Therefore, until a complete site-
specific zoning is available sometime in the future,
it is suggested that - for consistency purposes as
well as conservative designs - the city of Addis
Ababa and its industrial surroundings adopt similar
seismic zoning of at least zone 3. This could be
addressed, for example, by establishing the contour
lines of seismic zones near major metropolitan
areas to be continuous with no jump in zones
giving continuity in seismic zoning.
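The zone-dependent bedrock acceleration ratios quoted in this discussion (0.03, 0.05, 0.07 and 0.10 for zones 1-4) can be collected into a small lookup; a minimal sketch, with the helper name being illustrative rather than anything defined in EBCS-8:

```python
# Bedrock acceleration ratio alpha_0 by seismic zone, as quoted in the text
# for EBCS-8:1995 (zone 0 carries no rating and is omitted here).
ALPHA_0 = {1: 0.03, 2: 0.05, 3: 0.07, 4: 0.10}

def bedrock_pga_g(zone: int) -> float:
    """Design bedrock peak ground acceleration as a fraction of g."""
    return ALPHA_0[zone]

# Jimma is currently zone 1; the text argues a higher zone is more defensible:
print(bedrock_pga_g(1), bedrock_pga_g(3))  # 0.03 0.07
```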

2. The current code considers a return-period of
100 years only which effectively reduces peak
ground acceleration by almost half as compared to
the commonly used 475 years return-period (10%
probability of exceedance in 50 years) [53]. As
discussed before, economical considerations were
often cited as the main argument in favor of this
choice. However, this view needs a revisit in light
of the current significant boom in construction
activities across the country which is expected to
continue in the foreseeable future despite some
hiccups along the way as well as with regard to
continuity and compatibility of risk levels in the
region and beyond. Does the cost-saving in
designing for lower seismic loads offset the risk of
losing large investments in these infrastructures
due to large earthquakes with return periods of
200-475 years? While it may be argued that a 475-year return period may introduce a sudden substantial jump in cost, the level of investment going into these structures is high enough to warrant consideration of a 475-year return period. Further, for large infrastructure projects such as dams, bridges, power-plants, and railway structures - which should be mandated by specialized codes, as is done elsewhere - the tendency to follow the existing practice of a 100-year return period should be discouraged and disallowed, and the proposed 475-year return period should be extended to these specialized codes.

3. While the catalogue of earthquakes used for the
current zoning extended up until 1990 only, the
earthquakes that have occurred since then in the
past 20 years have some interesting aspects that
could have a bearing on the current seismic
zonings. A good example is the 5.3 magnitude
Sunday December 19, 2010 Hosanna earthquake
that injured as many as 26 students in Jimma and
damaged buildings. While the current seismic zoning puts Jimma in seismic zone 1 (with α₀ = 0.03) and the city is at least 100 kilometers away from the epicenter, the damage caused is surprising.
Interestingly, the city of Jimma had always felt the
effect of past earthquakes in the MER (Main
Ethiopian Rift) and SMR (Southern Most Rift)
including the Woito earthquake swarm of October -
December 1987 that rattled the city and its
residents [43]. As development in the Jimma area
expands, the damage from earthquakes centered in
the MER, SMR and beyond could cause more
damages, and the current classification of this city of increasing commercial importance as seismic zone 1 with α₀ = 0.03 is non-conservative and hard to support.

Structural Design

Over the past several years, a number of
deficiencies with the European draft code, ENV
1998:1994, that could have significant design
bearings have been brought up with the intent of
rectifying them in the actual ratified building code
[53-54]. These include: criteria for regularity of
buildings, accidental torsion, use of 2D building
models and torsional effects, design response
spectra for linear analysis, P-delta effects, user-
defined time history records, etc. In addition to
these, there has been additional progress in modern
structural engineering practice such as the
increasing acceptance of performance-based design
approach and the use of nonlinear time history
analysis. All these will have a bearing on the
usefulness of the current building code and - more
importantly- on what sort of remedies need to be
considered in the expected code updating process.

Dynamic Analysis

Again, over the past several decades, there have
been significant developments in structural
engineering practice, particularly in the areas of
software supported structural analysis and design.
These structural analysis/design software packages have made the building of complex 3-dimensional models and the design of all structural members and reinforcements almost routine. As a result,
code requirements that had long assumed 2-
dimensional frame models as substitutes for the
whole 3-dimensional (spatial) model because of
simplification in analysis and modeling efforts
using traditional - but increasingly rarely used -
methods continue to appear redundant and
unnecessary. In fact, to account for irregularity and
hence additional torsional effects, the use of these
2-dimensional models was accompanied by
additional (sometimes confusing) considerations
for inherent and accidental torsion. As a result,
there is a push for modern building codes to move
towards completely eliminating these arcane
requirements. The adoption of spatial (3-
dimensional) building models as the default is
recommended coupled with discouraging the use of
2-dimensional simplifications.
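For reference, the static accidental-torsion provisions discussed above amount to offsetting each story force by an assumed fraction of the plan dimension (5% in the provisions cited here). A minimal sketch with illustrative numbers, not a prescribed calculation from any of the codes:

```python
# Accidental torsional moments from story forces: a sketch of the static
# provision M_i = e * F_i, with e = 0.05 * b (5% of the plan dimension).
# The story forces and the 20 m plan dimension below are illustrative values.

def accidental_torsion(story_forces_kN, plan_dim_m, ecc_ratio=0.05):
    """Torsional moment (kN*m) at each story: M_i = (ecc_ratio * b) * F_i."""
    e = ecc_ratio * plan_dim_m  # accidental eccentricity, m
    return [e * f for f in story_forces_kN]

forces = [120.0, 240.0, 360.0]      # equivalent static story forces, kN
moments = accidental_torsion(forces, plan_dim_m=20.0)
print(moments)  # [120.0, 240.0, 360.0] kN*m, since e = 1.0 m
```

With a full 3D model, these moments (or a direct shift of the centers of mass by e) are applied story by story, which is far simpler than the 2D-model bookkeeping the provisions originally assumed.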

Along the same line of argument, Table 2 provides a detailed list of code specifications that need special attention, along with suggested revisions. It is hoped that this serves as a starting point for a substantial review of the current outdated seismic code.

















Table 2: Summarized comparison of past Ethiopian codes with proposed code modifications.

Seismic Zoning and PGA

1. Seismic Zoning
   EBCS-8:1995: 4 zones.
   Model/international codes: UBC 1997 - 5 zones in the US (1, 2A, 2B, 3, 4), but generally 4.
   Proposed review for Ethiopia: Keep 4 zones for buildings.

2. Soil Type
   EBCS-8:1995: Limited to 3: A, B and C.
   Model/international codes: UBC 1997 has 6 soil types, i.e., S_A - S_F.
   Proposed: Consider more soil types to account for variation in different parts of the country.

3. Return Period
   EBCS-8:1995: 100 years.
   Model/international codes: 475 years in most codes and countries.
   Proposed: 475 years for buildings as well as large infrastructures like bridges, dams, and power plants; this also makes it consistent with the region.

4. Seismic ground motion used
   EBCS-8:1995: Allows user-defined typical ground motion records.
   Model/international codes: Same.
   Proposed: Consider shallow ground motions for time history analysis; recent earthquakes tend to be of that type.

5. Topographic/site amplification effects
   EBCS-8:1995: Does not consider site effects.
   Model/international codes: Eurocode (2004) considers topographic amplification effects.
   Proposed: Need to consider.

Design

1. Model code used
   EBCS-8:1995: Predominantly Eurocode.
   Model/international codes: For IBC, ASCE 7-10 is used as the model code.
   Proposed: Use ASCE 7-10, as engineering curricula in Ethiopia are predominantly based on ASCE.

2. Design philosophy basis
   EBCS-8:1995: Elastic response.
   Model/international codes: Performance-based.
   Proposed: Move towards performance-based.

3. Special seismic provisions - concrete
   EBCS-8:1995: Three ductility classes are defined: DC"L" (basic EBCS 2); DC"M" (well within the elastic range under repeated reversed loading, with no brittle failures); DC"H" (ensure, in response to the seismic excitation, the development of chosen stable mechanisms associated with large dissipation of energy).
   Model/international codes: Eurocode (EN 1998-1:2004) defines essentially similar classes of ductility: DCL (basic, low-dissipation design; use EN 1992-1-1:2004); DCM (medium ductility); DCH (high ductility). For both DCM and DCH, ductile modes of failure (flexure) are to precede brittle failure modes (e.g., shear) with sufficient reliability. IBC-2006 references ACI-318, which uses 'seismic design categories' A-F, with A and B used in seismic zones 0 and 1, B in zone 2, and D, E, F in zones 3 and 4.
   Proposed: Keep the same; but if IBC is followed, consider using ACI-318.

4. Special provisions for beam design
   EBCS-8:1995:
   DC"L": (a) Anchorage (d_bL = dia. of longitudinal bars of beams anchored along the beam-column joint): d_bL ≤ 6.0 (f_cm/f_yd)(1 + 0.8 ν_d) h_c (interior joint); d_bL ≤ 7.5 (f_cm/f_yd)(1 + 0.8 ν_d) h_c (exterior joint). (b) Critical length: l_cr = 1.0 h_w (height of beam). (c) Ductility: max. tension reinforcement ρ_max = 0.75 × max. ratio of EBCS 2.
   DC"M": (a) Anchorage: d_bL ≤ 4.5 (f_cm/f_yd)(1 + 0.8 ν_d) h_c (interior joint); d_bL ≤ 6.5 (f_cm/f_yd)(1 + 0.8 ν_d) h_c (exterior joint). (b) Critical length: l_cr = 1.5 h_w. (c) Ductility: ρ_max = 0.65 (f_cd/f_yd)(ρ'/ρ) + 0.0015; d_bw > 6 mm (dia. of hoops); s = min(h_w/4; 24 d_bw; 200 mm; 7 d_bL); first hoop placed 50 mm from end section of beam; at least 2 S400 bars with d_b = 14 mm at top and bottom of span of beam.
   DC"H": (a) Anchorage: d_bL ≤ 4.0 (f_cm/f_yd)(1 + 0.8 ν_d) h_c (interior joint); d_bL ≤ 6.0 (f_cm/f_yd)(1 + 0.8 ν_d) h_c (exterior joint). (b) Critical length: l_cr = 1.5 h_w. (c) Ductility: ρ_max = 0.35 (f_cd/f_yd)(ρ'/ρ) + 0.0015; d_bw > 6 mm (dia. of hoops); s = min(h_w/4; 24 d_bw; 150 mm; 5 d_bL); first hoop placed 50 mm from end section of beam; at least 2 S400 bars with d_b = 14 mm at top and bottom of span of beam.
   Eurocode (EN 1998-1:2004):
   DCL: Left to the concrete code.
   DCM: (a) Anchorage: d_bL/h_c ≤ 7.5 (f_cm/f_yd)(1 + 0.8 ν_d)/(1 + 0.75 k_D ρ'/ρ_max) (interior joint); d_bL/h_c ≤ 7.5 (f_cm/f_yd)(1 + 0.8 ν_d) (exterior joint). (b) Critical length: l_cr = 1.0 h_w (beam framing into beam-column joint); l_cr = 2.0 h_w (beam supporting discontinued columns). (c) Ductility: ρ_max = ρ' + 0.0018 f_cd/(μ_φ ε_sy,d f_yd); ρ_min = 0.5 (f_ctm/f_yk); d_bw > 6 mm (dia. of hoops); s = min(h_w/4; 24 d_bw; 225 mm; 8 d_bL); first hoop placed 50 mm from end section of beam.
   DCH: (a) Anchorage: d_bL/h_c ≤ 7.5 (f_cm/(1.2 f_yd))(1 + 0.8 ν_d)/(1 + 0.75 k_D ρ'/ρ_max) (interior joint); d_bL/h_c ≤ 7.5 (f_cm/(1.2 f_yd))(1 + 0.8 ν_d) (exterior joint). (b) Critical length: l_cr = 1.5 h_w (beam framing into beam-column joint, as well as beam supporting discontinued columns). (c) Ductility: ρ_max = ρ' + 0.0018 f_cd/(μ_φ ε_sy,d f_yd); ρ_min = 0.5 (f_ctm/f_yk); s = min(h_w/4; 24 d_bw; 175 mm; 6 d_bL); at least 2 high-bond bars with d_b = 14 mm at top and bottom for the entire span; 25% of max. top reinforcement at support to run along the entire span; first hoop placed 50 mm from end section of beam.
   Proposed: Adopt ACI-318 or keep it as simple as Eurocode 2004. EBCS-8 is not conservative for beams supporting discontinued columns.

5. Special provisions for column design
   EBCS-8:1995:
   DC"L": (a) Critical length: l_cr = max(1.0 d_c; l_cl/6; 450 mm). (b) Ductility: A_sh = 0.02 s b_o f_ck/f_yk (spiral hoop); A_sh = 0.02 (s b_o f_ck/f_yk)[A_c/A_o - 1] (rectangular hoop), with A_c/A_o ≤ 1.3; no specs on d_bw; s = min(b_o/2; 200 mm; 9 d_bL); distance between bars restrained by hoops ≤ 250 mm.
   DC"M": (a) Critical length: l_cr = max(1.5 d_c; l_cl/6; 450 mm). (b) Ductility: A_sh = 0.025 s b_o f_ck/f_yk (spiral hoop); A_sh = 0.025 (s b_o f_ck/f_yk)[A_c/A_o - 1] (rectangular hoop), with A_c/A_o ≤ 1.3; d_bw ≥ 0.35 d_bL,max sqrt(f_ydL/f_ydw); s = min(b_o/3; 150 mm; 7 d_bL); distance between bars restrained by hoops ≤ 200 mm.
   DC"H": (a) Critical length: l_cr = max(1.5 d_c; l_cl/5; 600 mm). (b) Ductility: A_sh = 0.03 s b_o f_ck/f_yk (spiral hoop); A_sh = 0.30 (s b_o f_ck/f_yk)[A_c/A_o - 1] (rectangular hoop), with A_c/A_o ≤ 1.3; d_bw ≥ 0.40 d_bL,max sqrt(f_ydL/f_ydw); s = min(b_o/4; 100 mm; 5 d_bL); distance between bars restrained by hoops ≤ 150 mm.
   Eurocode (EN 1998-1:2004):
   DCL: Left to the concrete code.
   DC"M": (a) Critical length: l_cr = max(h_c; l_cl/6; 450 mm). (b) Ductility defined in terms of the curvature ductility factor (5.4.3.2.2 (6)P, (7)P and (8)).
   DC"H": (a) Critical length: l_cr = max(1.5 d_c; l_cl/6; 600 mm). (b) Ductility defined in terms of the curvature ductility factor (5.5.3.2.2 (6)P, (7)P and (8)).
   Proposed: Adopt ACI-318 or keep it as simple as Eurocode 2004.

6. Drift limit
   EBCS-8:1995: Δ_s ≤ 0.01h (brittle non-structural elements); Δ_s ≤ 0.015h (fixed non-structural elements) [61].
   Model/international codes: 2009 IBC: Δ_s = 0.025h or 0.015h (RC structures). UBC 97: Δ_m ≤ 0.025h (T < 0.7 sec) and Δ_m ≤ 0.020h (T ≥ 0.7 sec) [61]. Eurocode 2004 introduces a ν reduction factor.
   Proposed: Keep as is.

7. Soil-structure interaction
   EBCS-8:1995: No provisions.
   Model/international codes: IBC 2009 / Section 9.5.5 of ASCE 7.
   Proposed: Adopt Section 9.5.5 of ASCE 7.

Analysis

1. Reference method for determining seismic effects
   EBCS-8:1995: ESF (Equivalent Static Force) procedure.
   Model/international codes: Eurocode 2004: modal response spectrum analysis (linear).
   Proposed: Move towards modal response spectrum analysis (linear).

2. Accidental torsion (static)
   EBCS-8:1995: e_x = 0.05b, e_y = 0.05d; amplification factor 'A' ≤ 3.0 used.
   Model/international codes: UBC 97: e_x = 0.05b, e_y = 0.05d.
   Proposed: e_x = 0.1b, e_y = 0.1d due to limited quality-control [6].

3. Accidental torsion (dynamic)
   EBCS-8:1995: M_ix = e_ix F_ix; M_iy = e_iy F_iy.
   Model/international codes: UBC-1997: move CMs by e_x, e_y and do RSA, or apply M_ix = e_ix F_ix; M_iy = e_iy F_iy.
   Proposed: Move CMs by e_x, e_y and do RSA, or apply M_ix = e_ix F_ix; M_iy = e_iy F_iy.

4. Cracked concrete and masonry properties
   EBCS-8:1995: No.
   Model/international codes: UBC 97: 0.5 stiffness reduction for flexure and shear (ACI 318).
   Proposed: Adopt ACI-318.

5. P-Delta effect
   EBCS-8:1995: Considered if θ = P_x Δ/(V_x h_x) > 0.1 and < 0.2, with θ_max = 0.25.
   Model/international codes: UBC 97: consider P-Delta if θ > 0.1. Eurocode 8: θ_max = 0.3 [61].
   Proposed: Keep the same.

6. Joint deformation
   EBCS-8:1995: Neglected.
   Model/international codes: Considered.
   Proposed: Consider, as it is already automated by most software.

7. Drift requirements
   EBCS-8:1995: Δ_s ≤ 0.01h (with brittle non-structural elements); Δ_s ≤ 0.015h (buildings with fixed non-structural elements).
   Model/international codes: IBC 2009: Δ_s ≤ 0.007h to 0.025h (depending on structural system and importance). UBC 97: Δ_m ≤ 0.025h for T < 0.7 sec; Δ_m ≤ 0.020h for T ≥ 0.7 sec.
   Proposed: Keep EBCS-8:1995, which is aggressive.

8. Structural analysis/design software use
   EBCS-8:1995: Hand calcs required for plan check.
   Model/international codes: No specification.
   Proposed: Review process needed at design-check; add hand calcs for beam, column, and shear wall design and detailing.

9. Push-over analysis
   EBCS-8:1995: No.
   Model/international codes: Yes.
   Proposed: Allow as a transitional approach.

10. Base shear calculation
    EBCS-8:1995: F_b = S_d(T_1) W, with λ = 1.
    Model/international codes: F_b = S_d(T_1) m λ in Eurocode (EN 1998-1:2004).
    Proposed: -

11. F_t in story shear calculations
    EBCS-8:1995: F_t = 0.07 T_1 F_b.
    Model/international codes: UBC 97: F_t = 0 for T ≤ 0.7 sec; F_t = 0.07TV ≤ 0.25V for T > 0.7 sec. Eurocode 8: F_t = 0 (uses a linear fundamental mode; no higher modes). IBC 2009: F_t = 0.
    Proposed: Follow IBC, UBC, and Eurocode 2004.

12. 2D models for dynamic response spectrum analysis
    EBCS-8:1995: Allowed.
    Model/international codes: Allowed.
    Proposed: Encourage the use of spatial (3D) models.

13. Dynamic load case combinations
    EBCS-8:1995: SRSS + CQC.
    Model/international codes: CQC.
    Proposed: CQC (more accurate); drop SRSS.

Notes: CM = center of mass; i = story number; RSA = response spectrum analysis; F_x and F_y are story shears; M_x and M_y are the torsional moments [6].
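The P-delta criterion compared in Table 2 reduces to checking a story stability coefficient against the cited limits. A minimal sketch of the θ = P·Δ/(V·h) check with the 0.1/0.2 thresholds attributed to EBCS-8:1995; the numbers used below are illustrative, not from any code example:

```python
def stability_coefficient(P_kN, drift_m, V_kN, h_m):
    """Story stability coefficient: theta = P * delta / (V * h)."""
    return (P_kN * drift_m) / (V_kN * h_m)

def p_delta_action(theta):
    """Classify per the EBCS-8:1995-style limits cited in Table 2."""
    if theta <= 0.1:
        return "negligible - may be ignored"
    elif theta < 0.2:
        return "include second-order (P-delta) effects"
    else:
        return "exceeds 0.2 - stiffen structure (theta_max = 0.25 cap)"

# Illustrative story: 4000 kN gravity load, 12 mm inter-story drift,
# 300 kN story shear, 3.0 m story height.
theta = stability_coefficient(P_kN=4000.0, drift_m=0.012, V_kN=300.0, h_m=3.0)
print(round(theta, 3), p_delta_action(theta))  # 0.053 negligible - may be ignored
```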
CONCLUSIONS AND RECOMMENDATIONS

In this paper, it has been argued that, as the boom in large-scale infrastructure projects such as dams, power-plants, highway roads, and expansion of railways in Ethiopia continues, along with pressure from the staggering population growth of the country, the severity of threats posed by seismic hazards to the safety and serviceability of these structures needs to be known by all stakeholders. Currently, this awareness does not seem adequate, and several observations of engineering reports for large infrastructure projects suggest that this substantial threat is not well understood and appreciated.

Therefore, driven by this observation, in this
research report,

1. it has been demonstrated that there is
substantial amount of literature on seismicity
in Ethiopia that needs to be disseminated to a
wider audience,

2. a background and critical review of the last
three building codes of the country is given,

3. background arguments and facts that could
serve as starting points for the long-awaited
complete review of the current out-dated
seismic code are provided, and

4. guidelines for rational and conservative seismic design in Ethiopia and surrounding countries for large-scale projects, with particular emphasis on dams, highway structures, as well as railways and railway structures, are provided.

Further, the following recommendations are made:

1. Due to the importance of site-specific zoning and the inconsistencies in metropolitan areas, until a complete site-specific zoning is available sometime in the future, it is recommended - for consistency purposes as well as conservative designs - that the city of Addis Ababa and its industrial surroundings adopt a seismic zoning of at least zone 3.

2. The seismic zoning of important metropolitan areas like Jimma, which have suffered damage in recent moderate earthquakes, should be revised to a higher seismic zone.

3. Large infrastructure projects such as dams,
bridges, power-plants, railway structures need
to be governed by a separate specialized
seismic code which is more stringent than the
building code.

4. The current return period of 100 years is not conservative enough for buildings or for large infrastructures. A return period of 475 years is recommended as a strong candidate for consideration. Further, for large infrastructure projects such as dams, bridges, power-plants, and railway structures, the existing practice of a 100-year return period should be disallowed immediately, and the proposed 475-year return period should be extended to these specialized codes.


5. The numerous findings summarized in Table 2
strongly advocate that the current code needs a
complete revision in all aspects including the
special concrete and steel seismic provision
chapters. It is also anticipated that, in the next
code review cycle, this issue of whether to
continue in the traditions of UBC (and hence
IBC and ASCE-inspired codes) or follow
Eurocode will be in the forefront. It is hoped
that this determination of the path to be
followed should be based on well-thought and
unbiased discussions that consider the long-
term interest of the building/construction
industry in the country.

6. The basic design philosophy for seismic design has continued to evolve towards a performance-based approach, with both IBC and Eurocode 2004 implementing it. The next revision of the seismic code for Ethiopia should either directly adopt this approach, which has gained increasing acceptance among the world-wide engineering community, or offer it as an option until its wide usage in Ethiopia becomes common.

ACKNOWLEDGEMENTS

The authors would like to thank their colleagues
and collaborators, Dr. Asrat Worku (AAU), Dr.
Atalay Ayele (AAU), and Ato Yibeltal Zewdu who
had been the major forces behind these discussions
towards updating the current building codes of the
country. The fruitful discussions as well as the
encouragements and the long years of
collaborations are highly appreciated.

REFERENCES

[1] Population Reference Bureau, 2010, World
Population Data Sheet, www.prb.org, 2010.

[2] Ethiopian Central Statistical Agency,
Ethiopian Census, first draft, 2007, Addis
Ababa, Ethiopia. Summary and Statistical
Report of the 2007 Population and Housing
Census Results.

[3] Gebeyehu Abelti, Brazzoduro, M. and
Behailu Gebremedhin, Housing Conditions
and Demand for Housing in Urban Ethiopia,
In-depth Studies from the 1994 Population
and Housing Census in Ethiopia, Central
Statistical Authority (CSA), Addis Ababa,
Ethiopia and Institute for Population
Research National Research Council (Irp-
Cnr), Roma, Italy, October 2001.

[4] Zegeye Chernet, ''The NEXT Urban Explosion in Ethiopia'', EiABC, 2003.

[5] Ethiopian Government (FDRE), Ethiopian
Building Proclamation No. 624/2009,
Negarit Gazetta, 2009.

[6] Samuel Kinde Kassegne, "Proposed Considerations for Revision of EBCS-8:1995 for Conservative Seismic Zoning and Stringent Requirements for Torsionally Irregular Buildings", EACE's Zede Research Journal (Ethiopia), 2006.

[7] Gouin, P., Earthquake History of Ethiopia
and the Horn of Africa, International
Development Research Center, Ottawa,
Canada, IDRC- 118e, 259p, 1979.

[8] Fekadu Kebede, "Seismic Hazard
Assessment for the Horn of Africa", Zede,
Journal of Ethiopian Engineers and
Architects, Addis Ababa, Ethiopia, 1996.

[9] Amha Mersie Hazen, Tizitaye - Sile Rase
Ye-Mastawsew (Memoires: What I
Remember about Myself). 1891-1923,
Biography of Blaten Geta Mersie Hazen
Wolde Qirqos, Addis Ababa, Ethiopia, 2009.

[10] Atalay Ayele, The September 19, 2009 Ml
5.0 earthquake in the Ankober area: lessons
for seismic hazard mitigation around Addis
Ababa, AfricaArray Workshop, 2010.

[11] Earthquake Activity in Eastern Ethiopia,
September 2005, http://www.emsc-
csem.org/Page/index.php?id=69. Accessed
in September 2011.

[12] http://jimmatimes.com/article/Latest_
News/Latest_News/Ethiopia_Minor_Earthq
uake_hits_Jimma_students_injured/33867.
Accessed September 2011.

[13] http://earthquake-
report.com/2011/08/03/m5-
5earthquakehitsethiopi. Accessed September
2011.

[14] http://earthquake-report.com/2011/m-5-
earthquakes-2/. Accessed September 2011.

[15] IDNDR RADIUS Project, Addis Ababa
Case Study, Final Report, Prepared by Addis
Ababa RADIUS group, et al, September
1999.

[16] Fairhead, J. D. and Girdler, R. W., ''The
Seismicity of Africa, Geophys''. J. R. Astr.
Soc. 24, 1971, pp. 271-301.

[17] Gutenberg, B. and Richter, C. F., Seismicity
of the Earth and associated phenomena, 1st
and 2nd editions, Princeton University Press,
1954.

[18] Sykes, L. R. and Landisman, M., ''The
seismicity of East Africa, the Gulf of Aden
and the Arabian and Red Seas'', Bull. seism.
SOC. Am., 54, 1964, pp. 1927-1940.

[19] Fairhead, J. D., ''The seismicity of the East
African rift system 1955 to 1968'', M. Sc.
Dissertation, University of Newcastle upon
Tyne, 1968.

Notes and Proposed Guidelines on Updated Seismic Codes in Ethiopia

[20] Mohr, P. A., "Surface cauldron subsidence
and associated faulting and fissure basalt
eruptions at Garibaldi pass, Shoa,
Ethiopia", Bull. Volcan., 24, 1962, pp. 421-
428.

[21] Mohr, P. A., "The Ethiopian rift system",
Bull. Geophys. Observatory, Addis Ababa,
Ethiopia, 1967.

[22] Mohr, P. A., "Major volcanic-tectonic
lineament in the Ethiopian rift system",
Nature, Lond., 213, 1967, pp. 664-665.

[23] Mohr, P. A. and Gouin, P., "Gravity
traverses in Ethiopia (Fourth Interim
Report)", Bull. Geophys. Observa., Addis
Ababa, 12, 1968, pp. 27-56.

[24] Gouin, P., "Seismic and gravity data from
Afar in relation to surrounding areas", Phil.
Trans. R. Soc. Lond., A.267, 1970, pp. 339-
358.

[25] Gouin, P. and Mohr, P. A., "Recent effects
possibly due to tensional separation in the
Ethiopian rift system", Bull. Geophys.
Observa., Addis Ababa, 10, 1967, pp. 69-78.

[26] Shah, E. R., Seismicity of Kenya, Ph.D
Thesis, Univ. Coll. Nairobi, Kenya, 1986.

[27] Sutton, G. H. and Berg, E., "Seismological
studies of the Western rift valley of Africa",
Trans. Am. Geophys. Un., 39, 1958, pp.
474-481.

[28] De Bremaecker, J. Cl., "Seismicity of the
West African Rift Valley", J. Geophys. Res.,
64, 1959, pp. 1961-1966.

[29] Wohlenberg, J., "Seismizität der
ostafrikanischen Grabenzonen zwischen
4°N und 12°S sowie 23°E und 40°E"
(Seismicity of the East African rift zones
between 4°N and 12°S and 23°E and 40°E),
Veröffentlichung der Bayerischen Akademie
der Wissenschaften, Heft 23, 1968, 95 pp.

[30] Atalay Ayele, Earthquake Catalogue of
the Horn of Africa for the Period 1960-
93, Seism. Dept, Uppsala, Report, 395,
1995.

[31] Atalay Ayele, Spatial and Temporal
Variations of Seismicity in the Horn of
Africa from 1960-1993, Geophysical
Journal International, 130, 1997, pp. 805-810.

[32] Tilahun Mammo, Site-specific ground
motion simulation and seismic response
analysis at the proposed bridge sites within
the city of Addis Ababa, Ethiopia,
Engineering Geology, Volume 79, Issues 3-
4, 11 July 2005, pp. 127-150.

[33] Fekadu Kebede and Laike Mariam Asfaw,
Seismic Hazard Assessment for Ethiopia
and the Neighboring Countries, Sinet
Ethiopian Journal of Science, 19(1), 1996,
pp. 15-50.

[34] Nyblade, A. A. and Langston, C. A.,
Broadband seismic experiments probe the
East African rift, Eos Trans. AGU, 83,
2002, pp. 405, 408-409.

[35] AAU Geophysical Observatory, Initial
Report on the Awara Melka and
Surrounding Area Earthquake (in
Amharic), Geophysical Observatory
(Ethiopia), Contribution Number 3, April
1980.

[36] Laike Mariam Asfaw, Site Amplification at
the Western Escarpment of the East African
Rift System near Addis Abeba, Letters to
the Editor, Bulletin of the Seismological
Society of America, Vol. 72, No. 1,
February 1982, pp. 327-329.

[37] Laike Mariam Asfaw, Langano Earthquake,
July-August 1985, AAU Geophysical
Observatory (Ethiopia), Contribution
Number 5, 1985.

[38] AAU Geophysical Observatory, Woito
Seismic Episode of October-December
1987, (in Amharic), Geophysical
Observatory (Ethiopia), Contribution
Number 8, 1987.

[39] Laike Mariam Asfaw, Seismicity and
Earthquake Risk in the Addis Abeba
Region, Sinet Ethiopian Journal of
Science, 13(1), 1990, pp. 15-35.

[40] Laike Mariam Asfaw, Implication of shear
deformation and earthquake disturbances in
the East African Rift between 4°N and 6°N,
Journal of African Earth Sciences, Vol.
10, No. 4, 1990, pp. 745-751.

[41] Laike Mariam Asfaw, Seismic risk at a site
in East Africa rift System, Tectonophysics,
209, 1992, pp. 301-309.

[42] Laike Mariam Asfaw, Environmental
Hazard from Fissures in the Main Ethiopian
Rift, Journal of African Earth Sciences,
Vol. 27, No. 3/4, 1998, pp. 481-490.



[43] Laike Mariam Asfaw, Intensity Reports
Since P. Gouin's Book, Unpublished
Report, obtained through personal
communication, 2003.

[44] IRINNews.org, Ethiopia: Earthquakes in
Tigray, No Casualties, Nairobi, August 14,
2002.

[45] http://www.spiegel.de/international/spiegel/
0,1518,405947,00.html. Accessed September
2011.

[46] CP1-78, Revised Building Code, Ministry of
Public Works, pp. E1-E5, 1974.

[47] Gouin, P., "Seismic Zoning in Ethiopia",
Bull. Geophys. Obs. (Ethiopia), 7, 1976, pp.
1-46.

[48] ESCP-1:1983, Code of Practice for
Loading, Ethiopia, Ministry of Urban
Development and Housing, Addis Ababa,
Ethiopia, 1983.

[49] Structural Engineers Association of
California (SEAOC), Bluebook:
Recommended Lateral Force Requirements
and Commentary, SEAOC.

[50] International Conference of Building
Officials, Uniform building code,
California, 1997.

[51] Asrat Tessema, "EBCS-8:1995 - Aspects of
Design and Detailing", Proceedings of the
EASEE (Ethiopian Association of
Seismology and Earthquake Engineering)
Workshop, Addis Ababa, Ethiopia, February
21, 1996.

[52] EBCS-8:1995, Code of Standards for
Seismic Loads, Ministry of Works and
Urban Development, Addis Ababa, Ethiopia,
1995.

[53] Bekele Mekonnen, EBCS-8: 1995 Basis
of Design, Proceedings of the Ethiopian
Association of Seismology and Earthquake
Engineering (EASEE) on Seismic Hazard
Assessment and Design of Structures for
Earthquake Resistance, Addis Ababa,
Ethiopia, February 21, 1996, pp. 25-38.

[54] Hayhurst, C. J. and Maguire, J. R., "Draft
Eurocode 8 - Sample Seismic Force
Calculations for Discussion Purposes",
Journal of Earthquake Engineering and
Structural Dynamics, Vol. 16, 1998, pp.
775-779.

[55] Anastassiadis, K., Avramidis, I.E. and
Athanatopoulou, A., "Critical comments on
Eurocode 8 - parts 1-1 and 1-2", 11th
European Conference on Earthquake
Engineering Balkema, Rotterdam, 1998.

[56] Fekadu Kebede, Seismic Hazard
Assessment for the Horn of Africa,
Proceedings of EASEE (the Ethiopian
Association of Seismology and Earthquake
Engineering) Workshop on Seismic Hazard
Assessment and Design of Structures for
Earthquake Resistance (EBCS-8: 1995),
Addis Ababa, Ethiopia, February 21, 1996,
pp. 25-38.

[57] Fekadu Kebede, Hazard Maps of Spectral
Response Acceleration for Ethiopia,
Proceedings of the Second Symposium of
the Ethiopian Association of Seismology
and Earthquake Engineering (EASEE),
Addis Ababa, Ethiopia, April 4, 1997.

[58] Panza, G. F., Vaccari, F., Costa, G., Suhadolc,
P. and Fäh, D., "Seismic input modelling for
zoning and microzoning", Earthquake
Spectra, 12, 1996, pp. 529-566.

[59] Fekadu Kebede and Laike Mariam Asfaw,
Seismic Hazard Assessment for Ethiopia
and the Neighboring Countries, Sinet
Ethiopian Journal of Science, 19(1), 1996,
pp. 15-50.

[60] De Magistris, F. S., "Beyond EC8: the new
Italian seismic code", Geofizika, Vol. 28,
2011.

[61] Asrat Worku, "Comparison of Seismic
Provisions of EBCS 8 and Current Major
Building Codes Pertinent to the Equivalent
Static Force Analysis", Zede, Journal of the
Ethiopian Engineers and Architects, Vol. 18,
December 2002, pp. 11-26.



INTERNATIONAL ADVISORY BOARD



Prof. Abrham Engida, Michigan State University, USA
Prof. Abebe Dinku, AAiT-AAU, Ethiopia
Ato Asrat Bulbula, Consultant, Ethiopia
Dr. Beshawired Ayalew, Clemson University, USA
Dr. Fekadu Shewarega, Universität Duisburg-Essen, Germany
Prof. Gunter Busch, TU Cottbus, Cottbus, Germany
Dr. Kibret Mequanint, University of Western Ontario, Canada
Dr. Mekonnen Gebremichael, University of Connecticut, USA
Dr. Mulugeta Metaferia, Consultant, Ethiopia
Prof. Negussie Tebedge, Consultant, Ethiopia
Dr. Solomon Assefa, IBM, USA
Dr. Tesfaye Bayou, Consultant, Ethiopia
Dr. Wubshet Berhanu, EiABC-AAU, Ethiopia

ACKNOWLEDGMENTS

The Editorial Board of Zede would like to express its sincere gratitude to the following
individuals for reviewing the manuscripts that were originally submitted for publication in Zede
Vol. 28.

Abebayehu Assefa (Dr.-Ing.)
Demiss Alemu (Dr.-Ing.)
Dereje Hailemariam (Dr.)
Eneyew Adugna (Dr.)
Messele Haile (Dr.)
Mohammod Seid (Dr.)
Murad Redwan (Ato)
Naod Duga (Ato)
Shifferaw Taye (Dr.)
Tesfaye Dama (Dr.)
Teshome Bekele (Ato)
Yalemzewd Negash (Ato)
