
Research Network for a Secure Australia

Protecting Australian Infrastructure

Recent advances in security technology 2007


Proceedings of the 2007 RNSA Security Technology Conference Melbourne, 2007 Editors: Priyan Mendis, Joseph Lai, Ed Dawson & Hussein Abbass


The RNSA is an Australian Government initiative.

The Research Network for a Secure Australia (RNSA) is a multidisciplinary collaboration established to strengthen Australia's research capacity for protecting critical infrastructure (CIP) from natural or human-caused disasters, including terrorist acts. The RNSA facilitates a knowledge-sharing network for research organisations, government and the private sector to develop research tools and methods to mitigate emerging safety and security issues relating to critical infrastructure. World leaders with extensive national and international linkages in relevant scientific, engineering and technological research lead this collaboration. The RNSA also organises various activities to foster research collaboration and nurture young investigators.

Participants are encouraged to join the RNSA. Membership of the RNSA is open to Australian and international researchers, industry, government and others professionally involved in CIP research. Information on joining is at www.secureaustralia.org.

Convenor: A/Prof Priyan Mendis, Head of the Advanced Protective Technology for Engineering Structures Group, The University of Melbourne
Node Leader: Prof Ed Dawson, Queensland University of Technology
Node Leader: Prof Joseph Lai, UNSW@ADFA
Node Leader: Prof Hussein Abbass, UNSW@ADFA
Research Program Manager: Dr Tuan Ngo, The University of Melbourne
Technical & Administrative Support Officer: Anant Gupta, The University of Melbourne
Outreach Manager: Athol Yates

Published by the Australian Homeland Security Research Centre. The Centre undertakes independent, evidence-based analysis of domestic security issues.

Australian Homeland Security Research Centre
Tel 02 6161 5143, Fax 02 6161 5144, PO Box 295, Curtin ACT 2605
AIIA Building, Level 1, 32 Thesiger Cct, Deakin ACT
info@homelandsecurity.org.au, www.homelandsecurity.org.au

ISBN 978-0-9757873-9-7

© 2007, Australian Homeland Security Research Centre, and the authors. All rights reserved. Other than brief extracts, no part of this publication may be reproduced in any form without the written consent of the publisher. The Publisher makes no representation or warranty regarding the accuracy, timeliness, suitability or any other aspect of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Foreword
The 2007 Science, Engineering and Technology Summit was the third formal conference organised by the Research Network for a Secure Australia (RNSA), funded by the Australian Research Council. The Summit has become an annual event bringing together researchers and practitioners in fields relating to the national research priority area Safeguarding Australia. It provided a forum for the exchange of ideas and research findings between core groups and individuals interested in security technology.

The opening address was given by The Hon. Phillip Ruddock, Attorney-General. The plenary speakers were Professor Rob Evans from the University of Melbourne; Mr Mike Rothery, Chairman of the RNSA Advisory Committee and Assistant Secretary of the Critical Infrastructure Protection Branch; and Mr Paul Murphy from GHD.

The papers in this volume are those accepted into the refereed streams. Each paper was subjected to a rigorous review process conducted by at least two experts in the appropriate field, and the authors were requested to revise their papers according to the reviewers' comments. The editors would like to thank all of the reviewers for their assistance in maintaining the high quality of the papers. Finally, the editorial committee wishes to thank the publisher of these proceedings, the Australian Homeland Security Research Centre, and its Executive Director, Athol Yates.

A/Prof Priyan Mendis
Prof Joseph Lai
Prof Ed Dawson
Prof Hussein Abbass


Reviewers
The editors would like to thank the following reviewers for their assistance in maintaining the high quality of papers: Prof Hussein Abbass, Dr Paul Barnes, Dr Collette Burke, Dr David Cornforth, Prof Ed Dawson, Prof Michael Frater, Mr Fernando Nestor Gesto, Mr Anant Gupta, Prof Hong Hao, Mr Haiming Huang, Prof Joseph Lai, Dr Andrew Lambert, Dr Kara-Jane Lombard, Mr Raymond Lumantarna, Ms Leonie Simpson, Prof Adrian McCullagh, A/Prof Priyan Mendis, Mr Michael Netherton, Dr Tuan Ngo, Dr Jason Reid, Dr Alex Remennikov, Mr Massoud Sofi, Prof Mark Stewart, Mr Murat Tahtali, Dr Holly Tootell, Dr Jidong Wang, Dr Chengqing Wu, Dr John Young and Dr Weiping Zhu.

Publication Managers
Athol Yates
Paige Darby
Anant Gupta


Table of contents
1. Blast-RF: Software for analysing blast risks for facades ........ 10
M.D. Netherton and M.G. Stewart Centre for Infrastructure Performance and Reliability School of Engineering, The University of Newcastle, NSW, Australia

2. Robust Communications using Wireless Mesh Networks for Safeguarding Water Infrastructures .......................................... 21
Saad Khan and Asad Amir Pirzada National ICT Australia Marius Portmann University of Queensland and National ICT Australia Limited

3. Maritime Port Intelligence using AIS Data ................................ 33


Min Han Tun, Graeme S Chambers and Tele Tan Department of Computing, Curtin University of Technology Thanh Ly Defence Science and Technology Organisation

4. High-performance retrofit solutions for blast protection of facades in office buildings ......................................................... 44
A. Remennikov University of Wollongong, Australia D. Carolan Taylor Thomson Whitting International Pty Ltd Consulting Engineers, Sydney

5. Preparing for Post-catastrophe Video Processing................... 55


A. van den Hengel, H. Detmold, A. Dick and R. Hill Australian Centre for Visual Technologies The University of Adelaide, Australia

6. Length Based Modelling of HTTP Traffic for Detecting SQL Injection Attacks.......................................................................... 67
Mehdi Kiani, Andrew Clark, George Mohay Information Security Institute, Queensland University of Technology


7. Maritime Terrorism and Risks to the Australian Maritime and Resource Industries.................................................................... 79
Dr. Alexey D. Muraviev Curtin University of Technology, Western Australia

8. Polymeric Coatings for Enhanced Protection of Structures from the Explosive Effects of Blast ........................................... 90
K. Ackland, C. Anderson and N. St John Defence Science & Technology Organisation, Australia

9. EMANZE: A comparative Study on Emergency Management in Australia, New Zealand and Europe .......................................... 97
Andreas Meissner Fraunhofer Institute for Information and Data Processing (IITB), Karlsruhe, Germany

10. Practical Crypto-Biometric Systems: What Can a Fuzzy Vault Do? ................................................................................... 111
Marianne Hirschbichler, Wageeh Boles, Colin Boyd and Greg Maitland Queensland University of Technology, Brisbane, Australia

11. Describing asset criticality as an aid for security risk management.............................................................................. 122
Allen Fleckner MSc, PSP Critical Risk Pty Ltd, Melbourne

12. Optimized design of a simple supported one-way RC slab against airblast loads ............................................................... 133
C. Wu and D.J. Oehlers School of Civil and Environmental Engineering, The University of Adelaide W. Sun Department of Civil Engineering, Huaiyin Institute of Technology, China M. Rebentrost VSL Australia Pty Ltd, Australia

13. Intelligent Evacuation Models for Fire and Chemical Hazard ........................................................................................ 145
D.J. Cornforth, H.A. Abbass and H. Larkin Defence and Security Applications Research Centre (DSARC), University of New South Wales, Australia


14. High assurance communication technologies supporting critical infrastructure protection information sharing networks....................................................................... 156
J.F. Reid, S.Corones, E. Dawson, A McCullagh and E. Foo Information Security Institute (ISI), Queensland University of Technology, Australia

15. Gen E (Generation Extremist): The significance of youth culture and new media in youth extremism............................ 168
K-J. Lombard Curtin University of Technology, Australia

16. An Overview of Pressure-Impulse Diagram Derivation for Structure Components ............................................................. 179
Yanchao Shi The University of Western Australia, Australia Tianjin University, Tianjin, China Hong Hao The University of Western Australia, Australia Zhong-Xian Li Tianjin University, Tianjin, China

17. Location-Based Services in Emergency Management- from Government to Citizens: Global Case Studies ....................... 190
Anas Aloudat, Katina Michael and Jun Yan School of Information Systems and Technology, University of Wollongong

18. Denial of Service Vulnerabilities in IEEE 802.11i.................... 202


J. Smith Information Security Institute, Queensland University of Technology, Brisbane Australia

19. Analysis of Concrete Slab Fragmentation from Blast Damage ............................................................................ 214
X.Q. Zhou and H. Hao The University of Western Australia, Australia

20. Modeling and Integration of Disaster Situational Reports .... 226


Sai Sun and Renato Iannella National ICT Australia (NICTA)


21. Managing security effects of WLAN deployments in critical infrastructure control systems.................................... 236
R. Gill, J. Smith and M. Branagan Information Security Institute, Queensland University of Technology, Brisbane, Australia

22. Behavioural responses to the terrorism threat: Applications of the Metric of Fear.................................................................. 248
Anne Aly, Mark Balnaves and Christopher Chalon Edith Cowan University

23. Mechanical Output of Contact Explosive Charges ................ 256


Gregory Szuladzinski Analytical Service Pty Ltd, Australia

24. On Ensuring Continuity of Mobile Communications in a Disaster Environment ............................................................... 268


J. Ring, E. Foo and M. Looi Queensland University of Technology, Australia

25. Decision Support Tools for National Security 'Capacity' Problems - A Decontamination Case Study ........................... 278
Dion Grieger and Rick Nunes-Vaz Defence Science and Technology Organisation, Australia

26. Nonlinear dynamic analysis of beam-column subjected to impact of blast loading ............................................................. 287
H.R. Vali Pour, Luan Huynh and S.J. Foster The University of New South Wales, Australia

27. Behaviour and Modelling of Glazing Components Subjected to Full-Scale Blast Tests in Woomera.......................................... 299
R. Lumantarna, T. Ngo, P. Mendis and N. Lam University of Melbourne, Australia

28. Smart Cameras Enabling Automated Face Recognition in the Crowd for Intelligent Surveillance System ....................... 310
Y. M. Mustafah National ICT Australia Ltd. (NICTA), School of ITEE, The University of Queensland A. Bigdeli National ICT Australia Ltd. (NICTA)

A. W. Azman National ICT Australia Ltd. (NICTA), School of ITEE, The University of Queensland B. C. Lovell National ICT Australia Ltd. (NICTA), School of ITEE, The University of Queensland

29. Corporate Counter-Terrorism: The Key Role for Business in the War on Terror .................................................................. 319
Luke Howie Global Terrorism Research Centre, Department of Behavioural Studies, Monash University

30. Robust Face Recognition Techniques for Smart Camera Based Surveillance ..................................................... 331
Ting Shan, Abbas Bigdeli, Brian C. Lovell, Conrad Sanderson, Shaokang Chen and Erik Berglund NICTA and School of ITEE, The University of Queensland

31. A face recognition approach using Zernike Moments for video surveillance ..................................................................... 341
Arnold Wiliem, Vamsi Krishna Madasu, Wageeh Boles and Prasad Yarlagadda School of Engineering Systems Queensland University of Technology, Australia

32. Forensic Challenges in Service Oriented Architectures........ 356


Andrew Marrington, Mark Branagan and Jason Smith Information Security Institute, Queensland University of Technology, Australia

33. A Comparison of Media Response to Recent National Security Events ......................................................................... 367
Holly Tootell University of Wollongong, Australia

34. Security Strategies for SCADA Systems................................. 378


J. Wang and X. Yu RMIT University, Australia

35. The potential for using UAVs in Australian security applications ............................................................................... 388
S. Russell University of South Australia

1. Blast-RF: Software for analysing blast risks for facades
M.D. Netherton and M.G. Stewart
Centre for Infrastructure Performance and Reliability School of Engineering, The University of Newcastle, NSW, Australia
Abstract
There are many computer programs that model the consequences to built infrastructure subject to explosive blast loads; however, the majority of these do not account for the uncertainties associated with system response or blast loading. This paper describes new software, called "Blast-RF" (Blast Risks for Facades), that incorporates existing blast-response software within an environment that considers threat/vulnerability uncertainties and variability via probability and structural reliability theory. This allows the prediction of the likelihood and extent of damage and/or casualties; information which will be useful for risk mitigation considerations, emergency services' contingency and response planning, collateral damage estimation and post-blast forensic analysis.

Biographies
Michael Netherton is a post-graduate research student at the Centre for Infrastructure Performance and Reliability at The University of Newcastle, Australia. He received his MSc in Weapons Effects on Structures in 2000 from the UK's RMCS and, while a Squadron Leader with the RAAF, was the Senior Australian Representative on the Coalition's Weapons Effectiveness Assessment Team in post-war Iraq in 2003.

Mark Stewart is a Professor and Director of the Centre for Infrastructure Performance and Reliability, School of Engineering, The University of Newcastle, Australia. He received his PhD in 1988 from The University of Newcastle. His expertise includes multi-hazard risk assessment, structural reliability, security risk assessment, uncertainty modelling, stochastic deterioration modelling and decision analysis.

Introduction
There are many instances in the recent past indicating that terrorist threats will remain a potential hazard into the future, and that a favoured method of terrorist attack is the Improvised Explosive Device (IED) detonated within an urban environment, with a view to disrupting, damaging or destroying infrastructure, public systems or people. In attempts to provide better advice to decision-makers about what effects may reasonably be expected from these types of attack, there are continual improvements in the methods used to model the effects of Vehicle Borne IEDs (VBIEDs) and other types of IEDs.

Nearly all current explosive-blast modelling techniques are deterministic, in that they provide one result for a given set of problem inputs: for example, given an explosive weight and range, does a particular element of a building survive the shock wave or not? The ability to obtain a deterministic result is naturally very attractive to decision makers, where the perception that the result is accurately known can provide a greater degree of confidence in any decisions made. However, such confidence is illusory, as deterministic methods fail to consider the uncertainty associated with many aspects of threats and vulnerabilities; e.g. there may be considerable variability in the weight of explosive, the range to the intended target, the energetic output of the explosive, the vulnerability of any element affected, the error within the modelling tool itself, and so on. One method for dealing with such uncertainties utilises structural reliability theory, where quantitative advice can be provided to decision makers in the form of probabilities of damage or safety hazards. This paper's perspective is that information derived using structural reliability theories (and Probabilistic Risk Assessments in particular) has more utility than deterministic solutions.
Indeed, society readily accepts the use of probabilistic techniques in risk-based decision-making and applies them to a range of potentially hazardous industries and situations (Stewart & Melchers 1997).

When a structure is directly targeted by a VBIED there is often widespread damage to nearby structures, with significant damage to concrete, masonry or glass facade elements; in the 1995 bombing of the Alfred P. Murrah Federal Building, Oklahoma City, USA, facade damage was observed on buildings up to 1.6 km away from the detonation point (Norville et al. 1999). This phenomenon, whilst not discounting the seriousness of the situation at the target structure, is possibly serendipitous for the terrorist and may account for a greater safety hazard to adjacent structures than at the target itself, as evidenced by the VBIED attack on the Australian Embassy in Jakarta, Indonesia in 2004 (see Figure 1).

This paper describes a new research tool, called "Blast-RF" (Blast Risks for Facades), that incorporates existing blast-response software within an environment that considers threat and vulnerability uncertainty and variability via probability and structural reliability theory. This allows the prediction of the likelihood and extent of damage and/or casualties. Blast-RF produces information on damage risks and safety hazard risks for structural facades subject to explosive blast loads, which may then be used:

(i) as a decision support tool to mitigate damage,
(ii) by emergency services to predict the extent and likelihood of damage and hazard levels in contingency planning and emergency response simulations,
(iii) in collateral damage estimation by military planners, or
(iv) in post-blast forensics.

Figure 1. Blast damage to glass facades on buildings adjacent to the Australian Embassy in Jakarta, Indonesia, 2004. (Image used with permission of the Australian Federal Police)

2. Probabilistic modelling
Very few parameters are precisely known and so, as distinct from the single point estimates used in deterministic modelling, probabilistic modelling presumes that input variables can have a range of values based on probability distributions; e.g. concrete strength is known to have a coefficient of variation (COV) of 0.1 to 0.2. An illustrative example with Load (S) and Resistance (R) distributions is shown in Figure 2. In this example, failure occurs when load exceeds resistance, and the probability of failure (pf) = Pr(R < S). When cast as a probabilistic model, even though the nominal (or design) load (Sn) and resistance (Rn) values may be such that Rn > Sn, the probability of failure will not be zero; it is calculated as:

p_f = ∫_0^∞ F_R(x) f_S(x) dx        (1)

where F_R(x) is the cumulative distribution function of resistance (also referred to as a fragility curve) and f_S(x) represents the probability distribution of blast loading for a specific threat scenario (i.e. known explosive weight and stand-off distance), considering inherent, model error and parameter uncertainties.
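In practice an integral like Eq. (1) is usually estimated by Monte-Carlo simulation, which is the computational method Blast-RF adopts. As an illustrative sketch only (the normal distributions and the parameter values below are hypothetical, not taken from Blast-RF), p_f = Pr(R < S) can be estimated by repeatedly sampling a resistance and a load and counting failures:

```python
# Monte-Carlo estimate of pf = Pr(R < S) for illustrative normal load
# and resistance distributions (values are hypothetical examples).
import math
import random

def pf_monte_carlo(mu_r, cov_r, mu_s, cov_s, n=200_000, seed=1):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        r = rng.gauss(mu_r, mu_r * cov_r)   # sampled resistance
        s = rng.gauss(mu_s, mu_s * cov_s)   # sampled load
        if r < s:                           # failure: load exceeds resistance
            fails += 1
    return fails / n

def pf_exact(mu_r, cov_r, mu_s, cov_s):
    # For independent normals, Pr(R < S) has a closed form via the
    # standard normal CDF, which lets us sanity-check the simulation.
    sd = math.hypot(mu_r * cov_r, mu_s * cov_s)
    z = (mu_s - mu_r) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

pf_mc = pf_monte_carlo(mu_r=1.5, cov_r=0.2, mu_s=1.0, cov_s=0.3)
```

Note that even though the mean resistance (1.5) comfortably exceeds the mean load (1.0), the estimated failure probability is non-zero, which is exactly the point the text makes about Rn > Sn.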

[Figure 2 shows the probability density functions of demand/load (S) and capacity/resistance (R), with the nominal values Sn and Rn marked and the overlap of the two curves labelled as the failure region.]

Figure 2. Probabilistic modelling of load and resistance, where the probability of failure (pf) = Pr(R < S).

See Stewart et al. (2006) and Stewart & Netherton (2007) for further information on reliability theory and the probabilistic modelling of simple facade systems subject to terrorist explosive attack; these give details on the assessment of risks for new and existing glass facades and on cost-effective risk mitigation solutions.

Glazing strength is dominated by flaw-size distributions in glass panels, and so since the 1980s most glass design standards have been derived from statistical and probabilistic approaches. While there may be some disagreement over the selection of test data used to derive glass strength statistics (e.g. Calderone & Jacob 2005), there is full agreement that probabilistic approaches are the only method to capture the variability of glass strength, and so they continue to be the basis for glazing design. The probabilistic philosophy of accounting for defect, material, dimensional, environmental and modelling variability underpins the development of all design codes for reinforced concrete, structural steel, masonry and other structural materials. Hence, the probabilistic approach developed for Blast-RF builds upon these well-proven design and assessment applications of structural reliability and probability theory.

3. Blast-RF software
Blast-RF is an Excel-based software program written in Visual Basic for Applications (VBA). The glazing module of the Blast-RF software comprises the following:

- Glazing response solver:
    o single degree of freedom (SDOF) model, or
    o LS-DYNA (2007) non-linear finite element model;
- ANFO or TNT-equivalent explosive weight;
- Variability of explosive weight;
- Detonation coordinates (x, y, z) from centre of glazing;
- Monolithic or laminated glazing;
- Variability of glazing stress limit state;
- Aspect ratio of window;
- Glazing support conditions;
- Variability of material and dimensional properties; and
- Variability of glass-fragment drag coefficients.

Monte-Carlo simulation analysis is the computational method used. The Blast-RF interface is shown in Figure 3. The output from Blast-RF is given in terms of the probability of:

- damage (first cracking of glazing), and
- glazing safety hazard criteria based on glass fragment trajectory.

Figure 3. The Blast-RF (Excel-based) interface.

3.1 Probability Distributions
One of the advantages of probabilistic risk assessment is that all parameter distribution types are readily assimilated. Whilst most parameters in the present paper are normally distributed, one of them, the drag coefficient (CD) of a glass fragment, has a triangular distribution. This accounts for the higher likelihood of a maximum CD of 2.05 for annealed glazing (which usually produces fragments/shards with large aspect ratios) and the higher likelihood of a minimum CD of 1.05 following the fracture of fully tempered glazing (where square-like fragments are typically observed). Given that these CD values are absolute limits, any distribution with a long probability "tail" is not appropriate. The parameters (and associated probability distribution details) used within Blast-RF for this paper are shown in Table 1.
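A bounded triangular distribution like this is straightforward to sample. The sketch below uses Python's standard library purely for illustration (Blast-RF itself is implemented in Excel/VBA): placing the mode of the triangle at the appropriate bound reproduces the Table 1 means, since a triangular distribution's mean is (low + mode + high) / 3, giving (1.05 + 2.05 + 2.05)/3 ≈ 1.72 for annealed and (1.05 + 1.05 + 2.05)/3 ≈ 1.38 for fully tempered glass.

```python
# Sampling the glass-fragment drag coefficient CD from a triangular
# distribution bounded by the absolute limits 1.05 and 2.05, with the
# mode at the upper bound for annealed glass (long shards, high drag)
# and at the lower bound for fully tempered glass (square-like
# fragments, low drag).
import random

CD_MIN, CD_MAX = 1.05, 2.05

def sample_cd(glass_type, rng):
    if glass_type == "annealed":
        return rng.triangular(CD_MIN, CD_MAX, CD_MAX)  # mode at maximum
    elif glass_type == "tempered":
        return rng.triangular(CD_MIN, CD_MAX, CD_MIN)  # mode at minimum
    raise ValueError(f"unknown glass type: {glass_type}")

rng = random.Random(42)
n = 100_000
mean_annealed = sum(sample_cd("annealed", rng) for _ in range(n)) / n
mean_tempered = sum(sample_cd("tempered", rng) for _ in range(n)) / n
```

Every sample stays strictly within [1.05, 2.05], which is the property the text highlights: unlike a normal distribution, the triangular distribution has no probability "tail" beyond the physical limits.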

Parameter                                    Mean value   COV     Probability distribution
TNT energetic output                         100.0 kg     0.05    Normal
Explosive stand-off                          10.0 m       0.03    Normal
Glass height and width                       2.0 m        0.01    Normal
Tensile stress limit, annealed glass         84.8 MPa     0.28    Normal
Tensile stress limit, fully tempered glass   159.6 MPa    0.10    Normal
Glass fragment mass                          0.028 kg     0.00    Deterministic
Drag coefficient, annealed fragment          1.72         0.137   Triangular
Drag coefficient, fully tempered fragment    1.38         0.170   Triangular
Blast impulse                                1.00         0.10    Normal
SDOF model error                             1.00         0.00    Deterministic

Table 1. Statistical parameters used within Blast-RF.

Figures 4, 5 and 7 provide typical output from Blast-RF, the utility of which supports four distinct capabilities, which are now described.

3.2 Risk mitigation advice
Blast-RF can be used to plot and assess the risks associated with an existing facade (for a given threat scenario) and then compare them against the risks associated with a proposed mitigation solution; for example, what is the quantifiable change in risk if a stronger glass pane is fitted, or a greater stand-off distance is enforced? Figure 4 shows how the risk (or probability) of window damage can be plotted across the face of a 20-storey building, with Figure 4a showing the risk contours associated with 2 m × 2 m, 10 mm annealed glass windows in the facade, whilst Figure 4b shows the relative reduction in risk if stronger yet thinner (8 mm) fully tempered (toughened) glass is used. In both scenarios, all 340 windows were subject to the same VBIED containing 100 kg of TNT detonated on the ground 10 m in front of the ground floor's centre window.

Similar plots can be created for changes in any variables; i.e. what would be the reduction in risk if vehicles were held 25 m away, as compared to the current 10 m? Alternatively, what if no vehicle of 1000 kg tare could enter a given street, or if pedestrian traffic could only approach to within 25 m of an entrance? All manner of variables can be entered into Blast-RF and the associated probability-of-damage contours subsequently plotted. For the 100 kg VBIED described above, Figure 4b shows that for the 16th floor and above the risks of window damage approach zero, even though the risks for the lower eight floors are similar for both glazings. Further, the average damage risk (across the whole facade) for 8 mm fully tempered glazing is 0.52, as compared to 0.77 for 10 mm annealed glazing; a 32% reduction in the average risk of window damage.
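The quoted 32% figure is simply the relative reduction in the average damage risk between the two glazing options, which can be checked directly:

```python
# Worked check of the relative risk reduction quoted in the text:
# switching from 10 mm annealed glazing (average facade damage risk
# 0.77) to 8 mm fully tempered glazing (0.52).
def relative_reduction(p_before, p_after):
    return (p_before - p_after) / p_before

reduction = relative_reduction(0.77, 0.52)  # (0.77 - 0.52) / 0.77
```

This evaluates to roughly 0.32, i.e. the 32% reduction in the average risk of window damage stated above.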
Blast-RF can produce unique Blast Reliability Curves (BRCs) for specific threat/vulnerability scenarios. Again, details of risk in terms of probability of failure can be shown for the existing situation and readily compared against mitigation options.

(a) 10mm annealed glazing.

(b) 8mm fully tempered glazing.

Figure 4. Example damage risk contour plots (Probability of Window Damage) for different window glass in a 20-storey structure subject to 100 kg TNT detonated 10 m from the front of the building. (Netherton & Stewart 2007)

In the BRCs shown in Figure 5, the risks associated with 1.5 m × 1.5 m, 8 mm annealed glass panes are compared against the risks associated with a 6 mm toughened glass replacement, for a range of explosive threats located directly in front of a single window (i.e. angle of incidence (AOI) = 0). It shows, for example, that for a stand-off of 30 m, the risk of the 8 mm annealed glass pane failing (when subject to a relatively small 10 kg TNT charge) is 0.6, as compared to 0.03 for the thinner yet stronger 6 mm toughened glass pane. The question for decision makers can then become: is the cost of the mitigation solution acceptable for the quantitatively demonstrated risk reduction?


[Figure 5 plots probability of failure (pf|ij) against stand-off distance (0 to 350 m) for TNT charge weights of 5, 10, 50, 100, 250, 500 and 1000 kg, comparing 6 mm toughened and 8 mm annealed glazing.]
Figure 5. Example BRCs: 1.5 m × 1.5 m glass windows with two glazing configurations (for AOI = 0).

3.3 Contingency planning and emergency response simulations
If a facade window's tensile stress limit is exceeded during blast-induced out-of-plane deflections, that glass pane will fail. Blast-RF can be used to determine glazing safety hazards for personnel within buildings affected by broken glass. Using post-break glass-element nodal velocities and the UK Glazing Hazard Guide's safety criteria (1997) (see Figure 6), contour plots can be produced (similar to those of Figure 4) which show the probability of achieving either a Minimal, Low or High safety hazard.

Figure 6. The UK Glazing Hazard Guide's rating scheme (1997), where the level of safety hazard is defined by the post-blast location of glass fragments.

Using the same 20-storey scenario as described previously, with 100 kg TNT detonated at 10 m, it is possible to produce safety-hazard risk contour plots (see Figure 7) of the probability of a High-Hazard. Figures 7a and 7b show that whilst the High-Hazard risks are similar for the lower eight floors regardless of glazing choice, Figure 7b shows a reduction in risk for the windows above the eleventh floor when the stronger yet thinner glazing is specified. Across the whole facade, the average High-Hazard safety risk is 0.59 for the 10 mm annealed glazing, as compared to 0.47 for the 8 mm fully tempered glazing; a 20% reduction in the risk of a High-Hazard. This new type of advice will be of significant benefit to emergency services personnel involved in contingency planning for expected safety hazards or the scale of casualties due to various threat scenarios.

(a) 10mm annealed glazing.

(b) 8mm fully tempered glazing.

Figure 7. Example safety-hazard risk contour plots (Probability of High-Hazard) for different window glass in a 20-storey structure subject to 100 kg TNT detonated 10 m from the front of the building. (Netherton & Stewart 2007)

3.4 Collateral damage estimations for military planners
The ability to plot probability of damage/safety-hazard contours in 3-dimensional space will provide significant utility to the military targeting process. Military planning staff continually seek better methods for understanding the most likely collateral damage for a given use of a particular weapon. Blast-RF's probabilistic modelling of risks to facades will complement, and in some cases improve, current Collateral Damage Estimation tools, particularly in the conduct of military operations within urban or complex environments.

3.5 Post-blast forensics
It is often a challenging task for security agencies to forensically determine, with confidence, the weight and type of explosive used in a VBIED attack. Blast-RF can be used as a complementary source of information for the forensic analyst. For example, in Figure 1, assuming the facade of the building located to the right of the Embassy has glass of all the same type and size, then it is possible to approximate the whereabouts of a point where half the windows are damaged, i.e. pf = 0.5. In this scenario, this point is determined to be 60% of the way (moving from front to back) along the side of the building facing the Embassy. Using this information, it is possible to reverse-engineer a range of scenarios within Blast-RF and produce damage contour plots with a view to emulating observed damage patterns. This information is by no means exact; however, when coupled with other non-exact and subjective data such as crater depth (assuming such data can actually be gathered), it can provide the analyst with more relevant information and thus help determine, with perhaps more certainty, an estimate of the charge weight and/or type of explosive used.

3.6 Blast-RF's Excel environment
Microsoft's spreadsheet program Excel was chosen as the host environment for Blast-RF because of:

- Ease of data entry via a spreadsheet.
- The ability to write, compile and call FORTRAN-style script within Excel's inherent VBA macro environment. For each Monte-Carlo iteration, Excel easily assembles different solver command strings, which are then passed to LS-DYNA for execution; VBA's inherent Windows operating system access facilitates the starting and stopping of Windows-based programs and read/write access to solver output files. This method of batch-file operation is the key reason Excel and VBA were chosen as Blast-RF's operating environment.
- Ease of post-processing information. VBA supports object-oriented code, which, when combined with Excel 2007's automatic parallel processing of formulations within individual cells, means that total analysis time can be significantly reduced when multiple CPUs are used.
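The batch-driven Monte-Carlo loop described above can be sketched as follows. Blast-RF itself implements this in VBA inside Excel; the Python sketch below is illustrative only, and the threat distribution, command-string format and file names are invented assumptions rather than Blast-RF's actual interface:

```python
import random

def sample_threat():
    """Sample one threat scenario; the lognormal parameters are
    illustrative placeholders, not Blast-RF's calibrated values."""
    charge_kg = random.lognormvariate(4.6, 0.3)   # ~100 kg median charge
    standoff_m = random.uniform(8.0, 12.0)        # assumed standoff range
    return charge_kg, standoff_m

def build_command(run_id, charge_kg, standoff_m):
    """Assemble a solver command string for one iteration, mirroring
    how the VBA macro drives LS-DYNA in batch mode (file names and
    flags here are hypothetical)."""
    return (f"lsdyna i=facade_{run_id}.k "
            f"charge={charge_kg:.1f} standoff={standoff_m:.1f}")

def monte_carlo(n_iterations, run_solver):
    """Run the iterations, delegating execution to run_solver (in
    Blast-RF this is a call out to the operating system, followed by
    reading the solver's output files)."""
    results = []
    for run_id in range(n_iterations):
        charge, standoff = sample_threat()
        results.append(run_solver(build_command(run_id, charge, standoff)))
    return results

# Stand-in for the external solver: here we simply record each command.
outputs = monte_carlo(5, run_solver=lambda cmd: cmd)
```

In the real tool the `run_solver` step launches LS-DYNA as an external Windows process and parses its output files; that is the batch-file operation the bullet list above describes.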

3.7 Future development
The recent focus of system reliability research using Blast-RF has been on glass facade systems. Work is well developed on the probabilistic modelling of monolithic glazing, with current research now including glass laminates in the prediction of likelihood and extent of building occupant safety hazards and casualties. Further,
studies on the reliability of masonry facades subject to similar blast loads have commenced, with a view to Blast-RF becoming a useful research tool for assessing the structural reliability of any facade subject to loads associated with terrorist explosive attack.

4. Conclusions
This paper described a new research tool - called "Blast-RF" (Blast Risks for Facades) - that incorporates existing blast-response software within an environment that considers threat and vulnerability uncertainty and variability. The key output of the tool is the prediction of likelihood and extent of facade damage and/or related casualties. This has significant utility for decision makers when considering matters such as blast mitigation choices, contingency and emergency planning, collateral damage estimation and post-blast forensics.

5. Acknowledgments
The support of the Australian Research Council under grant DP0556913 is gratefully acknowledged.

6. References
Calderone, I. & Jacob, L. 2005. The dangers of using a probabilistic approach for glass design. Glass Processing Days 2005: 1-3.
LS-DYNA (Version 971). 2007. Finite element software for nonlinear dynamic analysis of inelastic structures. Livermore Software Technology Corporation, Livermore, CA.
Netherton, M.D. & Stewart, M.G. 2007. Safety hazard and damage risks for monolithic window glazing subject to explosive blast loading. In: Proceedings of the 7th International Conference on Shock and Impact Loads on Structures (SI07), Beijing, China, 17-19 October 2007. (in press)
Norville, H.S., Harvill, N., Conrath, E.J., Shariat, S. & Mallonee, S. 1999. Glass-related injuries in Oklahoma City bombing. Journal of Performance of Constructed Facilities 13(2): 50-56.
Stewart, M.G. & Melchers, R.E. 1997. Probabilistic risk assessment of engineering systems. London: Chapman & Hall.
Stewart, M.G., Netherton, M.D. & Rosowsky, D.V. 2006. Terrorism risks and blast damage to built infrastructure. Natural Hazards Review 7(3): 114-122.
Stewart, M.G. & Netherton, M.D. 2007. Security risks and probabilistic risk assessment of glazing subject to explosive blast loading. Reliability Engineering and System Safety. (in press)
UK Glazing Hazard Guide. 1997. Glazing Hazard Guide, Cubicle Stand-Offs, Tables and Charts. SAFE/SSG, Explosive Protection, London. SSG/EP/4/97.



2
Robust Communications using Wireless Mesh Networks for Safeguarding Water Infrastructures
Saad Khan and Asad Amir Pirzada
National ICT Australia

Marius Portmann
University of Queensland and National ICT Australia Limited
Abstract
Since ancient times, water has been considered a vital resource for the survival of nations. It is a resource that calls for impregnable security and incessant protection. Water infrastructures generally consist of dams and associated water-supply systems. These infrastructures are vulnerable to a number of natural and man-made disasters. The threat of natural calamities such as cyclones, earthquakes and tsunamis, and of man-made disasters such as terrorist attacks, is now more imminent than ever. In spite of these facts, water infrastructures are still provided with only meagre communication support, such as land-lines and mobile phones, for use in normal and emergency scenarios. These communication resources are in fact vulnerable to the very same catastrophes as those threatening the water infrastructures themselves. In this paper we present a communication system, based on wireless mesh networks, which can meet both the normal and emergency communication requirements of any water infrastructure.



Biographies
Saad has been working as a Research Assistant with National ICT Australia (NICTA). He is also currently studying Aerospace Avionics Engineering at Queensland University of Technology.
Asad is a researcher in the SAFE Networks work package of the Safeguarding Australia program at NICTA's Queensland Research Laboratory. He holds an MSc in Computer Science and an MS in Information Security.
Dr Marius Portmann received his PhD in Electrical Engineering from the Swiss Federal Institute of Technology (ETH) in Zurich in 2002. He is currently a lecturer in the School of ITEE at the University of Queensland.

1. Introduction
Water infrastructures are a key component of a nation's assets. Damage to or destruction of a nation's water supply infrastructure by natural disasters or terrorist attacks could have disastrous effects, disrupting the distribution of vital human services, threatening public health and the environment, and possibly causing loss of life (Cody, 2005). There are approximately 500 large dams in Australia with a total storage capacity of almost 170 times the volume of Sydney Harbour (ABS, 2004), with a small percentage providing the majority of supplies. Manned or unmanned, these dams represent a critical element on which the Australian people rely for safety, public health and economic vitality.

So far, water infrastructure safety has mainly been evaluated in terms of structural and hydraulic stability with respect to natural forces (Grismala, 2005), such as major storms, earthquakes, cyclones and resulting floods. When considering the risk of deliberate threats, the focus is generally directed towards purposeful acts of vandalism or theft, rather than malevolent threats by terrorists, which are rapidly becoming a concern in today's society. Physical destruction of any of these systems, whether natural or man-made, could include the disruption of operating or distribution system components, power or electronic control structures, actual damage to reservoirs and pumping stations, or loss of telecommunication systems.

Currently, Australia requires the owners of its dams to implement Dam Safety Management Guidelines under the Australian Water Act (Government, 2000) to minimise the risk of a dam failure, and to protect life and property from the effects of such a failure. A Dam Safety Management Program comprises policies, procedures and investigations which minimise the risk of dam failure (Government, 2002a). This is divided into two sections under the Dam Safety Code provided by the Independent Competition and Regulatory Commission, a statutory body (Government, 2003).
These include Dam Surveillance and Dam Safety Emergencies. Dam Surveillance encompasses routine checks, annual and 5-yearly, along with security surveillance by video cameras and regular inspections. In the event of a Dam Safety Emergency, an Emergency Action Plan (EAP) is carried out that incorporates the evaluation of severity of the emergency and the subsequent notification to relevant parties.


By far the most crucial procedure in this process is the notification of police, emergency services, consulting engineers, the Department of Natural Resources and Environment, and downstream neighbours. The goal is to prevent or minimise any loss of life and property. In such a case, various authorities located upstream and downstream of the dam, as well as in other potentially affected areas, need to be notified. Under current common practice in Australia, these warnings are generally delivered through commercial landline-based telephony and mobile phones (Government, 2002b).

There are a number of major challenges associated with the notification process currently implemented in Australia. Loss of communication infrastructure such as communication cables, telephone exchanges and mobile phone towers during a storm, flood or earthquake leaves the current communication system incapacitated. As has been demonstrated during recent disasters, high traffic demands during large-scale emergencies lead to overload and, therefore, unavailability of these telecommunication systems to emergency response personnel and other persons involved in managing the disaster. A further concern is the vulnerability of these telecommunication systems to deliberate attacks by terrorists with the aim of hampering rescue operations.

In this paper, we discuss wireless mesh networks as a potential technology for providing communications services for water infrastructure safety, both for day-to-day operations and during emergencies. Requirements for emergency communications, as well as existing and alternative systems for Dam Safety Emergencies and Dam Surveillance across water infrastructure, are discussed in Section 2. Section 3 provides background on wireless mesh networking technology. Section 4 discusses the application of the technology for the provision of communication support at water infrastructures and current developments in this work. Finally, conclusions are presented in Section 5.

2. Communication Requirements
Natural and man-made disasters typically occur unexpectedly, without much time for the dissemination of warning messages. Organisation and co-ordination of essential recovery services require rapid response to save lives and restore the community infrastructure. During these events, severe stress is placed on telecommunication systems due to high traffic demands and infrastructure damage. The Tampere Convention on the Provision of Telecommunication Resources for Disaster Mitigation and Relief Operations points out the following in the United Nations Treaty Series (Board, 2006): "The States Parties shall facilitate the use of telecommunication resources for disaster mitigation and relief. Such use may include, but is not limited to ... (c) the provision of prompt telecommunication assistance to mitigate the impact of a disaster; and (d) the installation and operation of reliable, flexible telecommunication resources to be used by humanitarian relief and assistance organizations."


This illustrates the many dimensions that need to be addressed to achieve an effective solution for emergency telecommunications. The system needs to be robust, demonstrating an ability to recover gracefully from a whole range of failure and overload situations. Inbuilt redundancy within a system and the ability to self-heal are required to be able to provide a reliable and effective communication platform for a wide range of voice and data services. Interoperability between communication systems used by various emergency response teams and organisations has also been identified as a critical requirement for the efficient management of disaster situations. The different requirements and problems of existing and alternative approaches to communications for Dam Surveillance and Dam Safety Emergencies are considered below.

2.1 Dam Surveillance

Routine checks of dams are conducted throughout the year by the dam owners, as required by the Dam Safety Management Program. Dams require physical security of their structure and components as well as technical security, such as monitoring of sensor data and equipment. Physical security of dams can be provided through security personnel services or video surveillance. Both of these methods utilise one of the following communication means to relay their information back to the required facility: professional mobile radio (PMR), a local area network (LAN) or traditional landline-based telephony. Technical security of dams includes regular inspections and monitoring of physical data and onsite equipment. Inspections are usually carried out by personnel, and monitoring is conducted through Supervisory Control and Data Acquisition (SCADA) systems. Relay of technical security information is typically done through wired local area networks, if available, or via the public switched telephone network (phone or fax). Some of these systems lack the bandwidth capacity for data-intensive applications such as video surveillance.

These communication systems suffer from various limitations and vulnerabilities, as outlined in Table 1. PMR systems have limited point-to-point ranges, and the installation of radio relays is time consuming and costly. Use of incompatible equipment or radio channels causes serious interoperability problems. Landline-based infrastructures as well as wired local area networks are vulnerable to tampering through planned and coordinated attacks. The lack of redundancy in these systems limits their ability to recover from failure of individual components. The loss of a single telephony exchange or mobile phone tower can result in the complete loss of communication for a large group of users.



Physical Security (personnel asset protection service; video surveillance)
Communication means: 1. Professional Mobile Radio; 2. Ethernet; 3. Landlines; 4. Mobile Phones
Limitations: PMR has limited range and suffers from interoperability problems; limited bandwidth

Technical Security (inspections; monitoring of equipment and assets)
Communication means: 1. Landlines (phone & fax); 2. SCADA Network; 3. Ethernet; 4. Mobile Phones
Limitations: mobile phone networks have limited coverage and limited reliability in disaster situations; all systems are highly vulnerable to tampering; SCADA networks have security problems

Table 1. Surveillance scenarios, communication means and limitations of existing communication infrastructure

SCADA systems are used to monitor and control chemical, physical or transport processes. These monitoring capabilities are often required by corporate personnel through remote access. This encourages many utilities to establish connections to the SCADA system to enable engineers to monitor and control the system via the corporate network. These connections are often implemented without addressing security risks. Security strategies for utility corporate network infrastructures rarely account for the fact that access to these systems might allow unauthorised access to, and control of, SCADA systems, creating a vulnerable system (Riptech, 2001).

It is thus evident that the currently implemented communication systems used for dam surveillance do not offer the required capacity, reliability, flexibility and interoperability for the seamless and efficient transfer of information. We therefore consider Wireless Mesh Networks as an alternative technology that can meet most of these requirements.

2.2 Dam Safety Emergencies

Determining whether there is a critical emergency or not requires time, and creating a false panic in nearby towns with limited emergency response resources is a costly mistake. The limited amount of time that is typically available to inform emergency response units and decision makers further emphasises the importance of a reliable and efficient communication system. The communication that takes place in a dam safety emergency follows a prescribed process. Figure 1 shows a simplified generic notification process tree which needs to be followed during an emergency. Often, an observer is first to notify the dam operator, who in turn decides whether or not the dam is in a critical state. If a dam failure is imminent, his/her duty is to notify the upstream and downstream dams, the downstream residents (in part) and the dam manager.


Figure 1. Example Emergency Notification Process

The dam manager is then responsible for notifying various other response units and managers, and this process continues. The notification that occurs after the decision about the criticality of the infrastructure needs to occur rapidly. However, the communication systems generally being utilised at present are local telecommunication lines, mobile phones and, in some cases, PMR radio systems. These systems suffer severe infrastructure problems during various emergency situations. The vulnerability issues associated with the common communication facilities provided at water infrastructure during an emergency are listed in Table 2.

A reliable method of communicating emergency information to relevant parties is essential. However, traditional communication systems based on fixed infrastructure fail to meet the reliability requirements during these situations. The current systems also suffer from overload during large-scale disaster events. During emergencies, usage of the public telephone network increases dramatically and results in congestion and service unavailability, which can mean that vital information cannot be delivered to its intended destination. Apart from landline-based phone and mobile phone networks, PMR systems are also used for communication purposes. However, these systems suffer from limited range and interoperability issues, as mentioned previously.
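The notification tree of Figure 1 can be modelled as a breadth-first cascade. The sketch below is a minimal illustration: the parties and their notification links are hypothetical, since the actual parties and ordering are defined by each dam's Emergency Action Plan:

```python
from collections import deque

# Hypothetical notification tree: each party notifies the parties listed.
NOTIFIES = {
    "observer": ["dam operator"],
    "dam operator": ["upstream dam", "downstream dam",
                     "downstream residents", "dam manager"],
    "dam manager": ["police", "emergency services",
                    "consulting engineers"],
}

def notification_order(first_party):
    """Breadth-first walk of the tree: parties closest to the initial
    observation are notified before those further down the chain."""
    order, queue = [], deque([first_party])
    seen = {first_party}
    while queue:
        party = queue.popleft()
        order.append(party)
        for nxt in NOTIFIES.get(party, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

order = notification_order("observer")
```

The breadth-first ordering reflects the urgency structure of the tree: the dam operator is reached before the dam manager, who in turn is reached before the response units the manager must notify.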



The systems mentioned above do not fulfil the requirements of robustness, redundancy, flexibility and capacity. Current alternative approaches include the use of satellite phones. However, the cost of these systems is prohibitively high for widespread deployment. Furthermore, they do not provide adequate bandwidth for the transmission of video data and high-quality images.
Scenarios: Sunny Day Failure; Earthquake; Technical Failure; Piping Effect; Cyclone; Flood Event; Terrorist Attack
Communication means (all scenarios): 1. PMR; 2. Landlines; 3. Mobile Phones; 4. Fax; 5. Ethernet (email)
Limitations: disruption/destruction of communication infrastructure occurs in individual scenarios; high traffic demand causes congestion of communication facilities; systems are vulnerable to coordinated attacks

Table 2. Emergency scenarios, communication means and limitations of existing communication infrastructure

Two different types of satellite phones are currently available in Australia: geostationary services and low earth orbit (LEO) systems. Geostationary satellite phones are relatively large units and hence have limited mobility. These systems also require a clear view of the sky during use, and can therefore become ineffective during an emergency situation. LEO systems have smaller footprints and utilise fast-moving satellites. This requires more satellites, as frequent handoffs occur as satellites enter and leave the field of view (Valenti, 2001). The continuous availability of the communication link, especially for data communication, has been reported as a major problem. Overall, satellite phones are very expensive and lack the reliability and capacity required for high-bandwidth applications such as video.

3. Wireless Mesh Networks

Wireless Mesh Networks (WMNs) are a type of wireless ad-hoc network that does not require a wired backbone infrastructure. Ironically, the biggest cost of deploying a wireless network covering a large area is the installation of the wired backbone infrastructure that interconnects the wireless access points. In WMNs, the wired backbone is replaced by wireless multi-hop communication. Due to their mesh topology, WMNs have a high level of redundancy. Furthermore, these networks have the ability to self-organise and self-heal, and are therefore able to cope with failures and the loss of parts of the network.



WMNs consist of two types of wireless nodes: Mesh Routers and Mesh Clients (Akyildiz, 2005). Mesh Routers typically have minimal mobility and form the backbone of WMNs. Mesh Clients are typically mobile computing devices, such as PDAs, with limited computing, communication and power resources. Figure 2 shows an example WMN, where all links are wireless.

Figure 2. A simplified Wireless Mesh Network

The three main types of WMNs are Infrastructure, Client and Hybrid. Infrastructure WMNs comprise passive Mesh Clients that utilise Mesh Routers to access a backhaul network. Thus, all Mesh Clients connect to Mesh Routers via a single wireless hop and are not involved in relaying other nodes' traffic. A Client WMN consists exclusively of Mesh Clients communicating with each other directly, without any involvement of Mesh Routers. The most generic type of WMN is the hybrid WMN, an amalgamation of the two previous types that forms a robust and reliable system in which both Mesh Routers and Mesh Clients are actively involved in the routing and forwarding of packets.

WMN formation requires no manual intervention or configuration. The ability to self-organise and self-heal creates an autonomous structure, requiring little attention at the times it is needed most. The ability to integrate into existing networks and communication systems, to provide high data rates under high load, and their excellent robustness and failure tolerance make WMNs a promising technology for a wide range of applications (Pirzada, 2006), including dam safety communication.
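The self-healing behaviour described above can be illustrated with a toy topology: when a router fails, route re-discovery still finds an alternate multi-hop path. The node names and links below are invented for illustration, and a plain breadth-first search stands in for the route discovery that a real WMN routing protocol (e.g. AODV) performs automatically:

```python
from collections import deque

def shortest_path(links, src, dst, failed=frozenset()):
    """Breadth-first search over the mesh, skipping failed nodes --
    a stand-in for a WMN's automatic route re-discovery."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in visited and nxt not in failed:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None   # destination unreachable

# Toy mesh: a camera reaches the monitoring room via routers A/B/C,
# with two possible multi-hop paths.
MESH = {
    "camera": ["A", "B"],
    "A": ["camera", "B", "room"],
    "B": ["camera", "A", "C"],
    "C": ["B", "room"],
    "room": ["A", "C"],
}

primary = shortest_path(MESH, "camera", "room")
backup = shortest_path(MESH, "camera", "room", failed={"A"})
```

Here `primary` is `["camera", "A", "room"]`, while `backup` routes around the failed router A as `["camera", "B", "C", "room"]`: the mesh topology's redundancy is what makes the alternate path available.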

4. Water Infrastructure Applications

Currently, basic communication technology is employed for dam surveillance and emergency communication. Tables 1 and 2 showed the communication means utilised during surveillance and emergency scenarios, together with their limitations and vulnerabilities. It is evident that these communication systems have significant deficiencies in a number of aspects for the considered application. WMNs overcome most of these limitations and provide a cost-effective, reliable and robust alternative.


Table 3 provides a summary of the existing limitations of current communication systems and how WMNs are able to overcome them.

1-2. Limited range; limited coverage
     WMN capability: multi-hop wireless communication
     Outcome: coverage of large areas
3. Limited robustness and fault tolerance
     WMN capability: redundancy of mesh topology; self-healing capability
     Outcome: provision of continuous communication resources during an emergency
4. High traffic demands
     WMN capability: ability to support high-bandwidth applications; ability to dynamically adapt and re-configure; load balancing capability
     Outcome: reliable support for real-time video transmission
5. Non-interoperability
     WMN capability: integration; multiple types of network access
     Outcome: communication with existing systems
6. Security
     WMN capability: layered security
     Outcome: secure transfer of information within corporate utilities

Table 3. Application of WMN systems to solve existing limitations in communication systems for water infrastructure protection

WMNs can be used for both dam surveillance and dam safety emergencies. Figure 3 shows a simple representation of a WMN application in the vicinity of a dam. Dam surveillance is conducted on a regular basis, and personnel are often required to monitor and transfer information through conventional communication means to relevant authorities. Deployment of a WMN would reduce the need for regular personnel checks, which would be replaced by fixed wireless video cameras interconnected via the WMN. Physical security of the dam structure and its components, as well as technical security such as monitoring of data and equipment, can be achieved using the WMN. Wireless video cameras can be used to transfer high-quality streaming video to a local or remote monitoring room. This remote monitoring would be particularly useful in the case of unmanned dams.

Mobile phones are generally inoperative during dam surveillance owing to the remoteness of dam sites. WMNs offer non-line-of-sight connectivity and routing. Coverage problems are solved in a WMN, as all clients may access network services by connecting to the nearest router or client. Due to the multi-hop system, a client need not be directly in line of sight of the monitoring room to transmit or receive data. The use of the Internet Protocol (IP) as a common platform solves the problem of interoperability and allows integration with a wide range of networks and systems, including SCADA systems.


Figure 3. Example application of a Wireless Mesh Network on a typical dam

The existing communication systems at water infrastructures are highly vulnerable to intentional tampering, especially SCADA networks. In addition, destruction of fixed communication infrastructure such as telephone exchanges or cables can potentially leave a dam site isolated over long periods of time during an emergency. WMNs employ a wireless network which is resilient to failure and to intentional tampering. The high level of redundancy and self-healing ability allows the communication system to choose alternate paths, maintaining seamless connectivity in the event of node and link failures.

Dam safety emergencies expose numerous limitations and vulnerabilities of current communications systems which can be largely overcome by WMNs. During an emergency scenario such as an earthquake, cyclone or flood event, disruption or even destruction of communication infrastructure such as landlines can render it incapacitated. In such a case, extending the coverage of a network of limited-range devices may be necessary. The multi-hop system of a WMN allows incremental deployment, one node at a time; as more nodes are deployed, the reliability and connectivity available to users increase accordingly. This capability permits a high-coverage network utilising minimal resources. Due to the wireless infrastructure/backbone, WMNs form a resilient and redundant system. Destruction of part of the system need not interrupt communication: the system simply alters the routing of information to account for the missing or malfunctioning component. Since there are multiple links between the mesh routers, losing one will not affect the transfer of information, as an alternate data route can be taken.
During a dam emergency, one of the problems with the communication facilities is their lack of adaptability to high traffic demand. The facilities currently provided at
remote dams often lack the ability to handle heavy congestion, as thousands of people upstream and downstream are also using the same facilities. Mesh routers can be configured to perform load balancing and prioritisation to solve the problems incurred during high traffic loads.

Lastly, dam emergencies can also be man-made, and may include coordinated attacks. The communication systems currently employed for dam emergencies are the same as those for dam surveillance and are prone to intentional tampering and attacks. Security in WMNs can be implemented via a wide range of existing and proven mechanisms and protocols, and can be applied at multiple layers of the network. Examples include link-layer encryption, virtual private networks and application-layer security. This, along with the high level of redundancy and self-healing capabilities, provides WMNs with resilience against malicious attacks.

4.1 Development

Developments in WMNs are progressing rapidly throughout the world. Research covers many aspects, including the performance of networks with very large numbers of nodes using multi-hop transmission under different routing methods. Channel assignment for multiple-radio routers allows efficient use of the wide frequency spectrum, permitting transmission and reception of data to occur simultaneously. The authors are currently undertaking a research project with an industrial partner on the feasibility of implementing such systems on Australia's water infrastructure. The main focus of the study is to integrate the WMN system with the existing communication structure. One of the major challenges is establishing robust routing paths through the rough terrain that surrounds large water infrastructure such as dams. This is being addressed through the multi-hop nature of WMNs and their ability to transmit data without line of sight.

5. Conclusions

WMNs provide an alternative communications technology for water infrastructure protection that overcomes many of the limitations of the currently used systems. As required by the Tampere Convention, WMNs provide prompt telecommunication assistance to mitigate the impact of a disaster, as well as offering a reliable and flexible telecommunication resource for use by humanitarian relief and assistance organisations. The system is robust, demonstrating an ability to recover gracefully from a whole range of exceptional inputs and situations in a given environment. Redundancy and autonomous operation make WMNs flexible and reliable in a range of scenarios, adding to their ability to transfer high-quality data quickly and efficiently even under high traffic demand. Their simple installation and low maintenance are added benefits to their low cost. WMNs can serve a dual purpose: regular dam surveillance in day-to-day operations, and a backup communication system for dam emergency situations. This would address the vulnerabilities evident in the communications systems currently employed at water infrastructures.


Acknowledgements
National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology, and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Research Centre of Excellence programs and the Queensland Government.

References
ABS. 2004. Water Use - Australian economy consumes 50 Sydney Harbours. Australian Bureau of Statistics.
Akyildiz, I.F., Wang, X. & Wang, W. 2005. Wireless Mesh Networks: A Survey. Computer Networks 47: 445-487.
Board, T. 2006. Tampere Convention on the Provision of Telecommunication Resources for Disaster Mitigation and Relief Operations. International Conference on Emergency Telecommunications.
Cody, C. & Copeland, B. 2005. Terrorism and Security Facing the Water Infrastructure Sector. Congressional Research Service.
Chandran, N. & Valenti, M.C. 2001. Three generations of cellular wireless systems. IEEE Potentials 20: 32-35.
FEMA. 2006. The National Dam Safety Program: 25 Years of Excellence. U.S. Department of Homeland Security.
Government. 2000. Water Act 2000. State of Queensland, Australia.
Government. 2002a. Dam Safety Management Guidelines. Department of Natural Resources and Mines, Australia.
Government. 2002b. Dam Safety Emergency Plan. Department of Sustainability and Environment, Victoria, Australia.
Government. 2003. Dam Safety Code. Australian Capital Territory, Australia.
Grismala, R. 2005. Infrastructure Safety. ICF Consulting.
Pirzada, A.A., Portmann, M. & Indulska, J. 2006. Evaluation of Multi-Radio Extensions to AODV for Wireless Mesh Networks. Proceedings of the 4th ACM International Workshop on Mobility Management and Wireless Access (MobiWac): 45-51.
Riptech. 2001. Understanding SCADA Systems Security Vulnerabilities. Riptech Inc.


3
Maritime Port Intelligence using AIS Data
Min Han Tun, Graeme S Chambers and Tele Tan
Department of Computing, Curtin University of Technology

Thanh Ly
Defence Science and Technology Organisation
Abstract
Maritime security has never been more important. As the world economy has grown, our dependence on shipping has increased exponentially, and with that dependence the expense of technology and manpower, and the complexity of the processes designed to safeguard maritime assets, have soared. In this paper, we introduce a computational method, based on density mapping and Hidden Markov modelling, that promises improved maritime security in a cost-effective manner. The method makes use of a commercial data network broadcast system, the Automatic Identification System (AIS). A learning algorithm was devised to monitor AIS data; the algorithm is capable of detecting abnormal vessel activities for a range of different port profiles. The paper summarises the development and testing of this algorithm and proposes future applications.

Biographies
Min Han Tun completed his B.C.Sc (Bachelor of Computer Science) at the University of Computer Studies in Myanmar and is currently completing his MSc at Curtin University of Technology. His research interests include Computer Vision and Pattern Recognition, and he has been working as a research assistant at Curtin. Graeme Chambers completed his B.Sc (First Class Honours) in Computing at Curtin University of Technology and is currently completing his PhD, part-time, also at Curtin. His research interests include Computer Vision, Pattern Recognition and Data Mining.

Dr Tele Tan is a Senior Lecturer with the Department of Computer Science at Curtin University of Technology. His research interests include pattern recognition, biometrics, mixed media analysis and principles involving physical security technology and processes. Dr Thanh Ly is a Research Scientist with the Maritime Operations Division of the Defence Science and Technology Organisation of Australia. His interests are in intelligent systems and human-system integration for command decision support, and naval operations analysis.

1. Introduction
Close to ninety percent of all international trade is carried by sea. The import and export of goods such as agricultural products, construction materials, oil and gas, and heavy machinery is a primary driver of the current robust world economy. As the international economy continues to expand, the important role of seaborne trade cannot be overstated. Security threats such as terrorism, piracy, uninvited guests, and internal and external sabotage continue to plague the maritime domain. It is therefore crucial to explore business processes and technologies with the potential to provide better safeguards for seaborne trade. Due to the complexity of the problem, it is likely that a range of technical and organisational measures will be required to secure maritime trade activity. Some ports have already invested heavily in advanced Closed Circuit Television (CCTV) systems as one form of counter-measure against these threats; this is, however, a very labour-intensive solution. In this paper, we explore the utility of automated analyses of data collected from the Automatic Identification System (AIS) linked to vessels to complement other counter-measures. The AIS is an automated broadcast technology that enables tracking of vessels by shore-based stations and other vessels. AIS is expected to have widespread use due to the International Maritime Organization (IMO) mandating that all large merchant ships have AIS installed and operating (Australian Maritime Safety Authority, 2004). AIS receivers record various broadcast attributes of vessels within transmission range, including their speed, direction, identity, latitude and longitude. All mandated vessels must broadcast this AIS data at intervals specific to their current status (i.e. in transit, at anchor, or conducting a manoeuvre). Automating the task of discovering normal and/or repetitive activities, as well as unusual ones, in water regions has the potential to provide additional timely information for port security.
Quite simply, AIS analysis can potentially extend the field of view of security authorities beyond the lenses of their CCTV cameras. If, for example, a vessel exhibits abnormal behaviour such as travelling in the wrong direction in a particular region, security operatives can be alerted allowing them to take appropriate action. This paper explores analysis of AIS data using pattern recognition technologies to enhance the situation awareness of port security authorities. The following will be addressed:

- Automatic determination of the movement patterns of vessels, summarising the ingress and egress routes and the anchorage area of a particular port of call (these key regions best summarise the fairway patterns associated with each port).
- Statistical description of the predominant vessel features within each identified region. Such intra-region analysis, using both statistical and clustering techniques, allows distinguishable details of vessel movements within the region to be captured and used as a descriptor for the region.
- Determination of a higher-level ontology of vessel movements across key regions. The progress of vessels from one region to another captures a higher level of vessel activity in the port. It is this understanding of activity that we wish to analyse without making premature generalised assumptions.

The strength of all these approaches is the ability of the algorithm to learn given a set of AIS data. Learning provides a mechanism to adaptively control the performance of the system since the algorithm can relearn from new data as well as false positives and true negatives. Figure 1 shows the components of the system.
Figure 1. Components of automatic vessel behaviour discovery: AIS Data → Density Map → Linear Scale Space → Behaviour Analysis → Vessel Behaviour

2. Related Work
There is little published work on the automated discovery of vessel behaviours in the maritime arena. Rhodes et al. (2005) described a system based on artificial neural networks to determine abnormal behaviour of vessels inside a harbour; the network is trained using common vessel speeds around manually specified buoys of interest. Moser et al. (2004) provide a model for evaluating the efficiency of harbours by simulating vessel movements; their work focuses on improving the operational efficiency of ports. van Asperen et al. (2003) also present statistics for a simulated jetty, focused on optimising the loading and unloading of cargo. There has been previous work using AIS data from Fremantle Port (Tan et al., 2006), in which a Self Organising Map (SOM) applied to a small region of Fremantle Port was able to classify the data as belonging to one of several fairways used by vessels. In this paper, we present an alternative approach to vessel movement analysis and abnormal behaviour detection. The following sections discuss each of the vessel behaviour detection stages shown in Figure 1.


3. Density Map
Automatic region detection is an important first step of this system. Only when the dominant regions around the port have been identified can one proceed to analyse the movement of vessels within and across these regions. The fundamental algorithm used to enhance and subsequently detect these regions of interest is the density map algorithm. The density map covers the rectangular region of waterway under scrutiny. The area is discretised into smaller cells, defining a raster image; the selected cell size controls the resolution at which a vessel can be located. As vessels travel inside the area of interest, their current locations are captured by the system, and the actual path of each vessel is interpolated between successive AIS transmissions. This system is intended to be in operation indefinitely, and thus the method of updating the density map requires some form of decay to avoid numerical overflow. We have found that a simple exponential decay performs the update well

ρ_{x,y} ← 1 + λ ρ_{x,y}    (1)

where ρ_{x,y} is the density at location (x, y) and 0 ≤ λ ≤ 1 is the decay rate. The chosen sampling period must be able to reflect all possible vessel activities around the region of interest. In this work, we have found that a sampling period of about a month is sufficient to capture a fair representation of activities in the port.
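As a concrete illustration, the update rule in Equation (1) can be sketched in a few lines of Python. This is our own minimal sketch rather than the authors' implementation; in particular, applying plain decay to cells no vessel occupies is an assumption, since the paper only states that some form of decay prevents numerical overflow.

```python
import numpy as np

class DensityMap:
    """Discretised waterway grid updated from interpolated vessel positions."""

    def __init__(self, rows, cols, decay=0.99):
        self.rho = np.zeros((rows, cols))  # density per cell
        self.decay = decay                 # 0 <= lambda <= 1

    def update(self, occupied):
        # Decay every cell, then apply Equation (1) to the cells a vessel
        # currently occupies: rho <- 1 + lambda * rho.
        self.rho *= self.decay
        for x, y in occupied:
            self.rho[x, y] += 1.0

dm = DensityMap(4, 4, decay=0.5)
dm.update([(1, 2)])
dm.update([(1, 2)])
print(dm.rho[1, 2])  # 1 + 0.5 * 1 = 1.5
```

Cells visited repeatedly accumulate density while idle cells fade towards zero, which is what produces the peaks seen later in Figure 3.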

4. Linear Scale Space


Following the development of the density maps, Lindeberg's (1994) multi-scale approach is applied to the density maps to find all the regions at different scales. Lindeberg (1994) produced several techniques relating to scale space in images. The primary advantage of the multi-scale approach is its consistency over different levels of detail (low and high resolution). Lindeberg (1998) considered the problem of detecting perceptually significant image structures that exist over a range of scales in an image. Given an image f, its linear scale-space representation L is defined by:

L(·; t) = g(·; t) ∗ f,  with L(·; 0) = f    (2)

where ∗ is the convolution operator and g is the Gaussian kernel for the scale t:

g(x, y; t) = (1 / 2πt) e^(−(x² + y²) / 2t)    (3)

At each scale level, the structures of interest can be searched for. In the case of vessel routes, these structures correspond to ridges representing the pathways of vessels. A ridge can be defined as a region that separates two other regions. For vessel traces, this separating region can be a fairway or another well-defined region such as an anchorage area.

The measure of ridge strength can be applied for all scales of t in order to rank each potential ridge for each scale. A surface is constructed using the ridge definition (Equation 3) where each scale t is layered on top of another. By combining the ridge surface and the ridge strength over each scale, salient ridge regions can be determined. The intersection of the ridge surface and local maxima of ridge strength corresponds to candidate ridge regions in scale space.
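To make the construction concrete, the scale-space stack of Equation (2) can be sketched as follows. This is our illustration, not the authors' code: `scipy.ndimage.gaussian_filter` with sigma = √t stands in for the Gaussian convolution of Equation (3), and the ridge-strength computation over the stack is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(f, scales):
    # One smoothed copy of f per scale t; the filter's standard deviation
    # sigma = sqrt(t) matches the Gaussian kernel of Equation (3).
    return np.stack([gaussian_filter(f, sigma=np.sqrt(t)) for t in scales])

rng = np.random.default_rng(0)
density = rng.random((64, 64))          # stand-in for a density map
L = scale_space(density, scales=[1, 2, 4, 8])
print(L.shape)  # (4, 64, 64)
```

Ridge strength would then be evaluated on each layer of `L`, with candidate ridges taken where local maxima of that strength intersect the ridge surface across scales.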

5. Merchant vessel behaviour analysis and modelling


Sections 3 and 4 described the processes used to automatically detect the regions of interest, which evidently form the focal points of vessels' activities around the port vicinity. An example of such regions is shown in Figure 4. Once the cluster of regions has been correctly located, one can proceed to describe in greater detail the movement activity of vessels as sampled within each region (intra-region) as well as between key regions (inter-region).

5.1 Intra-region vessel characteristics

The process of characterising intra-region behaviour starts with the identified regions. AIS data traces of vessels in each region are extracted. These data points can then be analysed to determine the dominant features of each region. Simple statistical histogram techniques as well as machine learning tools are used to identify these features. It is important to find these dominant features as they provide a way of describing each region statistically. Once these attributes are identified, a k-means clustering algorithm (Witten and Frank, 2005) is applied to each of the regions. Depending on the nature of the data in the region, the clustering algorithm may be able to identify clusters of data that share similar vessel speed and heading direction. These attributes may be further enhanced by adding the time spent in each region. The clusters detected at this stage allow us to form a link between dominant attributes of the AIS information and the activity of vessels. This link essentially forms the ontology for each region; an ontology in this case is a data model describing the normal activities of a region of interest.

5.2 Inter-region vessel characteristics

The ontology may then be used to label the route a vessel takes through the identified regions. The AIS dataset is analysed to find the routes vessels take through the identified regions. The region cluster centres are used to identify which region a ship is in.
At each AIS transmission, the vessel is labelled with the number of the region it is in. Figure 2 shows the labelled route of a ship from the example in Figure 4, going to the industry zone and back out of port.
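This labelling step can be sketched as a nearest-centre assignment. The sketch below is a hypothetical illustration: the centres and coordinates are invented, not the actual Fremantle Port regions.

```python
import numpy as np

def label_route(positions, centres):
    # Assign each AIS fix to its nearest region centre, returning
    # 1-based region numbers as used in Figure 2.
    positions = np.asarray(positions, dtype=float)
    centres = np.asarray(centres, dtype=float)
    # distance from every fix to every centre, then index of the nearest
    d = np.linalg.norm(positions[:, None, :] - centres[None, :, :], axis=2)
    return d.argmin(axis=1) + 1

centres = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # regions 1..3 (invented)
route = [(0.1, 0.0), (0.9, 0.1), (2.1, -0.1), (1.1, 0.0), (0.0, 0.1)]
print("".join(map(str, label_route(route, centres))))  # "12321"
```

Concatenating the per-fix labels gives exactly the kind of region-number string shown in Figure 2.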


111111111111111111111111111111111111111122222222222222222222222222333333333333333355555555555555555555555555666666666666666666665555555553333333333333333322222222222222222111111111111111111

Figure 2. Vessel route labelled with region numbers

These routes represent how a vessel moves through the port during normal operations. One month's worth of AIS data is analysed for each ship and labelled as above. The routes of these ships are then used as input to a Hidden Markov Model (HMM) to learn how the vessels move within the port area. HMMs are preferred over neural networks as they provide the user with a confidence level through probability estimation; moreover, each state of the HMM maps to contextual information about each region in the port, whereas neural networks do not provide this context. A Hidden Markov Model is a statistical model in which the system to be modelled is assumed to be a Markov process with unknown parameters. These unknown, or hidden, parameters are learnt from the observed data (Alpaydin, 2004). Hidden Markov Models are very useful tools when the input sequence is stochastic and cannot be modelled using conventional probability distributions, for example letters in an English word or, more typically, time series data (Rabiner, 1989). Using the labelled routes as input, the Hidden Markov Model learns the likelihood of vessels moving from one region to another, each region represented by a state in the model. It learns the transition probabilities A_ij (the probability of vessels crossing regions), the starting probabilities π_i (the probability of vessel routes starting in a particular region), and the probability B_ik that an observation occurs while in a particular state. While labelling the vessel routes, it was discovered that vessels follow a predefined path through the port.
For example, a vessel coming in to the port to go to the anchorage area must traverse regions 1 and 2, the entry and fairway regions, before anchoring in region 3, the anchorage region. Judging from these labels, a good initial estimate of the Hidden Markov Model parameters can be made. Once the initial probabilities A_ij, π_i and B_ik are set, the model goes through the training data (the labelled vessel routes) and learns and updates these probabilities.¹ Vessel movements are modelled by selecting a set of characteristic behaviours from our two months of AIS data. We chose four models to initially test our approach:

Model A: comprises vessels going to port.
Model B: comprises vessels going to or at anchorage.
Model C: comprises vessels leaving port.

¹ A small value, known as the Dirichlet value, is added to all initial probabilities to prevent them from becoming zero. If there are zeros in the initial probabilities, the HMM will stop learning when it encounters observations marked impossible by a zero probability.

Model D: comprises vessels going to the industry zone and back.

When a vessel enters the port, its movement trace can be used to identify which behaviour it is exhibiting. The trace is compared to each of the models, and the model with the highest likelihood is considered to be the behaviour of the trace.
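Because the HMM's states here correspond directly to observable region labels, the learning step can be illustrated with a simplified Markov-chain sketch: transition and starting probabilities are counted from labelled routes, with a small Dirichlet value keeping every probability non-zero. This is our simplification of the full HMM, with region numbers shifted to start at 0 and an invented training route.

```python
import numpy as np

def fit_transitions(routes, n_regions, alpha=1e-3):
    # Count starts and region-to-region transitions; alpha is the small
    # "Dirichlet value" that keeps every probability non-zero.
    A = np.full((n_regions, n_regions), alpha)
    pi = np.full(n_regions, alpha)
    for r in routes:
        pi[r[0]] += 1
        for a, b in zip(r, r[1:]):
            A[a, b] += 1
    return pi / pi.sum(), A / A.sum(axis=1, keepdims=True)

def avg_log_likelihood(route, pi, A):
    # Average per-step log likelihood of a labelled route under one model.
    ll = np.log(pi[route[0]])
    for a, b in zip(route, route[1:]):
        ll += np.log(A[a, b])
    return ll / len(route)

# A stand-in for Model D: entry (0), fairway (1), industry zone (5) and back.
training = [[0] * 5 + [1] * 4 + [5] * 6 + [1] * 3 + [0] * 4] * 3
pi, A = fit_transitions(training, n_regions=6)
print(avg_log_likelihood([0, 0, 1, 5, 5, 1, 0], pi, A))
```

A route that follows the trained pattern scores a mildly negative log likelihood, while a route using unseen transitions is penalised heavily, only the tiny Dirichlet mass keeps its likelihood finite.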

6. Experimental Results
Figure 3 shows example surface plots of the density map (see Section 3) with sampling windows of (a) three days and (b) thirty days in Fremantle Port. The figure clearly shows how updating the density map deforms the surface to create peaks representing regions of activity. The consistent shape present in both figures demonstrates the validity of the approach: the density map is clearly able to highlight the well-used regions regardless of the amount of data. As movement patterns change over time, the regions produced by the density map will change accordingly; even when parts of the port are shut down and movement patterns change, the density map will pick up the change. The multi-scale approach of Lindeberg (1998) was applied to the density map of the AIS data traces to locate all of the regions at different scales. Figure 4 (a) shows the density map of the vessel traces for the entire AIS data coverage area and (b) shows the identified regions (using the linear scale-space algorithm) superimposed onto the map of Fremantle Port.

Figure 3. Density map of Fremantle Port: (a) 3 days, (b) 30 days


Figure 4. Identified regions from the multi-scale approach: (a) density map of vessel traces, (b) identified regions superimposed onto the map of Fremantle Port

Figure 5. Region statistics for three representative regions (2, 4 and 6): course over ground in degrees (left column), speed over ground in knots (right column)

Figure 5 shows the statistics (course over ground and speed over ground) for regions 2, 4 and 6, corresponding to the port entry fairway, inner port and industry regions

respectively. The peaks at zero indicate vessels either heading exactly north or at anchor; this value is derived from the movement of the GPS antenna, and if the vessel is at anchor the course over ground corresponds to the direction in which the wind moves the ship. Figure 6 shows the clusters formed in the entry fairway region (region 2). Each of the separate regions is assigned a different cluster centre after applying the k-means algorithm. The red and blue clusters in Figure 6 (a) and (b) show vessels coming into and out of port respectively.

Figure 6. Clusters in region 2: (a) course over ground, (b) speed over ground

When a vessel enters the port, its movement trace can be used to identify which behaviour it is exhibiting. New vessel traces are compared to each of the models, and the model with the highest likelihood is considered to be the behaviour of the vessel trace. Table 1 shows the log likelihood of each of the test traces occurring. A sample set of traces, selected from a set of thirty-six test traces, is shown in this table. The test set comprises normal vessel movements as well as manually injected abnormal traces. The system was able to correctly identify the abnormal traces.
Trace      A        B        C        D
3       -0.0147  -0.0683  -0.0010  -0.0003
4       -0.0144  -0.0011  -0.0236  -0.0039
14      -0.0327  -0.0166  -0.0001  -0.0264
15      -0.0001  -0.0005  -0.0022  -0.0017
21      -0.0244  -1.7230  -0.0320  -0.0235
23      -0.0147  -0.0110  -0.0009  -0.0003
24      -0.0002  -0.0813  -0.0025  -0.0019
35      -0.0531  -0.0241  -0.0120  -0.0364
36      -0.0531  -0.0241  -0.0120  -0.0364

Table 1. Log likelihood of each route occurring

Any trace whose behaviour cannot be identified by any of the Hidden Markov Models is considered to exhibit abnormal behaviour. The log likelihood produced by each model is compared against a threshold T; for this test set T is set to -0.0115. Table 2 shows the output of the HMM detecting both normal and abnormal (unseen) routes. The abnormal vessel traces can then be incorporated into the training set, allowing the Hidden Markov Model to re-train on the new dataset, or take other appropriate

action at the discretion of the operator. Figure 7 shows the detected vessel traces; the red line indicates the unknown route of trace 21.

Trace 3 is class D
Trace 4 is class B
Trace 14 is class C
Trace 15 is class A
Trace 21 is Unknown. Add to training?
Trace 23 is class D
Trace 24 is class A
Trace 35 is Unknown. Add to training?
Trace 36 is Unknown. Add to training?

Table 2. Detected vessel traces

Figure 7. Detected vessel routes
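The detection logic behind Tables 1 and 2 can be sketched as follows, using the published log likelihoods for traces 3 and 21 and the threshold T = -0.0115. The function itself is our reconstruction for illustration, not the authors' code.

```python
def classify(trace_ll, threshold=-0.0115):
    # Highest-likelihood model wins, unless even the best score falls
    # below the threshold T, in which case the trace is flagged abnormal.
    best = max(trace_ll, key=trace_ll.get)
    return best if trace_ll[best] > threshold else "Unknown"

# Log likelihoods for traces 3 and 21, taken from Table 1.
trace3 = {"A": -0.0147, "B": -0.0683, "C": -0.0010, "D": -0.0003}
trace21 = {"A": -0.0244, "B": -1.7230, "C": -0.0320, "D": -0.0235}
print(classify(trace3), classify(trace21))  # D Unknown
```

Trace 3 clears the threshold under Model D and is classified normally, while trace 21's best score (-0.0235 under Model D) falls below T, reproducing the "Unknown" result in Table 2.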

7. Conclusions and Future Work


In this paper, we explored the use of spatial data accumulated from AIS information to model the activities of vessels in a port; the approach is designed to work with any port. Density maps and scale-space theory are used to identify the focal regions of the port, and a number of machine learning techniques are presented to detect abnormal vessel behaviour. Incorporating temporal data could further improve the accuracy of the models: for example, certain ships only appear at certain times of the year, and ships typically spend a certain amount of time in each region. By incorporating this type of data, we may analyse patterns at an even finer granularity and identify vessels based on their spatio-temporal data.

References
Alpaydin, E. (2004), Introduction to Machine Learning, MIT Press.
van Asperen, E., Dekker, R., Polman, M. & de Swaan Arons, H. (2003), Modelling ship arrivals in port, in Proceedings of the 2003 Winter Simulation Conference, pp. 1737-1744.
Australian Maritime Safety Authority (2003), Fact Sheet: Shipborne automatic identification system (AIS), http://www.amsa.gov.au/Publications/Shipping/AIS_fact.pdf.
Lindeberg, T. (1994), Scale-space theory, Kluwer Academic Publishers.
Lindeberg, T. (1998), Edge detection and ridge detection with automatic scale selection, International Journal of Computer Vision 30, 117-154.
Moser, D., Hofseth, K., Heisey, S., Males, R. & Rogers, C. (2004), Harborsym: A data-driven Monte Carlo simulation model of vessel movement in harbours, Technical report, Institute for Water Resources, U.S. Army Corps of Engineers.

Rabiner, L. (1989), A tutorial on Hidden Markov Models and selected applications in speech recognition, in Proceedings of the IEEE, pp. 257-286.
Rhodes, B., Bomberger, N., Seibert, M. & Waxman, A. (2005), Maritime situation monitoring and awareness using learning mechanisms, IEEE Military Communications Conference, pp. 646-652.
Tan, T., Lim, F. L. & Kang, Y. (2006), Situation awareness system in a maritime port, in Proc. of OCEANS 2006 IEEE Asia Pacific, Singapore, May 2006.
Witten, I. & Frank, E. (2005), Data Mining: Practical Machine Learning Tools and Techniques.


4
High-performance retrofit solutions for blast protection of facades in office buildings
A. Remennikov
University of Wollongong, Australia

D. Carolan
Taylor Thomson Whitting International Pty Ltd Consulting Engineers, Sydney
Abstract
In recent years, explosive attacks against US, UK and Australian critical buildings have demonstrated their vulnerability as terrorist targets. Many office and public buildings are constructed with large glass façades or unreinforced masonry infill walls. Although such structures are aesthetically pleasing and architecturally attractive, protecting them against conventional bomb attacks poses an enormous challenge. A standard glazed façade and masonry infill walls exposed to a bomb blast instantly become a source of flying debris of sharp shards and fragments of masonry, which are often more deadly than the blast itself. This paper focuses primarily on the role of external façades in providing protection for building occupants against deliberate bomb attack. Protective high-performance systems that use a combination of high-strength fabrics, cables with energy-absorbing devices and blast-resistant glazing are discussed. Practical examples of façade retrofit systems with energy-dissipating devices are included.

Biographies
Alex Remennikov is a Senior Lecturer in structural engineering at the University of Wollongong. He received his PhD in 1992 from the Kiev National University of Construction and Architecture, Ukraine. His main research interests include analysis

and design of structures under extreme loads such as blast and impact loading. He has been involved in a number of projects related to blast retrofit of critical infrastructure facilities both in Australia and overseas. David Carolan is a Director of Taylor Thomson Whitting International, Consulting Structural Engineers with their head office in Sydney. He has extensive experience in the structural design of all building types for projects throughout Australia and the Asia-Pacific. He has been Director in charge of several embassy projects in South East Asia which have involved hardening of existing buildings to resist blast effects.

Introduction
Events of the past few years have greatly heightened the awareness of structural designers of the threat of terrorist attacks using explosive devices. Extensive research into blast effects analysis and methods of protective design of buildings has been initiated in many countries to develop methods of protecting critical infrastructure and the built environment. Although it is recognised that no civilian building can be designed to withstand every conceivable terrorist threat, it is possible to improve the performance of structural systems by better understanding the factors that contribute to a structure's blast resistance. One such factor is the ability of the external façade of the building to provide protection for its occupants from a deliberate, life-threatening terrorist attack.

The designers of commercial and public buildings are increasingly called on by clients to incorporate protection against bomb blast into new designs, or to develop protective measures for existing buildings, while still meeting other criteria such as gravity, wind and seismic loading. The building façade is often where the greatest degree of effort is required because of its role as the first line of defence against the destructive effects of bomb blast. Many office and public buildings are constructed with large glass façades or unreinforced masonry infill walls. Although such structures are aesthetically pleasing and architecturally attractive, protecting them against conventional bomb attacks poses an enormous challenge. A standard glazed façade and masonry infill walls exposed to a bomb blast instantly become a source of flying debris of sharp shards and fragments of masonry, which are often more deadly than the blast itself. This paper focuses primarily on the role of external façades in providing protection for building occupants against deliberate bomb attack.
Protective high-performance solutions that use a combination of high-strength fabrics, steel cables with energy absorbing devices and blast-resistant glazing are discussed.

Protective design strategies for existing glazed facades


Behaviour of glass under blast loads

The greatest cause of injuries and internal disruption from an external bomb blast is the fragmentation and forceful blowing of glass into the room. The truth of this

observation has been confirmed in large explosions around the world, including such terrorist attacks as the Oklahoma City bombing and the Jakarta Marriott Hotel bombing. Plain annealed glass is the most hazardous type, as it fragments easily into dagger-like shards. These shards can be thrown deep into the room at very high speed by the entering blast wave, causing serious laceration traumas. Blast pressures entering through shattered windows can also cause widespread internal damage to partitioning walls and ceilings, or may throw people against walls and other solid objects. Toughened glass shatters at higher loads than float glass and forms round-edged, dice-shaped projectiles. Even though these are generally accepted to be less hazardous than annealed glass shards, the toughened glass particles may travel at significantly higher velocities, still causing serious injuries to a building's occupants. For a given blast scenario it is of course possible to design monolithic windows in annealed glass (of sufficient thickness) or toughened glass to resist a specified blast threat without cracking. The problem with this design approach is that exceeding the design blast level in an explosive event could result in complete shattering of the glass and a reduced level of protection for the building occupants.

The most effective glazing for providing protection against blast is laminated glass. Laminated glass is manufactured as two or more layers of glass with PVB (polyvinyl butyral) interlayers sandwiched between the glass layers. The quality of the bond between the PVB interlayer and the glass affects the flexural capacity of the window. At cooler temperatures, the PVB bonds the glass panes together well, so that the individual glass layers exhibit composite action and the window develops the same strength as a monolithic window of the same thickness.
At higher temperatures, the PVB is weakened, the bond between panes is lessened, and the window response falls short of complete composite action. If laminated glass is cracked by blast pressures, the outer glass layer generally remains bonded to the inner plastic interlayer rather than forming flying shards. Maximum protective performance of laminated glass can be achieved by securely attaching the glass to a strong frame. This is often achieved by specifying deep rebates of about 25-30 mm and by using structural silicone sealant to anchor the glazing in the frame. This enables the cracked laminated glass to resist the blast pressures through membrane action, bulging inward while remaining attached to the frame (see Figure 1).

Figure 1. Laminated glass under blast loading



Window hazard mitigation with anti-shatter films

Anti-shatter film, also commonly known as security film, is a laminate used to improve the post-failure performance of existing windows. Applied to the interior face of glass, anti-shatter film holds the fragments of broken glass together in one sheet, thus reducing the hazard of flying glass fragments. Retrofitting existing windows with anti-shatter films is usually considered when the full replacement of vulnerable windows is cost-prohibitive. It should be noted that retrofitting is a compromise solution: most retrofit solutions provide protection at low to medium blast environments (typically less than 70 kPa peak pressure and less than 700 kPa-msec blast impulse). Since this type of retrofit is usually done on a very restrictive budget, the focus is not on achieving a certain level of protection, but rather on getting the most out of the available funds. Whether one-ply or multi-ply, the overall film thickness can range from 2 to 15 mils (1 mil = 0.0254 mm). According to some published design criteria (and verified by published test results), a 7-mil (0.18 mm) thick anti-shatter film is considered to be the minimum thickness required to provide an effective response to blast loads. There are three types of anti-shatter film installation methods: 1) daylight installation; 2) wet glazed installation; and 3) mechanically anchored installation. The application of security film to the clear area of glass without any means of attachment within the frame, termed daylight application, is commonly used for retrofitting windows and should be considered a minimum. This retrofit does provide significant hazard reduction by retaining the fragments and reducing the overall velocity of the window; under extreme loading, however, the window can be propelled into the occupied space, creating the potential for blunt trauma.
It should also be noted that a daylight film applied to an insulated window will not contain glass fragments generated by the outer pane. The wet glazed installation is a system where the film is attached to the frame using a high strength sealant such as the Dow Corning 995 structural silicone. The method allows the flexible frame to deform slightly, reducing glass fragments and offering higher protection than the daylight film. The wet glazed installation system is more costly than the daylight system, but is less expensive than the mechanically anchored security films. Anti-shatter films are most effective when installed in conjunction with a positive anchorage system. The films are not particularly effective in retaining the glass in the frame. Therefore securing the film to the frame with a mechanically attached anchorage system further reduces the likelihood of the glazing pulling out of the frame. Mechanical anchorage systems involve screws and/or batten strips to attach the film to the frame along 2 or 4 edges. Figure 2 presents the results of blast analysis for a typical office building window with dimensions 1.2m by 1.6m with a sill height of 0.8 m above the floor. The maximum bomb size that the window could withstand while producing a Medium Level of Protection or lower was determined as a function of standoff distance. Calculations were performed for four separate glazing types: 1) 6-mm monolithic annealed glass with no treatments; 2) 6-mm monolithic annealed glass with a 7 mil
(0.18 mm) thick daylight security window film; 3) 6-mm monolithic annealed glass with a 4-sided wet glazed installation and 7 mil thick security film; and 4) 6-mm monolithic annealed glass with a 4-sided mechanically attached, 7 mil thick film.
[Figure 2 chart: explosive charge size (kg TNT) versus standoff distance (m) for six glazing configurations: 6-mm annealed glass with 13 mm bite; with 7-mil (0.178 mm) daylight film added; with 4-sided wet-glazed 7-mil film (13 mm bead) added; with 4-sided mechanically attached 7-mil film added; and 6-mm annealed PVB laminated glass with 13 mm and 25 mm bites.]

Figure 2. Effectiveness of security films to provide Medium Level of Protection

From Figure 2 it can be seen that the bomb size at which the window performs to a Medium Level of Protection increases with increasing standoff distance, because the pressure and impulse from the explosion decrease with distance. It can also be noted that all of the film applications increase the amount of explosive that the windows can sustain while maintaining the same performance level. For example, at a standoff of 100 m, the unprotected window will perform to a Medium Level of Protection when exposed to less than 100 kg of TNT, whereas the window treated with a mechanically attached film would require over 550 kg of TNT. Hence, the window films increase the ability of the windows to sustain blast loads in a controlled manner (i.e., they reduce vulnerability and risk to occupants). Note that in the example shown, the mechanically attached window film provides the greatest level of protection and the daylight-installed film provides the least. It can also be seen that a window using laminated glass with deep rebates and the same thickness as the monolithic glazing can be as effective in reducing the flying glass hazard as a mechanically attached film.

High-performance retrofit solutions for glazed facades

In recent years new concepts of blast protection have been introduced that can significantly enhance an existing façade's resistance to blast loads. These concepts have proved to be effective in blast tests and are referred to as high-performance solutions. One common feature among these high-performance solutions is their reliance on material ductility and plastic behaviour to achieve effective protection against blast
with a minimum amount of material. Unlike retrofit concepts in which blast designers rely on brute strength to resist very large blast loads, often with significant disruption to the facility's operations and aesthetics, the high-performance concepts make use of non-standard building materials (e.g. stainless steel, high-strength synthetic fabrics, carbon fibre polymers, foams) and energy absorbing devices to achieve more efficient and cost-effective protection.

Energy absorbing catch systems, used in conjunction with a daylight application of anti-shatter films, are one of the high-performance systems for retaining and reducing glass debris hazards (see Figure 3). Cables spanning across the window impede the flight of the filmed glass and absorb a considerable amount of energy upon impact. The diameter of the cable, the spacing of the strands and the means of attachment are all critical in designing an effective catch system.
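The standoff trend in the discussion of Figure 2 is commonly rationalised with Hopkinson-Cranz cube-root scaling, under which the allowable charge at a fixed scaled distance grows with the cube of standoff. A minimal sketch, assuming a purely hypothetical critical scaled distance (the curves in this paper come from detailed blast analysis, not from this rule):

```python
# A sketch of Hopkinson-Cranz cube-root scaling, often used to rationalise
# charge-versus-standoff curves like those in Figure 2. Z_CRIT below is a
# hypothetical placeholder, not a value taken from the paper's analyses.

Z_CRIT = 12.0  # m/kg^(1/3), assumed critical scaled distance for the glazing

def scaled_distance(standoff_m, charge_kg):
    """Z = R / W^(1/3)."""
    return standoff_m / charge_kg ** (1.0 / 3.0)

def max_charge_kg(standoff_m, z_crit=Z_CRIT):
    """Largest charge keeping Z >= z_crit: W = (R / z_crit)^3."""
    return (standoff_m / z_crit) ** 3

# The allowable charge grows with the cube of the standoff distance:
for r in (30.0, 60.0, 90.0):
    print(r, round(max_charge_kg(r), 1))
```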

Figure 3. General concept of catch cable system for glazed façade

Case study 1

TTWI has recently completed the design of a project that involved a high-performance façade retrofit system based on the concepts discussed in this paper: a cable system used to capture glass panes along the perimeter of a leased floor in a
multi-storey office building. The building is a high-rise steel frame building with steel-concrete composite floors, located in a dense urban setting. Only one floor is being renovated for use as office space and is to be protected against a potential external terrorist threat at street level. The total floor space is nearly 1,000 square metres. A floor plan of the leased floor is shown in Figure 4(a). Three sides of the leased floor are exterior facades above adjacent streets. One side faces a narrow alley formed by two nearby buildings. An explosive threat in the alley was considered the most devastating for the façade because of the blast channelling effect created by the narrow street. This blast wave propagation phenomenon was investigated using Computational Fluid Dynamics (CFD) simulations, as shown in Figure 4(b). From the results of the blast wave-structure interaction analyses, it was established that the blast loads on the exterior façade facing the alley would have been significantly amplified had the explosive attack taken place between the two buildings.

Figure 4. (a) Plan of office space being retrofitted; (b) CFD analysis of blast loads

The design loads for the exterior façade of the leased floor were sufficiently high that mere application of anti-shatter films to the existing glazing would not have provided the required level of protection. In addition, the building owner required that no major alterations be made to the existing façade structure and the key structural elements (steel columns, floor slabs, etc.). Thus, an effective retrofit method was needed that would be neither difficult nor disruptive to install, would not markedly change the building's aesthetics, and would prevent the bulk of the fragments from entering the office space.

The cable catcher system designed to protect the office space from the threat of glass fragments and other debris is shown in Figure 5(a), which illustrates the basic components of the system. A series of horizontal stainless steel cables are attached to the steel columns through energy absorbing devices. The energy absorbing devices consist
of a thick steel plate that acts as a continuous beam with multiple supports along the column's height. The cable reactions applied between the supporting points cause plastic hinges to form between the supporting points and at the points of cable attachment. The flexural capacity of the steel plate was selected so that the plastic collapse mechanism would develop when the cable force exceeds about 85-90 percent of the cable's tensile strength. Thus, the energy absorbing element in the design is a non-linear spring that fails plastically in bending and prevents a brittle failure of the stainless steel cables. Computer simulation of the interaction of the cables with the glass panes dislodged by the blast pressures confirmed peak inward cable displacements of about 500 mm, and the effectiveness of the energy absorbing devices in reducing the cable forces and preventing their brittle failure (see Figure 5(b)).
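The sizing rule for the energy absorbing plate can be sketched as follows. The 85-90 percent window comes from the text; the 100 kN cable strength used below is an illustrative placeholder, not a value from the project.

```python
# Sketch of the sizing rule described above: the energy-absorbing plate's
# plastic collapse mechanism is proportioned to form at roughly 85-90% of
# the cable's tensile strength, so the ductile plate yields before the cable
# can fail in a brittle manner. The cable strength here is illustrative.

def absorber_trigger_range(cable_strength_kn, lo=0.85, hi=0.90):
    """Target window for the plate's plastic-collapse load, in kN."""
    return lo * cable_strength_kn, hi * cable_strength_kn

def yields_before_cable(plate_capacity_kn, cable_strength_kn):
    """Ductility check: the plate must yield below the cable's strength."""
    return plate_capacity_kn < cable_strength_kn

lo_kn, hi_kn = absorber_trigger_range(100.0)  # hypothetical 100 kN cable
print(lo_kn, hi_kn)                           # roughly 85.0 and 90.0
print(yields_before_cable(hi_kn, 100.0))      # plate yields first
```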

Figure 5. (a) View of cable catcher system; (b) dynamic response of the cables and energy absorbing devices

Protective design strategies for non-glazed facades


When designing protective measures for glazed facades it is also prudent to ensure that the non-glazed areas of a façade provide blast protection against the design threat comparable to that of the adjacent glazing. This follows from the balanced design principle, which holds that the glass should be designed to be no stronger than the weakest part of the overall window system, failing at pressure levels that do not exceed those of the supporting wall system. If the glass is stronger than the supporting members, then the window is likely to fail with the whole panel entering the building as a single unit, possibly with the frame, anchorage and wall attached.

Unreinforced masonry is commonly used in steel and concrete frame buildings for infill external walls, but provides limited protection against blast loading. When subjected to high intensity loads from a bomb blast, brittle unreinforced masonry walls will fail and the debris will be propelled into the interior of the structure, possibly causing severe injuries to the building's occupants.
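The balanced design principle reduces to a one-line check: the glazing's failure pressure should not exceed the capacity of any supporting element. A minimal sketch with illustrative capacity values:

```python
# Minimal expression of the balanced design principle described above: the
# glazing should fail at a pressure no greater than the capacities of the
# frame, anchorage and supporting wall, so failure stays in the glass rather
# than ejecting the whole panel. All capacity values are illustrative.

def balanced(glass_fail_kpa, frame_kpa, anchorage_kpa, wall_kpa):
    """True if the glass is the weakest link in the window system."""
    return glass_fail_kpa <= min(frame_kpa, anchorage_kpa, wall_kpa)

print(balanced(20.0, 35.0, 40.0, 30.0))  # glass weakest: balanced
print(balanced(45.0, 35.0, 40.0, 30.0))  # wall weakest: unbalanced
```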

Geotextile fabric retrofit for walls

One of the innovative techniques for wall retrofit is the use of anchored high-strength fabrics to catch wall debris. The fabrics used are typically woven polypropylene, similar to those used for sandbags, and are often referred to as geotextiles. This technique does not strengthen the wall; instead it restrains the debris from entering the building and posing a hazard to the occupants. Figure 6 shows a curtain of geotextile fabric placed behind a concrete masonry unit (CMU) wall, covering the entire face of the wall.

This retrofit method is effective, relatively inexpensive, uses lightweight materials and is easy to install. Its effectiveness depends on the level of the blast loads, the stress-strain behaviour of the fabric, and the tensile and shear capacity of the anchorage at the supports. The geotextile fabric catch system is effective for charges ranging from hundreds to thousands of kilograms of TNT, depending on the standoff. The retrofit does not affect the exterior appearance of the building. Any finish material applied over the surface of the geotextile must be selected carefully, since the attached material can be thrown into the room by the fast-moving geotextile.

Figure 6. Geotextile fabric retrofit of masonry wall

Case study 2

The building used as the basis of this case study is a multi-storey concrete frame building with CMU infill external walls. Because CMU walls are brittle and potentially vulnerable to blast loading, an increased level of protection was required to protect occupants from the threat of flying debris. Since the design blast loads for the exterior perimeter were sufficiently high, some form of wall strengthening was also required. The final blast retrofit measures to increase the resistance of the structural envelope to terrorist attack included vertical steel tubes running from floor slab to floor slab, combined with a geotextile membrane covering the interior surface of the masonry wall to prevent the masonry from becoming a debris hazard during a blast event.

Figure 7. Blast response simulation of original and retrofitted masonry wall

The effectiveness of these retrofit solutions in mitigating blast damage was confirmed by high performance computing simulations of the façade structure subjected to the design blast loads generated by a terrorist bomb attack. Figure 7 shows the failure mechanisms of the original masonry infill walls and of the wall strengthened with the vertical tubes. It can be observed that the retrofitted wall, when subjected to overload from air blast, may still fail in a brittle manner. The numerical simulations thus confirmed the need for the geotextile retrofit of the external wall. The geotextile fabric was placed behind the wall and securely anchored to the top and bottom concrete slabs and to the steel tubes.
Conclusion
The building façade has a vital role in protecting occupants against the effects of an external terrorist bomb attack. By selecting high-performance materials and adopting methods of effective attachment to the building structure, new and highly effective protective systems for building facades can be developed. It has been shown that, by utilising the benefits of high-performance materials and energy absorbing devices, a significant reduction in the potential level of injuries to occupants can be achieved. In addition, reliance on the ductile, plastic behaviour of materials allows the use of highly efficient protective systems that not only lower the final cost of the retrofit but also minimise the disruption to the internal environment. Continued research on blast retrofits is needed to provide design engineers with cost-effective techniques and design methodologies to effectively mitigate the consequences of terrorist vehicle bomb attacks on office buildings.

5
Preparing for Post-catastrophe Video Processing
A. van den Hengel, H. Detmold, A. Dick and R. Hill
Australian Centre for Visual Technologies The University of Adelaide, Australia
Abstract
The spread of video surveillance systems and video phones means that video cameras are more ubiquitous than ever before. In the event of a catastrophe the video captured by these cameras is an important source of information for those directing the response. At present this information is either ignored because it is seen as inaccessible, filtered through the media, or exploited only through a huge commitment of human resources to perform the required processing. We present here an approach to acquiring and processing this video in order to extract as much value as possible within the shortest time period. The approach allows flexible, intelligent video processing to be carried out quickly and securely.

Biographies
The authors are at the University of Adelaide, Australia. Dr. van den Hengel is director of the Australian Centre for Visual Technologies, and conducts research in automated video surveillance in collaboration with authors Dick and Hill. Dr. Detmold has extensive experience in software architecture research, and is currently working on scalable software architectures for large scale video surveillance networks.

Motivation

Video captured before, during and after a catastrophe can provide valuable assistance to those coordinating the response. As the number of surveillance cameras, phone cameras and consumer video cameras grows (see Valera-Espina et al., 2005, for a survey), the value of this resource increases, but so does the difficulty of exploiting it. We propose an effective, efficient, secure and robust approach to processing video related to a catastrophic event.

One application of our approach would be in the aftermath of an attack upon an airport. Airports commonly contain thousands of surveillance cameras, and the surrounding shops and road network may well have thousands more. If it is believed that the perpetrators visited the site before the event (as is often the case) then it may be necessary to interrogate hundreds of thousands, if not millions, of hours of video. Reports on the aftermath of the July 2005 London bombings indicate that many thousands of hours of video were searched for traces of the perpetrators, and that the human processing of this volume of footage was an immense operation. The reports also made it clear that the useful sections of footage were extremely short, and that their discovery was as much a result of luck as of any other factor. Another surprising fact that came to light during this catastrophe was that the police often had access to camera-phone footage only after it had been filtered through the news services.

The Design of a Response

Our proposed approach incorporates both manual and automated video processing. The major requirements upon the system are as follows:

- Efficacy: the system needs to be effective, but it also needs to be seen to be effective if it is to have any preventative value.
- Security: the video in question will be of great value to the media, but the forensic value of the video may be diminished by uncontrolled release to the public. The scale of the problem will require that many operators have access to the data, so the development of secure software and human processes is required.
- Efficiency: the efficiency of the process affects the number of queries that can be run within a given period and thus the value that can be extracted from the video. The value of the information extracted from the video also decreases with time, as the opportunity to respond expires.
- Thoroughness: part of the value of automated video processing is the ability to eliminate footage that does not need human scrutiny. This is of value only if the reliability of the system in its operation over the full video database can be guaranteed.
- Auditability: the ability to identify exactly which processes have been run over which sections of the data is critical to the security, efficacy and

efficiency of the system, but also to allowing human operators to interpret the results and design new queries.
- Extensibility: the architecture must allow a hierarchy of responses as required by the circumstances. If only a small volume of highly valuable footage is available then the resources applied to the response should be only those of the maximum possible security level. A larger volume of lower-value footage, however, would require that all possible resources be allocated to the processing.
- Flexibility: the unpredictable nature of catastrophes, and the diversity of circumstances in which they might occur, mandates a flexible approach which allows the application of ad hoc filtering processes to the data.
- Robustness: the scale of the problem requires a sophisticated and distributed approach, utilising the broad variety of hardware available to the task at the time. Ensuring the system is robust against the failure of independent components is critical.

To meet these requirements, this paper presents a logistical architecture for the rapid re-purposing of national computing facilities for large-scale forensic analysis of video footage in the immediate aftermath of a catastrophic incident. The architecture is enacted in four sequential but overlapping phases:

- Mobilisation: identify potentially significant video footage and available processing, storage and network facilities.
- Mustering: gather video footage in a diverse range of formats and transform it into standard formats for subsequent operations.
- Operations: apply techniques such as background subtraction to the footage, producing secondary and tertiary data highlighting features of potential interest.
- Investigation: support human investigators with a query interface that enables them to pursue lines of enquiry rapidly and with confidence, using products of the operations phase and external data as indices into an otherwise unmanageably large volume of raw footage.

Since the goal is for investigators to reach sustainable conclusions as rapidly as possible, scheduling promotes overlapping of the phases, so that, for example, the investigation phase begins to serve queries as soon as the fraction of data produced by operations is sufficient for confidence in conclusions, then updates the confidence level as more data becomes available.

Architectural Overview

An appropriate architectural specification (Perry & Wolf, 1992, Detmold et al., 2006) is critical to achieving the requirements set out above. The specification that follows is described in terms of a number of views. Each represents a different way of analysing the system, and is suitable for a different purpose. The views presented are the phase view, which describes the steps required to initialise and use the system, the
resource view, which describes resource requirements, and the component view, which describes the parts from which the system is composed.

3.1 Component View

The architecture is made up of a number of components connected by an IP network, as shown in Figure 1. The major components are as follows:

- Mustering Points: the locations at which video is injected into the system. These include fixed physical locations, such as police stations in the vicinity of the catastrophic event(s), but also roving operators with laptops, capable of moving to the sites at which video has been stored by the owners of local surveillance networks.
- Video Databases: the architecture employs databases of video data, including both primary video and secondary video results derived by processing (e.g. foreground footage obtained by background subtraction). To maximise the availability of data to processing and to minimise latency of access, this data is widely replicated, with the intention of placing a replica of each datum needed by processing near the site of that processing. To minimise update cost, there is no expectation of consistency between databases, nor is it expected that any one database will contain all the data. Instead, each database holds a mixture of up-to-date and stale items.
- Meta-data Databases: hold information of central importance to the response, including chronological information describing the time sequence of events of interest and topological/geographical information describing the spatial relationships between event sources. This meta-data is particularly important as it supports indexed querying of video data. Whilst the volume of data is small relative to the video data, a high degree of currency and consistency is required to support accurate query processing, necessitating the design of a specialised distributed database.
- Web Proxies: minimise redundant data flow between parts of the system and reduce the load on the database servers.
- Processing Sites: where video processing is carried out by clusters of computers. This processing includes transcoding the video into common formats, automated foreground segmentation, and higher level processing in response to queries.
- Investigation Sites: where human interrogation of video is carried out through a query-based process. Queries are formulated by operators at investigation sites and sent to the appropriate processing site to be executed.
- Control Units: a single master control unit is selected from a list of candidate control units during the first part of system initialisation and coordinates the operation of the system. This includes the management of all resources allocated to the task.

3.2 Phase View
A co-ordinated approach to extracting the required information from video footage needs to be carried out in a number of phases. This does require, however, that some preparations be made beforehand; these preparations are described below in Section 4. Recall that the phases defined by the architecture are mobilisation, mustering, operations and investigation, and that these commence in the order given but overlap in time, as shown in Figure 2.

3.2.1 The Mobilisation Phase

Mobilising the architecture in response to an event involves the following steps:

- Activate a Command Centre: the command centre must be prepared in advance, but may be inactive. In fact it will be necessary to have multiple candidate command centres prepared in advance, both to handle the scenario where the catastrophic event incapacitates a command centre and to permit selection of the command centre most convenient for the response to a particular event. Activating the command centre requires securing the human and other resources required and initiating the response plan.
- Identify Data Repositories: all sites storing footage that may be of value must be identified, catalogued and prioritised. This requires access to people at the scene, but also pre-existing databases recording camera locations and other configuration data.
- Identify Mustering Sites: the majority of the high-priority footage is expected to be located close to the scene, but some will be accessible from other locations. Mustering thus requires that a variety of sites be set up, with as much of the human involvement as possible located away from the scene. Possible fixed mustering points include police stations, Internet cafes and call centres. A significant part of the data mustering process will need to occur at the sites at which the data are stored by the owners of local camera networks and individual cameras. Drop-off sites will be provided where members of the public can submit camera-phone footage for inspection and possible incorporation into the system data. These drop-off sites include physical locations such as local police stations and virtual drop-off sites on the Internet.
- Identify Video Product Repositories: large volumes of video will need to be stored in a distributed, redundant system. Sites selected will need to have large available online storage capacity and network bandwidth. Internet service providers, supercomputing centres and defence facilities would be possible locations.
- Identify Operations and Investigation Query Processing Sites: the speed with which the required processing of the video footage can be carried out depends critically on the computing resources available. Two types of processing are expected to be carried out by these centres: operations processing, which involves running a standard set of filters over the footage, and query processing, which is carried out in response to requests.
Facilities suitable for use as processing sites include the national supercomputing centres and commercial data processing centres.
- Identify Investigation Sites: the success of the response ultimately depends on the human operators generating the queries and interpreting the results. These operators do not need to be physically co-located, and would be best placed far from the scene of the catastrophe. It will be easier to manage the spread of information if a small number of sites are selected. These sites need significant numbers of suitable workstations and adequate network bandwidth. University computing labs, call centres and defence research institutions would be possible choices. There will be a need to report investigation results quickly to senior members of government, so there may be justification for some purpose-built facilities, at least for the visualisation of query results, if not for query submission. A purpose-built facility would naturally be located in Canberra, but there would of course be a need for a second, backup facility.
- Identify Wide Area Network Resources: large volumes of data need to be distributed throughout the operation of the system. Suitable network capacity between all sites needs to be acquired and maintained.

Figure 1 The components of the system and the communication channels between them.
The mobilisation phase continues throughout the response as further resources are identified, allocated, and deployed. Mobilisation thus does not end as mustering starts.

Figure 2 The overlapping phases of the architecture.

3.2.2 The Mustering Phase

Mustering describes the process of gathering footage and entering it into the system, which occurs at mustering points. This is supported largely through a web-based system. Mustering requires the following operations:

- Format Conversion: it is likely that a substantial portion of the required footage will have been captured by analogue cameras and stored on tape. This must be digitised before it can be processed. Footage captured by mobile phones and portable video cameras will also need to be transcoded.
- Stream Identification: each available video sequence needs to be given a unique identifying label, and all available meta-data about the stream recorded. This includes associating the sequence with a particular camera, and registering the camera in the meta-data database if it is not already present.
- Assign Timings: timing information is critical to the capacity of the system to make comparisons across video streams taken by different cameras. All available timing information needs to be entered into the system, along with estimates of the accuracy of this information.
- Upload: footage, once annotated, is uploaded into video product repositories.
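The stream identification and timing steps above can be sketched as a simple registration against a stand-in meta-data store. All field and function names here are ours, chosen for illustration rather than drawn from the architecture:

```python
# Sketch of stream identification during mustering: each sequence receives
# a unique identifier and carries its camera association and an estimate of
# its timestamp accuracy. The registry dict stands in for the meta-data
# database; all names are hypothetical.

import uuid
from dataclasses import dataclass, field

@dataclass
class VideoStream:
    camera_id: str        # camera the sequence is associated with
    start_time: float     # best-estimate capture start (epoch seconds)
    timing_error_s: float # estimated accuracy of that timestamp
    stream_id: str = field(default_factory=lambda: uuid.uuid4().hex)

registry = {}  # stand-in for the meta-data database

def register(stream):
    """Record the stream's meta-data and return its unique label."""
    registry[stream.stream_id] = stream
    return stream.stream_id

sid = register(VideoStream("station-cam-3", 1_121_000_000.0, 5.0))
print(sid in registry)  # True
```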

Video captured long before the event and after the event may still have value. Such video will facilitate automated video analysis by providing control footage, and can also be used to determine properties of the camera network, such as its spatial layout.

3.2.3 The Operations Phase

In the operations phase a standard set of processing operations is run over the video footage:

- Primary Processing: secondary products are generated by processing the primary video product uploaded by mustering. These include the results of motion detection and background subtraction (Stauffer & Grimson, 1999).
- Activity Topology Determination: activity topology is required to track a target through a network of cameras. It encompasses the spatial relationships between the cameras, the paths that targets take within and between cameras, and the probabilities associated with these paths. Automated approaches capable of processing the volume of footage in question do exist (van den Hengel et al., 2006, Detmold et al., 2007), but manual data entry to fill in the gaps will also be needed.
- Secondary Processing: tertiary products are those generated through processing of secondary products. These include tracks of targets through the system, and automatically identified events of interest.
- Condition Checking: several automated checks on the integrity of the data are possible, such as ensuring that two video sequences assigned to the same camera do not overlap in time, or that all video from a single static camera has a similar background. Any failure of a condition is brought to the attention of human operators.
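Primary processing includes background subtraction. The cited Stauffer & Grimson method fits an adaptive mixture of Gaussians per pixel; as a much simpler stand-in, the sketch below keeps a running-average background and flags pixels that deviate from it by more than a threshold:

```python
# Simplified background subtraction sketch. This is NOT the cited Stauffer &
# Grimson mixture-of-Gaussians method, only a running-average stand-in that
# illustrates the idea: pixels far from the learned background are foreground.

import numpy as np

def foreground_mask(frame, background, alpha=0.05, thresh=25.0):
    """Return (mask, updated_background) for one greyscale frame."""
    mask = np.abs(frame.astype(float) - background) > thresh
    background = (1 - alpha) * background + alpha * frame  # slow adaptation
    return mask, background

bg = np.zeros((4, 4))      # learned background (here: a dark scene)
frame = np.zeros((4, 4))
frame[1:3, 1:3] = 200.0    # a bright moving object enters the view
mask, bg = foreground_mask(frame, bg)
print(int(mask.sum()))     # 4 foreground pixels detected
```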

3.2.4 The Investigation Phase

The investigation phase is the main human interface to the architecture. Each operator uses a web terminal, which provides access to the video and meta-data databases, and to the processing resources. The processes carried out within this phase are:

- Query Submission: operators submit queries, which access information contained within both the video and meta-data databases. These queries are registered with the query activation controller in order that they might be carried out, and archived so that they can be re-run if the data on which they depend change significantly.
- Query Activation: queries are activated as soon as they may usefully be run on the data available. In particular, all of the required input to a query must be in the database. Queries are tested for activation if they have yet to be processed or if the data upon which they depend have changed.
- Query Processing: activated queries are passed to a query scheduler to be run.
- Query Archiving: all queries are archived in order that they may be re-run if the data upon which they depend changes, and for audit purposes.
- Chronology Recording: entry into the system of chronology information gained from non-video sources allows queries to be made with reference to the time at which particular events took place.
- Conclusion Support: query results and the conclusions drawn from their analysis are entered into a database along with references to the data on which they depend. This allows the results of previous queries to be reviewed without re-running the job, and also allows investigators to receive alerts when the data changes, with the possible result that their hypotheses need to be revised. Conclusion support also allows an audit trail to be generated by which a particular hypothesis may be justified.
- Watermarking: in order to assist in the audit process, and to increase security, video is watermarked on each viewing. In the event of a leak this

will support tracing the source of the footage, and the process itself should deter dissemination of material.

The investigation phase is the final phase of the architecture, and it is terminated once the required results have been generated. All data associated with the response are then archived.

3.3 Resource View

Many of the resources required have been outlined above, including the use of police stations as mustering points, data centres for storage, and university computing labs as investigation facilities. We consider here the processing and networking resource requirements, and how they may be fulfilled.

A typical surveillance camera generates approximately 26 GB of video per day. A thousand-camera network thus generates 26 TB per day, and we aim to be able to store the video generated over at least a month. Allowing for some duplication, it can be expected that for a thousand cameras there will be a need to store a petabyte of primary video product. A similar volume of secondary products would also be expected, along with a smaller volume of tertiary products. All stored data must be duplicated in order to maintain robustness, so we expect to require tens of petabytes of storage. We do not, however, require that each site hold a full copy of the data. Using the resources of supercomputing centres, defence facilities and Internet service providers this should be easily achievable.

A petabyte of data is more than can be sent over even a high-speed link within the time available. The data is thus prioritised for transfer by the difference between the time at which the video was captured and the time at which the catastrophic event occurred. Contemporaneous footage is transferred first, and by the fastest available means. As this footage arrives at each data repository it will be transferred on to the others as required. All of the footage at every location will also be copied onto disk. These disks will then be duplicated and physically transferred to the data repositories; with current technology this is by far the fastest solution for transferring large volumes of data. Once all of the data repositories have their allocated video, the network transfer from the source is terminated.
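The storage and transfer arithmetic above can be made explicit. The camera count, per-camera rate and retention period mirror the figures in the text; the 10 Gbps link speed is an assumed example, included only to show why physically shipping disks can beat the network:

```python
# Storage and transfer arithmetic from the resource view. Camera count,
# per-camera data rate and retention period follow the text; the 10 Gbps
# wide-area link is an assumed example, not a quoted capability.

GB = 10**9

cameras = 1000
per_camera_per_day = 26 * GB      # ~26 GB of video per camera per day
days = 30                         # retention target of at least a month

primary = cameras * per_camera_per_day * days  # bytes of primary product
print(primary / 10**15)           # ~0.78 PB: petabyte scale, as stated

link_bps = 10 * 10**9             # assumed 10 Gbps wide-area link
transfer_days = primary * 8 / link_bps / 86400
print(round(transfer_days, 1))    # the network alone needs about a week
```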
Figure 3 is a Gantt chart showing a schedule for making footage available for operations, progressively over the first 60 hours after an incident, with the first footage (prioritised by importance) becoming available for operations at 20 hours after the incident. In this diagram:

There are seven data centres, C1 to C7, distributed around Australia (e.g. one in each state capital and one in Canberra), with C1 being in the city nearest to the incident. There are seven petabyte-scale data sets of external disk drives, S1 to S7, each initially located at the corresponding data centre.

For purposes of illustration, each data set Sx is divided into four quarters, Sx.1 to Sx.4. In practice, a finer-granularity partitioning might be used; perhaps even that of individual disks (with current technology, these are 1 TB each). The task is to get the data from the mustering points (in the city where the incident occurred, and thus nearest to C1) copied onto the data sets, with each data set returned to its home data centre for operations to be commenced. The overall approach is for S1 to be physically transported (by road) to the mustering points and then progressively back to C1 as data is copied onto it. In the meantime, the other data sets have been transported (by air) to C1, so that as S1 arrives back at that data centre, the data it contains can be replicated to the other sets, which are returned to their home data centres once the data has been copied.

Notice that data reaches C2 and C3 before it reaches the remaining data centres. Therefore, C2 and C3 should be selected so as to best exploit the early arrival of this data. One selection rule would be to choose the data centres with the largest capacity (probably those in Sydney and Melbourne); another would be to choose the data centres closest to the point at which the response reports to government (probably those in Canberra and Sydney). These processes ensure that as much high-priority data as possible is available at all times, that footage is duplicated immediately so as to minimise the probability of loss, and that all footage is transferred within a reasonable time.
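The prioritisation rule described in the resource view, ordering footage by the gap between capture time and event time, can be sketched in a few lines (the camera names and times below are hypothetical):

```python
# Sketch of the transfer-prioritisation rule: footage is ordered by the
# difference between its capture time and the time of the catastrophic
# event, so contemporaneous footage moves first.
def prioritise(segments, event_time):
    """Return segments ordered by |capture_time - event_time|, smallest first."""
    return sorted(segments, key=lambda seg: abs(seg["captured"] - event_time))

# Times in hours relative to an arbitrary epoch; the event occurs at t = 100.
footage = [
    {"camera": "A", "captured": 40},
    {"camera": "B", "captured": 99},
    {"camera": "C", "captured": 101},
    {"camera": "D", "captured": 70},
]
queue = prioritise(footage, event_time=100)
print([seg["camera"] for seg in queue])  # → ['B', 'C', 'D', 'A']
```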

Required preparation

A significant amount of preparation is required in order to implement the system described by this architecture. This includes information gathering, detailed design, implementing key system components and extensive testing of the architecture in response to various scenarios. An audit of available resources will need to be carried out. This audit will identify computing and network resources that might be utilised in a deployment of the architecture, but must also identify people capable of performing the required operations. The audit process must also identify the full range of formats in which surveillance footage is currently stored. The audit will produce a response resource database of computing, network, human and footage resources, categorised by location, security level, data format and other labels.


Preparing for Post-catastrophe Video Processing

Figure 3. Disk-based Petabyte-scale Data Transport.

A detailed system design will need to be undertaken. This will involve specification of all required communications protocols and of the format of all data to be stored. The results of this design process will inform refinement of the system deployment strategy, and identify means by which computing and network resources can be sequestered. Initial investigations suggest an approach involving pre-distributing operating system images for the various components to data centres, then burning optical disks from these images in the early stages of the response. The control unit will need to be designed and implemented, and two or more control units deployed. The control unit will need to support a process by which decisions can be made about the level of response appropriate for a particular event, and particularly the level of security required. The results of this process are then used to filter the contents of the response resource database, so that only resources meeting the constraints appropriate for a given response scenario (e.g. in terms of security) are used in the response.
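As a sketch of how the response resource database might be filtered against the security constraints of a given response scenario, consider the following (the field names, sites and numeric security levels are our assumptions for illustration, not part of the architecture):

```python
# Illustrative filter over the "response resource database" described in
# the text: only resources meeting the minimum security level for the
# chosen response scenario are retained. All entries are hypothetical.
def filter_resources(resources, min_security):
    return [r for r in resources if r["security_level"] >= min_security]

resources = [
    {"site": "data-centre-1", "kind": "storage", "security_level": 3},
    {"site": "uni-lab-7", "kind": "compute", "security_level": 1},
    {"site": "defence-facility", "kind": "storage", "security_level": 5},
]
usable = filter_resources(resources, min_security=3)
print([r["site"] for r in usable])  # → ['data-centre-1', 'defence-facility']
```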

Conclusion

Video taken in the lead up to, during, and soon after a catastrophe can be valuable in directing the relief effort and in forensic investigations. To date there has been little planning for the capture and exploitation of this valuable resource. We have presented an architecture enabling a systematic approach to acquiring and processing such video resources. The architecture describes a system which meets all of the key requirements, such as security and robustness. Plans for its implementation are underway; this will require further preparation and the participation of stakeholders.

References

Detmold, H., Dick, A.R., Falkner, F., Munro, D.S., van den Hengel, A. & Morrison, R. 2006, 'Scalable surveillance software architecture', in Proc. IEEE Int. Conf. Advanced Video and Signal-based Surveillance.
Detmold, H., van den Hengel, A., Dick, A.R., Cichowski, A., Hill, R., Kocadag, E., Falkner, F. & Munro, D.S. 2007, 'Topology estimation for thousand-camera surveillance networks', in Proc. 1st ACM International Conference on Distributed Smart Cameras (ICDSC), Vienna, September 2007, to appear.
Perry, D.E. & Wolf, A.L. 1992, 'Foundations for the study of software architecture', ACM SIGSOFT Software Engineering Notes, vol. 17, no. 4, pp. 40-52.
Stauffer, C. & Grimson, W.E.L. 1999, 'Adaptive background mixture models for real-time tracking', in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, pp. 246-252.
Valera-Espina, M. & Velastin, S.A. 2005, 'Intelligent distributed surveillance systems: a review', IEE Proceedings - Vision, Image and Signal Processing, vol. 152, no. 2, pp. 192-204.
van den Hengel, A., Dick, A.R. & Hill, R. 2006, 'Activity topology estimation for large networks of cameras', in Proc. IEEE Int. Conf. Advanced Video and Signal-based Surveillance.


6 Length Based Modelling of HTTP Traffic for Detecting SQL Injection Attacks
Mehdi Kiani, Andrew Clark, George Mohay
Information Security Institute, Queensland University of Technology
Abstract
SQL injection vulnerabilities occur as a result of incomplete validation of user input in programs that interact with back-end databases. Incomplete validation allows the user to introduce input that modifies program behaviour and results in access to potentially sensitive information. In this paper we focus on anomaly-based intrusion detection techniques to detect such activity. Current anomaly-based intrusion detection techniques for detecting SQL injection attacks have not been rigorously and systematically validated. Additionally, most proposed techniques require access to source code or the introduction of custom data types to reduce the number of false alerts. In this paper we seek to address these shortcomings of the previous work. Our approach is based upon modelling the expected length of various components of an HTTP request and is tested using two different applications. Our results provide new insights into the application of length-based models for detecting SQL injection attacks.

Biographies
Mehdi Kiani is a PhD student in the Information Security Institute (ISI) at Queensland University of Technology (QUT), where he is studying intrusion detection. George Mohay is an Adjunct Professor in the ISI. Prior to this he was Head of the School of Computing Science and Software Engineering from 1992 to 2002. His current research interests lie in the areas of computer security, intrusion detection, and computer forensics. He is on the Program Committee for Recent Advances in Intrusion Detection. Andrew Clark is a Senior Research Fellow in the ISI at QUT, actively researching in the fields of intrusion detection and computer forensics. He supervises numerous postgraduate research students in these areas and is also an active participant in industrial projects with government and corporate partners.


1 Introduction
In this paper we propose an approach for detecting SQL injection attacks that is anomaly based and builds profiles of normal HTTP requests based on their length. Our approach applies the length feature in four different ways (i.e. four models, each a refinement of the first) based on previous research (Kruegel and Vigna 2003). A profile is generated for each file (or script) within a web application. The profiles built during the training phase are based on the length feature applied to different sections of an HTTP request, depending on the particular model. The detection process can be performed either by (a) monitoring HTTP traffic on a network sensor, or (b) on the host prior to the HTTP decoding module (after decryption of encrypted requests). Our approach requires no access to source code, no user interaction, no modification to any existing software and no reconfiguration of the target web application, and it can be applied to applications written in any scripting language. There is, however, a trade-off: an HTTP request may not necessarily lead to the execution of an SQL query. Nevertheless, our results show that this approach is advantageous. To evaluate the effectiveness of our approach we implement exploits from two popular open source applications, phpBB and phpNuke.

SQL injection attacks occur because of nonexistent or incomplete validation of user input by the programmer of the application (Anley 2002). Typically this results in the attacker manipulating the back-end database to extract critical information and/or gain access to potentially sensitive information. SQL injection attacks are usually embedded in the query section of HTTP GET requests, or in the body section of POST requests.

1.1 Motivation

Results for prior length models (Kruegel, Toth et al. 2002; Kruegel and Vigna 2003), utilised in conjunction with other strategies, showed that they are effective in detecting attacks with exceptionally long payloads (e.g. the Code Red worm). However, as outlined in (Kruegel and Vigna 2003), for more subtle attacks such as SQL injection attacks, the lengths of the attacks do not deviate significantly from those of normal requests. Beyond this observation, there has not been a thorough evaluation of length-based models specifically for detecting SQL injection attacks. We therefore apply a variety of length-based models, based on previous research (Kruegel and Vigna 2003), to evaluate their ability to detect SQL injection attacks. We focus on two subclasses of SQL injection attacks as identified in (Halfond, Viegas et al. 2006), which we believe are the most common categories of attacks, namely UNION and Tautology attacks. The contributions of this paper are:

- Systematic experimentation with and evaluation of existing and new length models, to determine the detection and false positive rates of each.
- Evaluation of the length models as applied to two specific classes of SQL injection attacks, UNION and Tautology attacks (explained in Section 4).


We show that, of the length models, the two most specific are well suited to the detection of certain types of SQL injection attacks, specifically UNION attacks, and that this can be achieved with low false positive rates. We also show that the length models, even the more advanced ones, are not well suited to the detection of SQL injection attacks in the Tautology category: when detection of such attacks is high, the false alert rates are far too high to be realistic in a production environment. As a result, other models are required for the detection of Tautology attacks. The following sections describe related work, our approach, traffic generation and attacks, testing results, and conclusions and future work.

2 Related Work
Much of the existing research has targeted the detection or prevention of web attacks in general, and these approaches are useful and undoubtedly provide benefits. Nevertheless, most were not tested specifically for the detection or prevention of SQL injection attacks; the interested reader is referred to (Almgren, Debar et al. 2000; Almgren and Lindqvist 2001; Kruegel, Toth et al. 2002; Wang and Stolfo 2004). As a result, issues such as the effectiveness of models in detecting a specific class of SQL injection attacks remain open. In this paper we make a step towards addressing this gap. While the approaches proposed by Kruegel and Vigna (Kruegel and Vigna 2003) are not specifically applied to the detection of SQL injection attacks, we nonetheless discuss them in this section, because current anomaly-based approaches to the detection and prevention of SQL injection attacks have implemented and utilised the models they proposed. The seminal work by Kruegel and Vigna (Kruegel and Vigna 2003) used six models to detect attacks in HTTP requests that contain a query section. One of these models was length based, and it is this model that we build on in this paper. They achieved complete detection, and many of their models have been utilised and built on since. Nonetheless, the original authors observed that the length model did not prove very useful in the detection of the class of attacks called input validation attacks (the category under which SQL injection falls). They attribute this to the fact that SQL injection attacks are more difficult to detect because their lengths do not deviate significantly from those of normal requests.

Beyond this observation, there has not been a thorough evaluation of length-based models specifically for detecting SQL injection attacks. Additionally, one of their datasets (a Google dataset) generated a total of 4944 alerts per day (206 alerts per hour). They state that half of these were caused by the length model, because of very long strings such as URLs pasted directly into the search field; however, no further examination was undertaken to determine whether the number of false alerts could be reduced. As far as we know, three approaches fall into the category of IDSs specifically designed to detect SQL injection attacks. The first approach models user behaviour (Chung, Gertz et al. 1999). This approach assumes users demonstrate
consistent behaviour; if this assumption is untrue, the system suffers from high rates of false alerts. High false alert rates are an identified disadvantage of approaches based on user behaviour (Lee, Low et al. 2002), and such approaches require a complete model of user behaviour, which may change over time (Debar, Dacier et al. 1999). Note that while the other approaches (ours included) utilise a training stage, it is harder to generate a training dataset based on user profiles than it is in general, as user behaviour can be more random. Another approach compares incoming SQL queries against a database of known legitimate SQL queries (Lee, Low et al. 2002). Valeur, Mutz and Vigna (Valeur, Mutz et al. 2005) state that this approach is potentially vulnerable to mimicry attacks, which they address by creating models for each individual script, i.e. each file has its own associated model. In this approach, only the SQL queries passed from the web server to the database are extracted. The approach builds on the work of Kruegel and Vigna (Kruegel and Vigna 2003) explained earlier; however, it is specifically applied to detecting SQL injection attacks and operates at a lower level, examining only SQL queries, so modifications to existing libraries are required. Such modification would have to be done for each SQL database server, which might not be scalable and is not a trivial task. The approach proposed in (Valeur, Mutz et al. 2005) is effective, produces promising results in terms of detection and false alerts, and is novel in that its models are specifically applied to SQL queries. However, some issues raised in (Valeur, Mutz et al. 2005) need further exploration. The authors state that many installations of their approach would require the introduction of custom data types to produce acceptably low false positive rates. For example, adding a new month value that did not exist in their training data resulted in false alerts. Also, to extract SQL queries, the libmysql client library required modification. Furthermore, this approach requires a training dataset containing all possible SQL queries, which would be difficult to obtain for an application such as a web-based email system with many different search options (Buehrer, Weide et al. 2005).

To summarise, existing approaches provide significant benefits; however, a number of issues require further investigation. The first is that existing approaches require access to application source code, modification to applications, or user interaction. A significant advantage of our approach is that the modelling is based upon features which can be extracted directly from HTTP requests: no library, software module or application has to be modified. The second issue is that current approaches cast a wide net to detect a broad range of web attacks, which produces high rates of false alerts. To address this, a number of different combined models are utilised; however, there has been no systematic testing of a single model on its own to determine how effective it is in detecting a particular class of attacks, specifically SQL injection attacks. As a result, we lack an understanding of which types of models are best suited to detecting different subcategories of SQL injection attacks, such as UNION and Tautology attacks. This is important because the introduction of specific models will improve the detection rate and reduce the number of false alerts. In the following section we discuss our proposed approach, which addresses the identified limitations above by
using the proposed models to specifically test two subclasses of SQL injection attacks.

3 Approach
In this section we describe our proposed technique for detecting SQL injection attacks based upon modelling of the length of various components of HTTP requests. We apply length based models at a higher abstraction level by capturing the entire HTTP request. This provides the additional advantage that the analysed HTTP requests can be captured directly from the network, rather than necessarily requiring changes to the web server application.

Figure 1. An overview of our prototype.

Our approach is summarised in Figure 1. Incoming HTTP requests are intercepted, the query section of each request is parsed according to the appropriate length-based model, and the result is passed to either the training or the testing engine, depending on the current phase. In the training phase, the length calculations are used to obtain the mean, variance and standard deviation of the relevant extracted section of the HTTP requests. Note that although we have chosen the standard deviation as the basis on which thresholds are set, this was an arbitrary choice and other approaches could be used. We chose the standard deviation because the initial datasets we examined in the phpBB application were roughly normally distributed. We then make another pass through the training dataset to determine the thresholds that will produce zero false alerts on the training dataset, based on the mean and standard deviation obtained previously. These thresholds are used in the testing stage. Note that the thresholds are an equidistant multiple of the standard deviation. During the testing phase, an HTTP request is deemed anomalous if the length of any of the fields being tested (depending on the model) falls outside the range of threshold values. The four length-based models that have been applied are query length (QL), rudimentary attribute grouping (RAG), attribute-value length (AVL) and unique attribute grouping (UAG). Each of these models is described in greater detail below, and examples are provided in Table 1.
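The training and testing procedure just described can be sketched in a few lines (a minimal illustration, not the prototype's code; the multiplier k and the sample lengths are invented for the example, whereas in practice k would be chosen so that the training set yields zero alerts):

```python
import statistics

# Minimal sketch of length-model training and testing: learn the mean and
# standard deviation of a length feature, set the detection band as an
# equidistant multiple k of the standard deviation around the mean, and
# flag test lengths that fall outside the band.
def train(lengths):
    return statistics.mean(lengths), statistics.pstdev(lengths)

def is_anomalous(length, mean, std, k):
    return abs(length - mean) > k * std

train_lengths = [14, 15, 15, 16, 15, 14, 16, 15]  # hypothetical training data
mean, std = train(train_lengths)
k = 3  # illustrative multiplier

print(is_anomalous(15, mean, std, k))   # a normal-length request → False
print(is_anomalous(120, mean, std, k))  # e.g. a long UNION-style payload → True
```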

Model  Application of model (request: /localhost/test.php?name=Joe&id=123)
QL     Model[QL]: length = 15
RAG    Model[RAG]: length = 15, number of attributes = 2
AVL    Model[AVL]: attribute name, length = 3; attribute id, length = 3
UAG    Model[UAG]: attributes (name, id), length = 6

Table 1. Different ways to apply the length models to HTTP requests.

The QL model is based upon the length of the query section of the HTTP request. The RAG model is based on the number of attribute-value pairs seen in the query section: a different model is generated for each distinct number of attribute-value pairs seen in the training requests. For example, if 10 requests in the training data have exactly 2 attribute-value pairs, then those requests are used to generate the model for requests containing 2 attribute-value pairs; the entire length of the query section is used. The AVL model generates a different model for each unique attribute seen in training, using the length of the attribute's value. The UAG model generates a different model for each unique combination of attributes seen in training; the length used is the sum of the lengths of the values of all the attributes. For each of the last three models (RAG, AVL and UAG) it is possible that a request seen in testing will have no corresponding model, because no similar requests were seen in the training dataset. Such requests are flagged as requests not seen in training. This could lead to the flagging of many requests; crucially, however, it also indicates that the training dataset has become inadequate. Requests not seen in training could also be malicious in nature, and an administrator would want such requests flagged. Note that, because the results for the RAG model were not encouraging to begin with, we did not implement flagging of requests not seen in training for that model.
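The four length features can be illustrated with a short sketch over the Table 1 example request (the query-string parsing details here are our own, not the prototype's):

```python
from urllib.parse import urlsplit, parse_qsl

# Sketch of the four length features (QL, RAG, AVL, UAG) computed for the
# Table 1 example request. Each entry is the quantity the respective model
# builds its profile over.
def features(url):
    query = urlsplit(url).query   # "name=Joe&id=123"
    pairs = parse_qsl(query)      # [("name", "Joe"), ("id", "123")]
    return {
        "QL": len(query),                                # whole query length
        "RAG": (len(query), len(pairs)),                 # length, grouped by pair count
        "AVL": {attr: len(val) for attr, val in pairs},  # per-attribute value length
        "UAG": (tuple(a for a, _ in pairs),              # attribute combination
                sum(len(v) for _, v in pairs)),          # summed value lengths
    }

f = features("/localhost/test.php?name=Joe&id=123")
print(f["QL"])   # → 15
print(f["AVL"])  # → {'name': 3, 'id': 3}
print(f["UAG"])  # → (('name', 'id'), 6)
```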

4 Experimental Setup: Attacks and Datasets


All of our experiments were performed using two well-known open source applications written in PHP: phpBB (a bulletin board application, versions 2.0.0 and 2.5.0) and phpNuke (a content management system, versions 6.0, 7.3 and 7.5). Both applications were running on an Apache v1.3.34 web server using PHP v4.4.2 and MySQL v4.1.11. In this section we describe the experimental setup, the attacks, and the datasets utilised. Halfond, Viegas and Orso (Halfond, Viegas et al. 2006) defined a classification scheme for SQL injection attacks; we tested and evaluated the effectiveness of the described models against the two categories which appear to be the most prevalent. The first is UNION attacks, in which SELECT statements are injected to gain information. The second is Tautology attacks, in which user input is appended to queries so that they always evaluate to true; for more details the interested reader is referred to (Halfond, Viegas et al. 2006). The vulnerabilities relating to the phpBB application are described in SecurityFocus advisories: Bugtraq
IDs 9942, 9896 and 7979, and for phpNuke these are Bugtraq IDs 10741, 15421, 10749 and 9544. Variations of all the above attacks for both phpBB and phpNuke were created to evaluate the models in detecting subtle and obfuscated attacks (techniques utilised by attackers, as suggested in (Anley 2002; Maor and Shulman 2004)). Four datasets were used for phpBB (two synthetic, two from a production server), and seven for phpNuke (production server). The two synthetic datasets for the phpBB application were created around the most common functions of two files, admin_smilies.php and admin_words.php. The requests were generated by first manually browsing the phpBB application and then incorporating the requests into scripts to automate the process of traffic generation. Each of the admin_smilies and admin_words datasets, which we will henceforth refer to as the SMILIES and WORDS datasets respectively, was divided 50%-50% between training and testing. The remaining two datasets for the phpBB application, for the viewtopic.php file, were obtained from a university production server for the years 2005 and 2006; these datasets are referred to as VIEWTOPIC/05 and VIEWTOPIC/06. For phpNuke, the seven datasets with the most requests were used. Only two of the phpNuke datasets contain attacks; while the other five did not contain any attacks, we test them as well to examine the number of false positives generated. All requests with dangerous characters (e.g. hexadecimal characters) were excluded from the training datasets (we acknowledge that this does not guarantee that the training datasets are completely attack free). As for the testing datasets, such requests were identified and extracted into a separate file, but they were still examined by our prototype to determine whether any alerts would be generated (discussed in Section 5).
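The exclusion of requests containing dangerous characters from the training data might be sketched as follows (treating percent-encoded hexadecimal sequences as the dangerous class is our illustrative choice; the example requests are hypothetical):

```python
import re

# Sketch of the training-set sanitisation step: requests containing
# "dangerous" characters (here, percent-encoded hex sequences) are held
# out of training and routed to a separate file for later examination.
DANGEROUS = re.compile(r"%[0-9a-fA-F]{2}")

def split_training(requests):
    clean = [r for r in requests if not DANGEROUS.search(r)]
    held_out = [r for r in requests if DANGEROUS.search(r)]
    return clean, held_out

reqs = [
    "/viewtopic.php?t=42&start=0",
    "/viewtopic.php?t=42%27%20UNION%20SELECT",  # hex-encoded quote and space
]
clean, held_out = split_training(reqs)
print(len(clean), len(held_out))  # → 1 1
```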
Dataset        Requests (training)  Requests (testing)  Suspicious  UNION  Tautology  Total attacks
SMILIES        21496                21496               NA          8      2          10
WORDS          1360                 1360                NA          7      2          9
VIEWTOPIC/05   28598 (Jan)          536172 (Feb-Sep)    2920        32     0          32
VIEWTOPIC/06   43719 (Jan)          204666 (Feb-Dec)    5901        32     0          32
SEARCH         2466 (Mar)           1565 (6 weeks)      3           27     0          27
YOUR_ACCOUNT   5788 (Mar)           10698 (6 weeks)     0           0      11         11
NEWS           49495 (Mar)          73662 (6 weeks)     35          0      0          0
SUBMIT_NEWS    453 (Mar)            678 (6 weeks)       0           0      0          0
SURVEYS        560 (Mar)            642 (6 weeks)       1           0      0          0
FORUMS         519 (Mar)            619 (6 weeks)       0           0      0          0
STATISTICS     3229 (Mar)           678 (6 weeks)       0           0      0          0
TOTAL                                                   8860        106    15         121

Table 2. Datasets Information



5 Evaluation and Discussion


Having discussed both our approach for detecting SQL injection attacks and the datasets, we now describe the results of testing the four different length-based models on the two applications, phpBB and phpNuke.

5.1 phpBB Application

The results of using each of the four length models on the SMILIES and WORDS datasets are shown in Table 3, which includes the detection rate, the requests not seen in training (RNS) and the false positives (FP) for each dataset. Beginning with the UNION class of attacks, all models except QL were able to detect all the UNION attacks in both the WORDS and SMILIES synthetic datasets. This is because UNION attacks tend to be much longer than normal requests. For the Tautology attacks, again all models except QL detected all the attacks. This confirms that the QL model is too coarse for the detection of SQL injection attacks, particularly Tautology attacks.
                       UNION                      TAUTOLOGY
Model  Dataset         Detection  RNS   FP        Detection  RNS  FP
QL     Smilies         1/8        NA    0         0/2        NA   0
       Words           7/7        NA    0         0/2        NA   0
RAG    Smilies         8/8        NA    0         2/2        NA   0
       Words           7/7        NA    0         2/2        NA   0
AVL    Smilies         8/8        0     0         2/2        0    0
       Words           7/7        0     0         2/2        0    0
UAG    Smilies         8/8        0     0         2/2        0    0
       Words           7/7        0     0         2/2        0    0
       ViewTopic 05    32/32      354   1         -          -    -
       ViewTopic 06    32/32      1198  0         -          -    -

Table 3. phpBB results

The results for the two VIEWTOPIC datasets, which contained only UNION attacks, showed that all four models detected all 32 UNION attacks in both datasets. However, they each produced different results in terms of the false alerts generated. For space reasons we have included only the results for the model with the best detection and false alert rates, which was the UAG model. For the ViewTopic 05 dataset, 1 of 533274 requests triggered an alert because its length was outside the lower and upper bounds, while 354 of 533274 requests (0.066%) were not seen in training. For the
Viewtopic 06 dataset, 0 of 187797 requests triggered alerts for lengths outside the bounds, while 1198 of 187797 requests (0.63%) were not seen in training. Note that the QL and RAG models actually produced fewer false alerts than the UAG model; however, they also failed to flag requests not seen in the training dataset (which the UAG model does, as shown in the RNS column of Table 3). The AVL and UAG models both achieved full detection with low false alert rates; unlike the QL and RAG models, they flagged requests with attributes or unique attribute combinations never seen in the training stage. The difference between the AVL and UAG models is that the UAG model is slightly more useful and more specific in the alerts it raises: in the AVL model a request is flagged if a single attribute in the request has not been seen in training, whereas the UAG model requires that the combination of attributes has not been seen in training, i.e. the alerts generated are more meaningful. Note that for both the AVL and UAG models, the attacks in both VIEWTOPIC datasets were detected because either an attribute had not been seen in the training phase (AVL) or, in the case of the UAG model, the unique combination of the attributes sid, topic_id and view had not been encountered in the training phase.

5.2 phpNuke Application

The darker shaded rows in Table 4, i.e. SEARCH and YOUR_ACCOUNT, had attacks in the testing dataset, while the lighter shaded datasets did not but were included to examine the rate of false alerts generated. Table 4 includes the detection rate, the requests not seen in training (RNS) and the false positives (FP) for each dataset. For the phpNuke application, the QL and RAG models produced the greatest number of alerts, and again did not flag requests unseen in the training stage.
               QL          RAG         AVL               UAG
Dataset        Det    FP   Det    FP   Det    RNS   FP   Det    RNS   FP
SEARCH         27/27  0    21/27  0    27/27  0     6    27/27  0     4
YOUR_ACCOUNT   6/11   13   11/11  0    7/11   6     0    7/11   6     0
NEWS           NA     9    NA     9    NA     9     6    NA     50    2
SUBMIT_NEWS    NA     20   NA     0    NA     0     0    NA     0     0
SURVEYS        NA     0    NA     0    NA     2     0    NA     0     0
FORUMS         NA     0    NA     4    NA     0     4    NA     0     0
STATISTICS     NA     0    NA     0    NA     0     0    NA     0     0

Table 4. phpNuke results for all datasets.



The QL model's detection of Tautology attacks was just over 50%, while the RAG model did not detect all the different obfuscated variations of the UNION attacks. Both of the more specific models, AVL and UAG, detected all the UNION attacks; however, neither detected all the Tautology attacks. Moreover, for the AVL and UAG models, 6 of the 7 Tautology attacks that were detected were flagged because either the attribute (AVL) or the unique combination of attributes (UAG) had not been seen in the training stage.

5.3 Suspicious Results

As mentioned in Section 4, suspicious requests in each dataset were identified and extracted into a separate dump file. We also wanted to examine the effectiveness of the four models in detecting these suspicious requests; the results are shown in Table 5. Note that only five datasets contained suspicious requests. The UAG model, which achieved the best detection rate overall, also achieved full detection of all suspicious requests in all but one of the datasets (the NEWS dataset, with a 91% detection rate). This again confirms that the UAG model is the best of the length-based models.
Dataset       QL                RAG               AVL               UAG
VIEWTOPIC05   2920/2920 (100%)  2745/2920 (94%)   2920/2920 (100%)  2920/2920 (100%)
VIEWTOPIC06   5901/5901 (100%)  5878/5901 (94%)   5901/5901 (100%)  5901/5901 (100%)
SEARCH        3/3 (100%)        3/3 (100%)        2/3 (66.7%)       3/3 (100%)
NEWS          29/35 (82%)       28/35 (80%)       25/35 (71%)       32/35 (91%)
SURVEYS       1/1 (100%)        1/1 (100%)        1/1 (100%)        1/1 (100%)

Table 5. Detection of suspicious requests.

5.4 Discussion

Comparing the models: the QL and RAG models do not require complete training datasets. However, they are crude, their detection and false alert rates suffer because they tend to over-generalise, and they do not flag requests not seen in training. The QL and RAG models are also not fine grained enough to detect either the UNION or the Tautology SQL injection attacks. The AVL and UAG models are more granular, but this comes at the expense of requiring a complete training dataset. However, they allow requests not seen in training to be flagged, and if such numbers are high they can indeed be an indication that the training dataset utilised is no longer adequate and not an accurate reflection of the type of traffic the application is receiving. The AVL and UAG models were able to detect all UNION attacks with zero or low false positives. However, they were still not fine grained enough to reliably detect all Tautology attacks; in some datasets detection of Tautology attacks was just over 50%. Also, the best length model, the UAG model, will only produce the best results if the training dataset is complete (or very close to it).
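Why length-based models catch UNION attacks but can miss Tautology attacks can be made concrete with a toy length-bounds check. The training values, bounds and payloads below are invented, and the sketch only captures the general idea, not the exact models evaluated here:

```python
# Toy length-bounds model (invented data): why length models catch long
# UNION payloads but can miss short tautology payloads.
training_values = ["security", "sql injection", "anomaly detection models"]

lower = min(len(v) for v in training_values)   # 8
upper = max(len(v) for v in training_values)   # 24

def flagged(value):
    """Alert when an attribute value's length falls outside the learned bounds."""
    return not (lower <= len(value) <= upper)

union_attack = "x' UNION SELECT username, password FROM users --"
tautology_attack = "x' OR '1'='1"

print(flagged(union_attack))      # True: 48 characters, far above the bound
print(flagged(tautology_attack))  # False: 12 characters, within the normal range
```

A tautology payload is often no longer than a legitimate value, so a model that reasons only about lengths has nothing to trigger on; this is consistent with the roughly 50% Tautology detection reported above.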

6 Conclusions and Future Work


In this work we proposed an anomaly based approach to specifically detect two subclasses of SQL injection attacks, referred to as UNION and Tautology attacks, by systematically evaluating four length based models. The approach operates at a higher level by examining HTTP requests and building a model of normal HTTP requests. Our approach requires no access to source code and no modification of existing software modules; nor does it require user interaction or the introduction of custom data types. We systematically evaluated the suitability of the four length based models in terms of their detection of novel attacks and false positive rates in relation to UNION and Tautology attacks, which has not been addressed in the existing literature. We have exposed the advantages and limitations of the four models in the detection of UNION and Tautology attacks. The QL and RAG models are far too coarse to detect all UNION or Tautology attacks, while the best model in terms of detection and the rate of false alerts was the UAG model, followed by the AVL model. However, the UAG model could only achieve complete detection with low alerts for UNION attacks, suggesting that other models are required to detect Tautology attacks. The advanced length models have proven to be effective in the detection of UNION attacks, and the extraction and parsing used by length models is simple and fast. However, it remains to be seen in future work how they compare against other more advanced models, such as character distribution models. Nevertheless, while more advanced models are likely to produce better results, we have shown that advanced length models are a sufficient first line of defence in the detection of SQL injection attacks.




7
Maritime Terrorism and Risks to the Australian Maritime and Resource Industries
Dr. Alexey D. Muraviev
Curtin University of Technology, Western Australia
Abstract
This paper will critically analyse terrorist threats in the maritime setting, current patterns in terrorist maritime operations, and possible security risks to Australia's national interests. Many security analysts are concerned that the sea may become an alternative front of terrorist operations in the foreseeable future. Nevertheless, some analysts disregard this threat as insignificant, and there is still an overall under-appreciation of this challenge. The paper will attempt to address the following questions: why should we investigate possible threat contingencies in the maritime environment? Which groups are capable of staging operations at sea, and where and how could they do it? The second part will address the principal question: should Australia consider maritime terrorism as a serious security challenge, and what needs to be done to counter possible risks?

Biography
Dr. Alexey D. Muraviev is an analyst in strategic affairs and a lecturer at Curtin University of Technology, Western Australia. He is the author of over 20 publications, including two major monographs, and one of Australia's leading researchers in naval studies. Alexey is a member of the Australian Member Committee of the Council for Security Cooperation in the Asia-Pacific, a member of the Editorial Board, SPC-A, a research fellow of the Contemporary Europe Research Centre (The University of Melbourne), a member of the Research Network for a Secure Australia, and of other organisations and think tanks. In 2007, the Australian Research Council College of Experts nominated Dr. Muraviev as an expert of international standing.



The Background
Multiple waves of large-scale terrorist attacks that have kept shaking the international community over the past seven years have exposed a new terrorist warfare paradigm, in which attacks against the United States (US) and its western allies on the one hand, and India, Indonesia, Russia and other eastern nations on the other, have vastly escalated in the lethality of weaponry and targeting. As a consequence, the new means of terrorist attack we are likely to face will involve not only unorthodox conventional means used against civilian and military targets on land and potentially at sea, but at a later stage possibly weapons of mass destruction (WMD), directed at critical elements of infrastructure and other soft targets, with catastrophic loss of human life and disastrous economic consequences. The shock of the September 11 2001 (9/11) devastation created an anxiety among security analysts over what to expect next. The maritime environment was identified as one of the likely strategic areas of future terrorist activity. Despite extensive discussions and debates, particularly after the 2000 Cole and 2002 Limburg incidents, to date the international community remains largely unaware of the threat at sea posed by a number of international terrorist organisations, including Al-Qaeda (AQ), with most counter-terrorist efforts today being focused on preventing a devastating attack on airport infrastructure or airlines, or another surprise attack on a public transport system. Yet maritime terrorist attacks could be even more catastrophic than those from the air or those carried out against a public transport network. When it comes to critical analysis of asymmetric security threats in the maritime domain, it is piracy that has been identified as the obvious challenge.
To date, terrorist offensive operations at sea are rare, with the majority of acts being opportunity driven and constituting only 2 per cent of all terror-related incidents registered since the upsurge in politically motivated violence (PMV) after 1968.1 However, since the 1970s, acts of what can be described as maritime terrorism have been occurring continuously. Among the most noticeable maritime terrorism incidents pre-2000 were the 1974 hijacking of a Greek freighter in the Pakistani port of Karachi by Arab extremists; the 1975 attack on the Soviet cruise ship Maxim Gorkiy in the Puerto Rican port of San Juan; the 1979 attack on the British yacht Shadow by the Irish Republican Army; the 1985 hijacking of the Italian cruise ship Achille Lauro in the Mediterranean by the Palestine Liberation Front; the 1996 hijacking of the ferry Avrasiya off the Turkish Black Sea coast by Chechen extremists; and other incidents.2
1. Ed Blanche, 'Terror Attacks Threaten Gulf's Oil Routes', Jane's Intelligence Review, 14 (12), December 2002, p. 8; RAND database, available at: http://www.rand.org/ise/projects/terrorismdatabase.
2. Blanche, 'Terror Attacks Threaten Gulf's Oil Routes', p. 8; Captain 2nd Rank N. Rezyapov, Colonel A. Simonov, 'Mezhdunarodny Terrorism na More. Istochniki, Sostoyanie, Protivodeistvie' [International Terrorism at Sea: Sources, State of Play, Countermeasures], Morskoi Sbornik [The Naval Herald], N 1, 2004, p. 73; Vladimir Krasnov, 'Flot i Natsionalnaya Bezopasnost Rossii' [The Navy and Russia's National Security], in Nauka i Bezopasnost Rossii [Science and Russia's Security], Moskva: Nauka, 2000, p. 511.



These events triggered debates in the security and intelligence community, and among academics working in the field, over whether terrorists would from now on demonstrate continuous interest in maritime operations. For example, in 1998, Russian naval analysts Sinetskiy and Buzmakov identified piracy and maritime terrorism as a major current threat to international commercial shipping.3 Similarly, American naval analyst Charles Koburger argued: 'After World War II, a number of Third World governments - old and new - rediscovered the thought that terrorist tactics, even at sea, carried out directly or by proxy, were in some cases a seemingly useful means of gaining political ends.'4 Some terrorism analysts tend to underestimate the growing problem of terrorism at sea. Cindy Combs, for example, whilst acknowledging the problem of political piracy, disregards maritime terrorism: 'Modern terrorism continues occasionally to take the form of piracy, but today the piracy is often of aircraft rather than sea vessels.'5

Actors & Capabilities


The post-2001 interest in terrorism has intensified debates about the causes of PMV and other forms of violence, as well as the types of terrorist operations.6 Analysis of all post-1968 maritime terrorist attacks (MTA) makes it possible to identify three patterns in extremist operations in the maritime domain:

MTA-Unexpected: the perpetrators seize on an arisen opportunity and attack a maritime target without appreciating the specific nature of the incident;

Political piracy: the perpetrators execute a for-profit attack against a maritime or coastal target but justify it by political or other causes;

MTA-Proper: the perpetrators plan, prepare and execute an attack in the maritime domain against a maritime target using maritime-related tactics and means.

To date, about a dozen terrorist organisations have developed a maritime capability; fewer are capable of carrying out MTA-Proper.7 Nonetheless, there have been some

3. Captain 1st Rank (ret.) V. Sinetskiy, Captain 1st Rank G. Buzmakov, 'VMF i Bezopasnost Rossiiskogo Morskogo Sudokhodstva' [The Navy and the Security of Russian Shipping], Morskoi Sbornik, N 6, 1998, p. 33.
4. Charles W. Koburger, Jr., Sea Power in the Twenty First Century, Westport: Praeger, 1997, p. 21.
5. Cindy C. Combs, Terrorism in the Twenty-First Century (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, 2000, p. 22.
6. For example, see Rhyll Vallis, Yubin Yang, Hussein A. Abbas, 'Disciplinary Approaches to Terrorism: A Survey', ALAR Technical Reports, available at: http://www.itee.adfa.edu.au/~alar/techreps/200611015.pdf.
7. By that I understand an attack capability to be the capacity of a terrorist group to execute offensive strike operations in the maritime domain, such as airborne, surface and subsurface attacks against maritime targets; the capacity to deploy navalised units at sea (armed patrol and attack craft,


notable exceptions. Throughout the 1970s and 1980s, several groups based in Europe, Latin America, and South and Southeast Asia integrated the use of maritime capabilities into their operational agendas. Currently, extremist organisations like Al Qaeda (AQ), the Abu Sayyaf Group (ASG) based in the Philippines, the Sri Lankan Liberation Tigers of Tamil Eelam (LTTE), Palestinian groups, the Revolutionary Armed Forces of Colombia (FARC) and some others possess maritime capability; some have acquired limited maritime attack capabilities (Table 1), with two actors attracting the most attention: the LTTE and AQ.
Actor                                       Attack Capability
Al Qaeda (AQ)                               Yes
Abu Sayyaf Group (ASG)                      Yes
Anti-Castro groups
Basque Fatherland and Liberty Group (ETA)
Colombian groups (FARC)
Contras
Free Aceh Movement (GAM)
Hezbollah                                   Yes
Irish Republican Army (IRA)                 Yes
Islamic Jihad                               Yes
Jemaah Islamiyah (JI)                       Yes
Liberation Tigers of Tamil Eelam (LTTE)     Yes
Moro Islamic Liberation Front (MILF)        Yes
Palestinian groups                          Yes

Table 1. Terrorist groups capable of operating in the maritime domain


Sources: Jane's Intelligence Review (2000-07); Morskoi Sbornik, N 1, 2004; Nezavisimoe Voennoe Obozrenie (2002-05); additional data collected by the author. Note: ETA and the IRA have declared ceasefires.

The LTTE's Sea Tigers


The international maritime, security and intelligence community agrees that the Sri Lankan littoral is one of the most vulnerable to MTA. The locally based LTTE has a notorious reputation as the best in the terrorist community when it comes to

submersibles etc.); the ability to utilise specialised naval technology for offensive operations (sea mines, torpedoes, guided and unguided missiles, scuba diving expertise etc.)


maritime operations. The organisation has a history of frequent MTA, including sea-based suicide attacks carried out by a specialised unit (the Black Sea Tigers).8 Since the formation of a naval arm (the Sea Tigers) in 1984, the organisation has been rapidly increasing its brown-water (coastal) and blue-water (open-ocean) maritime capability, which signals that the maritime theatre is of considerable significance to the LTTE's overall operational agenda.9 The 2004 Indian Ocean Tsunami proved to be a more effective one-off strike against the Sea Tigers than the continuous efforts of the Sri Lankan Navy, delivering a severe blow to the organisation's maritime capability.10 Despite considerable losses to its human and physical infrastructure, the LTTE was quick to rebuild its naval arm, believed to comprise about 2,000 personnel operating a fleet of up to 15 attack units and about 12 commercial vessels, known as Sea Pigeons, for clandestine non-offensive transoceanic operations.11 The assets include patrol and escort vessels, auxiliary vessels, fast personnel carriers, and suicide (among them stealth fast-attack) and multi-purpose craft (including one submersible). The LTTE's experience and, more importantly, its continuous successes in maritime activities, including offensive operations, could and do provide inspiration to other terrorist groups across Asia and the Middle East.

AQ's Phantom Force


In the past five years, concerns were expressed that AQ was planning to extend its terrorist activity to the sea and even launch a maritime campaign against the US and its allies and partners in the globalised struggle against terror. Whilst some of these reports were of a sensationalist nature, without any substantial evidence presented in support of the claims made, the majority of them were nonetheless based on open source intelligence data (OSINT-D) collected after 1999. A critical analysis of evidence collected from OSINT-D between 2000 and 2007 makes it possible to identify over 17 incidents and plots with links to AQ and affiliates operating in the Middle East and Southeast Asia (Table 2). Interrogations of captured AQ senior operatives provided additional knowledge about the network's maritime agenda, among them Abd al Rahim al Nashiri (a.k.a. Mulla Ahmad Belal, also known as the Prince of the Sea), believed to be responsible for
8. Peter Chalk, 'Tigers Evolve', Jane's Intelligence Review, 19 (03), March 2007, pp. 15-16.
9. Martin Murphy, 'Maritime Threat: Tactics and Technology of the Sea Tigers', Jane's Intelligence Review, 18 (06), June 2006, p. 6.
10. There was some conflicting reporting about the extent of the collateral damage the Tsunami caused to the Sea Tigers, with some estimates suggesting the destruction of a quarter of the overall potential and others talking about complete destruction. See Chris Smith, 'Tamil Tigers Face Tough Choices in Wake of Tsunami', Jane's Intelligence Review, 17 (03), March 2005, pp. 36-39; Murphy, 'Maritime Threat', p. 8.
11. Murphy, 'Maritime Threat', pp. 8-9; Peter Chalk, 'Training the Tigers', Jane's Intelligence Review, 19 (01), January 2007, p. 26; data collected by the author.


planning AQ's maritime offensive operations.12 Similarly, the interrogation of another AQ senior operative, Omar al Faruq, among other things revealed details of the network's intentions, in collaboration with JI, to consider targeting maritime assets in Southeast Asia.13 Conflicting reports about the so-called AQ navy, a fleet of illegally acquired merchant vessels operating under false names and manifests and flying flags of convenience, are another point of concern. According to Michael Richardson, AQ began secretly buying these units as early as 1994.14 There is continuous speculation about the actual existence of such a force as well as its size. While some media reporting presented unrealistic estimates of up to 300 (!) units,15 it is more likely that this force may comprise between 10 and 20 medium-size merchants, primarily freighters, which were acquired between 2002 and 2003.16 According to the American scholar Mark Steinberg, this force may be subdivided into two tactical groups (Indian Ocean and Southeast Asian) tasked to operate within their zones of responsibility.17 However, to date, with the exception of confirmed information that AQ used one of its units to transport the explosives in preparation for the 1998 bombings of the US embassies in Kenya and Tanzania,18 and the arrest of the 2,000-tonne freighter Baltic Sky in 2003, Bin Laden's fleet remains a mystery, a phantom force yet to reveal itself to the world. More likely, in the near future terrorists will continue to explore opportunities at sea by carrying out MTA using high-speed low-profile sea-borne vehicles. Still, there is little doubt among security analysts that AQ is capable of targeting maritime assets by deploying light surface (high speed boats) and subsurface (scuba divers, sea
12. According to the released 9/11 Commission Report, al Nashiri proposed to Bin Laden an attack against a maritime target in late 1998, a year before the attempt on USS Sullivans. The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States (Authorised edition), New York, London: W.W. Norton & Company, 2004, pp. 152, 190. In particular, in 2002 US intelligence apparently captured a 180-page dossier in Afghanistan, which detailed AQ's plans to carry out acts of maritime terrorism, among them attacking targets of opportunity, and even launching improvised missile attacks against the US east coast by deploying SCUD tactical missiles on board merchant vessels. Mark Steinberg, 'Shakhidy Atakuiut s Morya' [The Shakhids Attack from the Sea], Nezavisimoe Voennoe Obozrenie [The Independent Military Review], N 21, 2002, p. 7; Rezyapov, Simonov, 'Mezhdunarodny Terrorism na More', p. 69; also see John C. K. Daly, 'Al Qaeda and Maritime Terrorism', Terrorism Monitor (by the Jamestown Foundation), 1 (4), 24 October 2003, pp. 1-3.
13. Zachary Abuza, 'Terrorism in Southeast Asia: Keeping Al Qaeda at Bay', Terrorism Monitor (by the Jamestown Foundation), 2 (9), 6 May 2004, p. 4.
14. Michael Richardson, 'A Time Bomb for Global Trade: Maritime-related Terrorism in an Age of Weapons of Mass Destruction', Maritime Studies, January/February 2004, p. 1.
15. Daly, 'Al Qaeda and Maritime Terrorism', p. 1; 'The Terrorism Maritime Threat', The Washington Times, 2 December 2003, available at: http://209.157.64.200/focus/f-news/1032276/posts, retrieved on 8 January 2004.
16. Mark Steinberg, 'Voenny Flot Ben Ladena' [Bin Laden's Navy], Nezavisimoe Voennoe Obozrenie, N 1, 2005, p. 2.
17. Ibid.
18. Richardson, 'A Time Bomb for Global Trade', p. 1; Joshua Sinai, 'Future Trends in Worldwide Maritime Terrorism', The Quarterly Journal, 3 (1), March 2004, p. 53.


mines etc.) capabilities, as well as coastal (primarily through land-based attacks) and offshore infrastructure, thus having the capacity to engage in maritime terrorism.
Attack | Date
Failed sea-borne attack on USS Sullivans, Yemen | 3 January 2000
Failed attempt to sink the royal yacht with King Abdullah of Jordan on board, the Mediterranean Sea | July 2000
Sea-borne suicide attack on USS Cole, Yemen | 12 October 2000
Uncovered plots to attack US warships in ports of Indonesia, Malaysia & Singapore | Late 2000-2002
Uncovered plot to attack HQ of the US 5th Fleet in Bahrain | Late 2001
Failed sea-borne attack on USNS Walter S. Diehl, Persian Gulf | 23 April 2002
Sea-borne suicide attack on MV Limburg, off Yemen | 6 October 2002
Failed sea-borne attack on the Ropucha I class LST Yamal, Russian Federation Navy, the Mediterranean Sea | 28 October 2002
Reports about plans to attack US warships in Pearl Harbour | February 2003
On-board attack on SuperFerry 14, the Philippines | 27 February 2004
Sea-borne suicide attacks on the offshore Iraqi oil installations | 24 April 2004
Attack on the oil refinery in Yanbu, Saudi Arabia | 1 May 2004
Attack on the Al-Khobar Petroleum Centre, Saudi Arabia | 29 May 2004
Reports about plans to blow up the Panama Canal | July 2004
Uncovered plot to blow up Israeli cruise ships in Turkey | 10 August 2005
Attacks on USS Ashland and USS Kearsarge, Jordan | 19 August 2005
Uncovered plot to attack ships & other targets in Casablanca, Morocco | 11 March 2007

Table 2. Reported incidents & plots attributed to AQ & affiliated groups, 2000-07
Sources: Jane's Intelligence Review (2000-07); Morskoi Sbornik, N 1, 2004; Nezavisimoe Voennoe Obozrenie (2002-05); Lenta.Ru (Russian on-line information service); Transnational Terrorism: The Threat to Australia, Canberra: Commonwealth of Australia, 2004.

Getting the Picture Right


When it comes to critical analysis of possible risks, let alone threats, of terrorist activities at sea, from the sea, or from land against targets afloat or ashore/offshore, one needs to exercise a balanced approach without resorting to extremes (either ruling the threat out entirely or being alarmist). Given the Australian circumstances, should a terrorist group plan an MTA against non-military targets, it is likely that the


perpetrators will consider targeting either national marine infrastructure (70 ports and 184 port facilities, 97 port service providers, 59 Australian flagged vessels) or coastal and offshore (58) assets of the resource sector.19 Based on the accumulated knowledge about terrorist intentions towards, and experience in, the maritime environment, the following lessons can be drawn.

Lesson One. With the exception of a few geographical areas (the Sri Lankan and Southeast Asian littorals, the Persian Gulf), incidents of maritime terrorism are rare. Several factors could explain this:

Lack of practical maritime experience;
Lesser media exposure;20
Fixed land-based targets often offer greater ease of access; and
Conducting terrorist operations at sea requires specialised equipment and skills.21

However, the current security environment may change the pattern of terrorist tactical methodology. Following multiple terrorist attacks on land, serious measures were undertaken to improve the security of many critical elements of ground infrastructure; air and land transport communications, support infrastructure and means of transportation have come under tighter security scrutiny. It is reasonable to assume that because land-based assets are now better protected, in the near future international and local terrorist groups will concentrate on planning attacks against maritime infrastructure, because of the profusion of soft targets and clear dividends. The Achille Lauro, Cole, Limburg, and Persian Gulf energy infrastructure incidents, among others, proved to terrorists that attacking targets in the maritime setting can be as lucrative as strikes on land. They demonstrated the vulnerability of sea-based assets and showed that these types of attacks can also generate enormous political capital.

Lesson Two. The absence of a serious threat from maritime terrorism to Australian interests today cannot excuse complacency. Currently, there is no immediate threat either to the Australian maritime (including cruise line) industry or to the resource sector, the most likely potential victims of maritime terrorism.22 The LTTE's navy (the most potent terrorist force at sea) limits its operational activity to the Sri Lankan littoral, whilst AQ, a network identified by the 2004 Transnational Terrorism: The

19. Harley Sparke, 'Securing Australia's Ports', Australian National Security Magazine, April 2007, p. 22.
20. Attacking a vessel on the high seas is less likely to attract immediate media attention than more accessible land targets.
21. Martin Murphy, 'Maritime Terrorism: the Threat in Context', Jane's Intelligence Review, 18 (02), February 2006, p. 20; Alexey D. Muraviev, 'The Sea is Next', Australian National Security Review, 3 (1), January 2005, p. 5.
22. This assessment does not apply to Australian commercial activities in high risk zones (Persian Gulf, the Horn of Africa, Sri Lankan littoral).


Threat to Australia as the most serious security threat to the nation and its interests,23 has so far limited its maritime offensive operations to the Persian Gulf area. The OSINT-D concerning terrorist risks in Australia suggests that known local extremist groups and individuals have shown practically no interest in targeting maritime assets. In 2004, the Australian Security Intelligence Organisation (ASIO) carried out a comprehensive assessment of vulnerabilities and risks to the Australian maritime industry. Whilst launching these findings in April of that year, Australia's Attorney-General Philip Ruddock noted that ASIO has concluded that the threat to many aspects of Australia's shipping and port facilities is low or very low, although there are some areas that have been assessed as a medium threat.24 In addition, several government-driven initiatives bolstered security mechanisms aimed at offering better protection to both industries.25 However, the absence of such a threat today cannot provide immunity for tomorrow. The Kokoda Foundation report released in early 2007 clearly states that conventional terrorist attacks in Australia are nearly certain within the next few years.26 The maritime domain is critical to sustaining Australia's economic well-being. The bulk of trade comes to and leaves Australia by sea, with ports across the country acting as strategic gateways linking the nation with clients overseas. Such high dependency creates a principal strategic vulnerability, which sooner or later terrorists will attempt to exploit. By denying them an opportunity to do so through target hardening and appreciation of the new threat environment, the industry-government tandem not only strengthens national counter-terrorist defences but also offers extra insurance for multi-billion dollar investments in these critical sectors of the economy.
Adding to that are measures strengthening security regimes at the waterfront and on offshore platforms, more thorough personal identification practices (56,000 new Maritime Security Identification Cards),27 and the commitment to invest $1 billion over the next 12 years to bolster surveillance capabilities in the country's northern and north-western sectors, helping to offer better protection of the offshore installations in the area.28 Other measures will help to improve the overall security
23. Transnational Terrorism: The Threat to Australia, Canberra: Commonwealth of Australia, 2004, pp. 66-67.
24. 'Release of Maritime Threat Assessment: Australia's Shipping and Port Infrastructure', Australian National Security Review News Update, N 21, 30 April 2004, p. 1.
25. For details about Australia's responses to potential acts of maritime terrorism see Anthony Bergin, Sam Bateman, Future Unknown: The Terrorist Threat to Australian Maritime Security, The Australian Strategic Policy Institute, April 2005, pp. 47-69; Sparke, 'Securing Australia's Ports', pp. 22-25.
26. Leigh Funston, 'The Future for National Security in 2020', Australian National Security Magazine, April 2007, p. 6.
27. Ernie Davitt, 'Border and Maritime Security Get Sharper Teeth', Australian National Security Magazine, April 2007, p. 20.
28. Leigh Funston, 'Government Invests in Surveillance', Australian National Security Magazine, April 2007, p. 13.


climate, which enables the industry to tackle other challenges (cross-border organised crime, for example), thus creating a positive force-multiplier effect. The most urgent current task is to improve the security culture within operating bodies, which is key to better security practices in any organisation or enterprise. Despite the rarity of this form of terrorist activity, one could argue that we are entering the pre-operational phase of a future maritime 9/11. When it comes to analysing the dangers of maritime terrorism, the majority of security analysts agree that it is no longer a question of if, but rather of when and where. Terrorist activity at sea in the past seven years, including attempts to strike western naval and merchant assets, combined with growing reconnaissance activities, plans to obtain effective sea-based strike capabilities, and a few successful attacks, are indicators that more incidents could follow. Terrorism at sea is an unfortunate reality for many regions of the world and a potential risk to Australia; it must be confronted before any damage is done.

References
Abuza, Zachary. 2004. Terrorism in Southeast Asia: Keeping Al Qaeda at Bay. Terrorism Monitor 2(9): 4-6.
Bergin, Anthony & Bateman, Sam. 2005. Future Unknown: The Terrorist Threat to Australian Maritime Security. Canberra: The Australian Strategic Policy Institute.
Blanche, Ed. 2002. Terror Attacks Threaten Gulf's Oil Routes. Jane's Intelligence Review 14(12): 6-11.
Chalk, Peter. 2007. Training the Tigers. Jane's Intelligence Review 19(01): 24-28.
Chalk, Peter. 2007. Tigers Evolve. Jane's Intelligence Review 19(03): 15-19.
Combs, Cindy C. 2000. Terrorism in the Twenty-First Century (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall.
Daly, John C. K. 2003. Al Qaeda and Maritime Terrorism. Terrorism Monitor 1(4): 1-3.
Davitt, Ernie. 2007. Border and Maritime Security Get Sharper Teeth. Australian National Security Magazine. April: 18-21.
Funston, Leigh. 2007. The Future for National Security in 2020. Australian National Security Magazine. April: 6.
Funston, Leigh. 2007. Government Invests in Surveillance. Australian National Security Magazine. April: 13.
Koburger, Charles W. Jr. 1997. Sea Power in the Twenty-First Century. Westport: Praeger.
Krasnov, Vladimir. 1997. Flot i Natsionalnaya Bezopasnost Rossii [The Navy and Russia's National Security]. In Nauka i Bezopasnost Rossii. Moskva: Nauka: 498-515.
Lenta.Ru. At: www.lenta.ru/
Muraviev, Alexey D. 2005. The Sea is Next. Australian National Security Review 3(1): 5.
Murphy, Martin. 2006. Maritime Terrorism: the Threat in Context. Jane's Intelligence Review 18(02): 20-25.


Maritime Terrorism & Risks to the Australian Maritime & Resource Industries

Murphy, Martin. 2006. Maritime Threat: Tactics and Technology of the Sea Tigers. Jane's Intelligence Review 18(06): 6-10.
Rezyapov, N. & Simonov, A. 2004. Mezhdunarodny Terrorism na More: Istochniki, Sostoyanie, Protivodeistvie [International Terrorism at Sea: Sources, Current State, Countermeasures]. Morskoi Sbornik N 1: 67-74.
Richardson, Michael. 2004. A Time Bomb for Global Trade: Maritime-related Terrorism in an Age of Weapons of Mass Destruction. Maritime Studies. January/February: 1-8.
Release of Maritime Threat Assessment: Australia's Shipping and Port Infrastructure. 2004. Australian National Security Review News Update N 21 (30 April): 1-2.
Sinai, Joshua. 2004. Future Trends in Worldwide Maritime Terrorism. The Quarterly Journal 3(1): 49-66.
Sinetskiy, V. & Buzmakov, G. 1998. VMF i Bezopasnost Rossiiskogo Morskogo Sudokhodstva [The Navy and the Security of Russian Maritime Shipping]. Morskoi Sbornik N 6: 33-38.
Smith, Chris. 2005. Tamil Tigers Face Tough Choices in Wake of Tsunami. Jane's Intelligence Review 17(03): 36-39.
Sparke, Harley. 2007. Securing Australia's Ports. Australian National Security Magazine. April: 22-25.
Steinberg, Mark. 2002. Shakhidy Atakuiut s Morya [Shahids Attack from the Sea]. Nezavisimoe Voennoe Obozrenie N 21: 7.
Steinberg, Mark. 2005. Voenny Flot Ben Ladena [Bin Laden's Navy]. Nezavisimoe Voennoe Obozrenie N 1: 2.
The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States (Authorised edition). 2004. New York, London: W.W. Norton & Company.
The Terrorism Maritime Threat. 2003. The Washington Times, 2 December. At: http://209.157.64.200/focus/f-news/1032276/posts
Transnational Terrorism: The Threat to Australia. 2004. Canberra: Commonwealth of Australia.
RAND database. At: http://www.rand.org/ise/projects/terrorismdatabase.
Vallis, Rhyll, Yang, Yubin & Abbass, Hussein A. Disciplinary Approaches to Terrorism: A Survey. ALAR Technical Reports. At: http://www.itee.adfa.edu.au/~alar/techreps/200611015.pdf.


8
Polymeric Coatings for Enhanced Protection of Structures from the Explosive Effects of Blast
K. Ackland, C. Anderson and N. St John
Defence Science & Technology Organisation, Australia
Abstract
A number of new materials are becoming available that can reduce the severity of structural damage due to blast effects. Understanding how these materials mitigate blast is a key factor in developing and optimising their use and performance. This paper presents experimental results of blast testing conducted by DSTO on polymer-coated steel plates, as well as results from preliminary modelling using AUTODYN. It also discusses DSTO's future work plan objectives in this area.

Biographies
Kate Ackland graduated from Monash University in 2005 with a Bachelor of Science and a Bachelor of Engineering with Honours. She has been working at DSTO in the Maritime Platforms Division (MPD) since January 2006, specialising in blast protection and modelling.

Chris Anderson completed a Bachelor of Engineering in Mechanical Engineering with Honours in 1998 and went on to complete a Doctor of Philosophy degree in vibration control and Finite Element Method (FEM) modelling before joining DSTO in 2002. He worked on countermeasures to protect vehicle occupants from landmine attack in the Weapons Systems Division (WSD) until 2007. Chris now works in MPD on air blast and fragmentation protection of vehicles. He is currently the MPD National Security Task Leader.

Nigel St John completed a PhD in Polymer Chemistry in 1991 at the University of Queensland. He has worked in the field of materials science for 16 years at DSTO, focussing primarily on composite materials, and currently leads research in advanced materials and fabrication in the Maritime Platforms Division of DSTO.


Introduction and background


The threat of terrorism in recent years has prompted research into the use of lightweight materials to enhance the blast resistance of vehicles and structures. Various government and commercial organisations around the world are investigating the application of polymeric coatings, as they show great potential to enhance blast protection and are also cost-effective and time-efficient to apply compared with other methods of enhancing blast resistance. Many of these products can be applied relatively easily in the field to existing vehicles and structures, making them well suited to battlefield protection upgrades.

Since 1995, the Air Force Research Laboratory (AFRL) in the US has been conducting extensive research into developing lightweight, expedient methods of strengthening structures against blast loading (Davidson et al. 2004). Initially a range of stiff composite materials was investigated, although these were deemed a poor choice for widespread use because of factors such as cost and the difficulty of efficiently applying such reinforcements to existing structures. In 1999 AFRL began experimenting with a commercially available spray-on truck bed liner (Knox et al. 2000) and found that the polymer coating overcame many of the issues associated with stiff composite materials, and reduced blast damage behind non-reinforced concrete masonry walls while retaining any debris. Subsequent research led to a total of twenty-one off-the-shelf prospective polymers being evaluated to determine the most suitable for further testing. A pure polyurea was selected as the material of choice based on factors such as strength, flammability and cost. Full-scale tests later confirmed that polyurea coatings can be effective in reducing the vulnerability of structures subjected to blast loading (Davidson et al. 2005).

Although it has been found that polymeric coatings can be effective in reducing blast damage, little is known about the mechanisms by which this is achieved.
Yi et al. (2005) conducted testing to see how polyurea, as well as three other polyurethanes, behaved under high strain-rate loading conditions similar to those experienced during blast loading. They found that the materials showed strong hysteresis, cyclic softening, and strong strain-rate dependence. It has also been found that the application process affects the mechanical behaviour of the material. Chakkarapani et al. (2006) compared the behaviour of two polyureas under monotonic loading, one processed by casting in moulds and one produced by a spray-on process. They found significant differences between the two, indicating that the polyurea application process could affect the amount of blast protection enhancement that it provides. It is believed that the spray process induces fine-scale porosity, giving greater compressibility and inelasticity compared with the cast polyurea.

Since the terrorist attack on the USS Cole (Figure 1) and attacks on military vehicles deployed on operations overseas, the Defence Science and Technology Organisation (DSTO) has been interested in how lightweight materials can be used to enhance the blast resistance of Defence platforms. This paper presents experimental results of

blast testing conducted by DSTO on steel plates coated with cast polyurea, as well as results from preliminary modelling using AUTODYN (2005).

Figure 1: Damage to USS Cole after a terrorist boat filled with explosives was driven into the side of the ship. [Source: US Navy]

Experimental testing
Experimental testing was conducted by DSTO at the Army's Proof and Experimental Establishment, a military range at Graytown in Victoria. The main aim of the testing was to validate AUTODYN finite element models. The charge size and standoff were chosen to generate significant plate deformation without complete rupture, since complete plate rupture would create a discontinuity in the deflection data and cause difficulties in quantifying the deformation when comparing results from subsequent tests.

Test set-up

The test set-up is shown in Figure 2. Bare steel plates were tested, as well as plates coated with two thicknesses of polyurea coating on the back surface of the plate. The explosive charge was a 0.5 kg pentolite sphere, positioned at a 61 mm standoff measured from the surface of the plate to the centre of the charge. The steel used for testing was D36 ship steel, with mechanical properties listed in Table 1.


Property                     Value
Minimum Yield Strength       355 MPa
Typical Yield Strength       380 MPa
Ultimate Tensile Strength    480 MPa

Table 1: Mechanical Properties of D36 steel.

The polyurea material used was an in-house formulation and was cast directly onto the steel plates, which had previously been grit blasted and primed. The coating was not applied in the region where the plates were bolted to the support frame.
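Although the paper does not report it, close-range blast trials such as this one are commonly characterised by the Hopkinson-Cranz scaled distance Z = R / W^(1/3). A minimal sketch for the test geometry above; the pentolite-to-TNT equivalence factor of 1.13 is an assumed, illustrative value, not taken from the paper:

```python
# Hopkinson-Cranz scaled distance Z = R / W^(1/3) for the test geometry
# described above. The TNT equivalence factor for pentolite (~1.13) is an
# assumption for illustration, not a value stated in the paper.

def scaled_distance(standoff_m, charge_kg, tnt_equivalence=1.0):
    """Return the scaled distance Z in m/kg^(1/3)."""
    w_tnt = charge_kg * tnt_equivalence
    return standoff_m / w_tnt ** (1.0 / 3.0)

# 0.5 kg pentolite sphere at 61 mm standoff (charge centre to plate surface)
z = scaled_distance(0.061, 0.5, tnt_equivalence=1.13)
print(round(z, 4))  # -> 0.0738, a very small Z, i.e. a severe close-range loading
```

The very small Z value is consistent with the stated aim of producing large plastic deformation without complete rupture.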

Figure 2: Test set-up for blast testing.

Experimental results

The bare plate deformed the most, with a permanent deformation of 143 mm at the centre point. The plate with the thin polyurea coating had a permanent deformation of 134 mm, and the plate with the thick coating deformed the least, with a permanent deformation of 109 mm.

Finite element modelling


A 3D finite element model of the plate and explosive was constructed using the explicit non-linear dynamics program AUTODYN. Due to the symmetry of the set-up, a quarter model of the plate could be used to reduce the calculation time required. The air surrounding the plate and the charge was modelled using an

Eulerian grid. A spherical area of the grid was filled with pentolite to represent the explosive charge. The plate was modelled using shell elements, and for the plates with polymer coatings, the polymer was modelled using Lagrange elements joined to the plate. The boundaries of the plate were fixed and material was allowed to flow out of the Eulerian grid. The model was run until the blast loading of the plate had completed, at which stage the air was removed, and the plate was allowed to deform accordingly. The Eulerian grid and the deformation of the plate are shown in Figure 3.

Figure 3: AUTODYN model showing the Eulerian grid filled with air (containing the pentolite charge and the steel plate with its polymer coating) on the left, and the deformed plate with the Eulerian grid removed on the right.

Material Models

The air and pentolite material models were available directly through the AUTODYN material library. The steel plate was modelled using Steel 4340 from the AUTODYN material library, modified so that its yield stress was equal to 355 MPa. The Johnson-Cook strength model was used without a failure algorithm because the material was not tested to the point of failure. The polymer was modelled based on the work of Davidson et al. (2005).

Results

Table 2 compares the permanent deflections predicted using AUTODYN and the experimental results.


Plate Material                          Measured permanent   AUTODYN prediction of
                                        deformation (mm)     permanent deformation (mm)
Bare D36 steel                          143                  145
D36 steel with thin polymer coating     134                  126
D36 steel with thick polymer coating    109                  109

Table 2: Comparison of Experimental and Modelling Results.

An approximate comparison of the deformation contours has been made by copying the AUTODYN grid onto photographs of the plates, shown in Figures 4 to 6. The deformation contours of the models are consistent with the experimental results; however, the AUTODYN contours deviate from the plate surface near the edges of the plate, indicating that the boundary conditions require further refinement.
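As a quick check on the level of agreement reported in Table 2, the relative error of each prediction can be computed from the tabulated values (a trivial sketch, not part of the original analysis):

```python
# Relative error of the AUTODYN predictions against the measured permanent
# deformations, using the (measured, predicted) values from Table 2 above.
cases = {
    "bare":  (143, 145),
    "thin":  (134, 126),
    "thick": (109, 109),
}

for name, (measured, predicted) in cases.items():
    err = 100.0 * (predicted - measured) / measured
    print(f"{name}: {err:+.1f}%")  # bare: +1.4%, thin: -6.0%, thick: +0.0%
```

The thin-coating case shows the largest discrepancy, which is in line with the authors' remark that the boundary conditions and material properties require further refinement.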

Figure 4: Bare D36 plate with AUTODYN grid overlay.

Figure 5: D36 plate with thin coating with AUTODYN grid overlay.

Figure 6: D36 plate with thick coating with AUTODYN grid overlay.
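For reference, the Johnson-Cook strength model used for the steel plate in the simulations above has, in its standard form, the flow stress

```latex
% Johnson-Cook flow stress (standard form). Per the text, the AUTODYN
% Steel 4340 library parameters were used with A reset to 355 MPa.
\sigma = \left( A + B \,\varepsilon_p^{\,n} \right)
         \left( 1 + C \ln \dot{\varepsilon}^{*} \right)
         \left( 1 - T^{*m} \right)
```

where ε_p is the equivalent plastic strain, ε̇* the dimensionless plastic strain rate, T* the homologous temperature, A the quasi-static yield stress (here set to 355 MPa), and B, n, C and m the strain-hardening, rate-sensitivity and thermal-softening parameters.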

Concluding Remarks and Further Research


The application of polymeric coatings has been shown to enhance the blast resistance of steel plates. The preliminary numerical modelling results are reasonably accurate considering that only quasi-static material properties were used. However, further work needs to be undertaken to validate the modelling procedures over a wider range of experimental conditions. Model refinement in the areas of material properties (particularly high strain-rate properties), mesh size, boundary conditions and contact algorithms is planned. Failure models for the backing plate and coating will also be included in future modelling work. While only one type of polymer coating was tested here, other coating materials will be tested in late 2007. It is hoped that the additional experimental tests will provide sufficient results to develop more robust finite element models that better predict the blast protection increase provided by polymer coatings. This may then ultimately allow us to determine which polymer properties should be optimised to enhance blast resistance.

Acknowledgements

The authors wish to thank the following DSTO staff for their valuable contributions to this research program: Steve Pattie, Frank Griffo, Torie Thorn, Darren Wiese, Frank Marian, Chris Townsend, Mike Buckland, Steve Cimpoeru, Jeff Heath, Stefan Danek, and Lindsay Wake. Martin Jones of Tenix Defence is also thanked for kindly supplying the D36 plate for this study. The helpful assistance of the Joint Proof and Experimental Unit (JPEU) Graytown is also gratefully acknowledged.

References
AUTODYN. 2005. Version 6.1.00. Century Dynamics Inc: CA, USA.
Chakkarapani, V., Ravi-Chandar, K. & Liechti, K.M. 2006. Characterization of Multiaxial Constitutive Properties of Rubbery Polymers. Journal of Engineering Materials and Technology 128: 489-494.
Davidson, J.S., Fisher, J.W., Hammons, M.I., Porter, J.R. & Dinan, R.J. 2005. Failure Mechanisms of Polymer-Reinforced Concrete Masonry Walls Subjected to Blast. Journal of Structural Engineering: 1194-1205.
Davidson, J.S., Porter, J.R., Dinan, R.J., Hammons, M.I. & Connell, J.D. 2004. Explosive Testing of Polymer Retrofit Masonry Walls. Journal of Performance of Constructed Facilities: 100-106.
Knox, K.J., Hammons, M.I., Lewis, T.T. & Porter, J.R. 2000. Polymer Materials for Structural Retrofit. Force Protection Branch, Air Expeditionary Forces Technology Division, Air Force Research Laboratory, Tyndall AFB, Florida.
Yi, J., Boyce, M.C., Lee, G.F. & Balizer, E. 2005. Large deformation rate-dependent stress-strain behavior of polyurea and polyurethanes. Polymer 47: 319-329.


9
EMANZE: A comparative Study on Emergency Management in Australia, New Zealand and Europe
Andreas Meissner
Fraunhofer Institute for Information and Data Processing (IITB), Karlsruhe, Germany
Abstract
The 2006 EMANZE international survey study looked into Emergency Management in Australia, New Zealand and Europe, with Germany as the focus country in Europe. This paper reports and compares the findings obtained from expert interviews carried out in these countries, where emergency management officials at various levels were asked about responsibilities, technology use, perceived emergency preparedness, opinions on current issues, the handling of certain emergency scenarios, and related matters. The study suggests that, while recent technology advances have greatly improved the organizations' ability to cope with emergencies, there is an unexploited potential for contributions by the academic community.

Biography
Andreas Meissner holds a German master's degree in computer science and business administration, a U.S. MBA degree, and a German PhD in engineering. He works at Germany's Fraunhofer Gesellschaft as a researcher and manager, focusing on the area of ICT support for emergency management. In 2006, he was a guest scientist at National ICT Australia's lab in Brisbane, from where the major part of the EMANZE field work was carried out.


1 Introduction
Large-scale emergency events have the potential to put countries and states to the test, and authorities have to meet their responsibility to mitigate, prepare for, respond to, and recover from such events in an adequate manner. The approaches observed worldwide vary greatly in terms of technology use and organizational setup, with sometimes limited awareness among authorities of what issues their peers are dealing with, and a lack of data on where the academic community might help fill gaps with new technology. Thus, in 2006, Germany's Fraunhofer Gesellschaft and National ICT Australia funded the EMANZE survey study, looking into Emergency Management in Australia, New Zealand, and selected European countries. In the course of this study, experts and government representatives, mostly at federal and state levels, were interviewed and asked about their personal work and emergency management responsibilities, their organization's role and ICT use, their opinions on current issues in emergency management, and their handling of certain emergency scenarios.

This paper reports on the findings for Australia, New Zealand and Germany, where interviews were carried out at both national and state (or comparable) level. It is organized as follows. Following this introduction, the methodology is discussed in section 2. Sections 3, 4, 5 and 6 mirror the EMANZE questionnaire structure and report on findings regarding personal and organizational matters, interviewees' opinions, the handling of emergency events, and publicly debated issues, respectively. Where significant differences were detected between the countries and states, they are highlighted. Finally, section 7 concludes and gives an outlook.

2 Methodology
Structured interviews with experts are the basis for the EMANZE findings. Experts were selected based on individual contacts with the target organization, with the objective of identifying similar functions (generally including at least preparedness and response duties) in all countries and states covered, despite the different administrative and organizational setups encountered. An 8-page questionnaire was developed and, after its validation in pilot interviews, used with the same set of questions in all countries; a translation was offered to interviewees in Germany. The questionnaire was administered in personal interviews carried out by the author at the respective interviewee's office. The questions, which were not disclosed to the interviewee beforehand, were grouped in five sections covered sequentially:

(1) Questions on their personal responsibilities and work
(2) Questions on their organization
(3) Questions on their opinions on current issues in emergency management
(4) Questions on their response phase handling of certain natural and man-made emergencies
(5) General open questions

The questionnaire had pre-defined decision points: questions rendered irrelevant by previous answers were skipped (e.g., those who said they had no volunteers were not asked to judge their importance); in particular, entire sections in part (4) had to be skipped if an individual interviewee did not identify a role for his/her organization in the response effort to the respective event. Moreover, for certain interviewees, such as academics without any duty in emergency management, or experts (three) who opted to give background information only, the questionnaire was generally not administered beyond part (1); instead the interview was used to obtain qualitative results, e.g. on research agendas, the organizational background or indigenous affairs. Such interviews are not counted in this paper. The completion of an interview usually took between 45 and 150 minutes.

In interviews outside Australia, the term 'state' was translated to 'Civil Defense and Emergency Management Group' (CDEMG) for New Zealand, and 'Bundesland' for Germany. In this paper, 'state' is used generically for brevity. Due to the limited number of interviews and the fact that it was impossible to select experts randomly, no claims are made on the data's fitness for generalization (especially at the local level, which is therefore stratified only very carefully), but the EMANZE findings do yield an unbiased insight into the emergency management sectors of the countries researched.
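The branching behaviour of the questionnaire's pre-defined decision points can be sketched as a predicate-guarded question list; the question texts and keys below are hypothetical, not taken from the EMANZE instrument:

```python
# Illustrative sketch of questionnaire skip logic: each question carries a
# predicate over earlier answers, and questions whose predicate fails are
# skipped. Question keys and wording are made up for illustration.

def run_questionnaire(questions, ask):
    """questions: list of (key, text, relevant); ask: (key, text) -> answer."""
    answers = {}
    for key, text, relevant in questions:
        if relevant(answers):
            answers[key] = ask(key, text)
    return answers

questions = [
    ("has_volunteers", "Does your organization work with volunteers?",
     lambda a: True),
    ("volunteer_importance", "How important are volunteers to your organization?",
     lambda a: a.get("has_volunteers") == "yes"),
]

# An interviewee without volunteers never sees the follow-up question.
answers = run_questionnaire(questions, lambda key, text: "no")
print(answers)  # -> {'has_volunteers': 'no'}
```

The same mechanism accommodates the larger skips described above, e.g. dropping an entire part (4) section when an organization has no role in the response to that event.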

3 Personal and Organizational Responsibilities and Work


In Australia, the federal level and all states were covered by EMANZE field work. Interviews were carried out at the following organizations, mostly at director or commissioner level:

Federal: Emergency Management Australia; Protective Security Coordination Center
Queensland (QLD): Emergency Management Queensland (EMQ); EMQ Far Northern Region, along with six additional state and local government organizations
New South Wales (NSW): NSW Office for Emergency Services; NSW Police - Sydney
Western Australia (WA): Fire and Emergency Services Authority of Western Australia (FESA); WA Police; Royal Flying Doctor Service of Australia (RFDS) - Western Operations; Coonana Aboriginal Community Council; FESA Goldfields Region
South Australia (SA): SA State Emergency Services
Victoria (VIC): Victoria Department of Justice / Emergency Services Commissioner
Tasmania (TAS): Tasmania Department of Police and Emergency Management
Northern Territory (NT): Northern Territory Emergency Service (NTPFES); Maningrida Aboriginal Community Council; NTPFES Southern Region; RFDS - Central Operations

(Note: Given its special role, the RFDS is categorized as neither federal, nor state, nor local, but 'other'.)


Due to time and travel constraints, the state (or equivalent) level was only partially visited in the other countries covered by EMANZE. In New Zealand, apart from the national level, five out of 16 CDEMGs were visited, along with an in-depth look at the Rotorua region:

National: Ministry of Civil Defense and Emergency Management; NZ Fire Service
Bay Waikato Rotorua Napier: NZ Fire Services and Department of Conservation
Auckland, Bay of Plenty, Wellington, Canterbury, Otago CDEMGs

In Germany, the federal level and seven out of 16 states were covered:

Federal: Federal Office for Civil Protection and Disaster Assistance (BBK)
Berlin, Sachsen, Nordrhein-Westfalen, Schleswig-Holstein, Bayern, Hessen, Bremen: State Ministries of Interior Affairs

Overall, most interviewees indicated that their personal responsibilities include emergency preparedness and response work, while involvement in mitigation and recovery was less frequently mentioned. Apart from those in organizations with local scope, most interviewees described their work as strategic. In general, the responsibilities of the organizations or departments they represent are a superset of their personal responsibilities. Table 1 gives the details.

                                 AUS           NZ           GER
Personal Responsibility
  Mitigation                     20 (2/7/10)   10 (2/5/3)   5 (1/4/-)
  Preparedness                   22 (2/7/11)   10 (2/5/3)   8 (1/7/-)
  Response                       25 (2/8/12)   10 (2/5/3)   8 (1/7/-)
  Recovery                       18 (0/5/12)    8 (1/4/3)   4 (1/3/-)
Personal Type of Work
  Strategic                      15 (2/6/6)     9 (2/4/3)   7 (1/6/-)
  Tactical                       11 (0/3/8)     7 (1/4/2)   1 (0/1/-)
  Operational                    18 (2/4/9)     6 (1/3/2)   2 (0/2/-)
  Support                        10 (1/1/8)     5 (1/2/2)   2 (0/2/-)
Organization's Responsibility
  Mitigation                     25 (2/8/12)   10 (2/5/3)   5 (1/4/-)
  Preparedness                   25 (2/8/12)   10 (2/5/3)   8 (1/7/-)
  Response                       25 (2/8/12)   10 (2/5/3)   8 (1/7/-)
  Recovery                       22 (1/7/12)    8 (1/4/3)   5 (1/4/-)
Total Number of Interviewees     25 (2/8/12)   10 (2/5/3)   8 (1/7/-)

Table 1. Personal and Organizational Responsibilities: total (fed/state/local); 'other' omitted


Detailing their response duties, most interviewees suggested they had a coordinating role, assuring the availability of resources and facilitating the execution of the frontline response. Preparedness work often involved planning, training of staff, and getting communities aware of and ready for potential emergencies.

In order to understand the nature of an organization's peacetime relationships, it was asked which partners the interviewee's organization interfaced with in non-response mode. These were, overall, mostly local (18), state (26), or federal (11) governments, and emergency services including police, fire, and ambulance services (39). Less common answers included industry, lifeline organizations and the military. Most organizations had a cross-hierarchy interfacing pattern comprised of state and local governments and emergency services, with no significant preference for same-level contacts.

More than half (33) of the interviewees suggested that their organization was involved in multi-state emergencies, and 23 were even dealing with international emergencies, including 6 Australian state emergency management organizations. No New Zealand CDEM Group indicated an involvement in international operations; however, at national and, surprisingly, at local/regional level, NZ was in fact helping with such events, e.g. in the South Pacific. In Germany, all states dealt with international emergencies, and some German interviewees identified only foreign events when asked about emergencies their organization had been involved in since 2001. Apart from the Indian Ocean Tsunami, a major flood in Germany in 2002, and Northern QLD's Cyclone Larry in 2006, there were no major common events quoted by a large number of interviewees. Findings on how Queensland agencies collaborated during and after the latter event were reported in (Meissner 2006).

A particular interest addressed in EMANZE was the use of critical infrastructures and ICT, i.e. information and communication technology, for emergency response. Addressing the former, interviewees were asked what critical infrastructures their organization relied upon for carrying out its response operations; the question did not address the more general issue of infrastructure critical to the public or to industry. Table 2 shows what infrastructures were considered critical by the Australian, New Zealand and German organizations' interviewees.
                                  AUS   NZ   GER
Power                              12    5    4
Transport                          16    8    2
Water                               2    2    0
Communication / Telecom            23    8    6
Fuel / Chemical Supply              0    1    1
Crisis Management Center            6    2    4
Fire Equipment                      1    0    0
Hospitals                           3    0    0
Computing / IT Facilities           1    0    2
Lifeline Organizations (other)      3    1    0

Table 2. Critical Infrastructures
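The counts in Table 2 can be tallied programmatically, for instance to find the most frequently cited infrastructure per country (a trivial sketch using the table's values; ties are resolved by table order):

```python
# Most frequently cited critical infrastructure per country, using the
# (AUS, NZ, GER) counts from Table 2 above. Ties fall to the row that
# appears first in the table.
TABLE2 = {
    "Power": (12, 5, 4),
    "Transport": (16, 8, 2),
    "Water": (2, 2, 0),
    "Communication / Telecom": (23, 8, 6),
    "Fuel / Chemical Supply": (0, 1, 1),
    "Crisis Management Center": (6, 2, 4),
    "Fire Equipment": (1, 0, 0),
    "Hospitals": (3, 0, 0),
    "Computing / IT Facilities": (1, 0, 2),
    "Lifeline Organizations (other)": (3, 1, 0),
}
COUNTRIES = ("AUS", "NZ", "GER")

for i, country in enumerate(COUNTRIES):
    top = max(TABLE2, key=lambda name: TABLE2[name][i])
    print(country, top, TABLE2[top][i])
```

For Australia and Germany, communications and telecoms clearly dominate; in New Zealand, transport and communications tie at 8 mentions each.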


Interviewees were asked if and how they ensured the availability of these infrastructures, or if they had fallback solutions at hand. In most cases such arrangements were readily described, e.g. satellite phones or power generators. In the remaining cases, others were identified as responsible for infrastructure resilience, e.g. flood-proof bridge planning or road maintenance by other government departments.

Another key question asked about ICT (going beyond regular office PCs or personal mobile devices) the organization had in use either at headquarters or in the field. Apart from NSW and WA, Australian states did not report such ICT to be used at headquarters; however, in Queensland an IT system appears to be used at the regional level. In New Zealand, 3 out of 5 groups use ICT systems, which, as reported in (Meissner 2006), tend to suffer from incompatibility with each other. In stark contrast, all German states as well as the federal government have deployed some kind of special IT application for their emergency management work; in many cases this is the deNIS-IIplus system (Pro DV Software AG 2007), recently contracted by the federal government as a national standard. deNIS-IIplus, which stands for 'German emergency information system', has been adopted by some states, which now feed their data into the common database for national use. Other German states, however, still prefer their own traditional IT solutions. Overall, mobile ICT systems for use in the field are used more rarely, with very few Australian and approximately half of the German and NZ interviewees identifying such a system, mostly a laptop with a mobile data link deployed in command vehicles.

Some interviewees used the opportunity to sketch a wish list of additional, commercially available ICT they would like to have at their organization.
Answers included crisis information management software, satellite imagery systems, geographical information systems, connected spatial information systems, an electronic emergency operations center application with a notification module, and a flight tracking system (for the RFDS). Not surprisingly, the German federal government would like to see its deNIS-IIplus system adopted by all states. For mobile use in the field, computers in fire trucks for mapping, digital data radio, video feeds from the emergency site, and cell broadcast systems for warning the public were placed on the wish list.

Going beyond commercially available systems, interviewees were also asked if they had 'homework' for academia, with a distinction between technological and other demands. The large variety of suggestions for the former includes: an online information system in the field, managing live video with an analysis of what is going on, delivery of warnings to the public, 100% reliability of communication and computer systems for 24/7 availability, a holistic communication system that works anytime and anywhere, interconnectivity of systems, interoperability across levels, satellite imagery systems with daily updates for spotting fires, a system for tasking response groups in the field, monitoring them and recording what they are doing, and rapid detection of biohazards. Non-technological suggestions included: developing ways to interface with a community and learn about its needs and expectations, addressing information overload and the tendency to deviate from the big picture to small details, investigating the factors that drive home-grown terrorism, and coming up with

schemes to encourage people to volunteer their time for emergency services, as well as community engagement projects looking at the issue of indigenous multi-cultural inter-relationships and engagement.

Many interviewees had difficulties with the distinction between commercially available systems (which could readily be acquired if funding was available) and demands so advanced that additional research effort by the academic community was required. A majority of Australian and NZ interviewees mentioned strong ties (ranging from hiring graduates to teaching classes or working on joint projects) between their organizations and academic institutions, predominantly in their state. This observation was most obvious at the Australian (6 out of 8) and NZ (4 out of 5) state level, whereas at the Australian local level, about half of the organizations had no close ties to academia. Only half of the German interviewees identified such ties; instead, a German state interviewee suggested that the federal government establish a center where research results could be obtained, facilitating contacts to academia.

4 Opinions on Current Issues in Emergency Management


In this part, interviewees were asked to give their opinion on a number of current issues in emergency management. Generally, when confronted with questionnaire statements, they could either strongly agree, agree, be neutral, disagree, or strongly disagree. Among those interviewees whose organizations work with volunteers, many feel that their organization cannot function without volunteers. In Australia, 9 out of 15 such interviewees at all levels agree or strongly agree with this statement; in New Zealand, 5 out of 6 strongly agree (including CDEM Groups), whereas in Germany, no interviewee had volunteers in his/her organization at all, which is no surprise, as only state or federal ministries were visited, but no frontline response organizations such as the fire service.

A number of questions addressed emergency categories considered likely by the interviewee for his/her country and state, respectively, and the degree of preparedness of the country or state for this type of emergency. Figure 1a shows the natural disasters considered most likely at national level for Australia and New Zealand, respectively. Australia is perceived to be well prepared for bush fires, with just two interviewees being only neutral towards the statement 'my country is well prepared for this'. Similarly, for storms and cyclones, all but one agree or strongly agree with the respective statement. For New Zealand's dominant natural hazard, flooding, 'agree' is the median answer. In Germany, all interviewees identify flooding as the most likely hazard at national level, and again 'agree' is the median.

Looking at the state level, in Australia the hazards identified as most likely come as no surprise: for example, all NT and most WA and QLD interviewees (13 in total) consider storms or cyclones the main hazard, with a slightly better perception of their respective state's preparedness compared to the perceived countrywide preparedness, but still giving 'agree' as the median answer.
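The notion of a 'median answer' used throughout this section can be made concrete with a short sketch. The responses below are hypothetical examples, not EMANZE data; the five scale levels are mapped to ranks and the lower median is reported as a label:

```python
# Sketch: computing the median answer on the five-level Likert scale.
# The example responses are hypothetical, not actual EMANZE results.
from statistics import median_low

SCALE = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def median_answer(responses):
    """Map ordinal answers to ranks, return the lower-median scale label."""
    ranks = sorted(SCALE.index(r) for r in responses)
    return SCALE[median_low(ranks)]

print(median_answer(["agree", "agree", "neutral", "strongly agree", "agree"]))  # agree
```

Taking the lower median keeps the result on the ordinal scale even for an even number of responses.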
In New Zealand, the most striking difference between the natural hazard perceived as most likely for the country and for the region is observed in Rotorua, which is prone to earthquake and volcanic events. Here, these are quoted as most likely, with a fair degree of perceived preparedness. In Germany, state preparedness for flooding is about the same as countrywide preparedness.

[Figure 1a: pie chart data. Plant Disease 1; Pandemic 2; Drought 1; Earthquake/Volcano 0; Flood 2; Fire 10; Greenhouse Effect 0; Earthquake/Volcano 1; Storm/Cyclone 9; Flood 9.]

Figure 1a. Likely Natural Disasters in Australia and New Zealand


[Figure 1b: pie chart data. Accident 13; Terrorism 9; Collapse 1.]

Figure 1b. Likely Man-made Disasters in Australia

Looking at man-made disasters, accidents (of any type, including e.g. plane crashes and industrial accidents) are considered the most likely event in Australia (13), New Zealand (6) and Germany (5), with terrorist attacks scoring a distant second place in Germany (2) and New Zealand (1), whereas 9 Australian interviewees ranked them top; interestingly, these interviewees mostly had responsibilities at regional or local level. Figure 1b illustrates the answers obtained. In no country was terrorism quoted as most likely by interviewees at the federal level. In Australia, when confronted with the statement 'my country is well prepared for this', the median answer for accidents was 'agree'; for terrorist attacks it was 'neutral'. In New Zealand, the median answer for accidents is only 'neutral'; among German interviewees, the perceived preparedness for accidents is slightly better than for terrorist attacks.

The 9/11 terrorist attacks in the United States have triggered a re-evaluation of security and emergency management procedures all over the world. In order to determine if the interviewees' organizations have been affected by this development, and if their business was now back in continuous mode, they were confronted with a series of statements and asked for comments (again on the five-level scale strongly agree to strongly disagree):

(A) Since 2001, a major reorganization has taken place at my organization
(B) My organization now has a stable legal basis for its work
(C) My organization has a clear definition of its responsibilities
(D) My organization has clearly documented procedures for handling emergency response operations
(E) My organization has the funds available for the work it considers essential

Figure 2 shows how, overall, interviewees in the three countries reacted to these statements. In Australia and New Zealand, major reorganization efforts appear to have taken place (in New Zealand, in fact, a new CDEM national plan was introduced, becoming effective shortly before the interviews), but most interviewees' organizations have apparently reached a fairly stable operation mode by now.

[Figure 2: three bar charts (Australia, New Zealand, Germany) showing the distribution of answers, from strongly agree to strongly disagree, for statements (A) through (E).]

Figure 2. Reorganization, Legal Basis, Definition of Responsibilities, Procedures, Funding

Funding is not a major concern in New Zealand; however, in Germany, and, even more evidently, in Australia, organizations tend to lack funding for the work they consider essential. Stratifying the results to federal, state and local levels, question (A) yields a common picture for all countries, with agreement or strong agreement dominating the answers at all levels; two German state interviewees (Schleswig-Holstein and Bavaria), however, strongly disagreed. For question (B) on the stability of the legal basis, there are remarkable differences: in Australia and Germany, there is strong disagreement at the federal level, but nearly all (7 out of 8 and 7 out of 7, respectively) state interviewees agree or strongly agree. In NZ, all levels tend to, at least, agree. A clear definition of responsibilities (C) seems to prevail in Australia and NZ at all levels and in Germany's states, whereas the German federal level disagrees, which may have to be seen in the context of the federal level's constitutional responsibility for high-level coordination and partial funding for most emergency events that are otherwise handled under strict state-level lead.

Stratifying (D) responses, in all countries the federal level obviously has no lack of documentation; however, in Australia, three states (QLD, NSW, WA) appear to have issues according to the respective interviewees, who disagreed. In QLD, state representatives are particularly concerned about funding, giving the only two strongly disagree answers to question (E); similarly, both NT interviewees disagree with the adequacy of funding levels. Among the interviewees representing the RFDS, which is partly dependent on donations, there is a neutral position or disagreement.

As mentioned earlier, it is an obvious fact that the organizational structure of the emergency management sector varies greatly from country to country.
While it is beyond the scope of EMANZE to make a detailed assessment of the hierarchical setup, inter-organizational relationships, and formal or informal command and coordination chains, it was decided to ask interviewees whether, for handling major emergency response operations, they preferred a system that gives maximum responsibility to the federal level, the state level, the regional level, or the local level. They were also asked to motivate their answer. As Table 3 suggests, there was some preference for managing such events at an intermediate level, i.e. assigning maximum responsibility to the state or the region. The federal level was suggested only once, and the local level is a favorite only among Australian local interviewees. Interestingly, looking at the motivations interviewees gave for their answers, the availability of local knowledge was the most frequently cited reason, even among those who suggested concentrating the effort at state level. Other motivations given were:

for state level: they see the bigger picture
for regional level: locals don't have resources, state/fed are too far away
for local level: responsibility should be assigned to the lowest level
                  AUS   NZ   GER
Federal Level      1     1    0
State Level        9     4    2
Regional Level     6     2    5
Local Level        7     3    1

Table 3. Levels for Managing Response



EMANZE: A comparative Study on Emergency Management in Australia, New Zealand and Europe

In one of the most delicate questions, interviewees were asked to comment on the statement 'In my country, emergency management is equally well organized in all states'. Table 4 shows the answers, along with those states that were most frequently identified as serving as a good example. A number of interviewees refused to identify such states or skipped this question entirely. For Germany, where the perceived heterogeneity is particularly large, no clear picture on good states evolved from the answers. In Australia, by contrast, QLD, NSW and VIC were clear winners.
                           AUS             NZ                    GER
strongly agree              1               0                     0
agree                       8               2                     2
neutral                     3               0                     1
disagree                    5               4                     5
strongly disagree           2               3                     0
States/CDEMGs as example    QLD, NSW, VIC   Auckland, Canterbury  ---

Table 4. Emergency Management equally well organized in all States?

Reasons given by interviewees for their selection include:

QLD: high level of horizontal cooperation between emergency services, good risk management, strong population base, has the resources required, good level of training
NSW: good vertical integration of disaster services, good emergency management structure, control at the lowest effective level, large population, cities, funding base
VIC: good 'whole-of-government' approach, good business continuity preparations for the government during emergencies, strong population base, has the resources required, good level of training, well funded emergency service function, well funded volunteers, good equipment, good HQs

Being funded by two technology-oriented research organizations, EMANZE attempted to identify the relevance of technology advances in the emergency management domain. Interviewees were asked to comment on the statement 'In the past 10 years, technology has improved our ability to cope with emergencies'. Across all countries, there was mostly strong agreement (21) or agreement (16) among the interviewees (43 in total for this question). Unless they strongly disagreed (which did not occur), they were additionally asked to name up to three such technological breakthroughs. There was no pre-categorization limiting their answers, so the following list of the most frequently given answers represents a post-categorization: Mobile Communication (19), Communication Systems (general) (14), Internet (13), Geographic Information Systems GIS (9), Information Management Systems and Databases (8), Computer Capabilities (6), and Sensor Data Availability (5). Other answers include: Secure Communication, IT Security, Communication to the Public, and High-Tech Fire Service Vehicles. Thus, advances in ICT have significantly contributed to an improved ability of the organizations to cope with emergencies. The opposite statement presented for comment, i.e. 'In the past ten years, we have adopted technology that has reduced our ability to cope with emergencies', triggered disagreement or strong disagreement from all but two interviewees (shared fire/police radio channels in parts of New Zealand appear to be a problem). Obviously, technology has not had a negative impact.

5 Handling of Emergency Scenarios


In this part, interviewees were asked to answer a series of questions on how their organization handled, if applicable, three common emergency scenarios, namely:

- a day-to-day emergency event (such as a vehicle accident with injuries and fire)
- a major natural disaster (with a choice of a cyclone hitting a coastal area, a widespread bush fire, a strong earthquake, or a major flood), making a later distinction between the event occurring in a metropolitan or a remote area
- a major man-made disaster such as a bomb attack on a public transportation system.

The same set of questions was asked for all scenarios, provided that the interviewee's organization had any role in the response phase of the respective scenario. For lack of space, only findings on the natural disaster occurring in a metropolitan area are discussed here. First, interviewees selected one of the natural disaster varieties mentioned above, forming a basis for the remaining questions. In Australia, 14 out of 23 picked cyclone (8 picked bush fire, 1 NT interviewee selected quake). In New Zealand, quake was the most frequently named variety (5 out of 10), and in Germany, all 7 interviewees who answered this questionnaire part selected flood. In the remainder of this paper section, only replies from those interviewees who selected the respective top-ranking disaster are considered. (In Australia, the number was further reduced to 13, as one interviewee, who manages a remote NT aboriginal community, was obviously unable to answer questions on this event occurring in a metropolitan area.) 10 of the Australian organizations have a detailed action plan for the event, and 4 German organizations have either a specific plan or apply an all-hazard plan, whereas 3 NZ organizations have neither. German organizations do adequately practice for their event; in Australia and NZ, almost half of the interviewees do not feel that there is adequate practice.

Still addressing the selected event, three bundled questions were intended to determine what other organizations the interviewee's organization would cooperate with, through what means information was exchanged with them, and whether there were any issues with obtaining required information from them. In all countries, state government agencies and emergency services were the most frequently listed cooperation partners; in Australia, local governments were additionally relevant. Most interviewees suggested that the same means of information exchange was used for all partners they quoted; these means generally included: telephone, fax, email, radio, and face-to-face contact, e.g. in emergency operations centers. Issues with being unable to obtain required information from partner organizations (either because they did not have it themselves, or because they were unwilling to disclose it) were not generally observed; however, those interviewees who did have such concerns mentioned mainly issues such as the impact on their resources, the extent of the area affected, low priority given to upstream situation reports, and privacy. Generally, interviewees feel that there is smooth cooperation with same-state organizations, with no disagreement at all. The perceived cooperation smoothness with different-state organizations is slightly lower. Problems experienced include: many organizations cannot communicate by email, it is unclear who to talk to, a lack of liaison protocols, and the military being unreachable after midday on Fridays.
                           AUS                               NZ                        GER
strongly agree              2                                 3                         0
agree                       6                                 1                         2
neutral                     0                                 0                         2
disagree                    2                                 1                         3
strongly disagree           2                                 0                         0
Private sector services     Communication, Transport, Energy  Communication, Transport  Heavy Machinery

Table 5. Organization critically relies on Services provided by Private Companies?

Asked to comment on the statement 'Those who are in charge of handling this event locally are generally well trained', the median answer was 'agree' in all countries, so apart from individual problems, the level of training appears to be acceptable. Finally in this questionnaire section, the degree of dependency on private sector assistance was determined by presenting the statement 'In handling this event, my organization critically relies on services provided by private companies' for comment. Replies were rather mixed, as shown in Table 5 along with some of the services mentioned for the respective country.

6 Current Debate in Emergency Management Sector


In the last part of the EMANZE questionnaire, with the objective of determining whether there was a single big issue in the respective country, interviewees were asked what they perceived to be the most intensively debated issue in emergency management in their country. In Australia, a large variety of answers was given, ranging from resource allocation issues and the balance between efforts against man-made and natural disasters to volunteer support schemes and the role of the media. Thus, no single big issue was identified for Australia. In New Zealand, as reported in (Meissner 2006), the introduction of a new national CDEM plan in 2006 dominated the discussion, with many interviewees voicing concerns that confusion and unclear responsibilities have resulted. In Germany, where interviews were carried out in late 2006, a funding allocation struggle between state governments and the federal government was frequently cited. This debate concerns proposed new criteria for the allocation of federal funding in the emergency management sector, reflecting the intent of the federal government to replace the traditional rule according to which the cause of an emergency determined who was obliged to provide preparedness funding. With the decreased likelihood of this cause being a war, the federal government intends to withdraw a major part of its traditional civil defense spending, resulting in gaps in the states' budgets because of the need to finance fire and other vehicles previously paid for with federal money. When asked how this matter affected their own organizations (i.e. mostly state governments), many German interviewees expressed concern that they may be unable to provide the additional funding required after the federal withdrawal.

7 Conclusion and Outlook


The EMANZE study has yielded a number of insights into how emergency management authorities work at federal and state level in Australia, New Zealand, and Germany, what issues they face, and what the role of technology is. Based on questionnaire results obtained in 2006, this paper has reported findings and identified what the countries and states have in common, as well as what sets them apart, in the emergency management domain. From the research community's point of view, it is particularly interesting to understand what the deficiencies are and what needs to be done to help users capitalize on innovation. Some EMANZE findings could not be reported in this paper. These include results obtained in other European countries, including Austria, Switzerland, and the UK, and the supra-national European Union Monitoring and Information Center for emergency coordination, as well as results obtained in the South Pacific. Regarding the latter, it is planned to complement the data already gathered in the Cook Islands and Fiji with expert interviews in the Solomon Islands, where an in-depth look is to be taken at the handling of, and lessons learned from, the April 2007 Western Province tsunami.

References

Meissner, Andreas 2006. Emergency Management and Security Research in Australia, New Zealand and the South Pacific: Survey and Comparison to Europe, in: Recent advances in security technology, Proc. Safeguarding Australia - 2006 RNSA Security Technology Conference, Canberra, Australia, 21-23 Sept 2006, pp. 372-381.

Pro DV Software AG 2007. United we stand: deNIS-IIplus - The IT Standard for Civil Defense and Disaster Protection. Available at http://www.prodv.com.

EMANZE findings are partly based on unpublished material obtained from interviewees.



10
Practical Crypto-Biometric Systems: What Can a Fuzzy Vault Do?
Marianne Hirschbichler, Wageeh Boles, Colin Boyd and Greg Maitland
Queensland University of Technology, Brisbane, Australia
Abstract
The fuzzy vault construction proposed by Juels and Sudan has been the focus of numerous research publications that seek to realise a crypto-biometric system. In essence, the fuzzy vault is used to protect a committed secret which can be recovered only when the correct biometric is presented. This paper examines the applicability of the fuzzy vault within various scenarios where the secret value is used for security and authentication applications. Claims in the literature of the vault's security and biometric privacy-enhancing properties will be identified and explored. Much previous research using the fuzzy vault makes assumptions that seem to undermine the provision of practical security and privacy protection. These observations stem from two fundamental challenges faced by biometric systems: (i) biometric samples are, in general, not secrets and (ii) biometrics typically require the presence of hints ("helper data") before verification can take place.

Biographies
Marianne Hirschbichler is a PhD candidate at Queensland University of Technology (QUT). Her research topic is the combination of cryptography and biometric systems. Wageeh Boles is Associate Professor in the Faculty of Built Environment and Engineering at QUT. He is a recognized researcher in image processing, computer vision and their applications. Colin Boyd is Professor in the IT Faculty at QUT. His research specialism is in cryptographic protocols and their applications. Dr. Greg Maitland is Lecturer in the IT Faculty at QUT. His research expertise is in all aspects of security protocols and cryptography.



1. Introduction
Various governments, including the Australian government, have sought to establish authentication frameworks for the purposes of encouraging opportunities to engage in the secure delivery of e-government services. Central to the design of an authentication framework is the identification of appropriate assurance levels and the corresponding authentication mechanisms considered adequate for the perceived level of risk exposure. Typically, traditional passwords and PINs are categorised as being suitable for minimal to low risk transactions. For high risk transactions, recommended authentication mechanisms may involve the combined use of cryptographic technologies along with hardware tokens and biometrics. It is not surprising, then, to find that there is contemporary research towards strongly combining cryptographic techniques with biometrics to form so-called crypto-biometric systems. The most prominent crypto-biometric system proposed so far is the fuzzy vault of Juels & Sudan (2002). According to the literature, this new system has two main aims:

1. to eliminate the loss of privacy within common biometric systems by avoiding template storage in the clear; and
2. to enable biometric protected storage and retrieval of a committed cryptographic key which can be used for further security applications or high risk transactions.

The conceptual idea of crypto-biometric systems will be introduced within the paper. The basic operation and properties of typical biometric identification schemes will be explained within section 2. The intersection of biometrics and the fuzzy vault scheme of Juels & Sudan (2002) will be addressed within section 3, describing the establishment of a crypto-biometric system. Section 4 analyses the claimed contributions of the fuzzy vault within the crypto-biometric framework while identifying dedicated attacks and privacy breaches. How to avoid such problems leads to several interesting research questions. In section 5 we conclude with suggestions for possible future research to address these questions.

2. Biometrics
Biometrics is the science of measuring and analysing biological human traits such as fingerprints, voice, irises, and facial and signature patterns. Biometric traits that satisfy the properties of being universal, distinct, sufficiently time invariant, and collectible allow identity verification of an individual (Jain 2005). Biometric systems have already gained popularity within corporate, governmental and public security systems, as biometric data are unique biological authentication credentials. Furthermore, they require the individual being verified to be present at the time of verification, and they cannot be distributed, shared, lost or forgotten. These properties allow biometrics to go beyond providing just authentication to become a useful tool within the cryptographic domain. Section 3 will address this idea further.


A typical biometric system (for example, as outlined by the US Information Technology Industry Council's Ad Hoc Group (2006)) is illustrated in Figure 1. During the enrollment phase, the biometric system requires the user to submit her/his biometric trait to the system's acquisition device. The essential characteristic features are extracted from the acquired biometric sample and stored as a reference template with the user's ID on either a central database or a portable storage device.

Figure 1. Biometric system

For subsequent verification, the user is required to submit the biometric trait again. The extracted biometric features are then compared against the respective reference template. As the acquisition of biometric data is subject to variations in position and physical execution, two biometric measurements are rarely identical, which gives rise to the inherent fuzziness of biometric data. The comparison procedure within the matching algorithm requires a suitable alignment of the reference template and the verification biometric data. Particular data patterns are selectively utilized for alignment, and accordingly all matched pairs of features are identified. Consequently the performance of the matching algorithm, as illustrated in Figure 2 (Maltoni et al. 2003), relies heavily on the accuracy of the alignment.


Figure 2. Fingerprint-alignment for matching extracted fingerprint features




The comparison within the matching algorithm produces a score that quantifies the similarity between two sets of biometric data (Maltoni et al. 2003). Based on a predetermined minimum degree of accordance, termed the threshold, the comparison is deemed a match or non-match. Accordingly, the system decides to positively authenticate or reject the applicant.

Biometric systems have to face a number of security and privacy vulnerabilities, with the most severe being attacks on the stored reference templates. Once a biometric reference template is compromised, it cannot be revoked or reissued due to its biological nature. Furthermore, as soon as identical templates are deployed in multiple separate databases, cross matching and tracking of an individual without the user's compliance or awareness is possible. The storage of the reference template is a key factor in preserving security and personal privacy. Secure template storage, or avoiding template storage altogether, makes up an essential research aim within the biometric environment.
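The score-and-threshold decision described above can be sketched in a few lines. The feature representation (a set of discretised feature values) and the 0.6 threshold are illustrative assumptions, not a real matcher's parameters; a real matcher computes the score only after the alignment step discussed earlier:

```python
# Sketch of threshold-based matching. The feature encoding and the 0.6
# threshold are illustrative assumptions, not a real matcher's values.
def match_score(template, sample):
    """Similarity score: fraction of template features found in the sample."""
    return len(set(template) & set(sample)) / len(template)

def decide(template, sample, threshold=0.6):
    """Declare a match when the score reaches the threshold."""
    return match_score(template, sample) >= threshold
```

Raising the threshold trades false accepts for false rejects, which is the tuning decision every biometric deployment faces.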

3. Crypto-biometric System based on Fuzzy Vault


Recent research has investigated how to strongly combine biometrics and cryptography in order (1) to enhance security and privacy with respect to the storage of biometric reference templates, and (2) to employ biometrics as a wrapper mechanism for securely storing and revealing cryptographic keys. However, this approach is challenged by the fuzziness within biometric data, which yields different cryptographic values from the same biometric trait. Thus, cryptographic techniques cannot be directly employed on biometrics. However, by tolerating slight variability within its unlocking and locking sets, the fuzzy vault makes it possible to combine biometrics with a cryptographic concept, resulting in a crypto-biometric system.

The fuzzy vault scheme of Juels & Sudan (2002) is a novel cryptographic concept that allows an unordered set of values to be used to lock a secret value inside a fuzzy vault. The fuzzy vault scheme allows the secret value to be unlocked by sets that substantially overlap with the original locking set. This makes the fuzzy vault scheme useful in circumstances where precise recall of the values that constitute the locking set is difficult or impossible.

The fuzzy vault scheme uses a polynomial to embed the vault's secret as its coefficients. The polynomial is evaluated for each value in the locking set. The resulting set of points that lie on the polynomial are referred to as the genuine points. Knowledge of a sufficient number of these genuine points will allow reconstruction of the polynomial and hence will reveal the vault's secret. A number of additional randomly generated chaff points that do not lie on the polynomial are selected and merged with the genuine points. The chaff points are intended to hide the genuine points from an attacker. Figure 3 (Uludag et al. 2005) illustrates the fuzzy vault construction, symbolising the genuine points as empty squares and the chaff points as filled squares.


Figure 3. Fuzzy vault construction

The entire collection of genuine and chaff points is randomly reordered to remove any stray information that could be exploited for separating one from the other (Uludag et al. 2005). The fuzzy vault consists of the entire collection of scrambled points along with the degree of the polynomial. The security of the scheme is based on the polynomial reconstruction problem. Determining the polynomial's coefficients requires at least degree + 1 genuine polynomial points. As a result, any set which overlaps substantially with the original locking set can feasibly identify the required number of genuine vault points and thus be used to reconstruct the polynomial and reveal its embedded secret.

To set up a crypto-biometric system, each value of the locking set is chosen such that it represents a component of the user's extracted biometric features. A polynomial that embeds a secret value is selected and a scrambled set of genuine and chaff points is generated. Any subsequent set of extracted biometric features that overlaps substantially with the locking set will succeed in unlocking the vault. The extraction of the user's biometric features is undertaken as per common biometric systems. Instead of storing the biometric features as a reference template, crypto-biometric systems utilize these biometric features as the vault's locking set. The resulting fuzzy vault may be stored within a central database or on a portable device. For authentication and secret retrieval, the user's features are extracted from a freshly submitted biometric sample and are treated as the vault's unlocking set.

Figure 4. Fuzzy vault unlocking

Coinciding vault and unlocking set points are used for polynomial reconstruction, as illustrated in Figure 4 (Maltoni et al. 2003). Successful reconstruction will reveal the embedded secret. For positive user authentication, this secret may be hashed and compared against a hashed version generated and stored at the time of the vault's establishment. Where the secret is a cryptographic key, it can be utilized within further cryptographic applications.
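As a concrete illustration of the lock and unlock operations described above, the following is a minimal Python sketch over a small prime field. All parameters (the field size, the polynomial degree, the number of chaff points) are toy values chosen for readability, not the concrete parameters of Juels & Sudan's construction, and a real implementation would additionally use error-correcting decoding and a hash check on the recovered secret:

```python
# Toy fuzzy vault over a small prime field. All parameters (field size,
# polynomial degree, chaff count) are illustrative assumptions; a real
# system uses a large field and verifies the recovered secret via a hash.
import random

P = 97       # small prime field modulus (toy value)
DEGREE = 2   # polynomial degree; DEGREE + 1 genuine points reconstruct it

def poly_eval(coeffs, x):
    """Evaluate a polynomial (constant coefficient first) at x, mod P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def lock(secret_coeffs, locking_set, n_chaff=20, rng=random):
    """Embed the secret as polynomial points, add chaff, and scramble."""
    genuine = [(x, poly_eval(secret_coeffs, x)) for x in locking_set]
    used_x = set(locking_set)
    chaff = []
    while len(chaff) < n_chaff:
        x, y = rng.randrange(P), rng.randrange(P)
        if x in used_x or y == poly_eval(secret_coeffs, x):
            continue  # chaff needs a fresh x and must not lie on the polynomial
        used_x.add(x)
        chaff.append((x, y))
    vault = genuine + chaff
    rng.shuffle(vault)  # hide which points are genuine
    return vault

def unlock(vault, unlocking_set):
    """Recover the coefficients by Lagrange interpolation mod P, or None."""
    pts = [(x, y) for (x, y) in vault if x in set(unlocking_set)][:DEGREE + 1]
    if len(pts) < DEGREE + 1:
        return None
    coeffs = [0] * (DEGREE + 1)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1  # Lagrange basis polynomial for point i
        for j, (xj, _) in enumerate(pts):
            if i != j:
                basis = [(a - xj * b) % P for a, b in zip([0] + basis, basis + [0])]
                denom = (denom * (xi - xj)) % P
        inv = pow(denom, P - 2, P)  # modular inverse via Fermat's little theorem
        coeffs = [(c + yi * b * inv) % P for c, b in zip(coeffs, basis)]
    return coeffs
```

Locking the polynomial 42 + 7x + 3x^2 under the feature set {5, 12, 33, 47, 68} lets any three of those values unlock the secret, while two are not enough. If an unlocking value happened to collide with a chaff abscissa, this sketch could interpolate a wrong polynomial; that is exactly why practical schemes try point subsets and verify the candidate secret against a stored hash.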



4. What Can a Fuzzy Vault Do?


According to the literature, a crypto-biometric system based on the fuzzy vault construction achieves the aforementioned desired aims of template protection and secure key storage. This section critically analyses these claims. Drawing on recent results that demonstrate privacy breaches and security attacks on fuzzy vault implementations, we show that achieving these benefits in practice may not be an easy task. How to overcome these problems still needs to be addressed.

4.1. Template protection

According to Chung et al. (2005) and Uludag et al. (2006), the fuzzy vault concept can be viewed as achieving a non-invertible transformation of the respective biometric features into a 2D point cloud concealed within the set of vault points. Authentication can be undertaken without revealing, or having to store, the biometric samples in the clear. This cryptographic framework thus brings the idea of secure biometric template storage further into the realm of practical implementation and application, in line with Juels & Wattenberg's (1999) predictions. However, the following flaws still need to be considered within the claimed template protection.

The alignment issue

Early applications of fuzzy vaults, such as the fingerprint vault system proposed by Clancy et al. (2003), assumed that biometric features are pre-aligned; in other words, the features extracted at the time of enrolment are properly aligned with those extracted at the time of verification. According to Chung et al. (2005), pre-alignment is not a realistic assumption for biometric authentication schemes, and furthermore the fuzzy vault point set does not carry sufficient information for alignment, as there is no reference data in the clear. To address this issue, more recent approaches make use of auxiliary helper data to assist with alignment. Although those helper data are recovered from the locking biometric features, Uludag et al. (2006) and Jain et al. (2006) claim that they do not leak any information about the biometric features and thus are assumed to complement the increased user privacy afforded by the fuzzy vault. It would be useful to have a mathematical model to prove that this is indeed the case. However, we observe that even if this point is proven, it remains questionable whether template and privacy protection is adequately maintained.
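To ground the discussion, the locking and unlocking steps of a Juels-Sudan style fuzzy vault can be sketched in a few lines. The field size, feature values and parameters below are toy values chosen for illustration, and a real unlocker would use error-correcting (e.g. Reed-Solomon) decoding rather than assuming every matched point is genuine:

```python
# Minimal fuzzy vault sketch (toy parameters, not a cited implementation).
import random

P = 97  # small prime field for illustration; real vaults use far larger fields

def poly_mul(a, b):
    """Multiply two coefficient lists modulo P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def lock(secret_coeffs, features, n_chaff):
    """Evaluate the secret polynomial at the genuine feature values and hide
    the resulting points among random chaff points lying off the polynomial."""
    poly = lambda x: sum(c * x**i for i, c in enumerate(secret_coeffs)) % P
    vault, used_x = [(x, poly(x)) for x in features], set(features)
    while len(vault) < len(features) + n_chaff:
        x, y = random.randrange(P), random.randrange(P)
        if x not in used_x and y != poly(x):
            vault.append((x, y))
            used_x.add(x)
    random.shuffle(vault)
    return vault

def unlock(vault, query_features, degree):
    """Keep vault points whose abscissae match the query and interpolate.
    Succeeds only when at least degree+1 genuine points match; a practical
    unlocker would add error-correcting decoding to tolerate chaff hits."""
    pts = [(x, y) for (x, y) in vault if x in set(query_features)][:degree + 1]
    if len(pts) < degree + 1:
        return None
    coeffs = [0] * (degree + 1)
    for i, (xi, yi) in enumerate(pts):  # Lagrange interpolation over GF(P)
        num, den = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = poly_mul(num, [-xj % P, 1])
                den = den * (xi - xj) % P
        scale = yi * pow(den, -1, P) % P
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

secret = [42, 7, 13]            # degree-2 polynomial encodes the "key"
enrolled = [3, 11, 25, 40, 62]  # toy biometric feature values
vault = lock(secret, enrolled, n_chaff=30)
print(unlock(vault, [3, 11, 40], degree=2))  # -> [42, 7, 13]
print(unlock(vault, [3, 11], degree=2))      # -> None (below the threshold)
```

With degree + 1 = 3 matching features the secret polynomial, and hence the key, is recovered exactly; fewer matches reveal nothing through interpolation alone.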

116

Practical Crypto-Biometric Systems: What Can a Fuzzy Vault Do?

Figure 5. Points of maximum curvature in the flow curves constitute the helper data (Uludag et al. 2006)

For subsequent alignment, the generated helper data are stored together with the respective vault. Bearing in mind that the current helper data within the fuzzy vault schemes of Uludag et al. (2006) and Chung et al. (2005) are derived from the biometric sample underlying the locking features, they can be regarded as individual-related parts of a biometric template, as illustrated in Figure 5 (Uludag et al. 2006). The system therefore again stores unprotected biometric-related data next to the fuzzy vaults. Although this data does not leak information on the locking biometric features themselves, its biometric origin encompasses linkable and traceable features that can be exploited within systems employing fuzzy vaults. Currently no practical fuzzy vault implementation avoids this problem. Further investigation is required to identify biometrics or biometric features that do not require alignment to the extent of the abovementioned fingerprint applications, or to establish techniques that do not require alignment data at all.

Privacy threat in case of fuzzy vault compromise

The claimed privacy protection of biometric data within crypto-biometric systems can be compromised if the same biometric features are used for constructing different vaults with different polynomials and chaff points. According to Jain et al. (2007), any attacker who gets access to at least two vaults obtained from the same biometric features will be capable of identifying the genuine biometric data by correlating the abscissa values in the two vaults. As the vaults are built on the same biometric data, they will have a certain number of abscissa values in common, so the attacker can filter the biometric-related data from the vault point set. Due to this security breach, the vault is not revocable.
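The cross-matching attack attributed above to Jain et al. (2007) is easy to sketch: genuine abscissae recur in every vault locked with the same features, while independently drawn chaff abscissae rarely coincide. The sizes below are invented toy values, and ordinates are omitted because the attack only intersects abscissae:

```python
# Toy sketch of vault cross-matching: intersecting the abscissae of two
# vaults built from the same biometric features exposes the genuine points.
import random

random.seed(1)
RANGE = 10**6                                  # abscissa space (illustrative)
features = random.sample(range(RANGE), 20)     # same biometric in both systems

def vault_abscissae(genuine_x, n_chaff):
    xs = set(genuine_x)
    while len(xs) < len(genuine_x) + n_chaff:  # draw chaff abscissae at random
        xs.add(random.randrange(RANGE))
    return xs

vault_a = vault_abscissae(features, 300)       # system A's vault
vault_b = vault_abscissae(features, 300)       # system B's vault, fresh chaff

candidates = vault_a & vault_b                 # abscissae present in BOTH vaults
print(len(candidates), len(candidates & set(features)))
# nearly every shared abscissa is genuine, so the chaff is filtered out
```

Because the genuine abscissae survive any re-keying, issuing a new vault with a fresh polynomial does not restore privacy.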
In particular, in case of compromise, a new vault cannot be generated from the same biometric data by merely binding it with a different secret. This vulnerability undermines the promising privacy-enhancing property of the fuzzy vault framework, as cross-matching of biometric features across different systems remains possible.

Key-inversion attack

In a crypto-biometric system based on the fuzzy vault, the stated goal is the release of a secret key. Boult et al. (2007) highlight that as soon as the secret leaves the vault in plaintext form for its application, it can be intercepted by an attacker or becomes available to any insider. Through its possession, an attacker gains the ability to identify the polynomial and consequently the user's biometric features
from the vault's point set. Extending this threat, it would be possible to reconstruct the user's reference template from the fuzzy vault, a template which may be in use by common biometric systems.

4.2. Cryptographic key storage

Current cryptographic algorithms can provide a high level of proven security, but they require a solution to the key management problem. A typical assumption is that the respective keys are kept in absolute secrecy and are very long and random. According to Uludag et al. (2005), Chung et al. (2005) and Dodis et al. (2003), the fuzzy vault contributes to a key management solution by taking on the role of secure key storage and restricting access to appropriate biometric feature sets. The fuzzy vault can thus be viewed as providing fuzzy key storage, which allows recovery of the secret key from a fuzzy biometric reading sufficiently similar to the genuine one. The secret can be regarded as a committed value usable as a symmetric key for security applications. The fuzzy vault thereby enables the biometric trait to act as a tight wrapper around the secret cryptographic key, such as a 128-bit AES key. Unlike traditional password authentication schemes, the unlocking/locking biometric data need not be stored or remembered, while still allowing a tight linkage between the secret key and its legitimate owner. However, the following security threats still need to be considered in order to achieve the claimed secure key storage.

Fuzzy vault embeds secret value utilizing non-secret biometric data

Concealing a cryptographic key through biometrics raises the question of whether the fuzzy vault can really deliver secure storage. Bearing in mind that biometrics are human traits which can be collected without the user's awareness, the locking and unlocking feature sets of the vault can be regarded as publicly obtainable data. Consequently the cryptographic key embedded within the vault cannot be regarded as sufficiently securely concealed.
Related to its practical applicability, the fuzzy vault therefore requires either a supervised environment at the time of locking and unlocking attempts, or the incorporation of extra secret information in addition to the biometrics.

Weaknesses of chaff points and leak of randomness

The vault's security rests on the chaff points, which conceal the genuine biometric-related points. As the chaff points are generated later in the vault establishment process, they tend to have a smaller free area around them than the genuine ones; the notion of free area thereby relates to the number of neighbouring points in the vault point set. Investigating chaff point identification, Chang et al. (2006) raise the issue that chaff points have more neighbouring points, yielding a smaller free area. In accordance with Adler (2005), this observation exposes the non-randomness of the fuzzy vault set, which can be exploited by dedicated attacks.
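The free-area observation attributed above to Chang et al. (2006) can be illustrated with a toy simulation of the usual chaff generation rule, in which every new point must keep a minimum distance from all existing points. The constants below are invented for illustration; the number of rejected placement attempts is used as a simple proxy for the free area remaining when each point was inserted:

```python
# Toy illustration of shrinking free area: later (chaff) points must squeeze
# into the space left over after the earlier points are placed.
import math
import random

random.seed(7)
DELTA = 0.05                 # minimum separation enforced between vault points
N_GENUINE, N_CHAFF = 20, 200

def insert_points(n_total):
    points, attempts_log = [], []
    for _ in range(n_total):
        attempts = 0
        while True:          # rejection-sample until a candidate fits
            attempts += 1
            cand = (random.random(), random.random())
            if all(math.dist(cand, p) >= DELTA for p in points):
                points.append(cand)
                attempts_log.append(attempts)
                break
    return points, attempts_log

points, attempts = insert_points(N_GENUINE + N_CHAFF)
early = sum(attempts[:N_GENUINE]) / N_GENUINE   # genuine points, placed first
late = sum(attempts[-N_GENUINE:]) / N_GENUINE   # last chaff points placed
print(f"mean placement attempts: first 20 = {early:.2f}, last 20 = {late:.2f}")
# later points need far more attempts, i.e. they land in smaller free areas
```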

118

Practical Crypto-Biometric Systems: What Can a Fuzzy Vault Do?

Blended substitution attack

In a substitution attack, the attacker alters the user's biometric template by injecting another set of biometric features. In a traditional substitution attack, the attacker's data overwrite the user-specific features in the database. This allows the attacker to authenticate as the legitimate user, but simultaneously produces a denial of service for the legitimate applicant. Combining the user's and attacker's data in a single reference template, referred to as blended substitution, is not practical within common biometric systems, since a comparison in which around half of the biometric features fail to match would likely be rejected. According to Boult et al. (2007), however, the fuzzy vault allows a blended substitution attack. By knowing the vault's secret, and thus its polynomial, the attacker gains the capability of generating his own malicious polynomial evaluations. By replacing a certain number of chaff points with these malicious polynomial evaluations, both the legitimate owner and the attacker can be authenticated against the user's record or employ the embedded symmetric key. The fuzzy vault construct thus prevents detection of a user-attacker template blending, because it causes no denial of service.

Limited dynamic security adaptability

Within common biometric systems, the matching score derived from the comparison between the template and the extracted features is evaluated against a threshold value. This value indicates the amount of correlation between the template and the extracted features required to declare a match. The threshold thereby acts as a parameter of the decision algorithm and can be dynamically adjusted by the system's administrator to specific security or convenience requirements. Within the fuzzy vault, this threshold can be interpreted as the required number of coinciding unlocking and locking features, namely the polynomial degree + 1, needed for interpolation to succeed.
Consequently, this value is pre-determined by the degree of the polynomial fixed within the vault construction. Implementations are thus limited in how dynamically they can adapt to different security and convenience requirements. As opposed to ordinary biometric systems, the vault threshold should be considered public data, because the publicly available vault reveals the underlying polynomial's degree. Different vaults can nevertheless be established to address different levels of secure cryptographic key storage: depending on the application area of the fuzzy vault, polynomials of higher or lower degree can be chosen appropriately.
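The blended substitution attack described above can be sketched with toy polynomial arithmetic. All names and values here are invented, and unlocking is abstracted to counting vault points consistent with a candidate polynomial (a real unlocker would interpolate):

```python
# Toy sketch of blended substitution (all values invented for illustration).
P = 97  # small prime field

def evals(coeffs, xs):
    """Evaluate a coefficient list at each x, modulo P."""
    return {x: sum(c * x**i for i, c in enumerate(coeffs)) % P for x in xs}

user_poly, attacker_poly = [42, 7, 13], [9, 55, 2]     # degree-2 "keys"
user_x = [3, 11, 25, 40, 62]                           # user's feature values
attacker_x = [5, 17, 33, 48, 71]                       # attacker's features

vault = evals(user_poly, user_x)                       # genuine points
chaff = {70: 10, 81: 4, 90: 33, 15: 60, 8: 2}          # known to the attacker
vault.update(chaff)

# Knowing the secret polynomial, the attacker distinguishes chaff from
# genuine points and overwrites the chaff with his own evaluations:
for x_old, (x_new, y_new) in zip(list(chaff),
                                 evals(attacker_poly, attacker_x).items()):
    del vault[x_old]
    vault[x_new] = y_new

def consistent_points(poly, xs):
    """Count vault points matching a candidate polynomial at the presented
    features; degree+1 = 3 such points let interpolation succeed."""
    return sum(1 for x in xs if vault.get(x) == evals(poly, [x])[x])

print(consistent_points(user_poly, user_x),
      consistent_points(attacker_poly, attacker_x))    # -> 5 5
# both parties clear the threshold, and the user sees no denial of service
```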

5. Discussion
This paper has introduced the major achievements of crypto-biometric systems based on the fuzzy vault concept of Juels and Sudan (2002). Current approaches have been summarised and critically examined with regard to privacy breaches and security
attacks. The outlined vulnerabilities need to be addressed in order to deliver on the literature's claimed promises of template protection and secure cryptographic key storage. Overall, most papers dealing with fuzzy vault security lack a precise security model; this makes it hard to evaluate accurately whether implementations are really secure or whether specific attacks are valid. In general we need to be more precise about what the fuzzy vault is intended to achieve.

Future research may consider exploiting some of the identified attacks to obtain possible advantages instead. In particular, the blended substitution attack has shown that an attacker can add extra features into the vault in addition to the legitimate user's. In some scenarios, however, it may be a benefit to allow different users to share one vault; extending fuzzy vaults to general secret-sharing access structures presents some interesting challenges. Another promising research direction is the incorporation of a key as an input to the fuzzy vault in addition to biometric data, in order to enhance privacy and security within crypto-biometric systems. This strategy turns the fuzzy vault into a multi-factor authentication mechanism requiring both biometrics and a key for its unlocking process. It would help address many of the issues that we have highlighted; for example, the vault could be randomised using a key to avoid leaking information and to prevent linking of fuzzy vaults built from the same biometric data. The idea can also be used to protect the integrity of the fuzzy vault, so that it becomes resistant to tampering.

References
Ad Hoc Group on Biometric in E-Authentication. 2006. Study report on biometrics in e-authentication. InterNational Committee for Information Technology Standards, Washington DC.
Adler, A. 2005. Vulnerabilities in biometric encryption systems. In Audio- and Video-Based Biometric Person Authentication. Springer, Berlin/Heidelberg: 1100-1109.
Boult, T. & Scheirer, W. 2007. Cracking fuzzy vaults and biometric encryption. Unpublished report, Colorado: 1-13.
Chang, Ee-Chien, Shen, Ren & Teo, Francis Weijian. 2006. Finding the original point set hidden among chaff. In Ferng-Ching Lin (ed.), ASIACCS. ACM: 182-188.
Chung, Yongwha, Moon, Daesung, Lee, Sungju, Jung, Seunghwan, Kim, Taehae & Ahn, Dosung. 2005. Automatic alignment of fingerprint features for fuzzy fingerprint vault. In CISC, Lecture Notes in Computer Science, vol. 3822. Springer: 358-369.
Clancy, Charles, Kiyavash, Negar & Lin, Dennis. 2003. Secure smartcard-based fingerprint authentication. In Proceedings of the 2003 ACM SIGMM Workshop on Biometrics Methods and Applications: 45-52.
Dodis, Y., Ostrovsky, R., Reyzin, L. & Smith, A. 2003. Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. Eurocrypt: Advances in Cryptology: 1-42.
Jain, Anil. 2005. Biometric recognition: how do I know who you are? In ICIAP, Lecture Notes in Computer Science, vol. 3617. Springer: 19-26.
Jain, Anil, Nandakumar, K. & Pankanti, S. 2006. Fingerprint-based fuzzy vault: implementation and performance. Michigan State University: 1-43.
Jain, Anil, Nagar, Abhishek & Nandakumar, Karthik. 2007. Hardening fingerprint fuzzy vault using password. To appear in Proceedings of ICB: 1-10.
Juels, Ari & Wattenberg, Martin. 1999. A fuzzy commitment scheme. In ACM Conference on Computer and Communications Security: 28-36.
Juels, Ari & Sudan, Madhu. 2002. A fuzzy vault scheme. In IEEE International Symposium on Information Theory: 118.
Maltoni, D., Maio, D., Jain, A. & Prabhakar, S. 2003. Handbook of Fingerprint Recognition. Springer-Verlag: 1-348.
Uludag, U., Pankanti, S. & Jain, A. 2005. Fuzzy vault for fingerprints. In AVBPA: 310-319.
Uludag, U. & Jain, A. 2006. Securing fingerprint template: fuzzy vault with helper data. In Privacy Research in Vision, 2006.


11
Describing asset criticality as an aid for security risk management
Allen Fleckner MSc, PSP
Critical Risk Pty Ltd, Melbourne
Abstract
Today's security and safety management paradigm demands a sustainable national transportation system. Transport owners and operators must choose carefully where to spend scarce funding to secure systems, and the associated decision making may need to be defensible many years from now. Risk management has evolved to fill the void, but are current methodologies robust enough to withstand the attrition of time? One way to enhance current risk management practice is to add an element to the iterative risk management process that describes the criticality of assets, in order to support resource deployment. This paper discusses the outcomes of empirical testing and situational application, at an operator level, of the Science Applications International Corporation (2003) criticality assessment methodology step. The study utilised rank-ordered questions, pre- and post-methodology, with a non-probability sample pooled from decision makers within the relevant industry sector and from security practitioners. Data analysis endeavoured to confirm or disconfirm model suitability.

Biography
Starting his career in 1979, Allen has amassed a strong strategic, tactical and operational background in counter-intelligence and policing, encompassing consequence and emergency response management, defusing terrorist operations and securing high-risk critical infrastructure. Most recently, he has authored a number of major projects establishing risk, security, emergency response and business continuity arrangements for Australian critical infrastructure, and is completing a Master of Science in Security Management, applying practitioner-based research in security and risk management.

Introduction
Jenkins & Gersten (2001) espouse the notion that aviation, marine and surface transportation hold attractiveness for terrorists, and that of the three industry sectors, surface transit systems present the most challenging environment in which to enhance the protection of assets. In reply, Rabkin et al. (2004) state that strategies must be found and developed to allow increased protection without unnecessary trespass upon the normal operating environment, and that these measures need to be scalable for deployment, practical and effective to implement. However, operators are pressed to fund transport capital improvements, and security-orientated projects have come under increasing pressure to demonstrate a value-adding proposition. With governments driving for an increase in the levels of critical infrastructure protection, transport system owners and operators need to find a method to prioritise their critical assets to ensure coherent alignment with stakeholder expectations.

Stovall and Turner (2004), when researching asset criticality, posed the question "What do we need to protect?", to which they answered that we need to protect everything. Nevertheless, the typical transport environment dictates that only finite resources are available; therefore, a suitable method is required to prioritise critical infrastructure assets so that the most important can be protected first. One method has been put forward to address this gap. Developed by the Science Applications International Corporation for the American Association of State Highway and Transportation Officials (AASHTO), it presents a specific method designed to be simple to operate and deliver, levering off a six-step process to sequentially evaluate and identify critical assets as the first step before conducting a vulnerability and consequence assessment.

AASHTO Methodology
The model's pre-assessment tool identifies those assets (infrastructure, facilities, equipment and personnel) deemed critical for maintaining the business process and continuance of service. The process begins by organising a cross-functional analysis team, consisting of experienced employees who are most familiar with the organisation's infrastructure, and having them devise an all-inclusive list of assets. The analysis team next decides criteria to develop a series of critical determination factors; this data is used later in the process to identify and prioritise criticality. Collectively, these factors are an indication of the conditions, concerns, consequences and capabilities that might cause an executive management team to label an asset critical. Each factor is assigned a binary value based on the importance of the factor in establishing an asset as critical (SAIC 2003). As an outcome, an all-inclusive list of critical assets is produced.
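The scoring step just described reduces to a sum of binary factor values per asset. A minimal sketch follows, with factor names and applicability judgements invented for illustration rather than taken from SAIC (2003):

```python
# Sketch of AASHTO-style binary factor-value scoring (all names invented).
FACTOR_WEIGHTS = {           # binary value: 1 = factor deemed critical
    "casualty_risk": 1,
    "service_dependence": 1,
    "economic_impact": 1,
    "iconic_value": 0,       # analysis team judged this factor non-critical
}

ASSETS = {                   # which factors the team marked as applying
    "major station": {"casualty_risk", "service_dependence", "economic_impact"},
    "rolling stock": {"casualty_risk", "economic_impact"},
    "corporate office": {"economic_impact", "iconic_value"},
}

def factor_value_sum(applicable):
    return sum(FACTOR_WEIGHTS[f] for f in applicable)

ranked = sorted(ASSETS, key=lambda a: factor_value_sum(ASSETS[a]), reverse=True)
for asset in ranked:
    print(asset, factor_value_sum(ASSETS[asset]))
# -> major station 3, rolling stock 2, corporate office 1
```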


Research Overview
The research project examined how assessing asset criticality, as an additional step in the risk management process, could allow the ranking of the most significant assets relative to their importance within the business process, and evaluated whether this could provide additional granularity to aid security decision making through scalable, justifiable implementation of effective risk controls. The study applied the model practically at a local operator level to evaluate its effectiveness, identify deficiencies and then develop recommendations for improvement so that it could be utilised at a practitioner level within local industry.

In support of Drew's (1980) suggestion, the research was undertaken in an iterative manner by use of structured question and enquiry, to form an effective means to solve problems and expand knowledge. An action research approach was utilised in the hope that rigorous data collection would provide evidence to support claims for action (Lomax 2002). For the purpose of the study, the following research questions were formed, and the study attempted to answer them, namely: How well do cross-functional managers with various degrees of security and risk management knowledge perform at:

- Assessing the factors to determine criticality of the asset function?
- Recognising asset importance?

How effective would the AASHTO methodology be at ranking assets for criticality when applied by individual operators at a State level?

To test the model's capability and gather data for analysis and evaluation, respondents were selected from decision makers within the public and private transport industry sectors and from security consultants working with surface transport operators. By comparing the data analysis from each group, differences were explored and confirmation or disconfirmation of model suitability finalised. This was achieved by examining the interactions between the available theory and concept of the model and the collected data.

Respondent performance in utilising area narrowing to recognise asset criticality factor importance

The investigation of the criticality factor weighting determination pilot and main study part one survey data produced mean and standard deviation measurements for the sums of each of the three variable categories explored. Sixteen variables were presented to the respondents, who were asked to evaluate what weighting (on a 5-point rank-ordered scale) should be attributed to each criticality factor, to drive the later determination of the degree of criticality for a set of transport assets. Of the variables evaluated in the pilot study, the critical factors of intermodal attractiveness, employee or passenger casualty opportunity and risk of public casualty achieved total weighting mean values from a dense set, supported by a tight
diffusion of data spread: m=[5] SD=[0.500]; m=[5] SD=[0.000]; m=[5] SD=[0.000]. Within the main study survey results, a shift in total mean weighting value was experienced, in that "target selected as a weapon" displaced intermodal attractiveness within the three top tied pilot results.

The variables in the middle range for the pilot had little variance between each other, and twelve of the factors fell within a close range (m range=[1.50]) of each other, which suggested that the majority of respondents held a reasonably neutral opinion of the relevant criticality factor weighting, m=[3.50] to m=[4.50]. This was reinforced by the main study, which displayed a similar trend in that ten of the variables achieved a slightly tighter mean cluster, recording a closer variance range, m=[3.00] to [4.00]. This equated to consistently high significant weighting being attached to vulnerability to attack; incident response capability; business continuity impact opportunity; direct financial impact; safety perception impact; depletion of essential service provision; economic impact; public dependence on service; and iconic value. Generally, within the deterrence factors group, less significance was given to preventative measures, and higher criticality scoring gravitated upwards, especially around the containment and recovery aspects, which held more significance for respondents.

Total study criticality factor weighting determination group sum analysis established that the private industry respondents and the public industry respondents held a degree of opposing evaluation views when applying judgement to the criticality factor weighting determination value. Private respondents preferred to place higher significance on the criticality factors than their public industry peers, rho=[-0.50]. This trend was supported by the stance expressed by the consultant respondents, who, in a similar manner to the private participants, consistently placed a higher weighting on criticality factors than the public sector; this group achieved a slightly elevated but similar mean cluster to the private sector, rho=[-0.60]. The area-narrowing relationship between the private and consultant survey samples demonstrated alignment in how each calculated the significance weighting. The size of the variance between these two samples and the public sample emphasised that the public respondents are less willing to add weighting, thus creating less significant criticality factors, which will in turn influence the ranking of assets for criticality. As a result, a disparity is likely to occur between like assets if compared between public and private ownership. This trend reverses the premise that a basic difference exists between privately and publicly owned companies in that publicly owned companies are perceived as being state operated and therefore citizen owned, a perception that supports the notion of a higher expectation of the public organisation than of the privately operated company (Walmsey and Zald, cited in Rainey, Backoff and Levine 1976). Other factors may also apply, and the notion that the private sector, in contrast to the public sector, has a higher interest in maintaining asset operational capability so as to preserve its financial viability and shareholder value may hold some value.

In support, Lachman (1985) found that the time span of discretion has a reduced duration for privately owned companies operating in the publicly owned sphere, as against state-owned enterprises that have a high degree of public accountability; the latter have a longer period of discretion available to them, contributing to the notion that they are not as vulnerable to the criticality factors as their private sector colleagues may be.

Similar to the criticality factor weighting determination analysis, the asset criticality evaluation method displayed a marked difference between the respondent groups, maintaining a closer relationship in how the private and consultant sectors evaluated asset criticality, whilst demonstrating that it is possible for another group to show a reduced appreciation of the level of criticality.

Respondent effectiveness at defining asset criticality


Once the data from part one of the instrumentation had been collated, respondents undertook the last part of the survey and applied their judgement to a weighted questionnaire that recorded a criticality measurement for a series of eight transport assets against each of the sixteen criticality factor variables. As a variation on the original binary AASHTO model, an additional layer of decision-making was placed on the respondents: they were asked to decide whether each criticality factor had high applicability, applicability, low applicability or no applicability when compared against the individual asset, and a weighting was given to each scale of applicability. To permit a comparison, this data was then re-calibrated with the applicability weighting removed, to express the data in the same form as if it had been collected under the AASHTO model.

Analysis of the sum values across the total study for each asset revealed several measurements, expressed in three groups: public transport operators, private transport operators and industry consultants. Results expressed tightly grouped means for the private, public and consultant groups and followed the trend established by the criticality factor weighting determination data analysis, in that the consultant mean ranked higher than the private group, which ranked higher than the last-placed public sector: m=[52.98] SD=[11.50]; m=[52.73] SD=[7.67]; m=[53.63] SD=[11.51]. Public sector under-evaluation of criticality was present in the asset criticality evaluation data received from respondents, and the data spread mirrored the preceding analysis by achieving a similar diffusion hierarchy, but with a wider diffusion for each of the industry variable values.
In line with the supporting criticality factor weighting determination data, correlation trends maintained a medium positive relationship between the private and public sector weighted data, rho=[0.484], and a very small positive relationship was recorded between the consultant and public sector group, rho=[0.037]. When the analysis was performed on the re-calibrated data prepared to represent the AASHTO model, measurement variances occurred. In this instance the sum value means for the industry groups generally increased in strength, displaying an increase in the relationship between the private and public group, rho=[0.484]; an increase for the public and consultant group, rho=[0.145]; but a decrease for the private and consultant sector, rho=[0.164].

The variances reported between the compact AASHTO model measurement scales and the more expansive weighted data, which benefited from an additional layer of screening, made a notable difference to the strength and direction that the groups achieved when deciding the asset criticality factor. The weighted data aligned better with the preliminary outcomes recorded from the criticality factor weighting determination analysis, maintaining the ranking trends evidenced in the asset criticality evaluation analysis and continuing the connection between private and consultant group criticality evaluation capabilities. However, a contradiction has arisen between the manner in which the AASHTO-derived criticality factor weighting determination landed and the method that the model uses to calculate asset criticality once weightings for the factors have been obtained. From pre-research perceptions, it was expected that both the criticality factor weighting determination and the asset criticality evaluation components of the AASHTO model would demonstrate a similar correlation and type of relationship; otherwise the integrity of the model may be thought to be in doubt.
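The study's weighted applicability variant and its re-calibration back to the binary AASHTO form can be sketched as follows. The 3/2/1/0 applicability weights and the factor values are assumptions for illustration, since the paper does not state the exact scale used:

```python
# Sketch of weighted applicability scoring vs the binary AASHTO form
# (scale and values assumed for illustration).
APPLICABILITY = {"high": 3, "applicable": 2, "low": 1, "none": 0}

factor_weights = {"casualty_risk": 5, "economic_impact": 4, "iconic_value": 3}
asset_ratings = {             # one respondent's ratings for a single asset
    "casualty_risk": "high",
    "economic_impact": "low",
    "iconic_value": "none",
}

weighted_score = sum(factor_weights[f] * APPLICABILITY[r]
                     for f, r in asset_ratings.items())

# re-calibration: any non-zero applicability collapses to 1, as in AASHTO
binary_score = sum(factor_weights[f] * (1 if APPLICABILITY[r] > 0 else 0)
                   for f, r in asset_ratings.items())

print(weighted_score, binary_score)   # -> 19 9
```

The extra layer of screening changes the score that each asset receives, which is why the weighted and re-calibrated analyses in the text can diverge.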

AASHTO methodology effectiveness at ranking assets for criticality


The study concluded when the asset criticality evaluation data received from respondents was summarised and the selected transportation assets were ranked for criticality, in an attempt to provide decision makers with a prioritised list against which available resources can be justifiably deployed to reduce and mitigate the associated risk once the risk assessment process has been applied. Comparative analysis of the weighted criticality summary and a re-calibrated criticality summary revealed some movement among the sum means for each of the assets evaluated. The weighted data reported a more scattered outcome than the unweighted sum mean range, and values were diffused over a wider range than the deviation experienced with the same data set when exported into the AASHTO model.

In the weighted and unweighted median values (Tables 25 and 26), major stations achieved the highest ranking for criticality but also had the largest diffusion of values attached, m=[69.90] SD=[120.26]. The weighted and unweighted mean values also placed this asset in the same ranking, but the unweighted median value gave the same degree of criticality to minor stations, while the weighted and unweighted mean values placed minor stations close to the median with the fourth most criticality. Just as major stations were placed with the most criticality, passengers consistently ranked as the asset with the least criticality significance across both the weighted and unweighted median and mean tables. The other human asset in the group, drivers, fared similarly, coming a close second to passengers except in the weighted mean data, which increased the degree of criticality, positioning the asset in the second-to-bottom quartile.
Ranking of pilot assets by criticality weighted Asset description Major stations Rolling stock Stabling facilities Minor stations Control centre Corporate office Drivers Passengers Factor value sum (FVS) 67.25 65.00 57.50 56.75 53.31 50.31 47.38 40.25 Ranking 1 2 3 4 5 6 7 8

Table 1 Pilot study criticality ranking of assets with weighting Ranking of pilot assets by criticality not weighted Asset description Major stations Rolling stock Minor stations Corporate office Stabling facilities Control centre Drivers Passengers

Factor value sum 59.75 58.75 58.75 58.56 58.00 56.31 49.13 37.25

Ranking 1 2 2 4 5 6 7 8

Table 2 Pilot study criticality ranking of assets without weighting Both the weighted and AASHTO model when applied by the respondents managed, with reasonable consistency to place the assets in similar ranking positions and had identical outcomes for the asset with most and least degree of criticality. Likewise, the asset placed within the quartiles either side of the normal line reported a similar trend and had slight difference to the ranking listed. What the weighted method demonstrated over the unweighted model was that an extra level of granulation allowed the assets to receive improved sequential ranking, whereas the ASHTO model recorded two assets with the same measurement. The additional functionality of having the ability to further define the degree of criticality factor evaluation that was expressed in the weighted study, resulted in an improvement to the method to rank for criticality. A significant variance occurred between median and mean values when determining ranking and a skewed result was reported for both the weighted and unweighted data. Although both weighted an unweighted sets recorded major stations and rolling stock with the highest criticality, drivers, and passengers with lowest criticality, stabling
Describing asset criticality as an aid for security risk management

facilities ranked higher in the weighted data, whereas minor stations and corporate offices were relegated to lesser criticality significance from the unweighted to the weighted data set. Generally, the results indicate heightened criticality for physical infrastructure in preference to human assets, which were uniformly demoted to the least criticality across both the weighted and unweighted median and mean values. This result presents a contradiction, in that industry security bodies promote employees as amongst the most valuable assets a business has (American Society of Industrial Security). In support, Joseph (2006) suggests that critical infrastructure operators may not yet be capable of identifying the key assets that pose the greatest opportunity for loss of life, and that they are still grappling with how to assess criticality so they will know exactly what assets to protect and can determine the appropriate level of protection.
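The factor value sums behind Tables 1 and 2 are weighted sums of applicability judgements. A minimal sketch in Python; the numeric applicability scale (1.0 / 0.75 / 0.5 / 0.0) is our illustrative assumption, since the study reports only the four verbal levels, and the function names are ours:

```python
# Applicability scale for the weighted model. The study records four
# verbal levels (high applicability, applicability, low applicability,
# no applicability); the numeric values here are an illustrative
# assumption, not the scale used by the researchers.
APPLICABILITY = {"high": 1.0, "applicable": 0.75, "low": 0.5, "none": 0.0}

def factor_value_sum(weights, ratings):
    """Weighted factor value sum (FVS) for one asset.

    weights: {criticality factor: weighting determination value}
    ratings: {criticality factor: applicability level for this asset}
    """
    return sum(w * APPLICABILITY[ratings[f]] for f, w in weights.items())

def rank_assets(weights, assets):
    """Order assets by descending FVS, as in Tables 1 and 2."""
    scored = {name: factor_value_sum(weights, r) for name, r in assets.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

With sixteen factors and eight assets this reproduces the shape of Table 1; treating every non-zero applicability level as 1.0 collapses the model to the binary AASHTO form behind Table 2.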

Study outcomes
This section reviews the study investigation outcomes and discusses the findings of the research undertaken. These centred on two questions put to respondents: how effectively they recognised the importance of the asset criticality factors used to later determine the criticality of a particular asset, and, when they applied the model, how well they defined that criticality.

Recognising asset criticality factor weighting


Empirical testing of the AASHTO model, designed to assess asset criticality as part of its methodology for identifying and describing highway asset vulnerability, utilised median calculation, data spread width and consistency testing, and statistical analysis to investigate the association and strength of relationships between variables. Generally, the AASHTO model, which requires consensual risk assessment participants to decide the criticality factor weighting determination, was found to be reliable, and the study results displayed good internal consistency. The investigation demonstrated a relationship between each of the three participant industry groups that was replicated in the analysis of the second part of the study. Median measurement of criticality factor importance showed that respondents were able to consistently recognise the significance of the criticality factor weighting determination value in a balanced way. This suggests that the model achieved a successful outcome in defining the significance of the critical factor as a precursor to applying the weighting within the asset criticality evaluation to decide the criticality of an individual asset.

Recognising asset criticality importance


The study utilised frequency measurement of the median, standard deviation to define the central tendency cluster, and a correlation coefficient to understand the direction and strength of any relationship between study variables. This permitted an evaluation, and a metric, to be developed around the examination of how research respondents applied subjective evaluation to assimilate the degree of criticality factor applied to the asset, arriving at a measure that represented their judgement of the asset's importance to the organisation. The outcomes suggest that respondent groups had a similar level of appreciation of the assets' criticality, particularly for major stations, rolling stock and minor stations, and for drivers and passengers. The study also illustrated that all of the respondent groups placed infrastructure assets at the most critical point on the criticality spectrum and positioned human assets at the lowest ebb of the spectrum.

Methodology asset ranking effectiveness


Research participants evaluated asset criticality by utilising a modified AASHTO model that required an additional step in the evaluation process: deciding how applicable the criticality factor was when applied to a particular asset. This created additional granulation over the original AASHTO model, which, being binary in function, restricted evaluators to deciding only whether the criticality factor applied or not. The revised methodology assisted the assessment process in determining distinctly different asset criticality rankings for the middle asset range. The investigation revealed that ranking the assets by sum mean created a different prioritisation of the asset criticality hierarchy from that arrived at by utilising the median. As Cohen et al. (2003) note, sample size has a direct influence on the degree of skew, and the AASHTO model's utilisation of the mean may cause a misrepresentation of the asset ranking and the resultant risk treatment prioritisation.

Conclusion
The research project examined how assessing asset criticality, as an additional step in the risk management process, could be applied practically at a local operator level, and how well managers performed at using the method to assess the factors determining the criticality of an asset's function. Results revealed that the criticality factors of intermodal attractiveness, employee or passenger casualty opportunity and risk of public casualty scored the highest total weightings, and that employee or passenger casualty opportunity and risk of public casualty retained consistency across both studies. Critical factor variables in the middle range for both the pilot and main study reported little variance, displaying a balanced measurement that attached higher respondent criticality to factors held in corporate and stakeholder domains as
opposed to public socio-economic factors or deterrence aspects. A lower significance rating was given to preventative measures, and a higher criticality scoring to containment and recovery aspects. Of the three groups sampled, the consultant group presented criticality factor weighting values with a slightly higher mean than the private respondents, establishing a relationship between the private and the public industry respondents, who recorded opposing evaluation views. Further area-narrowing activities illustrated that public respondents were less willing to add weighting, creating less significant criticality factors; this influenced the later ranking of assets. To assess effectiveness in rating and ranking assets, respondents applied the previously determined sixteen criticality factor values to a sample of eight assets, and data was recorded by asking participants to decide whether each criticality factor had high applicability, applicability, low applicability or no applicability. To permit a comparison, this data was then re-calibrated by removing the applicability weighting to align with the AASHTO format. When analysed, the means for the private, public and consultant groups followed the trend established by the criticality factor weighting determination data analysis: the public sector continued to under-evaluate criticality. The trend maintained a medium positive relationship between the private and public sector weighted data (rho = 0.484) and a small positive relationship between the consultant and public sector groups (rho = 0.037). In the AASHTO-formatted data analysis, the sum means for the industry groups generally increased in strength, displaying an expansion of the relationship between the private and public groups (rho = 0.484) and an increase for the public and consultant groups (rho = 0.145).
A decrease for the private and consultant sectors (rho = 0.164) was recorded, a notable difference from the strength and direction the groups achieved whilst deciding the criticality factor. This suggests that the weighted data better aligned with the preliminary outcomes recorded from the criticality factor weighting determination analysis. Overall, the individual respondents' capability and performance at identifying asset criticality was maintained whilst utilising the weighted model. The analysis illustrated that although there was a change of polarity in the ranking of the three groups, with the public sector displacing the consultant group for top ranking in the AASHTO model, both the weighted survey and the re-calibrated unweighted data displayed a reduced disparity between the two groups. The AASHTO-formatted data that ranked asset criticality lacked granulation when compared to the weighted research model, and the revised model expressed distinctly differing asset ranking sets for the middle range of the sample assets. The AASHTO ranking of assets by sum mean created a different prioritisation of the asset criticality hierarchy from that produced by the median. In both studies, major stations scored the highest criticality and passengers the least. The other human asset in the group, drivers, fared similarly, coming a close second to passengers, except for the weighted mean data, which moved this asset further up the quartile. In the second and third quartiles, stabling facilities, the control centre, and
corporate offices were positioned reasonably consistently across the weighted and unweighted median and mean data sets.
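The rho values quoted in this conclusion are Spearman rank correlations between sector scores. For reference, the statistic can be computed from scratch as follows; the ranking helper uses average ranks for ties, and any input lists are illustrative:

```python
def average_ranks(xs):
    """1-based ranks of xs; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):          # positions i..j share one rank
            r[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```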

References
Cohen, L., Manion, L., & Morrison, K. 2003. Research methods in education, 5th ed. London and New York: RoutledgeFalmer.
Drew, C.J. 1980. Introduction to designing and conducting research, 2nd ed. Missouri: C.V. Mosby Co.
Jenkins, B.M., & Gersten, L.N. 2001. Protecting surface transportation against terrorism and serious crime: continuing research on best security practices. Mineta International Institute, San Jose, CA 95192-0219.
Joseph, A. 2006. Critical business elements and key assets. Security, Aug 2006, 43(8), ABI/INFORM Global, p. 40.
Lachman, R. 1985. Public and private sector differences: CEOs' perceptions of their role environments. The Academy of Management Journal, 28(3), Sep. 1985, pp. 671-680. Stable URL: http://links.jstor.org/sici?sici=00014273%28198509%2928%3A3%3C671%3APAPSDC%3E2.0.CO%3B2-M
Lomax, P. 2002. Action research. Chapter 8 in M. Coleman and A.R.J. Briggs (eds), Research methods in educational leadership and management. London: Paul Chapman Publishing.
Rabkin, M., Brodesky, R., Ford, F., Haines, M., Karp, J., Lovejoy, K., Regan, T., Sharpe, L., & Zirker, M. 2004. Transit security design considerations. U.S. Department of Transportation, Federal Transit Administration, Washington, DC 20590.
Rainey, H.G., Backoff, R.W., & Levine, C.H. 1976. Comparing public and private organizations. Public Administration Review, 36(2), Mar.-Apr. 1976, pp. 233-244. Stable URL: http://links.jstor.org/sici?sici=00333352%28197603%2F04%2936%3A2%3C233%3ACPAPO%3E2.0.CO%3B2-J
Science Applications International Corporation (SAIC) / American Association of State Highway and Transportation Officials (AASHTO). 2003. Guide to highway vulnerability assessment for critical asset identification and protection. Transport Policy and Analysis Centre, Vienna, VA.
Stovall, M.E., & Turner, D.S. 2004. Methodology for developing a prioritized list of critical and vulnerable local government highway infrastructure. University Transportation Center for Alabama, PO Box 870205, Tuscaloosa, Alabama 35487-0205.


12
Optimized design of a simply supported one-way RC slab against airblast loads
C. Wu and D.J. Oehlers
School of Civil and Environmental Engineering, The University of Adelaide

W. Sun
Department of Civil Engineering, Huaiyin Institute of Technology, China

M. Rebentrost
VSL Australia Pty Ltd, Australia
Abstract
Current guidelines such as TM5 and ASCE use a trial-and-error procedure to design RC slabs subjected to airblast load: when the maximum response of the assumed slab is less than the allowable deflection, the selected section size and reinforcement ratio are considered appropriate. Although the trial-and-error procedure is easy to use, it may not result in a design having the maximum capacity to resist airblast loads. As slabs reinforced with steel bars at different reinforcement ratios absorb energy differently, the optimized selection of member size and reinforcing steel is based on the maximum energy absorption capacity, which can be found by calculating the area under the resistance-deflection curve of the designed RC slab. In this paper, a layered analysis model, which allows for the varying strain rates experienced over the cross-section, is used to determine the resistance-deflection curves of RC slabs with different reinforcement ratios. The relationship between the optimal reinforcement ratio and a given geometric size for a simply supported RC slab is derived. The derived relationship is useful in facilitating a design with maximum capacity to resist airblast loads.

Biography
Dr. Wu is currently a lecturer in the School of Civil and Environmental Engineering, the University of Adelaide. He obtained his PhD from the School of Civil and Environmental Engineering, Nanyang Technological University, Singapore, in 2002.
His research interests include structural response to blast-induced excitations and airblast pressures, retrofitting of structures against blast loading, rock blasting, and traffic barrier response to vehicle impact. He is author or co-author of more than 60 refereed international journal and conference papers.

1. Introduction
Accidental detonations at storage and supply facilities for chemicals, petroleum and ammunition endanger the surrounding civilian buildings. Due to the increasing need for blast safety, more accurate design techniques are essential. Current guidelines such as TM5 (1990) and ASCE (1997) use a trial-and-error procedure, based on the concept of displacement-controlled theory, to design one-way reinforced concrete members against airblast loads; that is, if the maximum response of a member is less than the allowable deflection, it is considered a proper design. Although easy to implement, this design procedure does not necessarily produce a member with maximum blast resistant capacity. A single-degree-of-freedom (SDOF) equivalent system is currently recommended by guidelines such as TM5 and ASCE for dynamic analyses. In the SDOF system, a member is simplified to a lumped mass at midspan, and it is assumed that a plastic hinge forms at the midspan of a simply supported one-way member, resulting in a bilinear resistance-deflection curve as shown in Figure 1. If the maximum displacement of a member under airblast loads is less than the ultimate deflection yu, the design is considered safe. The area under the resistance-deflection curve in Figure 1 is the total energy absorption capacity of the member. For a member of a given size, the energy absorption capacity varies with reinforcement ratio, and thus so does its blast resistant capacity. The reinforcement ratio at which a member achieves its maximum blast resistant capacity is called the optimal reinforcement ratio. For a RC member to resist blast loads effectively, it should be designed at the optimal reinforcement ratio, that is, to maximize the area under its resistance-deflection curve.
To determine the resistance-deflection curve of a member, dynamic increase factors (DIF) are recommended by ASCE and TM5 to enhance the material strengths of concrete and steel when calculating the resistance capacities of a member, and the DIF is assumed to be constant for far-range or close-in airblast loads. Since the loading rate profile over a cross-section is not constant, the DIF value is also not constant over the cross-section. Thus in this study, a layered analysis model that allows for strain rates varying with time as well as along the depth of the member is used to calculate the resistance capacities of a simply supported RC slab. It is then used in a parametric study, varying the spans and depths of RC slabs, to calculate the optimal reinforcement ratio that produces maximum blast resistant capacity for different sizes of RC slab subjected to airblast loads. The relationship between optimal reinforcement ratio and geometric size is then derived. It should be noted that only flexural failure is considered in this study.

[Figure: resistance (kN) versus deflection (mm); the curve rises with elastic stiffness K to the yield point (yy, Ry), then with post-yield stiffness Kp to the ultimate point (yu, Ru).]

Figure 1 Bilinear resistance-deflection curve

2. Resistance-deflection function
Under the action of external loads, a structural member deforms and internal forces are set up. The sum of these internal forces, tending to restore the member to its unloaded static position, is defined as the resistance. The resistance function of a member can be defined as the resisting capacity the member provides at a given deflection, as shown in Figure 1. Initially the member behaves elastically until a plastic hinge forms at its midspan and the yield resistance Ry (calculated from the moment capacity) is reached. As the section continues to deflect, it will keep on resisting load until the ultimate resistance Ru is reached. For a SDOF system undergoing elastic deformation, the resistance function has a stiffness K until the tensile reinforcement yields at Ry, which is given by (TM5 1990)

    R_y = 8 M_y / L    (1)

where My is the yield moment capacity calculated using the layered analysis model and L is the span of the member. The corresponding deflection is given by

    y_y = R_y / K    (2)

where

    K = 384 E I_a / (5 L^3)    (3)

in which E is Young's modulus and Ia is the average moment of inertia, equal to (Ig + Ic)/2, where Ig is the moment of inertia of the gross concrete cross-section (neglecting all reinforcing steel) and Ic the moment of inertia of the cracked concrete section, which can be determined from the layered analysis model. After the tensile reinforcement has yielded, the resistance function of the member changes in slope, due to the varying strain rate effect over the cross-section, and reaches the ultimate resistant capacity of the
member (Mu), which can also be obtained from the layered analysis model. The ultimate resistance is given by

    R_u = 8 M_u / L    (4)

The ultimate deflection yu of the member in Figure 1 is calculated by applying the curvature at ultimate failure over the length of a plastic hinge, over which all the rotation in the member is assumed to take place, as shown in Figure 2. The rest of the length of the member is assumed to remain rigid. Assuming the plastic hinge length Lh is 0.75 times the depth of the member (Warner et al. 1998), the rotation in the member can be calculated by integrating the curvature over the length of the hinge:

    θ = ∫₀^Lh φ_ult dx    (5)

where φ_ult is the curvature of the section at ultimate capacity, which will be discussed later. This integration leads to:

    θ = φ_ult Lh    (6)

The ultimate deflection in Figure 2 is then given by

    y_u = θ L / 2    (7)
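Collected together, Eqs (1)-(7) give the whole bilinear curve from a handful of section properties. A minimal sketch in Python, assuming consistent units (kN, mm) and that My, Mu and the ultimate curvature have already been obtained from the layered analysis; the function name and any example values are ours, not the paper's:

```python
def bilinear_resistance(M_y, M_u, E, I_g, I_c, L, d, phi_ult):
    """Bilinear resistance-deflection parameters, Eqs (1)-(7).

    M_y, M_u : yield / ultimate moment capacities (kN.mm)
    E        : Young's modulus (kN/mm^2)
    I_g, I_c : gross / cracked moments of inertia (mm^4)
    L, d     : span and depth (mm)
    phi_ult  : curvature at ultimate capacity (1/mm)
    """
    I_a = (I_g + I_c) / 2.0                  # average moment of inertia
    K = 384.0 * E * I_a / (5.0 * L ** 3)     # elastic stiffness, Eq (3)
    R_y = 8.0 * M_y / L                      # yield resistance, Eq (1)
    y_y = R_y / K                            # yield deflection, Eq (2)
    R_u = 8.0 * M_u / L                      # ultimate resistance, Eq (4)
    L_h = 0.75 * d                           # plastic hinge length (Warner et al. 1998)
    theta = phi_ult * L_h                    # hinge rotation, Eq (6)
    y_u = theta * L / 2.0                    # ultimate deflection, Eq (7)
    return R_y, y_y, R_u, y_u
```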

Figure 2 Plastic hinge analysis of simply supported member

Due to strain rate variation over a cross-section, traditional guidelines are not appropriate for determining the yield and ultimate moment capacities. Therefore, a layered analysis technique is chosen to accurately calculate the moment-curvature relationships of the section at the yield and ultimate capacities My and Mu in Eqs (1) and (4). The layered analysis model is demonstrated in Figure 3. As shown in Figure 3(a), the cross-section is sliced into numerous layers and the stress and strain are assumed to be constant within each layer. Since there is currently limited knowledge of the strain and strain rate profiles of a RC cross-section subjected to dynamic loading, it is assumed that the maximum strain and strain rate profiles of the cross-section are linear, as shown in Figures 3(b) and 3(c). Using the LS-DYNA program, the maximum strain rate profile of the specimens under airblast
loads was estimated as ε̇ = 66.2 p_max^1.18, where p_max is the peak pressure for a given airblast load and ε̇ is the maximum strain rate over a cross-section of the slab (Day et al. 2006). Since there is an increase in the load carrying capacity of a section consisting of concrete and steel when subjected to dynamic loading, the dynamic increase factors (DIF) for concrete and steel determined by Li & Meng (2003) and Malvar & Crawford (1998) are used in this study. The corresponding force and moment in each layer, as shown in Figure 3(d), can then be calculated.

Figure 3 Layered analysis of a cross-section: (a) layered section; (b) strain profile; (c) strain rate profile; (d) layer forces

To determine the yield capacity of the section in Figure 4(a), the strain in the tensile steel is first set at its yield strain and an estimate of the neutral axis depth ku·d is made, as in Figure 4(b). From the strain diagram with a DIF value, a stress profile as in Figure 4(c) can be derived, from which the forces acting at all layers are calculated, as in Figure 4(d). If the sum of these forces is equal to zero, the section is balanced and the moment capacity and curvature can be calculated. If the sum of the forces is not equal to zero, a new value of the neutral axis depth is estimated and the process continues until a balanced solution is found. When calculating the ultimate capacity of the section, two possible failure mechanisms can govern: (1) crushing of the concrete on the compression face at a strain εc of about 0.003; (2) failure of the steel at a fracture strain εfr, which is assumed to be 0.06 in this study. A solution is found by pivoting about each of these failure strains in turn to find the pivotal failure strain and its associated failure strain profile, which lies within the two remaining failure strains.
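The balance iteration just described can be sketched in code. The material laws below are deliberately simplified placeholders (elastic concrete capped at f'c with no tensile strength, elastic-perfectly-plastic steel), no DIF enhancement is applied, and the search fixes the top-fibre strain rather than the steel strain, so this illustrates only the layer summation and neutral-axis bisection, not the authors' full model:

```python
def layer_forces(ku_d, d, b, eps_top, n_layers, E_c, f_c, steel):
    """Net axial force and moment (about mid-depth) for a trial neutral
    axis depth ku_d, with a linear strain profile from eps_top at the top
    fibre to zero at ku_d. Compression positive; units N, mm, MPa."""
    N, M, t = 0.0, 0.0, d / n_layers
    for i in range(n_layers):
        z = (i + 0.5) * t                               # layer centroid depth
        eps = eps_top * (1.0 - z / ku_d)
        sig = min(E_c * eps, f_c) if eps > 0.0 else 0.0  # cracked in tension
        F = sig * b * t
        N += F
        M += F * (d / 2.0 - z)
    for z, A, E_s, f_y in steel:                        # reinforcement entries
        eps = eps_top * (1.0 - z / ku_d)
        sig = max(-f_y, min(f_y, E_s * eps))            # yield cap, both signs
        F = sig * A
        N += F
        M += F * (d / 2.0 - z)
    return N, M

def balance_neutral_axis(d, b, eps_top, n_layers, E_c, f_c, steel, tol=1e-3):
    """Bisect ku_d until the section is balanced (zero axial force)."""
    lo, hi = 1e-6, d
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        N, M = layer_forces(mid, d, b, eps_top, n_layers, E_c, f_c, steel)
        if abs(N) < tol:
            break
        if N > 0.0:          # too much compression: raise the neutral axis
            hi = mid
        else:
            lo = mid
    return mid, M
```

Repeating the balance with the appropriate fixed strain (steel yield, or the pivotal failure strain of 0.003 or 0.06) yields My and Mu in the manner described above.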

Figure 4 Flexural analysis of yield capacity: (a) section; (b) strain profile with the tensile steel strain set at yield; (c) stress profile; (d) layer forces

3. Energy absorption
When calculating a member's ability to resist airblast load, the amount of energy the member can absorb should be greater than the impulse applied to it. Considering the load-time and resistance-time functions shown in Figure 5, it can be shown that the summation of the areas under the load-time curves up to any time ta (area A positive and area B negative), divided by the corresponding effective mass Me, equals the instantaneous velocity at that time, based on Newton's equation of motion (TM5 1990):

    v_a = (1 / M_e) ∫₀^ta (P - R) dt    (8)

For a member to be in equilibrium at its maximum deflection, its impulse capacity must equal the impulse of the applied airblast loads. Assuming the airblast impulse I is applied instantaneously at time t = 0, integrating va in Eq. (8) gives the maximum deflection as

    X_m = I t_m / M_e - R_y t_y^2 / (2 M_e) - (R_y + R_u)(t_m - t_y)^2 / (2 M_e)    (9)

where ty is the time at which the yield deflection Xy is reached and tm is the time at which the maximum deflection Xm occurs. Recognizing that the instantaneous velocity in Eq. (8) equals zero at tm gives

    I / M_e - R_y t_y / (2 M_e) - (R_y + R_u)(t_m - t_y) / (2 M_e) = 0    (10)

Then, the general response equation becomes
    I^2 / (2 M_e) = R_y X_y / 2 + ((R_y + R_u) / 2)(X_m - X_y)    (11)

where Me is the average of the effective elastic and plastic masses. Eq. (11) shows that the kinetic energy (left side of the equation) delivered to the system equals the strain energy (right side) absorbed by the member in deforming,
which is given by the area under the dynamically enhanced resistance-deflection function shown in Figure 1. This concept is extremely useful in making comparisons of members' capacities to resist airblast loads, which will be discussed later. The effective mass Me of the SDOF system in the elastic and plastic response zones can be calculated from

    M_e = K_LM M    (12)

where

    K_LM = K_M / K_L    (13)

in which M is the total mass of the member, and KM and KL are the mass and load factors for the SDOF system. Table 1 lists the KM and KL values for a simply supported structure under a uniformly distributed load (Biggs 1964). The average value of the KLM factor for the two response regions is 0.72, indicating that 72% of the total mass of the member is assumed to be lumped at the midspan of the simply supported member. It should be noted that in this study a simple triangular blast load history, as shown in Figure 5, is used, and the duration of the airblast load t0 is assumed to be less than one-third of the natural period of the slab, so that the slab responds to the impulse alone rather than to both the pressure and the impulse. The natural period tn of the member is given by (ASCE 1997)

    t_n = 2π √(M_e / K)    (14)

in which K can be determined by Eq. (3).
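Equations (12)-(14), together with the impulse capacity implied by Eq. (11) when the full energy absorption En is mobilised (I = √(2 Me En), used again in section 5), fit in a small helper. A sketch assuming consistent units and the average load-mass factor KLM = 0.72 quoted above; the function name is ours:

```python
import math

def sdof_impulse_check(M, K, E_n, t0, K_LM=0.72):
    """SDOF bookkeeping for an impulse-governed member.

    M   : total member mass        K : elastic stiffness, Eq (3)
    E_n : energy absorption capacity (area under the resistance curve)
    t0  : blast load duration
    """
    M_e = K_LM * M                            # effective mass, Eq (12)
    t_n = 2.0 * math.pi * math.sqrt(M_e / K)  # natural period, Eq (14)
    impulse_governed = t0 < t_n / 3.0         # impulse-only assumption holds?
    I_cap = math.sqrt(2.0 * M_e * E_n)        # impulse capacity from Eq (11)
    return M_e, t_n, impulse_governed, I_cap
```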

Figure 5 Load-time and resistance-time curves for members responding to impulse

                          Elastic region   Plastic region
Mass factor, KM                0.50             0.33
Load factor, KL                0.64             0.50
Load-mass factor, KLM          0.78             0.66

Table 1 SDOF transformation factors

4. Optimal reinforcement ratio


The importance of resistance-deflection curves is that the total energy absorption capacity of a member can be found by calculating the area under the curve, and members with different reinforcement ratios absorb energy differently. Consider, for example, a RC slab with a span of 2000 mm, width of 1000 mm, and uniform depth of 100 mm, reinforced on both the tension and compression faces. The concrete strength is 30 MPa with a Young's modulus of 27 GPa, and the reinforcement yield strength is 300 MPa with a Young's modulus of 200 GPa. The cover thickness is 10 mm on both faces. Figure 6 compares the resistance-deflection curves of the slab at different reinforcement ratios (tension face only). As can be seen, there is a significant increase in the resistance capacity of the slab as the reinforcement ratio increases; however, this comes at the cost of a loss of ultimate deflection capacity, that is, the ductility of the slab. A comparison of the total energy absorption capacity of the slab at different reinforcement ratios is made in Figure 7. It can be seen that the area under the resistance-deflection curves increases until it reaches the maximum energy absorption value, at which the corresponding reinforcement ratio is defined as the optimal reinforcement ratio. The slab with the optimal reinforcement ratio therefore has the maximum blast resistant capacity. Beyond this point, the total energy absorption capacity of the slab actually decreases even though its reinforcement ratio increases, because the gain in energy absorption from the increase in strength of the slab is negated by the loss of its ductility. Since a slab with the optimal reinforcement ratio has the maximum blast resistant capacity, it should be designed at its optimal reinforcement ratio so that it can resist airblast loads effectively. This concept is extremely important in designing a slab appropriately.
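The area under the bilinear curve of Figure 1 has a simple closed form (elastic triangle plus post-yield trapezoid), so comparing reinforcement ratios is cheap once each ratio's curve parameters are known. A minimal sketch, with the candidate tuples assumed to come from the layered analysis; any numbers used with it are illustrative:

```python
def energy_absorption(R_y, y_y, R_u, y_u):
    """Area under the bilinear resistance-deflection curve of Figure 1:
    elastic triangle plus the trapezoid from yield to ultimate."""
    return 0.5 * R_y * y_y + 0.5 * (R_y + R_u) * (y_u - y_y)

def optimal_ratio(candidates):
    """Reinforcement ratio with maximum energy absorption.

    candidates: iterable of (rho, R_y, y_y, R_u, y_u) tuples, one per
    trial reinforcement ratio (values would come from the layered model).
    """
    return max(candidates, key=lambda c: energy_absorption(*c[1:]))[0]
```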

Figure 6 Resistance-deflection curves of the slab at different reinforcement ratios (0.28% to 2.54%)


Figure 7 Energy absorption capacity of the slab at different reinforcement ratios (the peak of the curve defines the optimal reinforcement ratio)

5. Parametric Study
With the layered analysis model and the above concept of maximum blast resistant capacity, a parametric study into the effect of slab span and depth on the optimal reinforcement ratio is undertaken. The same material properties as in section 4 are used. It should be noted that the ultimate deflection in every case in the parametric study should satisfy the two-degree support rotation limit (ASCE 1997). Figure 8 shows the optimal reinforcement ratios for slabs with different spans and depths when the reinforcement ratio on the compressive face is set to 0.5 of that on the tensile face. As shown, when the depth of the slab is less than 250 mm, the optimal reinforcement ratio decreases with increasing span. However, when the depth of the slab is more than 250 mm, the optimal reinforcement ratio is almost the same regardless of span, and meanwhile the optimal reinforcement
ratio will decrease with increasing depth. Based on Figure 8, it is easy to determine the optimal reinforcement ratio if the span and depth of a slab are given.

Figure 8 Optimal reinforcement ratios of slabs with different span and depth (spans of 2000 mm to 5000 mm)

Figure 9 shows the maximum energy absorption of slabs with different spans and depths, corresponding to the optimal reinforcement ratios. As shown, the maximum energy absorption of a slab is a function only of the slab depth, regardless of its span. This is because, for the same depth, the loss in energy absorption due to a decrease in strength with increasing span is compensated by the increase in ultimate deflection. As shown in Figure 10, the total energy absorption of a long-span slab is almost the same as that of a short-span slab when the depth of the slab is 300 mm.
Figure 9 Maximum energy absorption of slabs with different span and depth

With the energy absorption of a slab from Figure 9, it is easy to determine the maximum blast load impulse that can be applied to the slab based on Eq. (11), that is,
    I = √(2 K_LM M E_n)    (15)

where En is the energy absorption, which can be obtained from Figure 9. The unit impulse capacity of the slab is given by

    i = I / A    (16)

where A is the area of the slab loaded by the impulse.

Figure 10 Resistance-deflection curves of slabs with a depth of 300 mm (spans of 3000 mm to 5000 mm)

The above derived curves are very convenient for use in designing a slab against airblast loads. For example, suppose it is required to design a simply supported slab with a span of 3000 mm subjected to an airblast load with a peak pressure of 0.2 MPa and a duration of 1 ms. The design steps are: (1) Design loads: determine the blast load impulse I for a unit width of slab; (2) Trial size: assume a depth of slab d and calculate En based on Eq. (15); if the calculated En is less than the energy absorption at depth d shown in Figure 9, the assumed depth d is adequate; if the calculated En is larger than the energy absorption at depth d in Figure 9, a new value of the slab depth is estimated and the process continues until a proper d is found; (3) Select the optimal reinforcement ratio: with the depth and span of the slab, determine its optimal reinforcement ratio using Figure 8. Using this procedure, it is very convenient to design a slab under blast loads against flexural failure. It should be noted that the derived curves are only suitable for the material properties given in section 4.
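The three-step procedure can be sketched as a loop over candidate depths. The energy-versus-depth table stands in for Figure 9; its values, the unit system, and the helper name are illustrative assumptions:

```python
def design_depth(i_unit, L, b, rho_m, energy_table, K_LM=0.72):
    """Step (2): find the smallest tabulated depth whose energy capacity
    meets the demand E_n = I^2 / (2 K_LM M) from Eq. (15) rearranged.

    i_unit       : applied impulse per unit loaded area (cf. Eq 16)
    L, b         : span and width; rho_m: mass per unit volume
    energy_table : {depth: maximum energy absorption}, i.e. Figure 9 data
    """
    I = i_unit * L * b                        # impulse on the loaded area
    for d in sorted(energy_table):
        M = rho_m * L * b * d                 # total member mass
        E_demand = I ** 2 / (2.0 * K_LM * M)  # energy the slab must absorb
        if energy_table[d] >= E_demand:
            return d
    return None                               # no tabulated depth suffices
```

Step (3) would then read the optimal reinforcement ratio for the chosen depth and span off Figure 8.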

6. Conclusions
Combined with plastic hinge theory, a layered analysis model that allows for varying strain rates at different levels of the cross-section is used to calculate the bilinear resistance–deflection curves of slabs with different reinforcement ratios. Based on the concept that the area under the resistance–deflection curve is the total energy absorption capacity of a slab, the optimal reinforcement ratio is determined as the one that maximises this energy absorption capacity. The relationship
between the optimal reinforcement ratio and the slab size is then derived graphically by parametric studies. The derived curves are very convenient for use in designing a slab with maximum capacity to resist airblast loads.



13
Intelligent Evacuation Models for Fire and Chemical Hazard
D.J. Cornforth, H.A. Abbass and H. Larkin
Defence and Security Applications Research Centre (DSARC), University of New South Wales, Australia
Abstract
Evacuation of a building is a complex process that involves many variables. Computer models allow scenarios to be tested, but questions remain concerning how such models should be implemented. We examine some of these questions, for example modelling scale and resolution, the behavioural model of people, path finding and hazard avoidance, and synchrony of model states. We conclude that multi-agent models can support not only the interaction of people with each other but also the interaction of people with static and dynamic objects. In particular, there is benefit in integrating the evacuation simulation with the cause of the evacuation: in the case of fire, for example, the fire itself can be modelled as a dynamic object. Experimental data indicate the importance of the issue of synchrony in state updates, often overlooked in simulation models. Such attention to detail has the potential to provide greater confidence in such models.

Biographies
David Cornforth is Senior Lecturer at the University of New South Wales. He has been an educator for twenty years, including teaching a variety of discipline areas at four different universities. He has worked as a programmer and data analyst in the automotive, tourist, financial, health, water supply and electricity supply industries. He has been involved in research in Complex Systems for the last seven years, and has published a large number of research articles on this and related topics. Hussein Abbass is the Director of the Defence and Security Applications Research Centre (DSARC), at the University of New South Wales at the Australian Defence Force Academy (UNSW@ADFA) in Canberra. A/Prof. Abbass has 170+ fully refereed research papers in complex adaptive systems, artificial intelligence, and modelling
and simulation. He has extensive expertise in defence and security problems with an emphasis on modelling, simulation and data mining. Henry Larkin completed his PhD in 2005 in the area of Predicting Connectivity in Wireless Ad Hoc Networks, and has since taken up a position of Lecturer at the University of New South Wales. He has been an educator at the tertiary level for over 9 years. He is an active researcher in the area of multi agent systems and has numerous publications in the area of network modelling.

Introduction
On a daily basis, we are part of a crowd: at the bus stop, in the bus, in a stadium, theatre or cinema, in a school, university or workplace. Being part of a crowd is almost unavoidable. In these situations, our safety is not just a result of our own behaviour, but depends substantially on the other individuals in the crowd and on the surrounding environment. History has taught us that a large crowd can become a dangerous weapon. The many examples include the high casualty rates among pilgrims at Mecca during the Hajj pilgrimage, and the traffic jams that obstructed the evacuation of New Orleans during Hurricane Katrina.

Modelling crowd behaviour is a significant task that can contribute to many problem domains including defence, security, and share markets, to name a few. Understanding the dynamics of crowds can help in designing better evacuation systems for emergency situations and in designing better rescue plans. These areas have received varying degrees of attention in the research literature. More recently, understanding crowd behaviour has become more vital in security applications, such as handling mass gatherings, identifying the potential risks associated with a critical national event, and mitigating those risks.

Our motivation in this work is to investigate how collective behaviour arises from the interaction of people with each other and with their environment. This perspective is also known as the complex systems view. An added complication occurs when the environment is not assumed to be static. This assumption is rapidly seen to be invalid when the cause of the evacuation is considered. Whether the evacuation is the result of fire, or of chemical or biological factors, the environment changes rapidly during the evacuation and is also affected by the choices made by people attempting egress.
The current work has applications in the evaluation of evacuation scenarios and building planning, especially where the cause of the evacuation is an integral part of the model.

Existing models
Crowds have traditionally been modelled as passive particles. In such macroscopic models, movement is derived as a mean quantity. Such an approach is common, for example Fang et al. (2003), but these authors themselves admit that human behaviour under motion is different from that of physical particles. In contrast, treating the individuals as agents in a multi-agent system has brought great rewards. This alternative approach builds on the emerging discipline of Complex Systems,
which postulates that the collective behaviour of many interacting systems is different from that which can be observed in the individual components. In this modelling scenario, individuals in the crowd are assumed to have goals, and preferences relating to the resolution of those goals. Individuals also experience repulsive forces as they attempt to maintain a comfort space between themselves and their peers. Such models have been able to reproduce commonly observed patterns, such as the spontaneous creation of separate lanes of pedestrians moving in opposite directions (Helbing et al., 2001b). In further work, such models of crowd behaviour have shown great merit in being able to predict collective behaviour, and have implications for the design of spaces where people congregate (Helbing, Farkas and Vicsek, 2000).

While the models of Helbing and co-workers use continuous space, an alternative modelling scenario uses a discrete environment, where space is divided using a grid of non-overlapping cells (Kirchner et al., 2003), and only a single person is allowed to occupy a grid cell at any one point in time. More recently, the model of Helbing et al. has been criticised because it assumes that all persons are homogeneous, while that of Kirchner has been criticised for not incorporating forces (Henein and White, 2007). These authors propose a hybrid model that retains the spatial discretisation and force model while allowing individual decision-making capabilities.

A number of commercially available software packages offer facilities relevant to this area of investigation. For example, CFAST (Consolidated Model of Fire Growth and Smoke Transport) provides a high-fidelity model for the simulation of fires and smoke, but does not provide evacuation modelling (Peacock et al., 2005). FASTLite builds on CFAST, but both of these are targeted at fire safety rather than evacuation.
EvacNet is designed for evacuation planning, and requires a network model of a building (Kisko & Francis, 1986). It is necessary to manually transform a floor layout into a network model, and there is no modelling of fires or toxic hazards. This is in contrast with the model presented in this paper, which produces a real-time simulation of people attempting to leave a building. Exodus is a family of models that includes variants for air and maritime evacuation scenarios (Gwynne et al., 2005). It has a graphical interface including a virtual reality-type playback capability. It is designed as a tool for improving building design by including typical human choices during the use of public spaces such as rail stations and airport terminals. Myriad is a crowd simulation and analysis suite principally used for building design and exit route analysis. Simulex allows evacuations to be modelled through multi-storey buildings. STEPS is a micro-simulation tool using agent-based systems and 3-D virtual reality, but includes no consideration of fires, toxic hazards, etc. In addition to these, there are a number of open-source multi-agent software packages, as well as tools that can be used to write a custom simulation, offering fast prototyping.

Our model has a different emphasis to all of the above, and concentrates on integrating the evacuation simulation with the cause of the evacuation. In addition, we challenge the assumption of synchronous update. In such an approach, the simulation proceeds in discrete time steps. Between each time step, the new states for all agents are calculated and applied simultaneously. In previous work (Cornforth et al. 2005) we have shown that not only is this assumption unrealistic in modelling natural
systems, it also produces behaviour different from that produced by an assumption of asynchrony. In this work we compare different update schemes for crowd modelling, and show how the assumptions built into the model can affect the results obtained. This has implications for estimates of the number of persons injured during an emergency situation, and therefore for the design of public spaces.

Model design issues


In this paper we wish to address the following questions about how a model can be built to provide a range of behaviour and simulation options, where software agents are used to simulate people. Some of the questions are discussed below.

Scale and Resolution

The majority of existing systems are built for a particular scale and level of abstraction. The scale defines the size of the smallest unit to be modelled, while the level of abstraction defines the fidelity or resolution of the model. For example, one may model a human in a building as a particle in a flow, rather than model the behaviour of the individual. The scale in this case is the building level. The resolution of the model is defined by modelling the human as a passive particle, subject to forces such as attraction to an exit and repulsion from other people and from walls. This is a low-fidelity (high-abstraction) model. One can change the fidelity of the model by modelling the human senses, cognitive capacity, and behavioural characteristics. This increases the fidelity, requiring more data, without changing the scale. The former approach has the advantage of computational simplicity when compared to the second, but sacrifices modelling fidelity.

Our modelling approach is a multi-scale, multi-resolution approach. The user can change the scale from looking at a building to looking at a particular floor or at a city. The user can also change the resolution by requesting the simulation to ignore the psychological factors and only use the physics. The primary advantages of this multi-scale, multi-resolution approach are that it uses whatever data are available, so it can operate even when a particular piece of data is missing, and that it is flexible enough for multiple applications. We also relax the assumption that environmental data do not change during the evacuation, as this is often not the case. A good example is the evacuation of a town due to severe weather.
It is beneficial to be able to incorporate a live data feed during the model evolution.

Path finding

In our model, people are represented by software agents. These must be provided with a means to find the exit. Three options are considered. In option 1 (Cognitive Model), a person has a memory of the route they used to enter the building, and retraces their steps using this mental model. However, this presents difficulties; for example, how to choose an alternative if the known route is blocked. In option 2 (Local Search), a person searches from their current position. This is the option usually preferred in
computer games, as the environment is changing rapidly. It allows alternative routes, but does not always produce very realistic behaviour. Furthermore, it introduces a level of computational complexity that is not necessary: since certain features of the environment are static and known in advance (the layout of walls in the building), it involves too much searching. In option 3 (Global Direction), we assume that a person follows signs (sometimes called waypoints) to the nearest exit. This allows a person to choose alternative routes, and is likely to result in more natural behaviour.

Figure 1. A gradient field for path finding superimposed upon a building floor plan. The gradient field is represented by the grid structure, with each grid cell containing an integer; darker lines represent walls.

The third option is the one chosen for our simulation. It is similar to solving a maze, and requires artificial intelligence search techniques to implement in a simulation. However, the solution has to be computed only once, and can be performed offline, before the start of the simulation. Solutions to this problem are based on well-known graph theory, and many algorithms are available. The building plan is overlaid with a regular grid of cells, and each cell is assigned a number that indicates its closeness to the exit. The numbers are automatically generated at model initialisation. During the simulation, at each step, an agent moves to a higher number in a neighbouring cell.

One method of generating the gradient numbers is as follows. The cell containing an exit (grey disk at upper left in Figure 1) is assigned an arbitrary large number, in this case 200. Each neighbouring cell is assigned the same number minus 1, unless a line drawn from the current cell centre to the centre of the other cell intersects a wall. The algorithm proceeds in a recursive manner until every cell has been assigned a number. If there is more than one exit, the process begins at each exit and searching is done in parallel. People are not able to enter cells containing 0, but as these are close to walls that would not be possible anyway. In our experiments, all agents always found their way out of the building during the simulation.

In order to provide realism in the simulation, we allow both phenomena (fire and air-borne chemicals) to spread over time. A second gradient field, representing airflow through the building, is used to determine the direction that
these hazards spread. People are able to avoid the hazard and choose an alternative route to their goal (i.e. the exit). However, people may sustain injury and may be incapacitated. Adding this avoidance behaviour means that agents may have to follow a route against the gradient when their optimum path to the exit is blocked. In this case, we add waypoints to the model. These are markers that provide the direction to all exits. When an agent's path is blocked by a hazard, they switch behaviour from following the gradient to finding the nearest waypoint, and follow its directions to an alternative exit.
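The gradient-field generation described above amounts to a breadth-first flood fill from the exit cells. The sketch below is a simplified version: it uses plain 4-neighbour adjacency and marks walls directly in the grid, rather than performing the line-of-sight wall test described in the text.

```python
from collections import deque

def build_gradient(grid, exits, start_value=200):
    """Assign each walkable cell start_value minus its step distance
    from the nearest exit; wall cells ('W') keep the value 0."""
    rows, cols = len(grid), len(grid[0])
    field = [[0] * cols for _ in range(rows)]
    queue = deque()
    for r, c in exits:              # all exits are seeded in parallel
        field[r][c] = start_value
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != 'W' and field[nr][nc] == 0):
                field[nr][nc] = field[r][c] - 1
                queue.append((nr, nc))
    return field
```

During the simulation an agent then simply steps into the neighbouring cell with the highest gradient value, which leads it towards the nearest exit.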

Model details
The physics model used in the program is based on that specified in Helbing et al. (2000). This model treats people as particles subject to a number of forces. The acceleration of a person is determined by three force components:

Desire force: a person desires/wishes to move in a particular direction at a particular speed. This force comprises a person's self-driven element.

Psychological forces: a person tries to keep a certain amount of space between themselves and other people and walls.

Physical forces: when touching or colliding with other people and walls, a person is subject to physical forces (young forces and frictional forces). If the young force exerted on a person becomes too great, the person is injured.

Specifically, the acceleration of a person is given by the following equation, where the three right-hand components of Equation 1 refer to person i's desire force, forces resulting from interactions with other people, and forces resulting from interactions with walls, respectively:

m_i dv_i/dt = m_i [v_i^0(t) e_i^0(t) − v_i(t)] / τ_i + Σ_{j(≠i)} f_ij + Σ_W f_iW        (1)

where m_i = mass of person i; v_i^0(t) = desired speed of person i; e_i^0(t) = desired direction of person i; v_i(t) = actual velocity of person i at time t; and τ_i = reaction time of person i.

Equations 2 and 3 specify the psychological, young and frictional forces exerted on person i as a result of interactions with person j, and wall W, respectively:

f_ij = {A_i exp[(r_ij − d_ij)/B_i] + k g(r_ij − d_ij)} n_ij + κ g(r_ij − d_ij) Δv_ji^t t_ij        (2)

f_iW = {A_i exp[(r_i − d_iW)/B_i] + k g(r_i − d_iW)} n_iW − κ g(r_i − d_iW)(v_i · t_iW) t_iW        (3)
where A_i, B_i, k and κ are constants; r_ij = sum of the radii of persons i and j; d_ij = distance between persons i and j; n_ij = normalised vector from person j to person i; t_ij = tangential direction; and Δv_ji^t = tangential velocity difference. The function g(x) is equal to its argument x when the bodies are in contact, and zero otherwise (Helbing et al., 2000).
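As a concrete illustration of Eq. (2), the sketch below evaluates the force that person j exerts on person i in two dimensions. The coefficient values are those listed later in the Experiments section; the function and variable names are ours, and the tangential direction is taken perpendicular to n_ij.

```python
import math

A, B = 2e3, 0.08                    # psychological coefficients (N, m)
K_BODY, K_FRICTION = 1.2e5, 2.4e5   # "young" and frictional coefficients

def g(x):
    """Contact function: zero unless the two bodies actually touch."""
    return x if x > 0 else 0.0

def f_ij(pos_i, pos_j, vel_i, vel_j, r_i=0.3, r_j=0.3):
    """Force of Eq. (2) exerted on person i by person j (2-D)."""
    dx, dy = pos_i[0] - pos_j[0], pos_i[1] - pos_j[1]
    d = math.hypot(dx, dy)          # centre-to-centre distance d_ij
    n = (dx / d, dy / d)            # normalised vector from j to i
    t = (-n[1], n[0])               # tangential direction t_ij
    r = r_i + r_j                   # sum of radii r_ij
    # tangential velocity difference, (v_j - v_i) . t
    dv_t = (vel_j[0] - vel_i[0]) * t[0] + (vel_j[1] - vel_i[1]) * t[1]
    normal = A * math.exp((r - d) / B) + K_BODY * g(r - d)
    tangential = K_FRICTION * g(r - d) * dv_t
    return (normal * n[0] + tangential * t[0],
            normal * n[1] + tangential * t[1])
```

With overlapping bodies the "young" term dominates, which is how crushing injuries arise in the simulation; without contact only the exponentially decaying psychological term remains.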

Synchrony in models
Models of multi-agent systems are essentially modelling many processes that occur in parallel, but parallel does not necessarily mean synchronous. Implementations using synchronous updating divide updating into two phases. The first phase, interaction, calculates the new state of each element based on the neighbourhood and the update function. These state values are held in a temporary store. In the second phase, update, the new state values are copied back to the original elements. In contrast, asynchronous updating does not separate these two phases: changes in state are implemented immediately. We can summarise this difference as follows:
Synchronous (interaction): ∀t, ∀i ∈ N: σ̃_i(t+1) = f(σ_k(t), k ∈ K_i)

Synchronous (update): σ(t+1) = σ̃(t+1)

Asynchronous: ∀t, ∀i ∈ N: σ_i(t+1) = f(σ_k, k ∈ K_i), with each change applied immediately

where σ(t) = states of the elements at time t; σ̃(t) = temporary copy of the states used in updating; i = index of an individual element; N = total number of elements in the model; f(·) = function that calculates the new state of an element; and K_i = set of elements that affect the new state of element i, so that |K_i| ≤ N.

Models of synchronous updating use a temporary store to hold the new values of state variables offline until the calculation of all new states is complete. They also require a global clock, ensuring that all updates occur simultaneously. However, the requirement for a clock signal to be propagated to all agents in a system instantaneously is
unrealistic in the case of a group of people interacting in a space. Several authors (e.g. Thomas, 1979; Kanada, 1994; Di Paolo, 2000) have argued that asynchronous models are viable alternatives to synchronous models. Little discussion exists concerning update schemes in crowd simulation models. Schreckenberg et al. (1995, cited in Helbing 2001a) found that a synchronous update scheme resulted in a more realistic traffic simulation model than a random asynchronous scheme. Contrary to this, Klupfel (2003) argues that a cyclic or asynchronous scheme is more appropriate, as in such a scheme people can be seen as being able to avoid conflict. Kirchner et al. (2003) take the opposite stance, arguing that a synchronous scheme is necessary, as the other schemes understate the number of conflicts that would occur. In this work we use the synchronous scheme, a random asynchronous scheme, and a clocked scheme. In the latter, each agent has its own clock or timer that controls the time of its update. As the periods and phases of these clocks differ, agents update their states at different times relative to one another.
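The difference formalised above can be seen with a deliberately tiny example: one local rule ("copy your left neighbour") on a ring of four cells, run once under each scheme. The rule is invented purely to make the contrast visible.

```python
def step_synchronous(states):
    """Interaction phase into a temporary copy, then a global update."""
    temp = [states[i - 1] for i in range(len(states))]
    return temp

def step_asynchronous(states):
    """Changes are applied immediately, so elements updated later in
    the sweep read states already overwritten in this same step."""
    s = list(states)
    for i in range(len(s)):
        s[i] = s[i - 1]
    return s

initial = [0, 0, 0, 1]
print(step_synchronous(initial))   # [1, 0, 0, 0]: the 1 advances one cell
print(step_asynchronous(initial))  # [1, 1, 1, 1]: the 1 smears over the ring
```

The same rule and the same initial state give two different outcomes, which is exactly why the choice of update scheme in a crowd model is not a harmless implementation detail.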

Experiments
The purpose of the experiments is to determine whether asynchrony makes any difference to the number of people injured. We compare the number of injuries for the synchronous, random asynchronous and clocked update schemes. We consider a building with a number of exits. At the start of the simulation, a critical incident has occurred and the building must be evacuated. Each simulated person proceeds to the nearest exit, applying the physics model to determine their position at any time during the evolution of the model. The environment consists of a number of objects such as Person, Wall and Exit. Two scenarios were examined, both comprising a room containing 200 persons. The model was simulated a total of 60 times, for 100,000 iterations each, where each iteration represents 1/1000th of a second; thus, the total simulated time was 100 seconds. In keeping with the model described by Helbing et al. (2000), the simulation parameters were as follows:

Person mass: 80 kg
Person reaction time: 0.5 seconds
Person radius: uniformly distributed between 0.25 and 0.35 metres
Person's desired space radius: 2 metres
Person's desired speed: 5 metres/second
Psychological coefficient (A): 2 × 10³ N
Psychological coefficient (B): 0.08 m
Young coefficient (k): 1.2 × 10⁵ kg·s⁻²
Frictional coefficient (κ): 2.4 × 10⁵ kg·m⁻¹·s⁻¹
Injury threshold: 200 N

Experimental environment with one exit.

Experimental environment with two exits.

For each simulation, a random seed was used to determine update scheme attributes such as the selection order during a random independent update, the clock periods and initial phases for the clocked update, and so on. Update periods for the clocked scheme were uniformly distributed between 0.0009 and 0.0011 seconds (i.e. ±10% of the time step).
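One way the clocked scheme could be realised is sketched below: each agent draws its own period within ±10% of the nominal 0.001 s step and a random initial phase, and the simulation processes agents in the order in which their timers fire. This is an illustration of the scheme only, not the simulator's actual code.

```python
import random

TIME_STEP = 0.001   # nominal step, seconds

def make_clocks(n_agents, rng):
    """Per-agent (next firing time, period) pairs, periods +/-10%."""
    clocks = []
    for _ in range(n_agents):
        period = rng.uniform(0.9 * TIME_STEP, 1.1 * TIME_STEP)
        phase = rng.uniform(0.0, period)   # random initial offset
        clocks.append((phase, period))
    return clocks

def update_order(clocks, horizon):
    """Sequence of agent indices in firing order up to `horizon` s."""
    events = []
    for agent, (next_t, period) in enumerate(clocks):
        t = next_t
        while t < horizon:
            events.append((t, agent))
            t += period
    return [agent for _, agent in sorted(events)]

rng = random.Random(42)
order = update_order(make_clocks(3, rng), horizon=0.01)
```

Because the periods and phases all differ, no two agents stay in lock-step, which is the property that distinguishes this scheme from both the synchronous and the random asynchronous ones.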

Results
Summary results from the first experiment are shown in Figure 2. Figure 2(a) shows the number of people injured in the single room environment. Both the synchronous and random methods of updating result in approximately 3 persons injured, while the clocked method shows an average of 4.5 injured per simulation run. This difference is statistically significant at the 95% level.
Figure 2. Average number of injuries obtained in environment with (a) a single exit and (b) two exits. Average values are shown by a square, and the 95% confidence intervals are shown by short horizontal bars.
Figure 2(b) shows the average number of persons injured with two exits. Notice that there are fewer injuries than in Figure 2(a), as is to be expected with twice the number of exits. For the synchronous and random methods, the average number of injuries is about one, a third of the number in the single-exit environment. The average number of injuries for the clocked method is about two, again less than half that shown in Figure 2(a). There is a clear difference between the synchronous and random methods on the one hand, and the clocked method on the other. Again, this difference is statistically significant at the 95% level.
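The statement "statistically significant at the 95% level" can be made concrete: a mean and 95% confidence interval are computed from the per-run injury counts, and non-overlapping intervals indicate a significant difference. The sketch below uses invented counts purely for illustration (the paper's raw run data are not reproduced here) and a normal approximation.

```python
import math

def mean_ci(samples, z=1.96):
    """Sample mean and 95% CI under a normal approximation."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

clocked = [5, 4, 6, 4, 5, 3, 5, 4, 6, 4]   # hypothetical injury counts
sync = [3, 2, 4, 3, 3, 2, 4, 3, 3, 3]
m1, ci1 = mean_ci(clocked)
m2, ci2 = mean_ci(sync)
overlap = ci1[0] <= ci2[1]   # crude overlap check (valid since m1 > m2)
```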

Conclusions
In this work we have examined several existing models of evacuation of a building, reviewing some of the design philosophy behind these models. We have posed several questions about the possible design of such models, and we have concluded that the multi-agent approach offers benefits in terms of increased modelling fidelity, including the ability to model the interaction between persons, and between persons and static objects (e.g. walls). This can lead to a realistic assessment of injuries sustained during evacuation. However, even in these models, evacuation is still limited to a flow of persons towards the exits of the building, and does not address the cause of the evacuation or how it affects the actions of those trying to escape. The multi-agent approach, however, allows people to be given a cognitive model, or behaviour that simulates the decision-making ability of humans. In this way the model can allow people to avoid fire and other hazards, and to choose alternative routes. Although the multi-agent approach is likely to involve a higher computational cost than a passive-particle model, it should lead to greater modelling fidelity, which has the potential to provide greater confidence in such models. The importance of such models lies in their ability to provide assessments of the degree of hazard for the planning and design of buildings.

A number of simulations exist in the literature for evacuation modelling. We have presented a flexible system that we developed at DSARC, UNSW@ADFA. The system can model different environments, and different types of hazard that may occur within an environment, such as fire or an air-borne chemical hazard. We have shown that very simple assumptions in evacuation modelling, such as the type of synchrony in agent updates, can lead to different results. We have demonstrated clearly that these assumptions can bias the statistics, and hence the design decisions. The paper is the first of its type to challenge these assumptions.
We are currently adding more features to the system to enable large-scale modelling. This should enable us to identify the vulnerabilities inherent in existing methodologies for evacuation modelling.

References
Fang, Z., Lo, S.M. & Lu, J.A. 2003. On the relationship between crowd density and movement velocity. Fire Safety Journal 38(3): 271–283.
Gwynne, S., Galea, E.R., Owen, M., Lawrence, L. & Filippidis, L. 2005. A systematic comparison of buildingEXODUS predictions with experimental data from the Stapelfeldt trials and the Milburn House evacuation. Applied Mathematical Modelling 29: 818–851.
Helbing, D. 2001a. Traffic and related self-driven many-particle systems. Reviews of Modern Physics 73: 1067–1141.
Helbing, D., Molnar, P., Farkas, I. & Bolay, K. 2001b. Self-organizing pedestrian movement. Environment and Planning B: Planning and Design 28(3): 361–383.
Helbing, D., Farkas, I. & Vicsek, T. 2000. Simulating dynamical features of escape panic. Nature 407: 487–490.
Henein, C. & White, T. 2007. Macroscopic effects of microscopic forces between agents in crowd models. Physica A 373: 694–712.
Kirchner, A., Nishinari, K. & Schadschneider, A. 2003. Friction effects and clogging in a cellular automaton model for pedestrian dynamics. Physical Review E 67.
Kisko, T.M. & Francis, R.L. 1986. EVACNET+, a building evacuation computer program. Fire Technology 22(1): 75–76.
Klupfel, H.L. 2003. A cellular automaton model for crowd movement and egress simulation. PhD thesis, Universität Duisburg-Essen.
Peacock, R.D., Jones, W.W., Reneke, P.A. & Forney, G.P. 2005. CFAST: Consolidated Model of Fire Growth and Smoke Transport, Version 6, User's Guide. National Institute of Standards and Technology Special Publication 1041.
Schreckenberg, M., Schadschneider, A., Nagel, K. & Ito, N. 1995. Discrete stochastic models for traffic flow. Physical Review E 51(4): 2939–2949.

14
High assurance communication technologies supporting critical infrastructure protection information sharing networks
J.F. Reid, S. Corones, E. Dawson, A. McCullagh and E. Foo
Information Security Institute (ISI), Queensland University of Technology, Australia
Abstract
Due to the high degree of mutual interdependence between critical infrastructures, active and timely information sharing between infrastructure operators is acknowledged as an indispensable element of an effective protection strategy. Current face-to-face sharing arrangements can limit the efficiency of information dissemination among operators. The use of information and communications technology (ICT) for information sharing could potentially improve the efficiency of this activity. However, it is vital that such an ICT-based virtual information sharing network (VISN) protects the confidentiality of the information being shared. This paper examines the functional, legal and security requirements of a VISN. It explores the conflict between the potential benefits that a VISN offers and the risks that it presents in providing a forum for potential anticompetitive conduct. It also highlights the significant challenges inherent in deploying a secure information sharing environment based on currently available technology.

Biography
Jason Reid is a Research Fellow at the ISI whose research interests include trusted computing and access control in distributed systems. Stephen Corones is a Professor of Law at QUT specializing in competition and consumer law. Ed Dawson is the
Research Director of the ISI and the leader of the information security node of RNSA. Adrian McCullagh is a researcher at the ISI whose research interests include theories of property, digital signature technologies and security policy frameworks. Ernest Foo is a researcher at the ISI whose research interests include network security, cryptography and crypto-protocols.

1. Introduction
The nation's critical infrastructure (CI) is heavily interdependent, with facilities operated by both public and private sector entities. Such interdependence increases the importance of sharing information and coordinating responses to cyber threats among the various stakeholders. Information on threats and incidents experienced by others can help stakeholders identify trends, better understand the risks, and determine possible preventative measures (USGAO 2001, p.1).

Responding to the recommendations of the Business-Government Task Force on CI, the Commonwealth Government announced on 29 November 2002 that it would facilitate the formation of the Trusted Information Sharing Network (TISN). TISNs principally operate via face-to-face meetings of Infrastructure Assurance Advisory Groups (IAAGs), which have been formed for each CI sector (e.g. transport, energy, health, communications, etc.). IAAG members are typically competitors in the same market segment. The TISN framework provides a confidential forum for interaction, which can raise valid anticompetitive concerns with the Australian Competition and Consumer Commission (ACCC). To mitigate this risk, a representative from the Federal Attorney-General's Department attends IAAG meetings, partly to ensure that competitors do not engage in discussions that represent, or are likely to lead to, anticompetitive conduct.

The premise of this paper is that the effectiveness of information sharing could be improved by an increased adoption of ICT, to reduce the current reliance on face-to-face meetings and to expand participation to a broader group. However, an ICT-based VISN introduces challenges from both security/technical and legal perspectives. This is due to the interaction between two factors: the sensitivity of the information to be shared, and the diverse, complex relationships that exist between participants.
This paper contributes an analysis of the impact of Australian competition law on the operation of a VISN, including a consideration of how the protection against anticompetitive conduct that is present in current face-to-face arrangements might be mirrored in a virtual setting. It also highlights a range of information disclosure risks that arise from vulnerabilities in currently available information technology products and from anticipated threats to the VISN that are capable of exploiting them.

A VISN is an ICT facility for sharing sensitive CIP information electronically among a closed, authenticated group of users according to a defined usage and dissemination policy. The information may include security incident reports, new attack strategies that have been observed, vulnerability reports, threat mitigation strategies and fixes, and adverse event management plans. A VISN system (Figure 1) spans organizational boundaries, as its users are representatives of different CI operators. Thus, nodes that are connected to the VISN will be under the administrative control of a diverse range of organizations. This introduces a number of security and legal challenges.

Figure 1: VISN components cross administrative domains

2. Legal Issues
Considerable uncertainty exists as to whether current laws expose firms sharing information to pecuniary penalties and private damages. This uncertainty may deter information sharing. For example, Personick & Patterson (2003) identified fear of freedom of information legislation, and antitrust or competition law concerns, as the two main factors often invoked as the reasons for lack of progress on information sharing. This section briefly outlines the competition and freedom of information legal issues that are relevant in Australia.

2.1 Competition Law
A key objective of the Trade Practices Act 1974 (Cth) (TPA) is the promotion of competition for the benefit of all Australians, especially consumers. The potential anti-competitive risks of a VISN can be broadly classified as:
1) Use of the VISN as a medium for engaging in anti-competitive conduct such as price fixing or market sharing;
2) Anti-competitive effects resulting from the exchange of commercially sensitive information via the VISN, which can be further classified as:
a) Intentional misuse of information gained via the VISN to harm competition, by engaging in a "dirty tricks" campaign; and
b) Unintentional chilling of the competitive process of rivalry, which occurs because, via the VISN, competitors gain information about each other that they would not otherwise have; and
3) Anti-competitive effects resulting from de facto industry standards in connection with the VISN, which can be exclusionary towards those unable to interoperate with the standard.

2.1.1 Use of the VISN as a medium for anti-competitive conduct
This involves competitors using the VISN as a means of communication about unlawful anti-competitive activities, such as price-fixing, bid-rigging, boycotts, or a misuse of market power. Price signalling involves public pronouncements of future price increases. It occurs when one competitor (firm A) signals or indicates in advance its intentions regarding a proposed price increase, in the expectation that firm A's competitors will respond prior to the price increase actually taking effect. If the competitors announce that they too will increase their prices in line with firm A, firm A can proceed with the announced price increase without fear of losing market share. If competitors do not follow the announced price increase, firm A can abandon it before it takes effect, and thereby avoid any loss of sales and market share that would have resulted from the increase.

In United States v Airline Tariff Publishing Co (1994) it was alleged that major United States airlines used a computer system in which all fares were entered 14 days in advance of their effective date. This allowed an airline to signal its desire to increase prices and allowed other airlines to signal their assent or disagreement. In Australia, it seems that it would be difficult to infer an arrangement or understanding from such unilateral conduct. It is likely that the ACCC would require some type of technical monitoring capability in a VISN to mirror the oversight function currently performed by the Attorney General's representative at IAAG meetings.
To be effective and more efficient than the current TISN structure, a VISN should be available to participants 24 hours a day, 7 days a week. Active real-time monitoring for anticompetitive communication by a human is unlikely due to cost. More realistic monitoring options include secure logging of all message exchanges, document transfers and so on, which an Attorney General's representative could periodically review. Automated real-time monitoring on the basis of keywords and theme analysis may also be an option worth investigating. The knowledge among members that monitoring was undertaken would make the use of a VISN for such unlawful communications extremely dangerous, and extremely incriminating if such communications were detected. Hence, it is unlikely that a VISN would be used as a medium for exchanging commercially sensitive information that would have, or would be likely to have, an anticompetitive effect. Many alternative and more effective methods of communication (e.g. telephone and personal contact) offer significant advantages over a VISN for facilitating anti-competitive conduct. Clearly, a VISN with delayed monitoring would fit comfortably within the self-regulatory framework that the Federal government has promoted in recent times (Dept. of Treasury, 2000).
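As a purely illustrative sketch of the automated-monitoring option, a keyword screen over the logged exchanges might flag messages for later human review. The watch terms, message structure and function name below are invented for the example and are not part of any TISN or VISN design:

```python
# Hypothetical sketch: flag logged VISN messages containing terms that might
# indicate anti-competitive discussion, for later review by an oversight body.
# The watch-term list and message format are illustrative assumptions only.

FLAG_TERMS = {"price increase", "market share", "fix prices", "tender"}

def flag_for_review(messages):
    """Return the subset of logged messages matching any watch term."""
    flagged = []
    for msg in messages:
        text = msg["body"].lower()
        if any(term in text for term in FLAG_TERMS):
            flagged.append(msg)
    return flagged

log = [
    {"sender": "op-a", "body": "New worm variant observed on SCADA links"},
    {"sender": "op-b", "body": "We plan a price increase next quarter"},
]
print([m["sender"] for m in flag_for_review(log)])  # ['op-b']
```

A real deployment would need far more than substring matching (theme analysis, context, multilingual handling), but even this toy filter illustrates how a delayed-review queue could be populated from the secure log.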


2.1.2 Intentional misuse of information gained via a VISN
Such conduct may have the purpose of substantially lessening competition and, if so, would contravene s 45(2) of the TPA. VISN participants could misuse information gained about their competitors, customers, and suppliers to improve their competitive position or to harm the position of their competitors. They could do this by mounting a "dirty tricks" campaign (see Re Verizon 2003). For example, after receiving an incident report about a denial of service attack on an Internet Service Provider, or a report of a vulnerability in an Internet banking site for which there is no currently available fix, a competitor might run a product disparagement campaign: "Does your telecommunications provider have sufficient protection?" "Does your on-line banking provider have a secure system?" This risk depends on the type of information exchanged, the scope of its distribution, and the nature of the market or markets affected. Aviram and Tor (2004, p.236) have noted that information exchange can have a "dark side" in that it can facilitate anti-competitive collusion or unilateral misuse of market power.

2.1.3 Unintentional chilling of the competitive process of rivalry
Section 45(2) of the TPA catches not only conduct that has a subjective anticompetitive purpose, but also conduct that, regardless of intention, has, or is likely to have, an effect of substantially lessening competition. Competition is synonymous with independent rivalry: striving to produce the goods and services that are most highly valued by consumers at the lowest price and the highest quality. Competition should lead to economic efficiency, which advances consumers' interests. Competitors compete on three different dimensions: price, quality and service. Independence presupposes independent decision-making as regards the price-quality-service package that is offered to consumers.
The competitive process necessarily involves some uncertainty and risk, in part caused by a lack of information as to the capabilities, capacity, intentions, and activities of competitors, customers, and suppliers. Participation in a VISN may require competitors, customers and suppliers to divulge commercially sensitive information to each other which they would not otherwise divulge. In such circumstances, where companies are required to disclose sensitive information, the element of risk stemming from a lack of business information is eliminated. The participants in the VISN may then be disinclined to compete aggressively or to engage in risky courses of action, such as making significant investments in new plant. It seems likely that the intentional misuse of information gained via a VISN for commercial advantage will be prohibited, and that substantial sanctions would apply for doing so. This may cause competitors to refrain from actions that they would otherwise have performed, for fear they will be accused of having performed those actions in reliance on information gained via the VISN. It may also cause competitors to become more risk-averse and to engage only in activity that can be objectively justified by information demonstrably available to them outside the VISN. The end result may be a lessening of competition.


Personick & Patterson (2003, p.31) identified the following factors that can serve as guideposts in trying to determine whether information sharing is likely to facilitate collusion:
- Who is receiving the information? If only one side of the partnership is using the information, then collusion concerns might have merit.
- How old is the information? Sharing contingent or future information is generally more troubling than sharing historical information, since it could be used to help competitors reach agreement or coordinate their conduct.
- How specific is the information? Information that identifies the conduct of individual firms is likely to raise greater concern than information that is aggregated.
- Finally, how accessible is the information? Sharing unique information is more likely to raise concerns than sharing information that is already publicly available.

As has been previously noted, the potential effect of a VISN on competition depends on the specific circumstances, including the type of information disclosed via the VISN, who has access to the information and how they act on it. Without detailed factual scenarios, a concluded view of the competition law implications of a VISN cannot be reached. However, it can be observed that a VISN is likely to expose its members to different degrees of anti-competitive risk depending on the nature of the competitive market in which they operate. For example, wholesale electricity prices are dynamically determined via competitive bidding in a spot market. If a supplier learned via a VISN that a competitor was planning to temporarily shut down generation, it could respond by restricting its own output to cause a price spike. Even where there is no arrangement or understanding between competitors and this occurs as a result of unilateral action, it may still constitute a misuse of market power contrary to s 46 of the TPA.
2.1.4 Authorisation or statutory exemption
In order to allay the fears of participants in a VISN that they may inadvertently breach the TPA, it will be necessary to apply for an authorisation from the ACCC or to obtain a Commonwealth statutory exemption from the TPA. Authorisation is available if the participants in the VISN can demonstrate to the ACCC that the VISN gives rise to a public benefit, and that the public benefit outweighs any anticompetitive detriment. The quantification and relative assessment of public benefit and anticompetitive detriment will not be a simple exercise. It is unlikely that the ACCC would give a blanket authorisation. Framing an appropriate authorisation that protects competition and yet covers the disclosure of sensitive information for CIP will not be an easy task.

2.2 Freedom of Information and Privacy Law
There is some risk, and a perception, that proprietary CIP-related information shared between private sector firms and federal government entities may be disclosed to third parties under the Freedom of Information Act 1982 (Cth) (FOIA). There may also be concerns that CIP-related information protected under the Privacy Act 1988 (Cth) will be revealed as part of information exchange arrangements under a VISN. Since the Attorney General's representative is present at IAAG meetings, any written information may be subject to the FOIA. Since electronic information is clearly a record, once the Government has access to such information the FOIA may equally apply. It may be possible to tag information electronically as commercial-in-confidence to overcome this issue, but further investigation is required.

3 VISN functional requirements


The functional requirements of a VISN mirror the supported information sharing methods. Three broad VISN functionality classes are proposed: basic, medium and advanced.

Basic functionality includes the ability to store, and make selectively available, static reports and memoranda in commonly used formats such as PDF files, Microsoft Word documents, HTML, and text files. The basic level does not include an authoring and editing environment for these documents, only a means of distributing and viewing files.

Medium functionality adds support for more dynamic forms of interaction such as instant messaging and chat, web-based collaborative workspaces and discussion boards. The medium level also includes an authoring environment for the static documents referred to at the basic level.

Advanced functionality includes support for providing large-scale situational awareness to individual CI operators, with a real-time monitoring capability over CI on which they are reliant or interlinked but which is managed by other entities. This might include geospatial or network analysis tools for consolidating and interpreting sensor data in order to identify and respond to simultaneous attacks against multiple CI components under the care of different operators.
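The three classes above can be read as a cumulative capability model: each tier includes everything below it. The sketch below is an illustration only; the capability names are assumptions chosen to mirror the text, not part of any proposed VISN specification:

```python
# Illustrative model of the three proposed VISN functionality classes.
# Capability identifiers are invented to mirror the prose description.

TIERS = {
    "basic": {"store_documents", "selective_distribution", "view_documents"},
    "medium": {"instant_messaging", "chat", "collaborative_workspace",
               "discussion_board", "document_authoring"},
    "advanced": {"situational_awareness", "real_time_monitoring",
                 "geospatial_analysis", "network_analysis"},
}

def capabilities(tier):
    """Each tier cumulatively includes the capabilities of the tiers below it."""
    order = ["basic", "medium", "advanced"]
    caps = set()
    for t in order[: order.index(tier) + 1]:
        caps |= TIERS[t]
    return caps

print("document_authoring" in capabilities("basic"))    # False
print("document_authoring" in capabilities("medium"))   # True
```

This mirrors the text's point that authoring is absent at the basic level but present from the medium level up, while every tier retains the distribution and viewing functions.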

4 VISN threats and security requirements


This section briefly and informally considers some of the key threats a VISN faces, focusing particularly on threats against client platforms, which are emphasized because they are very difficult to mitigate with current technology. Providing the basic and medium types of functionality identified above is not technically difficult. The challenge lies in ensuring that the functionality is provided securely, so that the confidentiality, integrity and availability of shared information are safeguarded while it is being communicated and stored. Any solution developed must also satisfy the legal requirements detailed in Section 2.

The information that will be shared among VISN participants is highly sensitive, and the success of the VISN concept will depend on the level of member confidence in the security of the supporting infrastructure, systems and processes. Without this confidence, members could be reluctant to share relevant information that can improve CIP, for fear of unauthorised disclosures that may be prejudicial to commercial interests and public confidence. Campbell et al. (2003) have noted the negative commercial impacts that can follow public disclosure of security incidents. The main difficulty in deploying and operating a VISN lies in making sure that the right people have quick access to the right information, without exposing a risk that information might leak to unauthorised persons.

4.1 Access control requirements
Because a VISN presents an attractive target to a broad range of adversaries, ranging from curious hackers to state-funded agencies, the information assurance requirements are quite demanding. Information must be protected against illicit access and modification while it is being stored, transferred and manipulated on client platforms. Information flow among authorised VISN users must also be controlled. The requirement for a very flexible method of granting access to both specific individuals and groups, based on job functions and organisational affiliations, suggests that role-based access control (RBAC) approaches may be an appropriate starting point for defining access policy. Jin & Ahn (2006) have recently proposed an RBAC architecture for collaborative information sharing.

4.1.1 Usage and dissemination control
Traditional access control enforcement architectures are inadequate for a VISN because they fail to provide ongoing control over information once it is transferred to the client platform of an authorised user (Park & Sandhu, 2002). For example, most current distributed information sharing architectures do not prevent an authorised user from downloading a controlled document and emailing it to an unauthorised user in violation of the access policy. Due to the high sensitivity of the information, a VISN requires usage control over actions including printing, further network transfers and making local copies of protected documents, to ensure that the access policy is always enforced, even when the document resides on a client platform.
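As a hedged illustration of combining RBAC with per-role usage rights, the sketch below grants actions (read, annotate, print) by role and denies everything not explicitly allowed. The role names, rights and policy model are invented for the example and are not drawn from Jin & Ahn or from the paper:

```python
# Minimal sketch of role-based access with per-role usage rights.
# Role and permission names are hypothetical, chosen only for illustration.

ROLE_RIGHTS = {
    "energy-sector-analyst": {"read", "annotate"},
    "iaag-chair": {"read", "annotate", "print"},
    "oversight-auditor": {"read"},
}

def is_permitted(role, action):
    """True only if the role's policy explicitly grants the requested action."""
    return action in ROLE_RIGHTS.get(role, set())

# Note that "forward" is granted to no role here, mirroring the point that
# dissemination must remain under policy control even after download.
print(is_permitted("iaag-chair", "print"))    # True
print(is_permitted("iaag-chair", "forward"))  # False
```

The default-deny shape (`.get(role, set())`) is the design point: an unknown role or an unlisted action yields no access, rather than falling through to an implicit allow.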
Effective usage and dissemination control is important even if authorised users are assumed to be trustworthy. This is because current client platform processing environments, which act as a proxy for the user, cannot be trusted. The increasing prevalence and sophistication of spyware and rootkits underline the fact that often the simplest way for an attacker to gain unauthorised access to protected resources is to compromise an authorised client platform. For example, an attacker can bypass authentication measures by getting a user to click on a malicious attachment in an email. The malicious code can then co-opt the authorised privileges of the user to access protected information via the VISN.

4.1.2 Weaknesses in operating systems undermine usage control
Usage and dissemination control (Park & Sandhu, 2002) is a type of DRM which places heavy demands on the client platform. Current operating system architectures cannot reliably enforce usage and dissemination controls due to weaknesses in their protection architecture (Reid & Caelli 2005). Current commercial operating systems, including Unix, Linux and Microsoft Windows, implement a protection model known as identity-based discretionary access control (DAC). Under this model a user cannot control what their programs can do with their documents and files, because a program executes with all the permissions of the associated user (Tanenbaum et al. 2006). Thus a program is free to delete or change a user's files, to establish a network connection and send them to another computer, or to impersonate the user. Because of these problems with the protection architecture of mainstream operating systems, and the rising incidence of malware that exploits the weaknesses, we consider threats to client platforms connecting to the VISN to be the most difficult to deal with. These threats include the following:
1. A malicious program accesses the authentication secrets of a VISN user and stores them on the local machine, or sends them over the network to another machine, to enable impersonation of the user at a later time.
2. A malicious program running on a user machine modifies information stored on the VISN (including deletion) while a valid user session is active, by exploiting the VISN privileges of the user.
3. A malicious program running on a user machine accesses information stored on the VISN while a valid user session is active, by exploiting the VISN privileges of the user, and stores it locally or sends it over the network to another machine.
4. A malicious program running on a user machine connected to the VISN network sends a high volume of traffic, degrading or denying service to other users of the VISN.
Client platforms used by authorised users to connect to the VISN will need to be able to deal with these threats. This will demand a shift to higher assurance operating system architectures that can be trusted to enforce the required policies. Sandhu et al. (2006) typifies an approach that is gaining acceptance, wherein protected information is stored and transferred in encrypted form and can only be decrypted in a trusted environment which enforces usage controls. A security architecture of this type warrants further investigation to support a VISN that can resist the above threats.

4.2 Technical support for VISN monitoring and oversight
As was noted in Section 2.1.1, it is likely that a VISN will need to support third party monitoring of all information exchanges (for example, by a representative of the Department of the Attorney General) to deter and detect use of the facility for anticompetitive purposes. If monitoring is not done in real time, it will be necessary to reliably archive information transfers for later review. Secure logging of all accesses and important system events will also be necessary to ensure that VISN users are accountable for their actions.
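One plausible way to make such an archive tamper-evident, offered here only as a sketch and not as the paper's design, is a hash chain over log entries: each record is bound to the hash of its predecessor, so modifying or deleting any archived record invalidates every later link.

```python
# Sketch of a hash-chained, tamper-evident audit log for archived exchanges.
# The record layout is an invented example, not a specified VISN format.
import hashlib
import json

def append_entry(chain, event):
    """Append an event, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append({**record, "hash": digest})

def verify(chain):
    """Recompute every link; any tampered or reordered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        record = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"user": "op-a", "action": "download", "doc": "incident-42"})
append_entry(log, {"user": "op-b", "action": "post", "doc": "advisory-7"})
print(verify(log))            # True
log[0]["event"]["doc"] = "x"  # tamper with an archived record
print(verify(log))            # False
```

A production system would additionally sign or externally anchor the chain head, since an attacker who can rewrite the whole chain can recompute every hash; the sketch only shows the linking idea.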



4.3 Current technical approaches to information sharing
The recently announced Secure Information Sharing Architecture (SISA)1 is focused on connecting existing computing infrastructure across organizational boundaries, among so-called communities of interest. SISA is principally concerned with the needs of government, military coalitions and emergency response communities. It pursues a defence-in-depth strategy, relying on well-understood technologies such as VPNs, firewalls and VLANs to secure the lower layers of the communications stack. The problem SISA aims to solve is different to the VISN problem. SISA is concerned with large-scale integration of existing systems and information processing activity across organizational boundaries, and it needs to support a complex range of legacy protocols and applications with a range of assurance requirements. In contrast, information sharing in a VISN represents a new and somewhat simpler activity with very high assurance requirements. The components of the SISA architecture concerned with securing communications links, directory access and network storage are likely to be applicable and useful for a VISN. However, we argue that the SISA approach to client platform security is inadequate because it does not address the client operating system weaknesses identified earlier.

To mitigate the threats against clients identified in Section 4.1.2, SISA relies on a software solution called Cisco Security Agent (CSA), an intrusion prevention system that sits between applications and the operating system kernel. CSA intercepts operating system calls issued by applications and determines whether they are consistent with a predefined application security policy. There are two main weaknesses with this approach: firstly, the security agent software is complex and likely to contain exploitable bugs; and secondly, it is difficult to specify in advance the minimal and complete set of rules to control application access to the operating system. Consequently, the rule sets tend to be permissive, to ensure that applications work reliably. Software-only approaches to policy enforcement on client platforms that are under the administrative control of another domain are unlikely to be successful, because there is such a bewildering variety of ways to attack software (Sandhu et al. 2006). For example, the SISA approach to Network Admission Control (NAC), which examines whether patches have been applied and anti-virus signatures are up to date before allowing connection, is unlikely to address the malware problem adequately, since it does not address zero-day exploits or published exploits for which there is no available patch. Where high assurance of policy enforcement is required, there is growing acceptance of the type of approach typified by Sandhu et al. (2006), which relies on the Trusted Computing Group's TPM hardware and a trusted operating system that leverages the hardware-based memory protection instructions recently added to processors sold by Intel and AMD. Software-only approaches are inadequate and should not be adopted for a VISN.
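The pattern typified by Sandhu et al. can be caricatured as follows. This is a toy sketch only: the XOR "cipher" stands in for real authenticated encryption, the attestation check is a boolean placeholder for TPM-based measurement, and none of the names are taken from the cited work.

```python
# Toy sketch: information is stored and moved only in encrypted form and is
# released as plaintext solely inside an environment that (a) passes an
# attestation check and (b) permits the requested action under the usage
# policy. The XOR stream below is NOT secure; it is a placeholder for AEAD.
import hashlib
import itertools

def _keystream(key):
    counter = itertools.count()
    while True:
        block = hashlib.sha256(key + str(next(counter)).encode()).digest()
        yield from block

def toy_encrypt(key, plaintext):
    return bytes(b ^ k for b, k in zip(plaintext, _keystream(key)))

toy_decrypt = toy_encrypt  # XOR stream cipher: same operation both ways

def open_document(ciphertext, key, environment_trusted, action, policy):
    """Release plaintext only to a trusted environment for a permitted action."""
    if not environment_trusted:
        raise PermissionError("client platform failed attestation")
    if action not in policy:
        raise PermissionError(f"usage policy does not allow '{action}'")
    return toy_decrypt(key, ciphertext)

key = b"demo-key"
ct = toy_encrypt(key, b"incident report: DoS on provider X")
print(open_document(ct, key, True, "view", {"view"}))
```

The point of the shape is that the decryption key is never exercised outside the policy gate: an untrusted platform, or a disallowed action such as "print" or "forward", yields only ciphertext.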

1 Informal descriptions and white papers on SISA are available at http://www.sisaalliance.com/




5 Conclusion
Information sharing among CI operators using ICT has advantages over face-to-face meetings in terms of timeliness and frequency. However, this paper has argued that before the VISN concept can be realised, a number of identified legal issues and technical vulnerabilities need to be addressed, in many cases necessitating further research and development.

The paper examined the potential impacts of Australian competition law on a VISN. Competition law does not prevent the private sector from sharing CIP information within and between sectors. However, there is a need for education and awareness among VISN participants to ensure that they do not unintentionally breach the anti-competition provisions of the TPA. This paper has argued that post hoc monitoring of information exchanges by a representative of the Federal Attorney General's Dept. would provide a sufficient disincentive against use of the VISN by members for communications of an anti-competitive nature. However, government access to VISN communications opens the possibility of unintended information dissemination via FOI requests. The technical design and procedures of a VISN should be carefully considered to ensure that sensitive information falls within appropriate FOI exemption categories.

The ICT components and procedures of a VISN should provide a robust capability to control the usage and dissemination of CIP information. This DRM capability should be based on a design that ensures protected information is only available in unencrypted form in a trusted environment. This paper has argued against implementing such a DRM style of protection on client platforms that use currently available commercial operating systems, because software-only solutions are not sufficiently robust against motivated and knowledgeable attackers.
A VISN should plan to adopt emerging hardware-based trusted computing technologies in combination with operating systems that leverage the recently added security and memory isolation features of Intel and AMD processors.

6 References
Aviram, A. & Tor, A. 2004. Overcoming impediments to information sharing. Alabama Law Review 55(2): 231--279.
Campbell, K., Gordon, L.A., Loeb, M.P. & Zhou, L. 2003. The economic cost of publicly announced information security breaches: empirical evidence from the stock market. Journal of Computer Security 11(3): 431--448.
Department of Treasury 2000. Industry Self-Regulation in Consumer Markets, Taskforce on Industry Self-regulation, Consumer Affairs Division, Department of Treasury, Canberra.
Jin, J. & Ahn, G.J. 2006. Role-based access management for ad-hoc collaborative sharing. In SACMAT '06: Proceedings of the eleventh ACM symposium on Access control models and technologies, ACM Press, New York, NY, USA, 200--209.
Park, J. & Sandhu, R. 2002. Originator control in usage control. In Proceedings of the 3rd International Workshop on Policies for Distributed Systems and Networks (Policy'02), IEEE Computer Society, Washington, DC, 60.


Personick, S. & Patterson, C. (eds) 2003. Critical Information Infrastructure Protection and the Law: An Overview of Key Issues. National Academies Press.
Re Verizon 2003. Me. PUC LEXIS 181.
Reid, J.F. & Caelli, W.J. 2005. DRM, trusted computing and operating system architecture. In Proceedings of the Australasian Information Security Workshop (AISW 2005), Volume 44, Conferences in Research and Practice in Information Technology, ACS, 127--136.
Sandhu, R., Ranganathan, K. & Zhang, X. 2006. Secure information sharing enabled by Trusted Computing and PEI models. In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, 2--12.
Tanenbaum, A.S., Herder, J.N. & Bos, H. 2006. Can we make operating systems reliable and secure? Computer 39(5): 44--51.
United States General Accounting Office 2001. Information Sharing: Practices That Can Benefit Critical Infrastructure Protection. GAO-02-24, available at http://www.gao.gov/cgibin/getrpt?GAO-02-24
United States v Airline Tariff Publishing Co. 1994. (2) Trade Cases (CCH) 70,687 (D.D.C. 1994) (Competitive Impact Statement), available at http://www.usdoj.gov/atr/cases/f4700/4797.pdf



15
Gen E (Generation Extremist): The significance of youth culture and new media in youth extremism
K-J. Lombard
Curtin University of Technology, Australia
Abstract
There is growing evidence that extremists are using youth culture and new media to recruit, communicate their views and incite violence. In 2001 Steven Simon and Daniel Benjamin wrote: "sociologists have shown that youth correlates with violent behaviour... [governments] will have to manage the discontent of a cohort of unemployable, passionate young men. It is difficult to predict how this discontent will be expressed" (13-14). One of the ways in which this discontent is currently being expressed is through videos posted to websites like YouTube, video games, and heavy metal and rap music. This paper discusses the ways in which the internet and the subculture of hip hop are allowing those with radical views to communicate and recruit. The potential of youth culture and new media to act as bridging mechanisms between youth and extremists suggests the Australian government should take this into consideration when designing and reviewing counter-terrorism approaches.

Biography
Kara-Jane has a PhD in Communication and Cultural Studies from Murdoch University, Western Australia. She has an active interest in youth issues and has carried out volunteer work with the Perth Youth Advisory Council where she advised the Perth City Council on youth issues and social policy, the Inner City Agency Network, a drug intervention and awareness program, and the Salvation Army. She has published in the areas of subculture, youth policy, globalisation and surveillance.


Introduction
There is growing evidence that contemporary extremist and terrorist groups are using youth culture and new media as a powerful recruitment tool, to communicate their views and incite violence. In 2001 Steven Simon and Daniel Benjamin wrote that "sociologists have shown that youth correlates with violent behaviour, and that there is an enormous youth bulge within the overall tide of overpopulation ... [governments] will have to manage the discontent of a cohort of unemployable, passionate young men. It is difficult to predict how this discontent will be expressed" (13-14). One of the ways in which this discontent is currently being expressed is through videos posted to websites like YouTube, video games, and heavy metal and rap music. At the same time, youth are being targeted by extremist groups in order to ensure those groups' continuation. As Joseph Schafer (2002, 78) explains in his study of web-based hate propaganda by extremist organisations: "social movement organizations have long recognized the importance of appealing to youth in order to perpetuate their existence." Consequently, youth cultures and new media are increasingly functioning as "bridging mechanisms" (Barkun 1997) between youth and extremist organisations and the ideologies they promote. In this paper I will focus on two of these mechanisms, the youth subculture of hip hop and the internet, accounting for the ways in which they are being used to channel youth extremism.

Connecting youth with extremism


In order to connect youth and radical ideologies, extremists are utilising new modes of communication and recruitment. Neo-Nazis and other right-wing groups were pioneers in the application of popular culture and computer technology. Contemporary extremist and terrorist groups are increasingly utilising more mainstream forums, including popular culture, in targeting youth. This not only allows them to reach a new generation and circumvent and counter the censorship and distortion they see in mainstream mass media, but also provides 'alternative ways of radicalizing their target audience that does not require face-to-face interaction' (Gruen 2006, 2). Concurrently, individuals with no formal ties to extremist groups but sympathetic to their cause and doctrine are increasingly expressing what might be considered radical views through mainstream forums. Thus it appears that mainstream popular culture is functioning as a bridging mechanism between youth and extremist organisations and their ideologies.1
These bridging mechanisms take several forms. The potential of video games for extremist purposes was first exploited by white supremacist groups, which released numerous first-person shooter games with violent graphics, such as Shoot the Blacks, Nigger Hunt and Rattenjagt. Similarly, Ummah Defense I and II and Hezbollah's Special Force were produced by Islamic radicals. Music is also used as a bridging mechanism, and Neo-Nazi groups like the US-based National Socialist Movement often use concerts to recruit youth.


In this paper I focus on two of these mechanisms: the youth subculture of hip hop and the internet. Extremists have long used music as a tool for indoctrination and/or recruitment. For instance, in 'Selling Volksgemeinschaft: The normality of the extreme', Bjorn Jesse and Gabriel Matera (2003, 37) explain that music 'can pave the way into the right-wing scene', and outline how the majority of young people first had contact with that scene through listening to music. While heavy metal, oi and punk music are often associated with white supremacy movements, hip hop has come to be the bridging mechanism between youth and Islamic extremism. Islam and hip hop have had a connection since the subculture's formation in New York in the 1970s, and there is evidence of Nation of Islam and Five Percenter doctrine in many rap lyrics. Recently, hip hop lyrics suggesting support for Islamic extremist causes and groups have raised concern in the public sphere.2 While 'terror rap' first came to the attention of the mainstream media in 2004, in 2001 Ted Swedenborg published a study into hip hop activism and the Islamic tendencies of several bands which merits attention. In particular, he examined FunDaMental, whose song 'Meera Mazab' enunciates 'Jihad in my mind'. While the group did not come to the attention of authorities at the time, in 2006 Aki Nawaz of FunDaMental garnered criticism over the launch of his album All is war (the benefits of G-had). Its tracks explore the immorality of the West and envision the demise of America at the hands of Islam. One song includes a statement of reason and explanation of impending conflict by Osama bin Laden, while in 'Cookbook DIY' Nawaz raps about a suicide bomber at work and equates this to a White House scientist developing a bomb. The difficulty in assessing whether music like this warrants concern or constitutes a security risk is that the boundaries between hip hop activism or socially conscious rap and terror rap are problematic.
Hip hop artist Immortal Technique claims he is not a jihadist, yet his music embodies many of the elements endemic to terror rap. In a recent rap song in which he collaborated with Eminem and Mos Def, the trio claim 'Bush knocked down the two towers' and Eminem adds: 'I don't rap for dead presidents, I'd rather see the president dead'. Immortal Technique elucidates: 'I'm strapped like Lee Malvo, holdin' a sniper rifle, these bullets'll touch your kids, and I don't mean like Michael'. Popular US rappers often blur the boundaries between these genres of hip hop. For instance, Da Lench Mob's 1994 song 'Goin' bananas'
2 The most commonly cited example of this terror rap (or 'jihad rap', as it is sometimes known) is the Soul Salah Crew's 2004 song featuring Sheikh Terra, 'Dirty kuffar'. Against a beat borrowed from Sean Paul's popular song 'Get busy', English lyrics encourage listeners to throw kuffars into the fire and allege 'I am going to bomb, bomb, bomb ... peace to Hamas and the Hizbollah, OBL [Osama bin Laden] crew be like a shining star, like the way we destroyed the two towers'. The video for the song cuts between dancing masked men with a gun in one hand and a Quran in the other, and various other scenes: a Russian POW being shot repeatedly by a Chechen militant, doctored news footage showing US soldiers killing an Iraqi civilian, and footage of the plane shattering the twin towers. Initially distributed in Britain by Mohammed al-Massari, the video clip is now widely available through the internet, as are other examples of the genre.


declares that they are having 'thoughts of overthrowing the government' and the song ends: 'Allah, have mercy. I'm killing them devils because they're not worthy to walk the earth with the original black man ... It's time for Armageddon, and I won't rest until they're all dead'. The complexity of the boundaries of these genres is also exemplified in Swedenborg's study of hip hop activism and the hip hop group FunDaMental. Swedenborg (2001, 4) claims that the group attempts to educate white youth and leftists and to incorporate them within the anti-racist struggle. However, it is clear that FunDaMental are involved in more than just hip hop activism, as Nawaz feels that youth who join organisations like the Al Qaeda-connected Kalifah Network are not wrong (Swedenborg 2001, 4). He adds that 'a lot of fundamentalist groups are like freedom fighters and then the people in power come along and paint them with a different and more negative brush' (4-5). Swedenborg even notes that the group seems to enjoy 'shaking up young whites' (5) or, in Nawaz's words, 'almost terrifying the shit out of them' (5). The difficulty in clearly defining the boundaries between activist rap, which aims to raise political consciousness and shape opinions, and terror rap, which incites violence, means that music can be used as a powerful tool for recruitment and indoctrination, as it tends not to violate laws or draw notice from authorities (Gruen 2006). However, as Schafer (2002, 78) points out in relation to the music of the white supremacy movement, 'once a youth begins to listen to the music and interact with other fans, however, he or she may slowly be desensitized to racist images and messages, and may be more accepting of the beliefs of extremist organizations'.
It must be stressed, though, that excepting groups like Arab Legion and An-Nasr Productions, these musicians often have no formal links with extremist organisations.3 However, even music which does not seem that radical (or does not aspire to incite extremist views) can be interpreted in that way. Soldiers of Allah, who expressly state they are not a jihad group, found that their music was being interpreted in ways they could not control: 'after a while, I noticed that the music was increasing the emotions of the people, instead of making them think. This was not my intention and I decided that the tool of music may not necessarily be the best tool to use since peoples emotions can easily be attached to it' (Soldiers of Allah interview, 2007). On a MySpace page for the group, users state 'kill kufars and start fight for muslim! start jihad for 4JJ!' Others in the Soldiers of Allah social network express extremist sentiments.

3 Hizb ut-Tahrir has stressed that it has no formal links with Blakstone or any other hip hop group.



Another bridging mechanism between youth and extremism is the internet.4 As intelligence analyst Madeline Gruen (2006, 12) points out, the internet 'has quickly become the backbone of terrorist and extremist-group propaganda and recruitment programs'. It also presents a number of unique challenges for authorities: it is difficult to regulate and, as the Director of the Australian Federal Police's high tech crime unit, Kevin Zuccato, points out, online identity is not related to real-life identity (Kerr 2007, 8). The internet has certainly proved effective: as a teenager, American Adam Gadahn discovered Islam on the internet and wound up an Al Qaeda operative. Those captured in relation to the October 2002 Bali bombings admitted to being inspired by the internet, while the Egyptian Al-Jihad group Jundallah (Soldiers of God) used the internet to recruit members (Katz & Devon 2003).5 Importantly, the internet allows extremists to introduce their cause to a new generation. In the words of Gabriel Weimann (2006, 55), 'the Internet has become an increasingly prominent tool for recruiting a younger generation of terrorists, in large part because these younger generations are more tech savvy than their elders'. While the internet is used to target youth globally, its use by extremist groups is particularly significant in relation to Western youth and those in developed countries. Furthermore, the possibility for global communities created by the internet also renders extremist connections and links more global. As Lia (2006, 17) argues, online jihadism brings geographically scattered and isolated militants together in 'virtual transnational extremist communities' that bind the global jihadist movement together. To better understand the way in which the internet channels extremism it is necessary to recognise the dynamic nature of the internet. There are two aspects of this to consider.
Firstly, extremist websites are dynamic in that the content and format of sites is frequently modified, and URLs are changed often to avoid detection. Secondly, extremism on the internet is increasingly fashioning a mainstream presence, with jihad videos leaking into popular culture and those with radical views using Yahoo groups and mainstream sites like MySpace and YouTube. As a Sydney Morning Herald article explains: 'until recently, videos shot by terrorist groups were posted predominantly on specialist internet forums, which often only those knowing what to look for could find. But more are turning to mainstream sites like

4 The advantages of using the internet for extremist and terrorist purposes have been explored at length in other publications (Schafer 2002; Weimann 2006; Lia 2006) and include: cost-effectiveness; the ability to reach a wide and global audience, thus ensuring massive publicity; convenience; the capacity for groups to manage their image to appear more acceptable to persons in the mainstream; the unregulated nature of the internet; the ability to circumvent traditional media; anonymity; interactivity; and multi-linguistic capacity. While some question the degree to which the internet increases the membership and efficiency of extremist groups, recent research indicates that the use of the internet by extremist organisations is on the rise. Furthermore, the promotion of extremist ideologies on the internet may not result in formal affiliations with extremist groups but in sympathy for extremist causes, which is harder to monitor and control.


YouTube, which draw millions of visitors around the world each day' ('YouTube is also a hit with terrorists', 2007).

Analysis of hip hop & the internet as bridging mechanisms


In assessing the potential that youth culture and new media provide to extremists, and the factors governments should take into account when designing and reviewing counter-terrorism approaches, a number of issues need to be considered. This paper explores three: the ways in which the internet is used to channel extremism, young people's relationship with the internet, and the cultural history of hip hop.

The ways in which the internet is used to channel extremism

Firstly, it is useful to consider the various ways in which the internet is used to channel extremism and the variety of functions it performs. Studies have identified that extremists and those who support extremism use many of the features that the internet incorporates, including internet forums such as electronic mailing lists, message boards and chat rooms; websites; and email. This research also recognises that the internet performs a number of functions in relation to extremism, including: communication, planning and coordinating operations, recruitment, fundraising, the spreading of propaganda, indoctrination, information retrieval, instruction, researching the mainstream media, delivering threats and spreading disinformation.6 A more nuanced understanding is provided by Brynjar Lia (2006, 17), who explains that jihadist websites can be categorised into three broad categories: 'mother sites' such as the official websites of jihadist groups; 'distributors' which copy and upload new jihadist material on other sites; and 'producers' like self-styled jihadist media companies reproducing material in more attractive forms. Lia's research is useful because it demonstrates that it is not just radical groups which use the internet for extremist purposes. It is possible to delineate at least three ways in which the internet serves extremist purposes: radical organisations seeking recruits and promoting their ideologies through the world wide web, would-be recruits like Ziyah Khalil and Ryan G. Anderson advertising themselves on the internet, and individuals with no formal links to extremist groups promoting radical ideologies and demonstrating support for radical causes. It appears that social networking websites like MySpace and video sharing websites like YouTube are being used to endorse radical views and actions. It is not difficult to find evidence of sympathisers on websites like this: there are countless pages with images



and graphics glorifying violence, terrorists, weapons, masked gunmen, and the flag of Islamic jihad. Often this is intertwined with religious imagery. Videos posted to such sites include religious teachings and homemade extremist videos (suicide bombings, sniper shootings, music videos in the style of jihad rap and teaching-manual-style videos are all popular). Accompanying text expresses hatred for particular groups and leaders, or sympathy for jihad and martyrdom.7 Consider the following recent examples. Mohammed Adil, a 19-year-old from Arizona, cites his interests as 'jihad and mudahideen' [sic], while his heroes include 'The True martyrs [sic] that have died in Iraq Iran Afghanstan and many other countrys [sic] and also the the martyrs [sic] that died for the sake of Allah s.w.t and died in the wars [sic] may Allah bless them all and except them all into Jannah InshAllah!' (2007). MySpace user 'mujahideen ilal imam ser!!!!!', who has recently changed his username to 'Close to the edge', esteems 'any muslim that takes it upon himself or herself to sacrifice for Allah' (2007). Along with pictures of his children, his page contains a graphic tribute to the mujahideen. Abu Yaasameen Al-Sayfullah's MySpace homepage claims he is an 'Enemy of the State and Suspected Terrorist, and all this according to the kufar' (2007); one of his blog entries details '42 ways to support jihad' and he articulates that he would like to meet bin Laden. Ali from Virginia has this quote on their page: 'fight in the way of Allah with those who fight with you...[2.191] And kill them wherever you find them' (2007). Ali has included a picture of self-proclaimed Chechen terrorist Shamil Basayev, one of his heroes. One YouTube user, a 19-year-old Somalian named Khalid Muhammad (alias SwordofAllah87), writes: 'You say Jihadist like its a bad thing... Do you even know what JIHAD means? If you did, you would understand my laughter' (2007).
Previously I found this quote on his page: 'I cannot, for the life of me, understand why war is demonized so. War is beautiful. War has been the case since the beginning of time. Jihad is the way of the human being' (2007). Although the individuals posting and maintaining these pages probably do not have links to extremist organisations, this could still be regarded as a security risk: by promoting and glorifying extremism and terrorist actions, such sites can influence young people's opinions and incite them to commit violence.

Youth and the internet

Secondly, while those promoting extremist activities and ideologies find the internet ideally suited to their operations, increasingly it seems that it is the relationship young people have with the internet that they are taking advantage of. Youth may not trust mainstream media but they tend to absorb information from the internet with
7 One MySpace page, Hamas4life, claims that 'martyr is a way of life'.



little question.8 Thus, it would be possible to take advantage of young internet users looking for direction, like 1000percentREAL (a 17-year-old girl named Harmony), who is thinking of converting to Islam and looking for guidance from others on YouTube. The internet also provides many opportunities to make contact with strangers. A Pew Research Center study into this topic found that '60% [of] teenagers online have gotten an email or instant message from a perfect stranger' and '63% of those who have gotten such emails or IMs say they have responded' (19). Considering that this research was conducted before the rise of social networking websites, one can only assume contact with strangers through the internet has increased. Furthermore, the communities and bonds formed on the internet are considered real. According to the Pew research, 'frequent users of the Internet are more enthusiastic about the friendship enhancing quality of the Internet: more than six in ten say that it helps some or a lot' (Lenhart 2001, 16). This is significant because of the importance of social networks in recruitment to modern terrorism. As Marc Sageman outlines in his book Understanding terror networks (2004, 56), social and emotional bonds tend to be more important than external factors like common hatred for an outside group.9 Certainly, it takes a combination of factors to recruit terrorists, but if social networks are one of the crucial factors, the internet clearly poses a threat, especially considering the popularity of social networking interfaces like MySpace, Friendster and Facebook.

Hip hop's cultural history

The final point to be considered in assessing the function of new media and youth cultures as bridging mechanisms between youth and extremist ideologies is the cultural history of hip hop. Scholars and the media have tended to react critically to hip hop which displays radical leanings, perceiving it as militant and dangerous.
Gruen (Jihad watch, 2006) asserts that 'the music is very persuasive because it is giving young people ideas, and those ideas are what might motivate someone to become a jihadi'. There is a case to be made for monitoring this type of music closely, especially as young men in particular are attracted to the hardness and violence portrayed in some genres of hip hop like gangsta rap. However, adopting such a stance fails to take account of the subtle boundaries between hip hop activism or socially conscious rap and terror rap, and displays a lack of understanding of hip hop culture. Hip hop displaying radical views is often in the style of gangsta rap, which tends to romanticise the violence, misogyny, aggression, hypermasculinity and
8 An investigation by Lenhart et al. (2001) found that 94% of online teens report using the Internet to research for school, 54% believe it helps them keep ahead of fashion and music trends, and 26% say the internet helps them get information about things that are hard to talk to others about.

9 According to Gruen (2006, 14), 'a virtual world is created that provides all the social support necessary to satisfy the prospective recruits needs and to change his worldview'. She adds that relief from alienation is 'a primary reason why people join irregular political organizations' (15).


materialism of a gangster lifestyle. While this genre has been very controversial and many have argued that rap music actually causes violence, there is little evidence to suggest that this is the case. Furthermore, in many cases it appears that those young men who listen to gangsta rap (and similar genres) do so in order to appropriate a valid, deviant and destructive masculinity which threatens normative definitions of Western masculinity. It is often this, rather than an attraction to crime or radical agendas, which compels young men to produce and listen to violent and aggressive rap lyrics.

Engaging with youth and resistances


There are many difficulties for authorities in designing approaches which deal with youth culture and new media as bridging mechanisms between young people and extremism.10 Authorities have recognised the need to adopt alternative approaches, however: the US State Department has been finding ways to best deal with internet companies in relation to extremism on the web, while the Australian Federal Police have set up fake jihadist websites to track extremists (Allard 2007, 2). Perhaps another direction governments can take is to find ways to engage with youth. As Alison Pargeter's article on Western converts to radical Islam argues, many converts who have been involved in terrorist-related activity, or who have taken up a militant interpretation of Islam, 'came from deprived backgrounds or from broken homes' (21), had personal problems, were 'educational underachievers' (21), did so in order to express 'social dissatisfaction and frustration' (23), or mixed radical Islam with gang culture in order to justify their activities or give them greater 'street credibility' (23). Taking up Islam became 'an escape route' and a way of getting their lives 'back on track' (22), and she asserts that 'many of those converts who have been engaged in violence also seem to be misfits who turn to Islam to find a community that will accept them' (22). Obviously there is a range of factors involved in creating a terrorist, but governments can do much to address some of the factors mentioned by Pargeter. Another factor that needs to be taken into account is that resistance cannot be conceived as belonging to a single, homogeneous plane. Instead, we should recognise a multiplicity of resistances which are directed toward local and specific targets. Attempts to develop a more specific model of power relations include that of Johan Fornas, who represents the diversity that resistance encompasses in a number of ways.
Firstly, he points out that the agents of resistance are diverse, and may be individual or collective, 'resistant subjectivity or resistant communality ... territorially localized or nomadic and diasporic' (1995, 126). The motives of resistance are also regarded as various: while some are seen as 'unintentional side-effects', others are conscious and intentional, some even reflexive (129). Fornas also argues that the means of resistance and targets of resistance vary (127-8). There are a multiplicity of very specific resistances occurring in the use of youth culture and new media for
10 For instance, there is often no violation of the law, and the internet is difficult to police.



extremist purposes. On a broad level, there are different resistances in terms of what motivates youth in various countries to become involved in radical activities. As Weimann (2006, 55) points out, youth in the developing world are often receptive to terrorist recruitment because they have witnessed killings first-hand and thus see violence as the only way to deal with grievances and problems. In more developed and Westernised countries, motivating factors tend to be different. Lia (2006) notes that terrorist manuals online might reach those with criminal, rather than terrorist, intentions. In such cases, although the same material is being accessed, clearly the resistance being enacted is different. Understanding resistance as multiple will allow authorities to devise more effective strategies to counter extremism.

Conclusion
The potential youth culture and new media provide to extremists for recruiting youth and communicating their views suggests that the Australian government should take this factor into consideration when designing and reviewing counter-terrorism approaches. With growing evidence that contemporary extremist and terrorist groups are using youth culture and new media to recruit, communicate and incite violence, authorities face many difficulties in devising strategies in response. For instance, it is difficult to locate the boundaries between hip hop activism or socially conscious rap and jihad rap, and the internet is difficult to police. While these radical campaigns often do not violate laws or draw notice from authorities, even music that does not seem radical (or aim to be) can be used as a bridging mechanism. In assessing the potential that youth culture and new media provide to extremists, a number of issues need to be considered, including the ways in which the internet is used to channel extremism, young people's relationship with the internet, and the cultural history of hip hop. Ultimately, by engaging with young people and the multiplicity of resistances they enact, governments may be more effective in countering extremism.

References
Abu Yaasameen Al-Sayfullah. 2007. http://profile.myspace.com/index.cfm?fuseaction=user.viewprofile&friendid=42981871 (accessed July 31, 2007).
Allard, T. 2007. Calling extremists: A cyber sting awaits. The Sydney Morning Herald, February 27.
Barkun, M. 1997. Religion and the racist right. Chapel Hill, NC: University of North Carolina Press.
Close to the edge. 2007. http://profile.myspace.com/index.cfm?fuseaction=user.viewprofile&friendid=53760446 (accessed July 31, 2007).
Fornas, J. 1995. Cultural theory and late modernity. London: Sage Publications.


Gruen, M. 2006. Innovative recruitment and indoctrination tactics by extremists: Video games, hip-hop, and the world wide web. In James Forest (ed.), The making of a terrorist: Recruitment, training, and root causes. Westport, Connecticut: Praeger Security International.
Jesse, B., & Matera, G. 2003. Selling Volksgemeinschaft: The normality of the extreme. http://www.humanityinaction.org/index.php?option=content&task=view&id=502 (accessed May 8, 2007).
Jihad watch. 2006. http://www.jihadwatch.org/archives/014011.php (accessed July 31, 2007).
Katz, R., & Devon, J. 2003. WWW.JIHAD.COM. http://siteinstitute.org/bin/articles.cgi?ID=inthenews3503&Category=inthenews&Subcategory=0 (accessed May 8, 2007).
Kerr, J. 2007. Internet is breeding ground for terror. Australian National Security, September.
Lenhart, A., et al. 2001. Teenage life online: The rise of the instant-message generation and the Internet's impact on friendships and family relationships. www.pewinternet.org/pdfs/PIP_Teens_Report.pdf (accessed May 8, 2007).
Lia, B. 2006. Al-Qaeda online: Understanding jihadist internet infrastructure. Jane's Intelligence Review 18(1): 14-19.
Mohammad Adil. 2007. http://profile.myspace.com/index.cfm?fuseaction=user.viewprofile&friendid=143385137 (accessed July 31, 2007).
Pargeter, A. 2006. Western converts to radical Islam: The global jihad's new soldiers? Jane's Intelligence Review 18(8): 20-26.
Sageman, M. 2004. Understanding terror networks. Philadelphia, Pennsylvania: University of Pennsylvania Press.
Schafer, J. A. 2002. Spinning the web of hate: Web-based hate propaganda by extremist organisations. Journal of Criminal Justice and Popular Culture 9(2): 69-88.
Simon, S., & Benjamin, D. 2001. The terror. Survival 43(4): 5-18.
Soldiers of Allah interview. 2007. http://www.muslimhiphop.com/index.php?p=Stories/10._Soldiers_of_Allah_Interview (accessed May 8, 2007).
Swedenborg, T. 2001. Islamic hip-hop vs. Islamaphobia: Aki Nawaz, Natacha Atlas, Akhenaton. http://comp.uark.edu/~tsweden/IAM.html (accessed May 8, 2007).
Sword of Allah 87. 2007. http://www.youtube.com/user/SwordofAllah87 (accessed July 31, 2007).
Weimann, G. 2006. Terrorist dot com: Using the internet for terrorist recruitment and mobilization. In James Forest (ed.), The making of a terrorist: Recruitment, training, and root causes. Westport, Connecticut: Praeger Security International.
YouTube is also a hit with terrorists. 2007. The Sydney Morning Herald, February 11. http://www.smh.com.au (accessed May 8, 2007).



16
An Overview of Pressure-Impulse Diagram Derivation for Structure Components
Yanchao Shi
The University of Western Australia, Australia Tianjin University, Tianjin, China

Hong Hao
The University of Western Australia, Australia

Zhong-Xian Li
Tianjin University, Tianjin, China
Abstract
Pressure-impulse (P-I) diagrams were first used to assess the damage extent of structural elements and buildings in World War II. Since then they have been studied and developed by many researchers, and they are now commonly used as a first tool in the preliminary design or assessment of protective structures to establish safe response limits for given blast-loading scenarios. A few approaches are available in the literature to develop P-I diagrams for a structural component. This paper presents an overview of the various methods for constructing P-I diagrams for structural components, including analytical, experimental and numerical approaches. Comparisons and discussions on the suitability, applicability and reliability of these methods are given. A combined numerical and analytical method for the derivation of P-I diagrams of RC columns, using a remaining load-carrying capacity criterion proposed by the authors, is also introduced.



Biography
Yanchao Shi is a PhD candidate at Tianjin University and a visiting PhD student at the University of Western Australia. His research interest is the numerical analysis of the dynamic response and damage of frame structures under explosive loads. Hong Hao is professor of structural dynamics in the School of Civil and Resource Engineering, the University of Western Australia. His research interests include earthquake and blast engineering, structural condition monitoring and vibration measurement. Zhong-Xian Li is professor of civil engineering in the School of Civil Engineering at Tianjin University, China. His research interests include earthquake and blast resistance analysis of structures, structural vibration control and health monitoring.

1. Introduction
A P-I curve is an iso-damage curve: every combination of peak pressure and impulse on it corresponds to the same damage of a structural component loaded with a particular loading history (e.g., blast load). A P-I diagram usually contains a group of P-I curves with different degrees of damage for a particular structural component. These curves divide the P-I space into several regions, each corresponding to a particular level of damage, and the curves themselves represent the boundaries between the different damage levels, such as low, medium and high damage. With the increase of global terrorist attacks on building structures, the P-I diagram, as the first tool in the preliminary design or assessment of structures to establish safe response limits for given blast-loading scenarios, has drawn great attention in recent years. Several approaches to develop P-I diagrams for structural components are available in the literature. This paper presents a review of the various methods used to construct P-I diagrams. Comparisons and discussions on the suitability, applicability and reliability of these methods are given in the paper. A combined numerical and analytical method for the derivation of P-I diagrams of RC columns, with a new damage criterion defined according to the column's remaining load-carrying capacity, proposed by the authors, is also introduced.

2. Methods used to generate P-I diagrams


An overview of the available methods for generating P-I diagrams of structural components is presented in this section. According to their nature, the methods are classified into three categories: analytical, experimental and numerical.

2.1 Analytical methods

In the analytical approach, the structural component is normally simplified as a SDOF system with an assumed response mode. The dynamic response of the SDOF system

is calculated to generate the P-I curves under blast loading. Usually the maximum response of the system is used as the damage criterion. Li and Meng (2002a) studied the characteristics of a P-I diagram for blast loads based on dimensional analysis of a SDOF model. Using the maximum deflection as the damage criterion, they developed a normalized P-I diagram for the SDOF system, and also considered the effect of pulse shape on the normalized diagram. An effective P-I diagram was finally proposed to eliminate the load-shape effect. For an elastic SDOF system with an equivalent lumped mass M and structural stiffness K subjected to a pulse load with peak force Fm and duration td, the SDOF solution can be readily derived following Biggs (1964).
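The response described above can be sketched numerically. The following is a minimal, self-contained illustration (a time-stepping sketch, not the closed-form solution in Biggs 1964): it integrates an undamped elastic SDOF system under a linearly decaying triangular pulse; all parameter values are illustrative assumptions.

```python
import math

# Minimal sketch (illustrative, not Biggs' closed-form solution): maximum
# displacement of an undamped elastic SDOF system (mass M, stiffness K) under
# a triangular pulse of peak force Fm and duration td, by symplectic Euler
# time stepping.
def max_displacement(M, K, Fm, td, dt=1e-4):
    t_end = td + 5.0 * 2.0 * math.pi * math.sqrt(M / K)  # pulse + a few free periods
    y = v = t = ymax = 0.0
    while t < t_end:
        F = Fm * (1.0 - t / td) if t < td else 0.0  # linearly decaying pulse
        a = (F - K * y) / M                         # equation of motion
        v += a * dt
        y += v * dt
        ymax = max(ymax, abs(y))
        t += dt
    return ymax

# For a pulse much longer than the natural period, the response approaches the
# suddenly-applied-load limit ymax ~ 2*Fm/K (illustrative parameters):
M, K, Fm = 1000.0, 4.0e6, 1.0e4   # kg, N/m, N
ymax = max_displacement(M, K, Fm, td=10.0)
```

With the long pulse chosen here the computed maximum is close to the quasi-static limit 2·Fm/K; shortening td toward the impulsive regime reproduces the other asymptote of the P-I diagram.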

Figure 1 Normalized p-i diagram (after Li & Meng 2002b)

Figure 2 Bilinear representation of elastic-plastic-hardening/softening resistance function


In this approach, the nondimensional variables are defined as:

p = Fm / (yc K)    (1)

i = I / (yc √(MK))    (2)

τ = t √(K / M)    (3)

where I is the applied impulse and yc is the critical (maximum) deflection defining damage.

Based on the SDOF solution, the general representation of a P-I curve was obtained in p-i space, as shown in Figure 1. An analytical formula for the P-I curve in Figure 1 was also proposed:

p = n1 / (i - 1)^n2 + 0.5    (4)

where n1 and n2 depend on the pulse loading shape. In order to eliminate the load-shape effect, two new loading parameters ie and pe were introduced:

ie = (i - 1)^n2 / n1    (5)

pe = p    (6)

Combining Equations (4)-(6), the effective P-I diagram can be derived as

pe = 1/ie + 0.5    (7)
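As a check on the algebra, the sketch below evaluates Eq. (4) and the change of variables of Eqs. (5)-(6), and verifies that every point lands on the single effective curve of Eq. (7). The shape constants n1 and n2 are illustrative placeholders, not values taken from Li and Meng.

```python
# Sketch of Eqs. (4)-(7). n1 and n2 are assumed placeholder values for the
# pulse-shape constants, not values from Li & Meng (2002a).
def p_of_i(i, n1, n2):
    """P-I curve of Eq. (4); valid for i > 1."""
    return n1 / (i - 1.0) ** n2 + 0.5

def effective_coords(p, i, n1, n2):
    """Effective load parameters (pe, ie) of Eqs. (5)-(6)."""
    return p, (i - 1.0) ** n2 / n1

n1, n2 = 1.2, 0.9                    # hypothetical pulse-shape constants
for i in (1.5, 2.0, 4.0, 10.0):
    pe, ie = effective_coords(p_of_i(i, n1, n2), i, n1, n2)
    assert abs(pe - (1.0 / ie + 0.5)) < 1e-9   # Eq. (7) holds for every point
```

The loop confirms that the shape-dependent family of curves in Eq. (4) collapses onto the single shape-independent curve of Eq. (7).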

Both Equations (4) and (7) can be used to derive P-I curves for an elastic SDOF system. Further research extended this method to elastic, perfectly plastic and rigid, perfectly plastic SDOF systems (Li & Meng, 2002b). Building on the idea of Li and Meng (2002a, 2002b), Fallah and Louca (2006) developed P-I diagrams for elastic-plastic-hardening and elastic-plastic-softening SDOF systems subjected to blast loading. Figure 2 gives the bilinear representation of the elastic-plastic-hardening/softening resistance function of the system. Here K is the elastic stiffness of the SDOF system and α²K is the magnitude of the post-yield (hardening/softening) stiffness: with ψ = +1 the post-yield branch represents elastic-plastic hardening, and with ψ = -1 elastic-plastic softening. yel is the yield displacement and yu is the ultimate displacement of the SDOF system. If the same nondimensional parameters defined in Equations (1)-(3) are used, the P-I curves of the SDOF system can be derived in p-i space as in Figure 1, but with different pressure and impulse asymptotes. Based on the analytical results, the authors proposed formulae for the quasi-static asymptote and the impulsive asymptote:

p = Fm / (yc K) = [φ(2 - φ) + ψ α²(1 - φ)²] / 2    (8)

i = I / (yc √(MK)) = √[φ(2 - φ) + ψ α²(1 - φ)²]    (9)
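Asymptotes of this kind follow from a standard energy balance: the work done by the load (quasi-static regime) or the initial kinetic energy (impulsive regime) is equated to the strain energy absorbed at the critical displacement. The sketch below implements that balance under the normalization of Eqs. (1)-(2); it is an independent reconstruction, not Fallah and Louca's code, and it reduces to the classical elastic limits p = 0.5 and i = 1 when φ = 1.

```python
import math

# Energy-balance sketch (independent reconstruction, not Fallah & Louca's code)
# for a bilinear SDOF resistance with elastic stiffness K and post-yield branch
# psi*alpha2*K. phi = y_el/y_d (inverse ductility); psi = +1 for hardening,
# -1 for softening. Normalization follows Eqs. (1)-(2).
def asymptotes(phi, alpha2, psi):
    # Strain energy absorbed up to y_d, normalized by K*y_d^2:
    e = 0.5 * phi * (2.0 - phi) + 0.5 * psi * alpha2 * (1.0 - phi) ** 2
    p_quasi_static = e                # Fm*y_d = E      =>  p = E/(K*y_d^2)
    i_impulsive = math.sqrt(2.0 * e)  # I^2/(2M) = E    =>  i = sqrt(2E/(K*y_d^2))
    return p_quasi_static, i_impulsive

# Elastic limit (phi = 1): the classical asymptotes p = 0.5 and i = 1.
p, i = asymptotes(1.0, 0.25, +1)
assert abs(p - 0.5) < 1e-12 and abs(i - 1.0) < 1e-12
```

Hardening (ψ = +1) raises both asymptotes relative to the elastic-perfectly-plastic case, and softening (ψ = -1) lowers them, as expected physically.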

The two asymptotes are described in terms of two dimensionless parameters, α² and φ, and the hardening/softening parameter ψ. α² is the hardening/softening index, defined as the ratio of the post-yield stiffness to K; φ is the inverse ductility, defined as the ratio of yel to yd, the maximum displacement corresponding to a critical damage level; ψ is the hardening/softening parameter, equal to 1 for the elastic-plastic-hardening model and -1 for the elastic-plastic-softening model. The above two analytical methods allow straightforward generation of P-I curves for a structural component when a single mode of vibration is dominant and responsible for overall structural failure. The load shape effect can also be considered

in these two methods, and they cover elastic, elastic-perfectly-plastic, rigid-plastic, elastic-plastic-hardening and elastic-plastic-softening SDOF systems. However, the simplification of a structural component to a SDOF system has its limitations. As is well known, structures respond to blast loads primarily in their local modes, and these local modes may govern the structural damage, especially when the blast load is of short duration, so a SDOF model may not be suitable for analysing structural damage under blast loads. For example, the SDOF model cannot represent multiple failure modes of a structural component, yet a component such as a column might be damaged initially by shear failure and subsequently collapse through flexural failure. Therefore, a P-I diagram generated from analysis of a SDOF system may not accurately predict structural component damage. Moreover, a deformation-based damage criterion may not be appropriate for evaluating local damage of a structural component subjected to blast loads, especially when the damage is caused primarily by shear failure.

Figure 3 Critical P-I curves for shear and bending failure (after Ma et al. 2006)

Figure 4 Non-dimensional P-I Diagram for One-Way unreinforced Masonry (after Wesevich & Oswald 2005)

Because the SDOF model oversimplifies a structural component and neglects the influence of shear force, and hence the possibility of shear failure, Ma et al. (2006) suggested a new approach to develop P-I diagrams for rigid-plastic beams considering combined failure modes, i.e. bending failure and shear failure. For a rigid-plastic beam, they introduced the following parameters:

ie = I / √(2mQ0)  (shear mode),  ie = I / √(4mM0/L)  (bending mode)    (10)

pe = p0 / (Q0/L)  (shear mode),  pe = p0 / (2M0/L²)  (bending mode)    (11)

ν = 0.8 h v / L    (12)

where M0 and Q0 are the bending and shear strengths of the beam, m is the mass per unit length, L is the half-span length, h is the depth of the beam, v is the average shear strain, and ν is the resulting dimensionless shear-deformation ratio; the dimensionless ratio of shear to bending strength also enters the formulation. Based on these parameters, the pressure-impulse equations in pe-ie space for simply supported and fully clamped beams can be expressed in the form

ie² (1 - 1/pe) = f1(θ)    (13)

1/pe + k/ie² = f2(θ)    (14)
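The structure of pressure-impulse relations of this type can be illustrated on the simplest case. For a rigid-plastic SDOF system (an illustration only, not Ma et al.'s beam model) with constant resistance R under a rectangular pulse of force F > R and duration td, integrating the two phases of motion gives a final displacement w = I²(1 - R/F)/(2mR), which has exactly the "ie²(1 - 1/pe) = constant" shape. The sketch below verifies the algebra numerically with illustrative values.

```python
# Sketch for a rigid-plastic SDOF system (illustrating the form of the P-I
# equations, not Ma et al.'s beam model). Rectangular pulse of force F > R
# and duration td; resistance R; mass m. All values are illustrative.
def final_displacement(F, R, m, td):
    v1 = (F - R) * td / m                 # velocity at the end of the pulse
    x1 = 0.5 * (F - R) * td ** 2 / m      # travel during the pulse
    x2 = v1 ** 2 / (2.0 * R / m)          # travel while decelerating under R
    return x1 + x2

F, R, m, td = 200.0, 50.0, 2.0, 0.1
I = F * td
w = I ** 2 * (1.0 - R / F) / (2.0 * m * R)   # closed form: ie^2*(1 - 1/pe) shape
assert abs(final_displacement(F, R, m, td) - w) < 1e-9
```

Normalizing w by a critical displacement turns this closed form directly into an iso-damage curve in (pe, ie) space.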

where θ is the ratio of the mid-span deflection at a certain damage level to the half-span of the beam; the constants and functions k, f1(θ) and f2(θ) depend on the boundary conditions (simply supported or fully clamped) and the failure modes. Thus, different P-I curves can be obtained for different damage modes. Figure 3 gives a typical P-I diagram for shear and bending failure of a rigid-plastic beam. This method is an effective step in the development of P-I diagrams for structural components: the consideration of multiple damage modes and of a continuous beam model is a significant advance for analytical methods. Its primary drawback is that it completely separates shear failure from bending failure when developing the P-I curve, whereas in reality a beam may be damaged in a combined shear and flexural mode. Another drawback is that it still uses the mid-span deflection as the damage criterion; in practice, the damage displacement is very difficult to determine reliably, especially when the beam is damaged primarily by brittle shear failure. Therefore, further study is needed to develop reliable P-I curves if the traditional SDOF approach is to be used, and the primary challenge is to define accurate damage criteria corresponding to the various possible damage modes.

2.2 Experimental methods

The experimental method is the most direct way to derive P-I curves of structures. Wesevich and Oswald (2005) derived P-I diagrams for concrete masonry walls for varying degrees of damage based on normalized experimental data. They compiled 236 open-air and shock-tube tests on conventional masonry walls with various span, thickness, support and reinforcement configurations and a range of damage levels (i.e. reuse, replace, collapse and blowout). The data were normalized with respect to the flexural capacities of the masonry walls in order to facilitate empirically derived P-I curves. The following expressions, Equations (15)-(16), were used to compute the normalized P-I data points for common and retrofitted wall configurations.

Pbar = P L² / (Kp M)    (15)

Ibar = i √( L g / (KLM Kr w M) )    (16)
in which P is the peak applied blast pressure, i is the applied positive-phase impulse, L is the span of the wall between supports, M is the ultimate moment capacity of the CMU wall, g is the gravitational constant, w is the average unit wall weight, and Kp, Kr and KLM are boundary-condition constant factors. The normalized data were fitted with the expression in Equation (17), in which the values of the parameters A, B and c depend on the damage level and wall type. A typical P-I diagram is shown in Figure 4.

Ibar = A (Pbar)^c / ln(B Pbar)    (17)
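The behaviour of this fitted form can be sketched directly. The parameter values below are illustrative placeholders, not Wesevich and Oswald's fitted constants.

```python
import math

# Sketch of the empirical fit form of Eq. (17). A, B and c are illustrative
# placeholders, not fitted values from Wesevich & Oswald (2005).
def i_bar(p_bar, A, B, c):
    return A * p_bar ** c / math.log(B * p_bar)

A, B, c = 2.0, 3.0, 0.1
# The required impulse blows up near the pressure asymptote P_bar -> 1/B
# (where ln(B*P_bar) -> 0) and falls as P_bar moves above it:
assert i_bar(0.5, A, B, c) > i_bar(5.0, A, B, c)
assert i_bar(0.34, A, B, c) > 50.0
```

The denominator makes the pressure asymptote explicit: the fit is only meaningful for Pbar above 1/B, which is where the empirical data points lie.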

Generating a reliable P-I diagram for a structural component requires a large amount of experimental data, which is usually unavailable or very expensive to obtain. Thus, experimental data from similar structural components usually need to be used. Although effort is spent normalizing the dimensions, boundary conditions and material strengths, variations and uncertainties inevitably remain, which may lead to unrealistic predictions of structural failure. For example, in Figure 4 one can see that, for a certain impulse, the damage of the masonry wall becomes smaller as the pressure increases, which is not physically reasonable. This is a consequence of the errors and variations in the many experimental data sets used to derive the P-I curve. Nevertheless, normalized experimental data remain the most straightforward and reliable basis for deriving P-I curves of structures when sufficient testing data are available.

2.3 Numerical methods

With the growth of computer power, it has become possible to develop refined numerical models to simulate structural response and damage under blast loads, and the simulated response can be used to construct P-I diagrams for particular structural components. Lan and Crawford (2003) used the numerical method to derive the P-I diagram for metal deck roofs based on weld failure criteria. They ran a series of simulations to obtain the relationship between the blast loads and the roof damage level and, based on the numerical data, developed P-I curves for different damage levels. They also compared these with the P-I diagram generated by the SDOF method (PI-plus), and found that the SDOF-based P-I diagram underestimates the blast resistance capacity of the metal deck roof, probably because of the material idealization and the neglect of strain-rate effects in the SDOF approach.

Compared with the experimental method, the numerical method is a cheaper and effective way to generate P-I diagrams. However, the accuracy of the numerically

simulated data depends on reliable numerical and material models. Moreover, even with today's computer power, simulating the failure of a structural component under blast load still requires extremely intensive computational effort. Therefore, a combination of numerical and experimental and/or analytical methods might be needed for an accurate and efficient derivation of P-I diagrams for structural components.

3. A combined numerical and analytical method


In this section, a combined numerical and analytical method to generate P-I diagrams for RC columns is proposed and explained. It is based on a new damage criterion defined by the authors in terms of the vertical load-carrying capacity of the column.

3.1 Damage criterion

Considering that columns are primarily designed to carry axial loads (horizontal loads are mainly transferred to the rigid floors and shear walls), the degradation of the axial load-carrying capacity of an RC column is a reasonable measure with which to quantify column damage. The axial load-carrying capacity of an undamaged RC column depends on the longitudinal reinforcement and the concrete. Following MacGregor (1996) and the ACI Code, the following equation is used to assess the maximum axial load-carrying capacity of an undamaged RC column:
PN = 0.85 f'c (AG - AS) + fy AS    (18)

where f'c is the compressive strength of the concrete, fy is the yield strength of the longitudinal reinforcement, AG is the gross area of the column cross-section, and AS is the area of the longitudinal reinforcement. The damage index D is defined as:
D = 1 - PN_residual / PN_design    (19)
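A minimal sketch of this criterion follows, using the damage bands defined in this study (0.2, 0.5, 0.8). The column dimensions, material strengths and assumed residual capacity below are illustrative values, not results from the paper's simulations.

```python
# Sketch of Eqs. (18)-(19) and the damage bands used in this study. The
# material strengths and the residual capacity are illustrative assumptions.
def axial_capacity(fc, fy, Ag, As):
    """Eq. (18): max axial capacity of an undamaged RC column (MPa, mm^2 -> N)."""
    return 0.85 * fc * (Ag - As) + fy * As

def damage_index(P_residual, P_design):
    """Eq. (19): damage index D."""
    return 1.0 - P_residual / P_design

def damage_level(D):
    if D <= 0.2:
        return "low damage"
    if D <= 0.5:
        return "medium damage"
    if D <= 0.8:
        return "high damage"
    return "collapse"

# Hypothetical 400 x 600 mm column with 8 D20 bars (As ~ 2513 mm^2),
# fc' = 40 MPa, fy = 500 MPa; residual capacity assumed at 40% of design:
Pn = axial_capacity(40.0, 500.0, 400 * 600, 2513.0)
D = damage_index(0.4 * Pn, Pn)             # D = 0.6
assert damage_level(D) == "high damage"
```

The criterion is attractive because the residual capacity PN_residual can be extracted directly from a post-blast numerical simulation, without needing a displacement-based failure threshold.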

where PN_residual is the residual axial load-carrying capacity of the damaged RC column, obtained from numerical simulation (as described in the next section) or from tests, and PN_design is the design axial load-carrying capacity of the undamaged RC column, which can be derived using Equation (18). The degrees of damage defined in this study are: D = 0-0.2, low damage; D = 0.2-0.5, medium damage; D = 0.5-0.8, high damage; D = 0.8-1, collapse.

3.2 Combined method to generate P-I diagrams

To generate the P-I diagram for an RC column, a series of simulations of the responses of the column under different blast loads is carried out. The blast loads, which define the peak pressure and impulse, are then plotted in the P-I space

together with the damage level (see Figure 5). Finally the P-I curves, which are the boundary lines between the different damage levels, are obtained by curve fitting. It should be noted that the numerical model used to calculate the column response and damage has been calibrated against independent field blast test data; owing to page limits, the calibration results are not given here. Figure 5 shows the P-I diagram of RC column C1 derived from the curve-fitting method; the configuration of column C1 is given in Table 1. A careful examination of the fitted P-I curves shows that they can be expressed analytically as:
(P - P0)(I - I0) = A (P0/2 + I0/2)^β    (20)

where P0 is the pressure asymptote for damage degree D (with D taken as 0.2, 0.5 and 0.8, respectively), I0 is the impulsive asymptote for damage degree D, and A and β are constants related to the column configuration and degree of damage. The values of P0, I0, A and β of column C1 for the three P-I curves are given in Table 2. From this table one can see that A and β are almost the same for the different P-I curves, i.e. A ≈ 12 and β ≈ 1.5, so they can be assumed independent of the degree of damage. Therefore, Equation (20) can be expressed as:
(P - P0)(I - I0) = 12 (P0/2 + I0/2)^1.5    (21)

Figure 6 compares the fitted curves given in Figure 5 with those obtained from Equation (21); as shown, Equation (21) represents the P-I curves of an RC column well.
Column | Width (mm) | Depth (mm) | Height (mm) | Cross tie/hoop | Longitudinal reinforcement | Cover depth (mm)
C1 | 400 | 600 | 4600 | D10@200 | 8D20 | 25
C2 | 600 | 400 | 4600 | D10@200 | 8D20 | 25

Table 1 Configuration of RC columns C1 and C2


Figure 5 Damage of C1 and fitted P-I curves (pressure in kPa versus impulse in kPa·ms, with data points for low, medium and high damage and collapse, and fitted curves for D = 0.2, 0.5 and 0.8)

Figure 6 P-I curves of C1 and Eq. (21)



Figure 7 Damage of C2 and fitted P-I curves

Figure 8 P-I curves of C2 and Eq. (21)

D | P0 (kPa) | I0 (kPa·ms) | A | β
0.2 | 900 | 2500 | 11.5 | 1.45
0.5 | 1200 | 3500 | 12 | 1.49
0.8 | 1500 | 6000 | 12.5 | 1.54

Table 2 Values of the parameters in Eq. (20)

A further study was conducted to investigate whether Equation (21) can be used for other RC columns. The same procedure was used to estimate the damage degrees of RC column C2 under different blast loads; the configuration of column C2 is also given in Table 1. The damage levels with respect to peak pressure and impulse are plotted in the P-I space in Figure 7, together with the best-fitted P-I curves according to Equation (21). The fitted curves again follow the boundary lines between the different damage levels, demonstrating that Equation (21) can be used to model pressure-impulse curves for other RC columns. Since using Equation (21) substantially reduces the number of data points required to fit a reliable P-I curve, it is used in this paper to model the variations of P-I curves. According to the previous discussion, the procedure to generate a P-I diagram for an RC column can be simplified to the following two steps:
(1) Perform numerical simulations to obtain the damage degrees of the RC column under blast loads in two ranges, the impulsive loading range and the quasi-static loading range. The results (damage levels), together with the blast peak pressures and impulses, are then plotted in the P-I space (see Figure 8).
(2) Using Eq. (21) as the regression model, obtain the best-fitted P-I curves, which are the boundaries between the different damage levels, as seen in Figure 8.
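Step (2) can be sketched as a brute-force least-squares search for the asymptotes of Eq. (21). The "simulated" boundary points below are synthetic, generated from Eq. (21) itself using the D = 0.2 asymptotes of column C1 in Table 2 (P0 = 900 kPa, I0 = 2500 kPa·ms), so the search should recover exactly those values.

```python
# Sketch of step (2): recover the asymptotes P0, I0 of Eq. (21) from damage-
# boundary points by brute-force least squares. The data points are synthetic,
# generated from Eq. (21) with the D = 0.2 values of column C1 in Table 2.
def curve_impulse(P, P0, I0):
    """Impulse on the Eq. (21) curve at pressure P (for P != P0)."""
    return I0 + 12.0 * (P0 / 2.0 + I0 / 2.0) ** 1.5 / (P - P0)

true_P0, true_I0 = 900.0, 2500.0      # kPa, kPa*ms (Table 2, D = 0.2)
points = [(P, curve_impulse(P, true_P0, true_I0))
          for P in (1010.0, 1510.0, 3010.0, 9990.0)]

best = None
for P0 in range(700, 1101, 20):       # coarse, illustrative search grids
    for I0 in range(2300, 2701, 20):
        err = sum((I - curve_impulse(P, P0, I0)) ** 2 for P, I in points)
        if best is None or err < best[0]:
            best = (err, P0, I0)
_, fit_P0, fit_I0 = best
assert (fit_P0, fit_I0) == (900, 2500)
```

In practice a nonlinear least-squares routine would replace the grid search, but the two-parameter structure of Eq. (21) is what makes the fit feasible with only a handful of simulated points.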

4. Conclusions
This paper presents an overview of the various methods used to construct P-I diagrams for structural components: analytical, experimental and numerical methods. All three have limitations. A combination of numerical and experimental and/or analytical methods, with the residual load-carrying capacity of the column as the damage criterion, is proposed for an accurate and effective derivation of P-I diagrams for structural columns.

Acknowledgements
The authors would like to acknowledge the financial support of the Australian Research Council under grant number DP0774061 and the National Natural Science Foundation of China under grant number 50528808 for carrying out this research.

References
Biggs, J. M. 1964. Introduction to Structural Dynamics. New York: McGraw-Hill.
Fallah, A. S. & Louca, L. A. 2006. Pressure-impulse diagrams for elastic-plastic-hardening and softening single-degree-of-freedom models subjected to blast loading. International Journal of Impact Engineering 34(4), 823-842.
Lan, S. R. & Crawford, J. C. 2003. Evaluation of the blast resistance of metal deck roofs. In 5th Asia-Pacific Conference on Shock & Impact Loads on Structures, Changsha, Hunan, China.
Li, Q. M. & Meng, H. 2002a. Pressure-impulse diagram for blast loads based on dimensional analysis and single-degree-of-freedom model. Journal of Engineering Mechanics 128(1), 87-92.
Li, Q. M. & Meng, H. 2002b. Pulse loading shape effects on pressure-impulse diagram of an elastic-plastic, single-degree-of-freedom structural model. International Journal of Mechanical Sciences 44(9), 1985-1998.
Ma, G. W., Shi, H. J. & Shu, D. W. 2006. P-I diagram method for combined failure modes of rigid-plastic beams. International Journal of Impact Engineering 34(6), 1081-1094.
MacGregor, J. G. 1996. Reinforced Concrete: Mechanics and Design. Prentice Hall Professional Technical Reference.
Wesevich, J. W. & Oswald, C. J. 2005. Empirical based concrete masonry pressure-impulse diagrams for varying degrees of damage. pp. 2083-2094. New York: American Society of Civil Engineers.


17
Location-Based Services in Emergency Management- from Government to Citizens: Global Case Studies
Anas Aloudat, Katina Michael and Jun Yan
School of Information Systems and Technology, University of Wollongong
Abstract
In emergencies, governments have long utilised broadcast media such as radio and television to disseminate up-to-date, real-time information to citizens. In the same context, however, other technologies, such as mobile location-based services, have not been utilised to their full extent or potential. The value of such services is foreseeable in critical situations, where coordinating emergency management procedures with location-awareness activities is paramount. This paper tracks the introduction of location-aware services into the realm of emergency management. It investigates case studies in which text messaging has been exploited to deliver safety information and early warnings to users based on the availability of their location information, and it examines the reasons why technologies such as cell broadcasting have not yet been adopted.

Biographies
Anas Aloudat is a PhD candidate in the School of Information Systems and Technology, University of Wollongong. He holds Bachelor's and Master's degrees in Computer Science. His research focus is location-based solutions for emergency management; other research interests include advanced mobile technologies, navigation systems, and telecommunications. Dr Katina Michael, PhD (UOW) 2003, BIT (UTS) 1996, Senior Member IEEE '04, is on the IEEE Technology and Society Magazine editorial board and is the technical editor of the Journal of Theoretical and Applied Electronic Commerce Research. Her research interests are in the areas of location-based services, emerging mobile technologies, national security, and their respective socio-ethical implications. Katina is currently a senior lecturer in the School of Information Systems and

Technology, Faculty of Informatics, University of Wollongong, Australia. She teaches eBusiness, strategy, innovation and communication security issues, and is the research administrator of the IP Location Based Services Program. Katina has authored over 40 refereed papers and is currently working towards the completion of her second book; she is also studying towards a Masters of Transnational Crime Prevention in the Faculty of Law. She has held several industry positions, including as an analyst for United Technologies in 1993 and Andersen Consulting in 1996, and as a senior network and business planner for Nortel Networks (1996-2001). In her role with Nortel she had the opportunity to consult for telecommunication carriers throughout Asia. Dr Jun Yan is currently an Australian Postdoctoral Fellow (APD) and a Lecturer in the School of Information Systems and Technology, University of Wollongong. He received a PhD, a Master's degree and a Bachelor's degree, all in Information Technology. His research interests include location-based services, service-oriented computing, workflow, and agent technologies.

1. Introduction
Emergencies and their aftermath have been a part of civilisation since time began. Emergencies, including natural and man-made disasters, by their nature cannot be predicted precisely in their timing, effects, or intensity. Nevertheless, human societies have always practised Emergency Management (EM) activities, which have evolved from simple precautions and scattered procedures into sophisticated management systems encompassing preparedness, response, mitigation, and recovery strategies (Canton, 2007). In the twentieth century, countries utilised technologies such as sirens, radio, television, and the Internet to deliver warnings and real-time information to citizens. Over the past few years, other technologies, such as mobile phone messaging systems, have been exploited to complement traditional emergency systems. They have emerged as a practical solution because users' locations can be determined using existing positioning techniques, which then facilitates the provision of services based on the derived location information (Küpper, 2005). The European Telecommunications Standards Institute has distinguished between two types of mobile emergency service applications (European Telecommunications Standards Institute, 2006). The first is initiated by a citizen in the form of a mobile phone call or a distress Short Message Service message; this service is known as wireless E911 in the United States and E112 in the European Union. In this case, mobile service providers are obliged to provide information regarding the location of the originating call or message with accuracies within 50 to 150 metres (International

Telecommunication Union, 2002). The second is initiated by the government, in which alerts, notifications, or early warnings are disseminated (pushed) to all citizens located in designated areas. Several studies have proposed mobile technologies as possible solutions for delivering location-based emergency information and warning notifications (Krishnamurthy, 2002; Weiss et al., 2006); however, their feasibility has not been well documented. This paper provides an overview of the technologies and their use in real global EM

cases. The focus here is to investigate the value of such services and the obstacles behind their late adoption. The rest of the paper is organised as follows. Section 2 provides a background on different mobile technologies in the realm of EM. Section 3 investigates the reasons behind the late adoption of such technologies and presents actual global case studies of Government-to-Citizens (G2C) Location-Based Services (LBS). Section 4 concludes the paper and outlines the authors' future work.

2. Applied Mobile Technology Solutions for Emergency Management Applications


Mobile technologies have emerged as possible solutions for delivering warning notifications and emergency alert information. For example, the new 3G standard Multimedia Broadcast Multicast Service (MBMS) could be used to broadcast emergency information to disaster areas with rich multimedia content such as voice instructions and evacuation maps (Ericsson.com, 2007). Determining which technology to utilise depends essentially on the emergency service providers' perception of the types, capabilities, and limitations of currently used mobile handsets and other handheld devices. Perhaps the two most feasible technologies that fulfil the requirements of an emergency alert information service are the common Short Message Service (SMS) and the less used but comparable Cell Broadcast Service (CBS). Both could be used for geo-specific EM purposes, and both operate with almost all mobile devices available today. A brief introduction to the two technologies and their characteristics is presented in this section.

2.1 Short Message Service

SMS is a well-known asynchronous communication protocol capable of transmitting limited-size binary or text messages to one or more recipients. SMS offers a virtual guarantee of message delivery to its destination (European Telecommunications Standards Institute, 2006). In the case of unavailable network coverage or temporary failure, the message is stored in the Short Message Service Centre (SMSC) network component and delivered when the destination becomes available. The message will also be delivered if the mobile handset is engaged in a voice and/or data activity. SMS messages do not consume much bandwidth, although the network might become overloaded if an immense number of SMS messages and/or phone calls are initiated simultaneously; delays can then occur and may result in delivery failure, especially during emergencies and disasters. SMS does not provide any geo-specific location information by itself. Such information, for example the cell ID, must be obtained by the mobile service provider from other sources. However, SMS does have the potential to be used in location-based emergency services: mass SMS messages can be directed to specific mobile numbers once those numbers have been identified as being in the designated area(s).
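The push model just described can be sketched as follows: select the subscribers whose last-known cell ID falls inside a designated warning area, then queue a mass SMS to those numbers. All names and data structures below are hypothetical illustrations, not a real carrier API.

```python
# Hypothetical sketch of location-based mass SMS targeting. The subscriber
# registry, cell IDs and message are illustrative, not a real operator system.
subscribers = {                     # number -> last-known cell ID (from operator)
    "+61400000001": "cell-17",
    "+61400000002": "cell-42",
    "+61400000003": "cell-17",
}
warning_area_cells = {"cell-17"}    # cells covering the designated warning area

def numbers_in_area(subs, cells):
    """Return the sorted numbers whose last-known cell lies in the area."""
    return sorted(n for n, cell in subs.items() if cell in cells)

outbox = [(n, "Flood warning: move to higher ground.")
          for n in numbers_in_area(subscribers, warning_area_cells)]
assert [n for n, _ in outbox] == ["+61400000001", "+61400000003"]
```

The sketch makes the key dependency explicit: unlike cell broadcasting, this approach only works if the operator can supply an up-to-date mapping from phone numbers to cells.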


2.2 Cell Broadcast Service

Cell broadcasting is a service delivered by mobile providers in which uniform text messages are broadcast indiscriminately to all mobile handsets in a specific geographic area. The messages can be broadcast to all towers in a carrier network, covering a whole country, or to a specific cell covered by a single tower. The Cell Broadcast Service (CBS) has not been widely adopted in commercial applications. Unlike SMS, the nature of CBS does not allow two-way interactive communication. A few proprietary solutions exist today, but they require a specific Subscriber Identity Module (SIM) toolkit and special back-end content management systems (celltick.com, 2003). One example is found in the United States, where television and radio stations in rural states pay to broadcast messages to mobile users in situations such as severe weather, in the hope of attracting users to their channels for further information (O'Brien, 2006). The cell broadcasting spectrum has a capacity of 64,000 different channels, and each channel can be used for a different type of messaging, e.g. weather warnings, traffic reports, or public health advice. Some channels are reserved for broadcasting specific-purpose message types. For example, the cell/area info display service allows a cell to broadcast its geo-specific information (name or ID) directly to its handsets on channel 050. In Figure 1, the info message is displayed directly on the handset screen in the idle state.

Figure 1. Vodafone cell info display service

CBS does not require foreknowledge of mobile phone numbers. Analogous to radio, only handsets tuned to the activated channel receive the broadcast, and a handset has to be switched on to receive messages: a message will not be received if the handset is switched on after the broadcast. CBS is conveyed on dedicated channels using a fraction of the bandwidth normally used for mobile phone calls and SMS text messages. Therefore, it does not place additional demand on carrier resources or suffer degradation when the network becomes highly congested during emergency incidents or calamitous events (CellCast Communications, 2006).


2.3 SMS and CBS for Location-Based Emergency Management

It is important to mention that other technologies such as the Enhanced Messaging Service (EMS), Multimedia Messaging Service (MMS), and MBMS might potentially be used to deliver geo-specific emergency alert messages. However, all the cases that have been recorded were deployed using either SMS or CBS. Table 1 presents a comparison of the characteristics of the two technologies in the domain of EM.

Handset compatibility. SMS: all handsets support SMS. CBS: most handsets support CBS, except a few, e.g. the Nokia 3310 (celltick.com, 2003).

Transmission form. SMS: unicast and multicast communication. CBS: broadcast service; the message is received indiscriminately by every handset within broadcast range.

Mobile number dependency. SMS: dependent; foreknowledge of mobile number(s) is essential. CBS: independent; the message is received on the activated broadcast channel.

Location dependency. SMS: independent; the user receives the message anywhere. CBS: dependent; targets one cell or more.

Geo-information. SMS: achieved by obtaining the cell ID from the network operator. CBS: the cell(s) location is known to the broadcaster beforehand.

Service barring. SMS: no barring. CBS: received only if the broadcast reception status is set to ON.

Reception. SMS: the message is received even if the mobile is switched on after sending. CBS: the message is received once; no reception if the handset is switched on after broadcasting.

Congestion and delay. SMS: affected by network congestion; an immense number of SMS may produce delays. CBS: congestion is unlikely as CBS messages are sent on dedicated channels; almost no delays except in poor coverage areas.

Delivery failure. SMS: network overload might cause delivery failure. CBS: a busy mobile handset might fail to process a CBS message.

Delivery confirmation. SMS: the sender can request delivery confirmation. CBS: no confirmation of delivery.

Repetition rate. SMS: no repetition rate. CBS: can be repeated periodically at intervals of 2 to 32 minutes.

Language format. SMS: identical to all receivers. CBS: multi-language messages can be broadcast on multiple channels simultaneously.

Spamming. SMS: some mobile service providers support internet connectivity, so internet-based SMS spamming is possible. CBS: not possible except through uncontrolled access to mobile network infrastructure and a lack of safeguards by an irresponsible service provider.

Table 1. Characteristics of SMS and CBS for EM. Adapted from: (National Communications System, 2003), (European Telecommunications Standards Institute, 2006)
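The trade-offs in Table 1 can be condensed into a simple decision rule for an alert operator. The sketch below is only an illustration of the table's logic; the function and parameter names are ours, not from any standard or deployed system.

```python
def choose_bearer(need_delivery_confirmation, recipients_known,
                  target_is_area, network_congested):
    """Pick SMS or CBS for an emergency alert, following Table 1:
    SMS needs foreknowledge of numbers and supports delivery receipts;
    CBS reaches whole cells indiscriminately and is largely unaffected
    by network congestion."""
    if need_delivery_confirmation:
        return "SMS"   # CBS offers no confirmation of delivery
    if target_is_area and not recipients_known:
        return "CBS"   # no phone numbers needed; whole-cell reach
    if network_congested:
        return "CBS"   # dedicated channels, congestion unlikely
    return "SMS"

print(choose_bearer(False, False, True, True))  # CBS
```

In practice a system might use both bearers, as several of the case studies below effectively did, but the rule captures why CBS dominated in the congested-network disaster scenarios.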

3. Worldwide Adoption of Location-Based Services for Emergency Management


3.1 Late Adoption and Diffusion

Very recently, a few countries have deployed LBS, but some seem reluctant to integrate such technologies into their existing emergency alert systems for reasons such as regulation, resistance from mobile providers, and privacy concerns. However, the characteristics of previously-used technologies indicate great potential for LBS to be used alongside existing emergency alert systems to complement their role of keeping people informed. Among the pioneers who have tested and adopted LBS technology are Belgium (Libbenga, 2005), The Netherlands, Singapore, Japan, and South Korea. In particular, South Korea is the first country in the world to implement a nationwide CBS-based mobile emergency alert system (Ho, 2006). The following points summarise the legal, political, and technical arguments behind the lack of adoption and diffusion of LBS in EM.

Attractive Business Model: One of the biggest obstacles to the adoption of CBS has been finding a viable business model that can lure mobile operators to use it, since information is sent without registration of whom it is delivered to (celltick.com, 2003). Because it is a broadcasting service, there is no way to know the number of people who will receive it; it is not a peer-to-peer communication model where potential recipients can be clearly quantified.

Obligation of National Service Providers: Existing emergency regulations are largely subject to voluntary participation. No country except Singapore has mandated the involvement of commercial mobile service providers in the case of emergencies. In the United States, for example, even after the 9/11 terrorist attacks and the devastating Hurricane Katrina, the Federal Communications Commission (FCC) is still debating whether to obligate mobile service providers to join a mandatory mobile emergency alert system or to leave participation voluntary as it is (Washington, 2006).

Regulations: No standard regulations have been established to control the deployment of any proposed mobile alert system. Countries that intend to start deploying such systems need a dedicated department or organisation specifically assigned to the task. Some of its duties will be shaping regulations, maintaining plans, and associating performance standards, goals, and metrics (Washington, 2005).

Security and Privacy Concerns: There are no guarantees that spammers would not be able to take control of the broadcast. Concerns about legal liability in the case of hoaxes and false alarms arise as well. Therefore, in The Netherlands and South Korea cell broadcasting is restricted by law to government agencies, and only authorities can send warning messages (O'Brien, 2006).

3.2 Global Case Studies of G2C Location-Based Services

The following cases have been acquired from attainable news and media resources. As there is scant documented literature investigating cases where location-based mobile technologies have been used in the realm of EM, the authors postulate that such work presents a proof of concept and provides concrete examples for any reluctant government considering endorsing and adopting the technology.

Text messages to allay SARS fears: On April 1, 2003, a rumour started by a 14-year-old boy left many people panicked. The teenager mimicked the website design of a popular Chinese newspaper, reporting that SARS had hit Hong Kong (HK) and that the city had been declared an infected city. HK had previously faced the deaths of 16 people from SARS, with more than 700 places under quarantine. The government of HK sent uniform SMS text messages to nearly six million mobile phones in order to allay fears and clarify the situation as quickly as possible. The message body was direct yet simple: "Director of health announced at 3pm today there is no plan to declare Hong Kong as an infected area". The authorities responsible for dispatching the messages were the Hong Kong government, the Health Department, and the Commerce, Information and Technology Bureau. The only drawback was network overload: some people got the message after six hours, while others never got it at all (Perrone, 2003).

Relieving fears after a massive power failure in Italy: On September 28, 2003, a massive blackout hit Italy, leaving nearly 57 million people in the dark. Around 17% of Italy's electricity is imported from its neighbours; the failure began in Switzerland and involved France and Austria, then Italy. Rail and air traffic faced interruptions, with most of the emergency phone lines jammed. Three people were killed as a direct result of the power failure. The failure coincided with a cultural festival in the capital, Rome, with more than a million people located outdoors that night. During the blackout people were forced to leave shopping centres and stores for security reasons, and many found themselves in the streets as rain started to fall. About 12,000 people took refuge in Rome's subway stations, while another 30,000 were stranded on 110 trains. People started to panic, as many of them did not have any kind of information about the incident.
With no electricity, conventional notification channels were useless. Therefore, the Civil Protection Agency (CPA), with the cooperation of mobile carriers, targeted every mobile handset in metropolitan Rome and successfully sent SMS messages to all mobile phones in the city, keeping people informed (Povoledo, 2003).

Lightning risk alert system: Starting in 2007, Singapore deployed an early warning system as part of its lightning risk alert system. Singapore has one of the highest rates of lightning activity in the world, with an average of 166 days of thunderstorms and almost 187 days of lightning a year. The system tracks and monitors 400 predefined geographical locations. If there is a high possibility of lightning activity in a specific location, it initiates warning messages to schools in the targeted areas. A school principal receives an SMS message that includes information such as the beginning and duration of high lightning risk situations; messages are also sent if the warning is cancelled. Chemical companies that have subscribed to the service get the same message content. The initiative is controlled by the National Environment Agency and the Ministry of Education (Mulchand, 2007).

Warning system for impending disasters: In 2007, New Zealand started a new warning system providing mobile phone users in perilous areas with free live text updates about impending emergencies. The system sends alert messages in emergencies that include lahars, major flooding, tsunami, or other natural or man-made disasters. The system explicitly requires the registration of any citizen who wants to use it. Registration can be done by texting the word OPTNNCD to a specific number; citizens can also register by completing a permission form available from the regional councils' websites. Only users who have already subscribed and happen to be in danger areas receive warning messages. A user can unsubscribe at any time.
The service is free, and the region's Civil Defence Emergency Management Group (CDEM), which has authority over the project, will bear the cost of each SMS delivered by the mobile network provider OPTN Ltd (New Zealand Press Association, 2007).

Tsunami disaster in Sri Lanka: When the tsunami disaster hit Asia on December 26, 2004, Dialog GSM, a mobile service provider, used location-based cell broadcasting technology to provide ongoing updates and emergency information to its subscribers along the coasts of Sri Lanka. The information included incoming waves, brief news reports, hospital help lines, and supply distribution centres. The company had been using mobile cell broadcasting for advertising and commercial purposes; the solution is based on Celltick's proprietary LiveScreen technology. During and after the events that followed the cataclysm, the system was used to disseminate CBS emergency alert messages to relevant areas under the provision of the Ministry of Disaster Management and Human Rights. While other communication systems like voice calls and SMS collapsed because millions of people tried to call or send SMS messages, CBS proved a great success in delivering ongoing emergency alerts: unlike phone calls and the short message service, CBS consumes virtually no bandwidth even when the network as a whole suffers extreme traffic overload (Writer, 2005).

Satellite-based alert systems: Both Japan and South Korea have launched satellite-based alert systems that provide instant warnings of natural disasters. Japan is one of the most natural disaster-prone countries in the world, with 108 active volcanoes, and more than 10 percent of the world's earthquakes of magnitude 6 or greater are recorded there every year. The Fire and Disaster Management Agency is the authority responsible for disseminating immediate warnings to the local municipalities affected in the case of tsunamis, cyclones, or volcanic eruptions. The warnings automatically activate communication devices connected to the system and operate sirens and voice alarms. South Korea, which also experiences dire natural hazards like typhoons, set up a similar system, but it has been extended to send text messages to citizens if they are in or near affected areas. The National Emergency Management Agency is the authority responsible for diffusing location-based text alert messages (Mizoguchi and Kim, 2007).

South Korea foreign ministry mobile alert system: In May 2005, the foreign ministry of South Korea, in cooperation with SK Telecom (a mobile service provider), established a mobile alert system for natural disasters and acts of terrorism. The system provides location-based alerts to South Koreans travelling abroad if emergencies happen in their areas. The SK Telecom global roaming service is used to deliver messages and information in Korean regarding the type of incident, telephone numbers of the Korean embassy or consular office in that country, regional hospital numbers, and police numbers. The information might turn out to be vital, especially where language could become a barrier: South Koreans abroad might otherwise not learn about an emergency until too late. The system was successfully used to update South Koreans in the UK about the 7 July London terror incident (Joins.com, 2005).
Early warning location-based alert system in Australia: Australia has recently started to test its national mobile alert systems. Tests have been done in Western Australia and Victoria to warn people about bushfires. The Victorian state government, in partnership with Telstra and with the cooperation of Emergency Management Australia (EMA), has successfully tested a trial emergency alert system that simultaneously telephoned every landline in a specific area. Victoria will initiate a mobile emergency alerting system able to send SMS alert messages and emails to people in specific areas in case of natural disasters or emergencies (Dunn and Collier, 2007). A similar system will be introduced in New South Wales (NSW). That system sends SMS warnings and emergency information, in the case of terrorist acts or natural disasters, to all mobile users in a suspected or endangered area. The information will include details like evacuation procedures, advice in case of bushfires, alternative routes, etc. The system is expected to operate with all mobile service providers (The Australian, 2007). Both the NSW and Victorian systems are expected to be operational in 2007.



While technologies like SMS and CBS have been successfully used, it remains to be seen whether advanced LBS will be diffused using new standards and protocols like MBMS, which are capable of delivering full multimedia content such as voice instructions and detailed evacuation maps. How governments will approach the private sector is another issue. It is argued that many users worry about their mobile location information being revealed without knowing who is accessing it and why (Morris, 2002). Such information might reveal the whereabouts of the user, and with the possibility of privacy intrusion the danger can be greater than revealing other personal details. Therefore, the importance of privacy safeguards protected by privacy laws is emphasised, and maximum success would be achieved if privacy became a component of the technology itself (Morris, 2002).

4. Conclusion and Future Work


In spite of the pervasive presence of mobile technologies, their feasibility in EM has been scarcely documented. This paper gives an overview of two mobile technologies and compares them in the domain of EM. It highlights the need to address unanswered questions relating to their viability, as legal, political, and technical concerns are still the main challenges behind their late adoption worldwide. Other issues, like defining performance standards and the participation of the private sector, are paramount as well. Nonetheless, the authors believe the future looks promising for the adoption and diffusion of such services on a broader scale, based on the success of the several global case studies presented. The authors' future work will include investigating Australia's preparedness to launch its state and national LBS systems in EM. A survey will be prepared to measure political and technical readiness for such initiatives. Novel trends in using the new generation of mobile technologies in EM will be investigated as well.

References
CANTON, L. G., 2007, Emergency Management: Concepts and Strategies for Effective Programs, Hoboken, New Jersey: John Wiley & Sons, Inc.
CELLCAST COMMUNICATIONS, 2006, 'First U.S. Emergency Alerts on Cell Phones By CellCast Communications and Einstein Wireless Prove Viable in Disasters or Terrorist Acts', PR Newswire (U.S.), 16 August, accessed 11 May 2007, Factiva
CELLTICK.COM, 2003, Cell broadcast: New interactive services could finally unlock CB revenue potential, accessed 3 May 2007. Available: http://www.celltick.com/technology.asp?id=WhitePapers&cid=14
DUNN, M. & COLLIER, K., 2007, 'Plan to use SMS for SOS', Herald-Sun, 27 February 2007, p11, accessed 14 April 2007, Factiva
ERICSSON.COM, 2007, Mobile networks go broadcast with Ericsson, accessed 12 May 2007. Available: http://www.ericsson.com/ericsson/press/releases/200604051043320.shtml



EUROPEAN TELECOMMUNICATIONS STANDARDS INSTITUTE, 2006, Analysis of the Short Message Service and Cell Broadcast Service for Emergency Messaging applications, accessed 10 May 2007. Available: http://pda.etsi.org/pda/home.asp?wki_id=jhPgAkxRGQ2455A550@55
HO, C. K., 2006, 'Emergency alerts on mobile phones? Govt studying idea', Straits Times, 15 January, accessed 11 May 2007, Factiva
INTERNATIONAL TELECOMMUNICATION UNION, 2002, ITU Internet Reports: Internet for a Mobile Generation, accessed 2 May 2007. Available: http://www.itu.int/wsis/tunis/newsroom/stats/Mobile_Internet_2002.pdf
JOINS.COM, 2005, 'Ministry will provide terror, disaster alerts', Joins.com, 11 May 2005, accessed 2 April 2007, Factiva
KRISHNAMURTHY, N., 2002, 'Using SMS to deliver location-based services', in Personal Wireless Communications, accessed 2 July 2007, IEEE Xplore
KÜPPER, A., 2005, Location-based Services: Fundamentals and Operation, Chichester, West Sussex: John Wiley & Sons Ltd.
LIBBENGA, J., 2005, SMS - Belgium's first line of defence, accessed 17 May 2007. Available: http://www.theregister.co.uk/2005/06/17/belgian_sms/
MIZOGUCHI, K. & KIM, K.-T., 2007, 'Japan starts alert system using satellites to send warnings on natural disasters', Associated Press Newswires, 9 February 2007, accessed 15 April 2007, Factiva
MORRIS, J. B., 2002, 'The Elements of Location Tracking and Privacy Protection', in SARIKAYA, B. (ed.), Geographic Location in the Internet: 163-178, Boston: Kluwer Academic Publishers
MULCHAND, A., 2007, 'SMS service alerts schools to danger of lightning strike', Straits Times, 24 March 2007, accessed 15 April 2007, Factiva
NATIONAL COMMUNICATIONS SYSTEM, 2003, Short Message Service Over Signaling System 7, accessed 13 May 2007. Available: http://www.ncs.gov/library.html#info
NEW ZEALAND PRESS ASSOCIATION, 2007, 'Text messages to warn of impending disaster', Mediacom, 12 March 2007, accessed 15 April 2007, Factiva
O'BRIEN, K. J., 2006, 'Mobile providers resisting SOS alerts - Loss of text-messaging sales feared', International Herald Tribune, 11 January, p1, accessed 11 May 2007, Factiva
PERRONE, J., 2003, Text messaging used to allay Sars fears, accessed 2 April 2007. Available: http://www.guardian.co.uk/sars/story/0,,929609,00.html
POVOLEDO, E., 2003, 'Massive Power Failure Sweeps Across Italy', International Herald Tribune, 28 September 2003, accessed 2 March 2007
THE AUSTRALIAN, 2007, 'NSW: Iemma promises mobile phone terrorism alert', Australian Associated Press General News, 26 February 2007, accessed 15 April 2007, Factiva
WASHINGTON, J. S., 2005, 'Expert offers advice on how to overhaul alert system', RCR Wireless News, 3 October, p1, accessed 11 May 2007, Factiva
WASHINGTON, J. S., 2006, 'Cell phones to be included in upgraded EAS system', RCR Wireless News, 11 September, p3, accessed 11 May 2007, Factiva
WEISS, D., KRAMER, I., TREU, G. & KÜPPER, A., 2006, 'Zone Services - An Approach for Location-Based Data Collection', in The 8th IEEE International Conference on E-Commerce Technology and The 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, accessed 14 May 2007, IEEE Xplore
WRITER, S., 2005, Celltick's LiveScreen operated as mobile emergency alert system during Tsunami crisis in Sri Lanka, accessed 13 May 2007. Available: http://www.newswireless.net/index.cfm/article/1845#



18
Denial of Service Vulnerabilities in IEEE 802.11i
J. Smith
Information Security Institute, Queensland University of Technology, Brisbane Australia
Abstract
The medium access control security enhancements specified in the IEEE 802.11i standard detail a completely overhauled security framework with improved authentication, access control, key management, and link layer security algorithms. These improved measures permit the construction of robust security networks that are impervious to the widely publicized attacks targeting legacy wireless networks secured using wired equivalent privacy. While these enhancements assure the confidentiality and integrity of information, techniques for protecting robust security networks from attacks targeting availability are lacking. In fact, the introduction of additional upper layer authentication protocols aggravates the potential for denial of service. Given the intractability of preventing all denial of service attacks targeting wireless networks, techniques for the detection of specific attacks are required in order that optimal responsive strategies can be selected and implemented. In this paper, the robust security network architecture is introduced and the denial of service attacks threatening the availability of such networks are catalogued. This catalogue includes attacks targeting the physical and medium access control layers, upper layer authentication protocols, and wireless network card drivers and firmware. It is proposed that the development of such a catalogue can be used to identify those particular attacks that might be dealt with preventatively and those that must rely on a strategy of detection and response. Furthermore, such a catalogue would support the design, development, and evaluation of effective detection techniques.

Biography
Jason Smith is a research fellow with the Information Security Institute at the Queensland University of Technology. His research interests are in the areas of network security and intrusion detection with a specific focus on denial of service resistance, wireless communications and control systems environments. He has recently completed his doctoral thesis, entitled Denial of Service: Prevention, Modelling and Detection.


1. Introduction
In 1997 the IEEE ratified a standard specifying the physical (PHY) and medium access control (MAC) layers required for interoperable wireless communications in LANs (IEEE 1997). This initial standard provided low data rate wireless communications over infra-red (IR), frequency hopping spread spectrum (FHSS), and direct sequence spread spectrum (DSSS) physical layers. Numerous amendments have been made to this initial standard, both to increase the data transmission rates (IEEE 1999a, IEEE 1999b) and to enhance security functionality (IEEE 2004). Wireless LAN technology based on these standards has been tremendously successful and has been widely adopted in home, academic, enterprise, government, industrial, and (increasingly) other critical environments. While security enhancements have increased the assurance that the confidentiality and integrity of information transmitted using these technologies can be adequately protected, techniques for protecting wireless LAN communications from attacks targeting availability are lacking. In the absence of effective techniques for preventing attacks targeting the availability of wireless networks, focus must shift to improving our ability to detect and respond to such attacks when they do occur. In this paper the robust security network architecture is introduced (Section 2) and the range of denial of service attacks threatening the availability of such networks is catalogued (Section 3). A summary and conclusions are presented in Section 4.

2. Robust Security Networks


As detailed in IEEE 802.11i (IEEE 2004), robust security networks (RSNs) are wireless local area infrastructure networks that are comprised of robust security network associations (RSNAs). An RSNA between an access point (AP) and a wireless station (STA) is secured using: port-based network access control (IEEE 2001); extensible authentication protocol (EAP) exchanges (Aboba et al. 2004); cryptographic keys derived using a newly specified 4-way handshake; and integrity and confidentiality protection provided by the temporal key integrity protocol (TKIP) or the advanced encryption standard (AES) in counter mode with cipher block chaining message authentication code protocol (CCMP).

3. Vulnerabilities in Robust Security Networks

This section discusses denial of service attacks in IEEE 802.11i infrastructure wireless networks that target the physical layer, the MAC layer, upper layer authentication protocols, and vulnerabilities in wireless operating system drivers and network interface card firmware. A summary of the attacks described in this section is shown in Figure 1.
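The RSNA establishment described in Section 2 proceeds in a fixed order: association, 802.1X/EAP authentication, the 4-way handshake, and only then protected data transfer under TKIP or CCMP. The toy model below captures that ordering only; it performs no cryptography, and the class, method, and phase names are our shorthand, not identifiers from the standard.

```python
# Toy model of the order of operations when a STA joins an RSN.
# No real protocol messages are exchanged; this only enforces the
# sequencing: the 802.1X controlled port stays blocked for data
# frames until the 4-way handshake has derived fresh keys.
PHASES = ["associate", "eap_authenticate", "four_way_handshake", "protected_data"]

class Station:
    def __init__(self):
        self.phase = 0  # index into PHASES; nothing completed yet

    def advance(self, completed):
        """Advance only when the expected next phase completes in order."""
        if completed != PHASES[self.phase]:
            raise RuntimeError("out of order: expected %r" % PHASES[self.phase])
        self.phase += 1

    @property
    def port_open(self):
        # data frames may flow only after the 4-way handshake completes
        return self.phase >= PHASES.index("protected_data")

sta = Station()
for p in ["associate", "eap_authenticate", "four_way_handshake"]:
    sta.advance(p)
print(sta.port_open)  # True
```

The length of this pipeline is exactly what the paper identifies as a liability: every phase boundary is an opportunity for an attacker to inject a forged message and reset the state machine.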

Figure 1. Summary of denial of service attacks in IEEE 802.11i infrastructure wireless networks

Attacks Targeting the Physical Layer

Equipment based on the IEEE 802.11 standards requires access to unlicensed spectrum in the industrial, scientific and medical (ISM) or unlicensed national information infrastructure (U-NII) bands to operate. Specifically, IEEE 802.11b/g equipment relies on 14 channels at 5MHz spacing in the frequencies ranging from 2.412GHz through to 2.484GHz. IEEE 802.11a equipment relies on eight channels of 20MHz each in the frequency range 5.15GHz - 5.35GHz and four channels of 20MHz each in the frequency range 5.725GHz - 5.825GHz. Any significant interference in these frequency bands will disrupt radio communications among IEEE 802.11 devices and result in denial of service. When interference is intentionally generated to disrupt communications, it is referred to as radio, or radio frequency (RF), jamming.

A prerequisite for RF jamming, and for the other attacks described, is that the attacker be within radio communications range of the target. This constraint is dominated by two main factors: the proximity of the attacker to the receiver of transmissions, and the transmit power of the attacker. To succeed in generating sufficient interference at a receiver, the attacker must generate a signal powerful enough to make legitimate signals in the spectrum irretrievable. In generating a jamming signal, the attacker may target a specific narrowband frequency or, if power permits, a wider bandwidth spectrum. The jamming signal may be transmitted continuously, or be timed to interfere with signals at critical moments (for example, some wireless intrusion detection and prevention systems will attempt to frustrate intruder actions by generating acknowledgement (ACK) collisions, effectively preventing their use of the wireless medium). The strategy employed will be determined in part by the sophistication of the signal generating equipment the attacker possesses, the characteristics of the antennae in use, the available power, and the attacker's fear of detection and capture. RF jamming attacks are noisy, and attackers can usually be localised using spectrum analysers.

Attacks targeting the RF spectrum are difficult to mitigate using commercial hardware in unlicensed spectrum. While techniques to increase the cost of successfully mounting an RF jamming attack, such as fast frequency hopping over large bandwidths, do exist and are extensively used in military environments, they are likely to remain impractical for commodity wireless LAN systems. Therefore, wireless LANs must be assumed to be vulnerable to RF jamming attacks.

Attacks Targeting the MAC Protocols

Not only are wireless networks susceptible to attacks at the physical layer, but owing to an absence of integrity protection and authentication of management and control frames, they are also vulnerable to attacks targeting the carrier sensing functions and management frames of the MAC protocols.

Carrier Sensing Attacks

All STAs participating in a wireless BSS will defer transmission when the medium is sensed, either via physical or virtual carrier sensing, to be busy. It is possible to convince all STAs that the medium is busy by manipulating the physical or virtual carrier sensing procedures.

Physical: To facilitate correct MAC operation, the IEEE 802.11 standard mandates that a Station Management Entity (SME) be present and able to interrogate layer specific status and control layer specific parameters. The two layers controlled via the SME are the MAC Layer Management Entity (MLME) and the PHY Layer Management Entity (PLME). It is possible to exploit an optional PLME service primitive that places a standards compliant wireless networking card in a test mode of operation capable of continuously transmitting a specified bit pattern on a given channel. Once the attacking node begins transmitting this pattern, all stations within range of the transmission, including APs, will assess the channel state to be busy until the attacking node is disabled. This results in clients of the network perceiving the AP as out of range. Details of this clear channel assessment (CCA) attack can be found in an AusCERT advisory (AusCERT 2004) and the paper by Wullems et al. (Wullems et al. 2004).

Virtual: Bellardo and Savage report on the vulnerability of IEEE 802.11 wireless network MAC protocols to fabricated duration fields in RTS, CTS and directed data frames (Bellardo and Savage 2003). By falsifying the duration field in RTS, CTS, or data frames, an attacker is able to reserve network bandwidth and cause any STA receiving the fabricated frames to update its NAV and defer accessing the medium. This technique is referred to as virtual jamming. Interestingly, Bellardo and Savage reported that none of the hardware manufacturers of the network cards utilised in their experiments correctly implemented the virtual carrier sense functionality, which meant that they were unable to successfully implement the attacks in their experimental test bed. The vulnerability is a feature of the standards, however, and warrants consideration.

Selective virtual jamming is a technique described in a patent application by AirTight Networks, Inc. (Gopinath and Bhagwat 2006). This technique exploits the fact that all standards compliant devices, except the destination of a frame containing a duration field, must update their NAV and defer access to the wireless medium. The intrusion detection and prevention system detailed in the patent application uses this feature of the IEEE 802.11 standards to implement a selective form of jamming (against intruders or misconfigured devices), in which presumably legitimate STAs are made the target of a frame containing a duration field (which permits them to access the medium), while those STAs assessed to be intruders never receive such frames from the network. Using this approach, legitimate devices can be granted access to the medium in a round robin fashion or according to some other service schedule. A significant assumption in the adoption of this technique is that an otherwise malicious intruder will honour the IEEE 802.11 protocols.
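The virtual jamming behaviour follows directly from the NAV update rule: a STA that overhears a frame not addressed to it extends its NAV to cover the frame's claimed reservation if that outlasts the current one. A simplified simulation of that rule (timing values are illustrative; 32,767 microseconds is the largest value the 15-bit duration field can carry):

```python
MAX_NAV_US = 32767  # largest value the 15-bit duration field can carry

def nav_after_frames(frames, now_us=0):
    """Compute a listening STA's NAV expiry time after overhearing
    frames, where each frame is (arrival_us, duration_us): a frame
    extends the NAV only if its reservation outlasts the current one."""
    nav_expiry = now_us
    for arrival_us, duration_us in frames:
        nav_expiry = max(nav_expiry, arrival_us + duration_us)
    return nav_expiry

# An attacker forging one maximal-duration CTS roughly every 30 ms
# keeps every compliant STA's NAV continuously non-zero, so the
# medium is always "virtually" busy.
forged = [(i * 30000, MAX_NAV_US) for i in range(10)]
print(nav_after_frames(forged))  # 302767
```

Note how cheap the attack is for the sender: a handful of short control frames reserves the medium for hundreds of milliseconds, which is why Bellardo and Savage's finding that many cards ignored the NAV was, perversely, good news.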
Malicious intruders will generally have both the capability and the motive to violate assumptions with respect to adherence to the IEEE 802.11 protocols and it should be noted that the CCA attack described in this paper would ignore selective virtual jamming, rendering the countermeasure ineffective. Management Frame Spoofing Attacks Bellardo and Savage document the ease with which attacks targeting IEEE 802.11 management frames could be implemented (Bellardo and Savage 2003). Specifically, they demonstrated denial of service attacks resulting from spoofed disassociation and deauthentication management frames were feasible. The fabrication of deauthentication or disassociation frames by an attacker will alter the state of the STA and prevent the target from transmitting data frames until they have authenticated and associated to the AP again. In addition to spoofing deauthentication and disassociation frames, Bellardo and Savage demonstrated that denial of service results when an AP association table (a

Denial of Service Vulnerabilities in IEEE 802.11i

finite-sized data structure used to store information on associated STAs and their current state) is flooded with association requests, resulting in the exhaustion of association table space and preventing legitimate STAs from establishing a connection with the AP. This type of attack is clearly a resource exhaustion based denial of service and must be addressed by effective resource allocation techniques. In implementing deauthentication and disassociation management frame spoofing for the purpose of denial of service, an attacker can masquerade as the AP or a target STA, and can broadcast the spoofed frames or direct them to a specific STA. An attacker can flood the network with spoofed frames, or time their transmission so that frames are sent only in response to the successful completion of an association or authentication event, thereby using the attacker's resources more effectively and minimising exposure to detection.

Power Saving Mode Attacks
The IEEE 802.11 MAC sublayer management entity implements protocols that permit a STA in an infrastructure wireless LAN to enter a power saving (PS) mode. While in PS mode, any traffic destined for the dozing STA will be buffered by the AP. The STA will periodically awaken to listen for AP beacons. The AP will identify those STAs in PS mode that have buffered unicast frames with a traffic indication map (TIM), and the presence of multicast and broadcast frames via a delivery TIM (DTIM). In order to recover buffered frames from an AP, the STA, on detecting that it has data awaiting collection (as indicated in a TIM or DTIM), will issue a PS-Poll message to the AP. The AP will then either transmit the buffered frames immediately, or acknowledge the poll request and transmit the frames when the medium is available.
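The buffering and polling protocol just described can be sketched as follows. This is a hypothetical model, not an implementation of the standard: it shows only that, in the absence of any authentication of the PS-Poll source address, buffered frames are handed over to whoever claims the dozing STA's identity.

```python
# Hypothetical model of AP power-save buffering, the TIM, and PS-Poll
# retrieval. Class and method names are invented for illustration.

class AccessPoint:
    def __init__(self):
        self.buffers = {}  # sta -> list of frames buffered while it dozes

    def buffer_frame(self, sta, frame):
        self.buffers.setdefault(sta, []).append(frame)

    def tim(self):
        # Traffic indication map: which STAs have buffered unicast frames.
        return {sta for sta, q in self.buffers.items() if q}

    def ps_poll(self, claimed_sta):
        # No cryptographic check of the poll's source address: the frames
        # are released to whoever claims the dozing STA's identity.
        return self.buffers.pop(claimed_sta, [])


ap = AccessPoint()
ap.buffer_frame("sta1", "frame-a")
print("sta1" in ap.tim())     # True: data awaiting collection

stolen = ap.ps_poll("sta1")   # an attacker forges the PS-Poll
print(stolen)                 # ['frame-a']
print("sta1" in ap.tim())     # False: the legitimate STA now finds nothing
```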
As with the management frame attacks just described, the absence of any cryptographic authentication of these frames allows an attacker to: (1) falsify a TIM to fool a STA into believing that there are no buffered frames in the AP awaiting retrieval; or (2) masquerade as the dozing STA and forge a PS-Poll frame, such that the AP will discard the buffered frames while the legitimate STA is still dozing (Bellardo and Savage 2003).

Attacks Targeting IEEE 802.11i Authentication
While the additional capabilities in IEEE 802.11i wireless LANs improved the quality of authentication, they also introduced additional attack vectors for denial of service. Recall that authentication in IEEE 802.11i wireless LANs is based on IEEE 802.1X, EAP, and the IEEE 802.11i 4-way handshake. Each of these protocols may become the target of a denial of service attack.


Recent advances in security technology

IEEE 802.1X and EAP Attacks
The following EAPOL frames affect the state of the IEEE 802.1X authenticator port access entity:
EAPOL-Start: sent by the supplicant to the authenticator to initiate an authentication exchange;
EAPOL-Logoff: sent by the supplicant to the authenticator to reset the controlled port state to unauthorised (i.e. blocked);
EAP-Failure: sent from the authenticator to the supplicant on unsuccessful completion of an authentication exchange;
EAP-Success: sent from the authenticator to the supplicant on successful completion of an authentication exchange.
These frame types are not cryptographically secured, so they can be fabricated by an attacker to interfere with attempts by legitimate STAs to establish a connection to the wireless LAN. As with IEEE 802.11 management frame based attacks, these frames can be flooded into the network, or timed to maximise their impact. For example, an effective use of an attacker's resources would be to wait until a supplicant had completed an authentication exchange with the AS before transmitting an EAPOL-Logoff frame to reset the controlled port.
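The effect of these unprotected frames on the authenticator's controlled port can be illustrated with a deliberately simplified state sketch (the actual IEEE 802.1X state machines are considerably more elaborate; the state names here are descriptive, not the standard's):

```python
# Simplified sketch of the IEEE 802.1X controlled-port state as affected by
# the unauthenticated EAPOL/EAP frames listed above.

class AuthenticatorPort:
    def __init__(self):
        self.state = "UNAUTHORISED"  # controlled port blocked

    def on_frame(self, frame_type):
        if frame_type == "EAPOL-Start":
            self.state = "AUTHENTICATING"   # (re)starts an exchange
        elif frame_type == "EAP-Success":
            self.state = "AUTHORISED"       # controlled port opened
        elif frame_type in ("EAP-Failure", "EAPOL-Logoff"):
            self.state = "UNAUTHORISED"     # controlled port blocked again


port = AuthenticatorPort()
port.on_frame("EAPOL-Start")
port.on_frame("EAP-Success")
print(port.state)            # AUTHORISED: legitimate exchange completed

# None of these frames is authenticated, so one spoofed EAPOL-Logoff,
# timed just after the exchange completes, undoes the authentication.
port.on_frame("EAPOL-Logoff")
print(port.state)            # UNAUTHORISED
```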

The forgery and premature transmission of an EAP-Success or EAP-Failure message (from the AP to the supplicant) prevents the supplicant from completing an authentication exchange and gaining access to the controlled port.

EAP ID Exhaustion
The EAP ID is an 8-bit field in the header of frames exchanged between the supplicant and authenticator. It is used as a session identifier, allowing both the supplicant and authenticator to match requests and responses within an exchange. It is possible, via a flood of association requests, for an attacker to exhaust the available EAP ID space and prevent new EAP exchanges from commencing.

IEEE 802.11i 4-Way Handshake Attacks
The IEEE 802.11i 4-way handshake transforms the PMK, resulting from end-to-end authentication of the supplicant to the AS, into a pairwise transient key (PTK) used by the STA and AP to verify negotiated security parameters, transport group keys, and secure the link layer. The 4-way handshake is shown in Figure 2. In message one, the AP selects a random nonce (ANonce) and transmits it to the STA. The STA then combines this ANonce with a nonce of its own choice (SNonce), the PMK, and the MAC addresses of itself and the AP in a pseudo-random function to generate the PTK. The PTK is in fact three separate keys of 128 bits each. One of these keys, referred to as the key confirmation key (KCK), is used to generate a message integrity code (MIC) on the subsequent messages exchanged in the 4-way handshake. In message two, the STA sends an authenticated copy of its nonce (SNonce) and the robust security network information element (STA RSN IE) parameters it may


optionally have negotiated during association with the AP. On receipt of message two, the AP can generate a copy of the PTK, recover the KCK and authenticate the contents of message two.
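The PTK derivation can be sketched as below, following the PRF construction described in the standard (iterated HMAC-SHA1 over a label, the sorted MAC addresses and the sorted nonces). The key and address values are arbitrary test inputs, and this sketch has not been checked against official test vectors; it illustrates why a forged message one carrying a different ANonce yields a PTK, and hence a KCK, that can never match the legitimate one.

```python
import hmac, hashlib

def prf_384(pmk, label, data):
    """IEEE 802.11i-style PRF: iterated HMAC-SHA1 of label||0x00||data||i."""
    out = b""
    i = 0
    while len(out) < 48:  # 384 bits
        out += hmac.new(pmk, label + b"\x00" + data + bytes([i]),
                        hashlib.sha1).digest()
        i += 1
    return out[:48]

def derive_ptk(pmk, aa, spa, anonce, snonce):
    # Addresses and nonces are combined in a fixed (sorted) order, so both
    # peers derive the same PTK; it splits into KCK | KEK | TK (128 bits each).
    data = (min(aa, spa) + max(aa, spa) +
            min(anonce, snonce) + max(anonce, snonce))
    ptk = prf_384(pmk, b"Pairwise key expansion", data)
    return ptk[:16], ptk[16:32], ptk[32:48]  # KCK, KEK, TK

# Arbitrary illustrative values (not real keys or addresses).
pmk = b"\x01" * 32
aa, spa = b"\xaa" * 6, b"\xbb" * 6
anonce, snonce = b"\x02" * 32, b"\x03" * 32

kck, kek, tk = derive_ptk(pmk, aa, spa, anonce, snonce)

# A forged message one delivers a different ANonce, so the STA derives a
# different KCK and every subsequent MIC verification fails.
kck_forged, _, _ = derive_ptk(pmk, aa, spa, b"\x04" * 32, snonce)
print(kck != kck_forged)  # True
```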

Figure 2 IEEE 802.11i 4-Way Handshake

In message three, the AP includes an integrity protected and authenticated copy of its nonce (ANonce), a copy of the AP RSN IE (as advertised in its beacons), and an encrypted copy of the group transient key (GTK) used to secure multicast communications within the BSS. In the final message (message four), the STA confirms completion of the 4-way handshake with an integrity protected and authenticated ACK. He and Mitchell formally and extensively analysed the IEEE 802.11i protocols and noted the following denial of service vulnerabilities in the 4-way handshake, as well as in the countermeasures implemented by the message integrity algorithm Michael (He and Mitchell 2004; He and Mitchell 2005).

Forged Message One
Message one is sent in the clear, so it can be fabricated by an intruder. As a result, the STA and the AP will derive PTKs using different values for ANonce, and all attempts to verify messages in the 4-way handshake will fail.

Information Element Poisoning
To protect the AP and STA from security rollback attacks, in which communicating peers are fooled into using weaker security algorithms than they would independently choose, the 4-way handshake includes the STA's optionally nominated security suite choice (STA RSN IE) in message two, and the AP's advertised security capabilities (AP RSN IE) in message three. Unfortunately the STA, in processing message three, performs a bitwise comparison of the AP RSN IE against the advertised capabilities prior to verifying the MIC of


the message. An adversary can therefore spoof a message three with an incorrect AP RSN IE value, causing the 4-way handshake to abort and the STA to deauthenticate from the AP. Note that even if the processing order were corrected in the IEEE 802.11i specification, an attacker could still transmit bogus AP RSN IEs in beacon and probe response frames, thereby poisoning the RSN IE values stored by STAs (He and Mitchell 2005).

Michael Countermeasure Attacks
The integrity protection algorithm adopted by TKIP, known as Michael, was designed to meet the constraints imposed by operation on legacy hardware. The algorithm is computationally lightweight and performs well on the target hardware platforms. The trade-off in the design, however, was a reduced level of security and a dependence on the correct functioning of other components in the system, for example the avoidance of initialisation vector reuse (Ferguson 2002). As a result of these design constraints, concern has been raised that an active attacker would be able to compromise the message integrity protection afforded by Michael with a limited number of messages. Michael therefore incorporates countermeasures that prevent an attacker from generating the required number of messages. The countermeasures operate on the assumption that message integrity check failures are indicative of an attack, and the source of messages failing the integrity check is throttled to two messages per minute. This is achieved by the AP deleting the keys for the session, initialising a 60 second timer, and only allowing re-keying after that timer has expired. By introducing message integrity check failures, an attacker can invoke the countermeasures and initiate a denial of service attack on the targeted STA.

Attacks Targeting Drivers and Firmware
Device drivers are particularly attractive targets for two main reasons. First, they are developed by the various hardware vendors, not by the operating system vendor.
As a result, device driver programmers do not necessarily have a complete understanding of all of the kernel data structures with which the device driver must interface. Hardware manufacturers are under intense pressure to be first to market with new features for their devices, so device drivers are developed under significant time pressure and may have undergone only rudimentary testing prior to release. Given such constraints it is unsurprising that device driver code contains many more errors than other kernel code (Chou et al. 2001) and is responsible for the majority of system failures in commodity operating systems (Swift et al. 2005). The second reason that device drivers are attractive targets stems from the fact that current operating system architectures execute device drivers as privileged processes. The failure or compromise of a device driver, then, will potentially result in privileged access to the target system. Another trend is that wireless network card manufacturers have been migrating functionality out of the card's firmware and into the user-accessible software driver.


This exposes low-level functionality that was previously protected within the network interface card and introduces new vulnerabilities, as discussed below.

Flawed Driver
Recently, vulnerability researchers have been actively evaluating the behaviour of wireless network card drivers and firmware in the presence of unexpected inputs. This has led to a spate of vulnerability announcements (US-CERT 2006) describing denial of service and remote code execution attacks. The attacks result from the manner in which specific drivers handle invalid frames. Concurrently, other research has identified that the behaviour of wireless network interface cards can be passively observed, and that the timing of specific frame types can be used as a device driver fingerprinting technique (Franklin et al. 2006). When these two research directions are combined, it is readily apparent that attackers have both the means to identify vulnerable devices and the capability to launch attacks targeting those devices.

Modified Driver
As MAC protocol functionality is increasingly implemented in user-accessible software, it is possible for malicious or greedy users to modify the behaviour of their devices to gain unfair access to networking resources and cause denial of service. For example, an altered MAC layer protocol can be used to gain prioritised access to the wireless medium by reducing IFS values. Modified MAC layer protocols could also be used for countermeasure evasion, for example by ignoring duration values as applied in selective virtual jamming.

4. Summary and Conclusions


The introduction of RSNs improves the techniques available for protecting the confidentiality and integrity of information transmitted using wireless LANs based on the IEEE 802.11 MAC. These new measures do, however, introduce additional threats to availability. These threats have been catalogued in this paper. The catalogue includes attacks targeting the physical and medium access control layers, upper layer authentication protocols, and wireless network card drivers and firmware. It is proposed that such a catalogue can be used to identify those attacks that might be dealt with preventatively and those that must rely on a strategy of detection and response. Furthermore, such a catalogue would support the design, development, and evaluation of effective detection techniques.

5. References
B. Aboba, L. Blunk, J. Vollbrecht, J. Carlson, and H. Levkowetz. 2004. Extensible authentication protocol (EAP). Request for Comments 3748, IETF, June. Proposed Standard.


AusCERT. 2004. Denial of service vulnerability in IEEE 802.11 wireless devices. Advisory AA-2004.02.
J. Bellardo and S. Savage. 2003. 802.11 denial-of-service attacks: real vulnerabilities and practical solutions. In Proceedings of the 11th USENIX Security Symposium, pages 15-28. USENIX.
A. Chou, J. Yang, B. Chelf, S. Hallem, and D. Engler. 2001. An empirical study of operating systems errors. In SOSP '01, pages 73-88, New York, NY, USA. ACM Press.
N. Ferguson. 2002. Michael: an improved MIC for 802.11 WEP. Technical Report 1102/020r0, IEEE, January.
J. Franklin, D. McCoy, P. Tabriz, V. Neagoe, J. V. Randwyk, and D. Sicker. 2006. Passive data link layer 802.11 wireless device driver fingerprinting. In Security '06: 15th USENIX Security Symposium, pages 167-178.
K. N. Gopinath and P. Bhagwat. 2006. Method and system for allowing and preventing wireless devices to transmit wireless signals. U.S. Patent Application Publication 11/230,436, AirTight Networks, July.
C. He and J. C. Mitchell. 2004. Analysis of the 802.11i 4-way handshake. In WiSe '04: Proceedings of the 2004 ACM Workshop on Wireless Security, pages 43-50, New York, NY, USA. ACM Press.
C. He and J. C. Mitchell. 2005. Security analysis and improvements for IEEE 802.11i. In NDSS. The Internet Society.
IEEE. 1999a. Std 802.11a-1999. Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications: High-speed Physical Layer in the 5 GHz Band. IEEE Standards Association.
IEEE. 1999b. Std 802.11b-1999. Supplement to IEEE Standard for Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications: Higher-Speed Physical Layer Extension in the 2.4 GHz Band. IEEE Standards Association.
IEEE. 2004. Std 802.11i-2004. IEEE Standard for Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications - Amendment 6: Medium Access Control (MAC) Security Enhancements. IEEE Standards Association.
IEEE. 2001. Std 802.1X-2001. IEEE Standard for Local and Metropolitan Area Networks - Port-Based Network Access Control. IEEE Standards Association.
M. M. Swift, B. N. Bershad, and H. M. Levy. 2005. Improving the reliability of commodity operating systems. ACM Trans. Comput. Syst., 23(1):77-110.
US-CERT. 2006. Broadcom wireless driver fails to properly process 802.11 probe response frames. Vulnerability Note 209376, US-CERT, November.
US-CERT. 2006. Intel Centrino wireless network drivers fail to properly handle malformed frames. Vulnerability Note 230208, US-CERT, July.


C. Wullems, K. Tham, J. Smith, and M. Looi. 2004. A trivial denial of service attack on IEEE 802.11 direct sequence spread spectrum wireless LANs. In IEEE Wireless Telecommunications Symposium, pages 129-136.



19
Analysis of Concrete Slab Fragmentation from Blast Damage
X.Q. Zhou and H. Hao
The University of Western Australia, Australia

Abstract
When an explosion occurs, secondary fragments resulting from the damaged structure may cause serious injury and damage. It is therefore of interest to know the fragmentation process of structural components and the distribution of fragment mass and size. This paper presents a practical approach for predicting concrete fragmentation under explosive loading. A continuum damage mechanics model is used to estimate the dynamic deformation and the fragment size in an RC slab subjected to blast load. The ejection velocity of the fragments is then estimated. A statistical method is combined with the numerical simulation to estimate the possible fragment size distribution. Finally, a simple method is used to calculate the debris trajectories, in which the air drag force and the gravity of the flying fragment are considered. Discussions and comparisons of the fragmentation predicted by the numerical and simplified methods are made.

Biographies
Dr Xiao-Qing Zhou is a research associate in the School of Civil and Resource Engineering, The University of Western Australia. Her research interest is numerical simulation of structural response to blast loading. Hong Hao is Professor of Structural Dynamics in the School of Civil and Resource Engineering, The University of Western Australia. His research interests include earthquake and blast engineering, structural condition monitoring and vibration measurement.



1 Introduction
Fragments resulting from explosions can be divided into two categories: primary fragments and secondary fragments (Baker et al. 1983). The term primary fragment denotes a fragment from a casing or container of an explosive source. Normally a primary fragment has a mass of about 1 gram and a very high ejection velocity (it can exceed one thousand metres per second). Other potentially damaging objects, known as secondary fragments, can be produced by the blast wave interacting with objects or structures located near the explosive source. These objects can be torn loose from their moorings, if they are attached, and then accelerated to velocities high enough to cause impact damage. In the present study, only the secondary fragments caused by the damaged structures are analysed. Secondary fragments vary greatly in size, shape, initial velocity, and range. Each of these parameters affects the damage potential of a secondary fragment impact in an explosion event. In particular, the damage potential is directly related to the momentum and kinetic energy, which are determined by the mass and velocity. Therefore, the most important variables considered are the mass and the velocity of the debris. The processes of dynamic fragmentation within a concrete member are very complicated, since discontinuities such as cleavage cracks and defects with different shapes and orientations are commonly encountered in concrete and have a significant influence on its failure. The actual process of dynamic fragmentation is still not well understood, but theoretical and experimental efforts have provided useful insight into the distribution of fragmentation. Based on energy and momentum balance principles, models have been developed to predict average fragment size as a function of strain rate and material toughness (Grady 1988, Yew & Taylor 1994, Zhang et al. 2004).
To determine the distribution of fragments in mass or size, statistical approaches have been developed (Grady & Kipp 1985, Grady 1990). In those approaches, the intrinsic failure process leading to fragmentation is not considered. To understand the mechanisms of fragmentation, theoretical models have been suggested to correlate dynamic fracture and fragmentation (Grady & Kipp 1980, Espinosa et al. 1998). Recently, numerical modelling has been carried out to simulate the dynamic deformation in the fragmentation process (Liu & Katsabanis 1997, Espinosa et al. 1998, Zhang et al. 2003, Rabczuk & Eibl 2003). There are three typical methods for fragmentation simulation: 1) Interface elements are incorporated between standard finite elements to serve as dynamic fracture paths (Espinosa et al. 1998). The primary drawback of this method is that it cannot predict the fragment size, because the size and shape are determined by the pre-defined interfaces. 2) A damage material model is incorporated into smoothed particle hydrodynamics (the SPH method) to simulate the fragmentation process (Rabczuk & Eibl 2003, 2004). The fragment distribution is obtained by checking the radii of the fully damaged particles. 3) The standard finite element method together with damage mechanics is employed to model the dynamic deformation and to predict the fragment size (Liu & Katsabanis 1997, Zhang et al. 2003). The fragment size is predicted either by the energy balance principle or by relating full damage to fragmentation. In the present study the latter method is adopted; that is, continuum damage mechanics and AUTODYN (2005) are employed


to model the dynamic deformation of an underground structure and an above-ground structure. The fragment size distributions are predicted by relating full damage to fragmentation. The debris throw distances are also estimated.

2 Numerical model
2.1 Material model
2.1.1 Concrete model
In the numerical simulation, the modified Drucker-Prager plastic damage model developed at UWA (Zhou et al. 2006) is used. In the model, the stress tensor is separated into the hydrostatic tensor and the deviatoric tensor. The hydrostatic stress tensor controls the change of concrete volume and the deviatoric stress tensor controls the shape deformation. For the hydrostatic tensor, the pressure p is related to the density and the internal energy e through an equation of state (EOS). In the present study, Herrmann's P-α model (1969) is used for the EOS. The deviatoric stress tensor is governed by a damage-based yield strength surface. The dynamic strength surface is amplified from the static surface by considering the strain rate effect. Typically the compressive and tensile strengths are multiplied by the respective compressive and tensile dynamic increase factors (DIF). In the model, the DIF of the compressive strength follows the CEB recommendation (Bischoff and Perry, 1991). The DIF for tensile strength is obtained from curve fitting of experimental results and is given in Zhou et al. (2006). The piecewise Drucker-Prager yield strength model adopted is determined by four sets of experimental data: (1) cut-off hydro-tensile strength; (2) uniaxial tensile strength; (3) uniaxial compressive strength; (4) confined compressive strength. The damage scalar D is determined by Mazars' model (Mazars 1986). The parameters used can be found in Zhou et al. (2006).
2.1.2 Steel reinforcement consideration
In the numerical model, the effect of reinforcement is simulated by a mixture model. In this model, a reinforcement fraction, $f_r$, is defined along with the properties of the reinforcement material. The bulk modulus, shear modulus, and yield strength are then calculated from the mixture rule; for the yield strength the rule is

$\sigma_y = (1 - f_r)\,\sigma_{yc} + f_r\,\sigma_{yr}$    (1)

where $\sigma_{yc}$ and $\sigma_{yr}$ are the yield strengths of the concrete and the reinforcement steel, respectively. The strength of the concrete is determined as in the previous section, while the steel is assumed to be ideally elasto-plastic. The yield strength and Young's modulus of the steel bars are assumed to be 500 MPa and 200 GPa, respectively, and are also modified by the respective DIF. The relative importance of concrete in the mixture decreases as damage increases.
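As a numerical illustration of the mixture rule of Eq. (1), with a hypothetical 2% reinforcement fraction, an assumed 40 MPa concrete yield strength, and the 500 MPa steel yield strength quoted above (dynamic increase factors omitted for clarity):

```python
# Mixture rule of Eq. (1): composite yield strength from the reinforcement
# fraction f_r. The 2% fraction and 40 MPa concrete strength are
# illustrative assumptions, not values taken from the paper.

def mixture_yield(f_r, sigma_yc, sigma_yr):
    """Composite yield strength: (1 - f_r) * concrete + f_r * steel."""
    return (1.0 - f_r) * sigma_yc + f_r * sigma_yr

print(round(mixture_yield(0.02, 40.0, 500.0), 1))  # 49.2 (MPa)
```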


2.1.3 Sand model
Sand is a granular material, and a porous equation of state is used to model it. In the present study, a piecewise linear porous equation of state is adopted (Laine & Sandvik, 2001). Along the loading path, the pressure-density relation is expressed as

$p = p_i + \dfrac{p_{i+1} - p_i}{\rho_{i+1} - \rho_i}\,(\rho - \rho_i), \quad i = 1, 2, \ldots, 10$    (2)

where p is pressure and $\rho$ is density; $p_i$ and $\rho_i$ are the pressure-density pairs defining the piecewise linear plastic compaction path. The unloading and reloading paths also need to be determined. The strength criterion for sand is assumed to be piecewise linear as well. A hydro-tensile failure is defined for sand, and the tensile limit is set to 1 kPa as the failure criterion (Laine & Sandvik, 2001).
2.2 Numerical model
2.2.1 Numerical model for underground structure
The 2D numerical model for an underground structure buried in sand is shown in Figure 1. The typical element size is 20 mm × 20 mm. The green area denotes the sand material and the blue area the concrete material. Transmitting boundaries are used at the bottom and both sides of the model. For confidentiality reasons, the detailed dimensions of the structure and the explosive type and weight are not given in the present paper. The pressure time histories on the ceiling, side walls and floor of the structure obtained from a detailed 3D simulation (shown in Figure 2, Figure 3 and Figure 4 respectively) are used as loading input in the simulations.

Figure 1 Numerical model for underground structure



[Figures 2-4: pressure-time histories (Pressure, MPa, versus time, ms); plot data not reproduced]
Figure 2 Loading on roof slab
Figure 3 Loading on side wall
Figure 4 Loading on floor

Figure 5 The above-ground structure model

2.2.2 Numerical model for an above-ground structure
A finer mesh is adopted to model the structural frame shown in Figure 5, because only the RC wall and roof of the structure need to be considered. The element size used is 6.25 mm × 6.25 mm. In the numerical calculation, the pressure-time history obtained from the AUTODYN3D simulation is used as input on the RC wall and the roof. The pressures on the RC roof and wall are shown in Figure 6 and Figure 7 respectively.
[Figures 6-7: pressure-time histories (Pressure, MPa, versus time, ms); plot data not reproduced]
Figure 6 Loading on RC roof
Figure 7 Loading on RC wall


3 Basic theory of the debris analysis


3.1 Ejection velocity
Once the time history of the pressure loading is known, the debris velocity can be predicted. For the simple method in this section, the following basic assumptions are made for the debris (fragments): 1) the fragment behaves as a rigid body, and none of the energy in the blast wave is absorbed in breaking the fragment or deforming it elastically or plastically; 2) gravity effects are ignored during the acceleration phase of the motion. The equation of motion of the fragment is

$A\,p(t) = M\ddot{x}$    (3)

where A is the area of the object presented to the blast front, p(t) is the pressure-time history of the blast wave acting on the object, M is the mass of the object, and $\ddot{x}$ is the acceleration of the object. The object is assumed to be at rest initially, so that

$x(0) = 0, \quad \dot{x}(0) = 0$    (4)

Using the initial conditions, rearranging terms and integrating Eq. (3), the velocity is obtained as

$\dot{x}(T) = \dfrac{A}{M}\displaystyle\int_0^T p(t)\,dt = \dfrac{A\,i_d}{M}$    (5)

where T is the duration of the pressure, $\dot{x}(T)$ is the initial velocity at time T, and $i_d$ is the total impulse. In the present study, the pressure time history is obtained from the 3D numerical simulation, and the total impulse $i_d$ is calculated from it.
3.2 Debris trajectory
Once fragments have been formed and accelerated by an explosion pressure wave, they will move along a specific trajectory until they impact a target or the ground. The forces acting on the moving fragments and affecting their trajectories are the inertia force, the gravitational force, and the fluid dynamic forces. The fluid dynamic forces are determined by the instantaneous velocity of the fragment at each instant of time. They are subdivided into drag and lift components. The drag component acts along the trajectory (or normal to the gravity vector), while the lift component acts normal to the trajectory (or opposing gravity). The effect of drag and lift depends on both the shape of the fragment and its direction of motion with respect to the relative wind. The fragments considered in the present study are of chunky shape and can be called drag-type fragments. The lift force on drag-type fragments is very small and may be neglected. The drag force at any instant can be expressed as
219

Recent advances in security technology

$F_D = \dfrac{1}{2}\,C_D\,\rho\,v^2 A_D$    (6)

where $F_D$ is the drag force; $\rho$ is the density of the medium through which the fragment is travelling; v is the velocity of the fragment; $A_D$ is the drag area; and $C_D$ is the drag coefficient, assumed to be 0.8 in the present study based on the suggestions in TM5-1300 (1990), where it is determined empirically according to the shape and orientation with respect to the velocity vector. In a simplified trajectory problem, where the fragment is considered to move in one plane, equations of motion can be written for the accelerations in the X and Y directions:

x=

ACD ( x 2 + y 2 ) cos 2M ACD ( x 2 + y 2 ) sin 2M

(7) (8)

y = g

where $\ddot{x}$ and $\ddot{y}$ are the accelerations in the X and Y directions, respectively; A is the area of the fragment; M is the mass of the fragment; g is the gravitational acceleration; $\dot{x}$ and $\dot{y}$ are the velocities in the X and Y directions, respectively; and $\theta$ is the trajectory angle. At time T,

$\dot{x} = v_0 \cos\theta_0$    (9)

$\dot{y} = v_0 \sin\theta_0$    (10)

where v0 is the initial velocity and 0 is the initial trajectory angle at time T, which can be determined by Eq.(5).

4 Debris analysis
4.1 Debris from the underground structure
4.1.1 Fragment size distribution
The deformation of the underground structure obtained from the numerical calculation is shown in Figure 8. From the numerical simulation, it is found that the connections between the RC roof slab and the RC walls of the structure are fully fractured and the roof slab is pushed upwards with a velocity of 17.2 m/s. It can also be seen that there are some fully damaged elements in the roof slab. The roof slab is therefore considered to be broken into many fragments of different sizes. Because their ejection velocity is the same, smaller fragments have less impact energy than larger fragments. For the large debris, the shape and size can be estimated from the


numerical simulation. Figure 9 shows the material status of the RC roof slab, with some possible large debris marked. From this figure, it can be estimated that the largest debris size is about 600 mm and the maximum mass is about 150 kg. The largest fragment is located near the centre of the roof slab. For the debris near the corner between the wall and the roof, the largest fragment is about 200 mm, with a mass of about 25 kg.

Figure 8 Deformation of the chamber (t = 500 ms)

Figure 9 Material status of the cover (red denotes failed elements)

The fragment size distribution can be estimated by a statistical approach (Grady, 1990). The cumulative mass of fragments with mass less than or equal to $\mu$ is

$M_c(\mu) = M\left(1 - e^{-\mu/\mu_a}\right)$    (11)

and, accordingly, the cumulative mass of fragments with size less than or equal to s is

$M_c(s) = M\left(1 - e^{-s/s_a}\right)$    (12)

where M is the total fragment mass, $\mu_a$ is the average fragment mass, and $s_a$ is the average fragment size.

Based on the numerical results shown in Figure and the statistical method Eq (12), statistical fragment size distribution is estimated as shown in Figure 10. It should be mentioned that the estimation here is based on the assumption of average size of 300mm.
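As a sketch, the cumulative size distribution of Eq. (12) can be evaluated directly. The cubic exponent assumes fragment mass scales with the cube of fragment size, and the 300 mm average size is the assumption stated in the text:

```python
import math

def percent_passing(s, s_avg):
    """Cumulative mass fraction (%) of fragments with size <= s,
    assuming the exponential (Grady-type) distribution of Eq. (12)
    with mass proportional to the cube of fragment size."""
    return 100.0 * (1.0 - math.exp(-(s / s_avg) ** 3))

# Average fragment size assumed to be 300 mm, as in the text
for s in (100, 200, 300, 400, 500, 600):
    print(f"{s} mm: {percent_passing(s, 300):.1f}% passing")
```

At s equal to the average size, the cumulative fraction is 1 - e^{-1}, i.e. about 63%, which is the characteristic point of this family of distributions.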


Recent advances in security technology

[Plot: percentage passing (%) versus fragment size (mm)]

Figure 10 Statistical fragment size distribution of the roof slab

4.1.2 Ejection velocity and launch angle

The numerical results show that the ejection velocity of the whole RC roof slab is 17.2 m/s. Using the simplified method in Section 3.1, the ejection velocity can also be determined by Eq. (5). It should be noted that the mass here includes the mass of the sand cover, so the total mass is M = \rho_c v_c + \rho_s v_s, where \rho_c = 2400 kg/m^3 is the density of concrete and \rho_s = 1670 kg/m^3 is the density of sand. The volume of the concrete roof slab is v_c = 4 m × 3 m × 0.3 m = 3.6 m^3, and the volume of the sand cover is v_s = 4 m × 3 m × 3 m = 36 m^3. From the numerical simulation, the calculated impulse is 1.125 × 10^5 Pa·s. Thus, the ejection velocity is

\dot{x} = \frac{A}{M} i_d = \frac{4 \times 3}{2400 \times 3.6 + 1670 \times 36} \times 1.125 \times 10^5 = 19.6 \; m/s   (13)

The numerical result of 17.2 m/s is lower than the 19.6 m/s from the simplified analysis, a difference of -12.2%. The simplified method gives a higher ejection velocity because it assumes that all the blast wave energy acting on the roof slab is transferred to kinetic energy; the energy absorbed in breaking the concrete is assumed to be zero. In the numerical simulation, the launch angle for most of the roof slab is 90 degrees, that is, the debris is ejected vertically into the air. Only a very small part near the corner of the roof slab has a launch angle slightly below 90 degrees, and all angles are larger than 80 degrees. In a real event, however, the launch angle will not be as ideal as in the numerical simulation, so the most disadvantageous case needs to be considered. In the present study, the launch angle of the debris from the roof slab is assumed to be normally distributed. For most of the debris, located near the centre, the launch angle is assumed to be 90 degrees with a standard deviation of 10 degrees, while for the small portion near the two ends of the roof slab, which are connected to the side RC walls, the launch angle is assumed to be 45 degrees with a standard deviation of 10 degrees.
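The arithmetic of Eq. (13) can be checked with a few lines of Python, using only the values given in the text:

```python
# Ejection velocity of the roof slab from the simplified impulse method (Eq. 13)
rho_c, rho_s = 2400.0, 1670.0   # densities of concrete and sand, kg/m^3
v_c = 4 * 3 * 0.3               # concrete slab volume, m^3
v_s = 4 * 3 * 3                 # sand cover volume, m^3
A = 4 * 3                       # loaded area, m^2
i_d = 1.125e5                   # blast impulse from the simulation, Pa*s

M = rho_c * v_c + rho_s * v_s   # total ejected mass (concrete + sand), kg
v0 = A * i_d / M                # ejection velocity, m/s
print(f"ejection velocity = {v0:.1f} m/s")  # ~19.6 m/s
```

This reproduces the 19.6 m/s of the simplified analysis, against which the numerical 17.2 m/s is compared.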


4.1.3 Debris throw distance

The method described in Section 3.2 is adopted to calculate the debris throw distance. For fragments of size about 200 mm and mass about 25 kg, the maximum travel distance of about 28.6 m occurs when the launch angle is around 45 degrees. For the debris near the centre, the calculated distance for big, heavy debris is 9.9 m when the launch angle is 80 degrees (with the standard deviation of 10 degrees, the probability of the launch angle being within 10 degrees of vertical is 68.3%, so 68.3% of this debris falls within this range), and 16.7 m when the launch angle is 70 degrees (95.5% of the debris falls within this range). The debris throw distances for other possible fragment sizes and masses were also calculated, and the maximum debris throw distance was found to be 28.6 m. It should be noted that the estimate of 28.6 m might understate the distance because debris roll is not considered in the present study, owing to the lack of a reliable numerical model and experimental results. However, the distance of 28.6 m corresponds to an ejection angle of 45 degrees, whereas the numerical simulation indicates that the launch angle is in general larger than 55 degrees; when the impact angle is higher than 55 degrees, the debris is not likely to roll. On this basis, the maximum debris throw distance of 28.6 m is considered reasonable. The debris throw distance was also calculated using the standard TM5-1300 (1990); Figure 2-252 of TM5-1300 (1990) is used to calculate the fragment range. The range obtained is 22.3 m, which is lower than the current numerical result of 28.6 m.

4.2 Debris from the above-ground structure

4.2.1 Fragment size and ejection velocity

The material status of the RC roof slab from the numerical simulation is shown in Figure 11.
From this figure, it can be seen that the maximum fragment of the RC roof is about 320 mm × 160 mm in size, with an estimated mass of 34 kg. The statistical fragment size distribution is shown in Figure 12, where the average fragment size is assumed to be 200 mm. The numerical results show that the ejection velocity of the roof is 45 m/s. For the large fragments, the launch angle is 90 degrees in the numerical simulation; to account for uncertainty, a standard deviation of 10 degrees is considered. For some small debris at the corner, a smaller launch angle of 45 degrees is considered. The maximum size of this smaller debris is about 50 mm × 50 mm, with a mass of 0.2625 kg. Using the simplified method in Section 3.1, the ejection velocity is 54 m/s. The numerical result of 45 m/s is lower than the simplified result of 54 m/s because, as before, the simplified method assumes that all the energy is transferred to kinetic energy and that the energy absorbed in breaking the concrete is zero. On this basis, the ejection velocity of 45 m/s is considered reasonable.


Figure 11 Material status of the RC roof slab


[Plot: percentage passing (%) versus fragment size (mm)]

Figure 12 Statistical fragment size distribution of the RC roof


4.2.2 Debris throw distance

The method adopted here is the same as that in Section 4.1.3. When the ejection angle is 80 degrees for the bigger fragments, the calculated maximum possible debris throw distance from the roof is about 55.5 m; that is, 68.3% of the debris will fall within 55.5 m. When the launch angle is 70 degrees, the debris throw distance is 101 m, and 95.5% of the debris will fall within this range. The debris throw distance was also calculated using the standard TM5-1300 (1990); the range obtained is 83.7 m, which is lower than the current result of 101 m.
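The throw-distance calculations above can be sketched as a simple point-mass trajectory integration with aerodynamic drag. This is a sketch only: the drag coefficient, air density, and frontal area below are illustrative assumptions, not values taken from the paper.

```python
import math

def throw_distance(v0, angle_deg, mass, area, cd=1.5, rho_air=1.225, dt=1e-3):
    """Integrate a 2-D point-mass trajectory with drag (explicit Euler).
    cd is an assumed drag coefficient for tumbling concrete debris."""
    g = 9.81
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        v = math.hypot(vx, vy)
        k = 0.5 * rho_air * cd * area * v   # drag force magnitude divided by speed
        vx += (-k * vx / mass) * dt
        vy += (-g - k * vy / mass) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:           # returned to launch height, descending
            return x

# A 25 kg fragment of roughly 200 mm size, launched at 17.2 m/s and 45 degrees
d = throw_distance(17.2, 45, 25.0, 0.2 * 0.2)
print(f"throw distance = {d:.1f} m")
```

For such heavy, compact debris the drag correction is small, so the result stays close to the vacuum range v_0^2/g at 45 degrees; lighter fragments with larger area-to-mass ratios fall noticeably shorter.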

5 Conclusions
In the present paper, a practical method is adopted to analyse RC slab fragmentation caused by blast loading. Numerical results combined with a statistical method are used to estimate the fragment size distribution. The debris ejection velocities calculated by the numerical method are compared with those from the simplified method. Once the mass and ejection velocity are determined, the debris throw distances are calculated using the basic theory of fluid dynamics. Comparison of the results with the empirical results in TM5-1300 shows that the present method yields a reasonable estimate of the debris throw distance.

References
AUTODYN. 2005. Theory Manual. Century Dynamics.
Baker W.E., Cox P.A., Westine P.S., Kulesz J.J. & Strehlow R.A. 1983. Explosion Hazards and Evaluation. Elsevier Scientific Publishing Company, Amsterdam.
Bischoff P.H. & Perry S.H. 1991. Compressive behaviour of concrete at high strain rate. Materials and Structures, 24:425-450.
Espinosa H.D., Zavattieri P.D. & Dwivedi S.K. 1998. A finite deformation continuum/discrete model for the description of fragmentation and damage in brittle materials. Journal of the Mechanics and Physics of Solids, 46:1909-1942.
Grady D.E. & Kipp M.E. 1980. Continuum modeling of explosive fracture in oil shale. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts, 17:147-157.
Grady D.E. & Kipp M.E. 1985. Geometric statistics and dynamic fragmentation. Journal of Applied Physics, 58:1210-1222.
Grady D.E. 1988. The spall strength of condensed matter. Journal of the Mechanics and Physics of Solids, 36:353-384.
Grady D.E. 1990. Particle size statistics in dynamic fragmentation. Journal of Applied Physics, 68:6099-6105.
Herrmann W. 1969. Constitutive equation for the dynamic compaction of ductile porous materials. Journal of Applied Physics, 40:2490-2499.
Laine L. & Sandvik A. 2001. Derivation of mechanical properties for sand. Proceedings of the 4th Asia-Pacific Conference on Shock and Impact Loads on Structures, Singapore, 361-368.
Liu L.Q. & Katsabanis P.D. 1997. Development of a continuum damage model for blasting analysis. International Journal of Rock Mechanics and Mining Sciences, 34(2):217-231.
Mazars J. 1986. A description of micro- and macroscale damage of concrete structures. Engineering Fracture Mechanics, 25(5/6):729-737.
Rabczuk T. & Eibl J. 2003. Simulation of high velocity concrete fragmentation using SPH/MLSPH. International Journal for Numerical Methods in Engineering, 56:1412-1444.
Rabczuk T., Eibl J. & Stempniewski L. 2004. Numerical analysis of high speed concrete fragmentation using a meshfree Lagrangian method. Engineering Fracture Mechanics, 71:547-556.
TM5-1300. 1990. Structures to Resist the Effects of Accidental Explosions. US Army, USA.
Yew C.H. & Taylor P.A. 1994. A thermodynamic theory of dynamic fragmentation. International Journal of Impact Engineering, 15:385-394.
Zhang Y.Q., Hao H. & Lu Y. 2003. Anisotropic dynamic damage and fragmentation of rock materials under explosive loading. International Journal of Engineering Science, 41:917-929.
Zhang Y.Q., Lu Y. & Hao H. 2004. Analysis of fragment size and ejection velocity at high strain rate. International Journal of Mechanical Sciences, 46:27-34.
Zhou X.Q., Hao H., Kuznetsov V.A. & Waschl J. 2006. Numerical calculation of concrete slab response to blast loading. 1st International Conference on Analysis and Design of Structures against Explosive and Impact Loads, Sept. 15-17 2006, Tianjin, China. Transactions of Tianjin University, 12(Suppl.):94-99.


20
Modeling and Integration of Disaster Situational Reports
Sai Sun and Renato Iannella
National ICT Australia (NICTA)
Abstract
During disasters and emergencies, situational reports are generated to provide critical information on the current state, desired responses, and future needs of the incident. These reports are usually summarised and communicated to higher levels for coordination and management (e.g. from local, to district, to state, to commonwealth levels). Correctly and effectively generating and processing these crisis reports plays a vital role in emergency services management. Currently, situational reports are primarily unstructured textual documents, although they usually follow a common pattern in their presentation and layout. The generation and processing of these reports is almost wholly manual, which inevitably causes the omission of information and low efficiency of manipulation, and hence difficulty in decision support functions. Based on a case study from a disaster exercise, this paper proposes an information model for situational reports and further discusses the integration and coordination of these reports, thereby improving the performance of the disaster management system.

Biographies
Sai Sun is a researcher in the Smart Applications For Emergencies (SAFE) project at National ICT Australia (NICTA). Her current research focuses on information modeling and integration for emergencies. Her background is in the field of advanced databases, and she received a PhD in this area from the University of Queensland in November 2006. She has previously worked as a Research Scientist at the University of Queensland. Renato Iannella is Principal Scientist and Program Leader at National ICT Australia (NICTA). His research covers Web Information Engineering and standards in trusted information and rights management. Renato has extensive experience in the development of Internet, Web, and Mobile technologies and standards and was a former member of the World Wide Web Consortium (W3C) Advisory Board. Renato is also an Adjunct Associate Professor at the University of Queensland and Visiting Associate Professor at the University of Hong Kong, and was previously the Chief Scientist at LiveEvents Wireless, IPR Systems and Principal Research Scientist at the Distributed Systems Technology Centre (DSTC).

1. Introduction
Disaster Management Systems (DMS) are frameworks to address and ameliorate the impacts of natural disasters and man-made hazards. Although they vary across countries, most DMS have a hierarchical command and control framework and require involvement from all levels of government as well as non-government organisations. To enable the seamless integration of the activities and resources of multiple agencies, information management plays a vital role in the coordination of DMS. A DMS is technically supported by software such as Crisis Information Management Systems (CIMS). CIMS are responsible not only for ordinary and critical data collection, analysis, and processing but also for information integration, delivery, and sharing (Iannella, R. et al, 2007). Formal crisis reporting (e.g. situational reports) is one of the most important information resources of CIMS; these reports are generated during disasters and emergencies to provide critical information on the current state, desired responses, and future needs of the incident. The information provided in situational reports is the basis on which the incident management team identifies needs, develops incident activity plans, and assigns resources. The remainder of this paper is structured as follows. Section 2 gives a background on the generation and processing of disaster situational reports, with explicit reference to the hierarchy of the Australian Disaster Management System (ADMS). This is followed by a discussion of current performance issues and modeling principles in Section 3. Evacuation centres are used as examples for information modeling and integration in Section 4. Finally, Section 5 summarises the findings of this paper and describes future work.

2. Disaster Management Systems


In this section, we introduce the hierarchy of ADMS and discuss situational reports produced during Exercise Reef Breaker. This exercise was designed around a hypothetical but potential tropical cyclone impacting the coast of north Queensland, Australia. Iannella & Henricksen (2007) report on other findings from this Exercise.

2.1 The Australian Disaster Management System

The Australian Disaster Management System (ADMS) operates on four distinct levels:

- Local Government
- Disaster District
- State Government
- Commonwealth Government

As depicted in Figure 1, each of the four levels has a committee structure supported by a disaster coordination centre as its basis. These committees and coordination


centres together ensure the coordinated and effective capability to prevent, prepare for, respond to and recover from disasters (QSDMG, 2007).

Figure 1. The Australian Disaster Management System [from (QSDMG, 2007)]

The Local Disaster Management Group (LDMG), the fundamental level of ADMS, is responsible for the management of a disaster at the community level. When a disaster is large or serious - beyond the capability of an LDMG - the LDMG may require additional resources and assistance from its District Disaster Management Group (DDMG). Similarly, the DDMG may address these needs, or partly solve them and pass requests up to the State level. In extreme cases, the State Government may need to seek Commonwealth support. Exercise Reef Breaker was designed to test the capabilities of the LDMG and DDMG, as they play a key role in ADMS.

2.2 Situational Reports

Corresponding to the hierarchy of ADMS, the original situational reports are generated by the LDMGs. They are then used at that level for decision making, or summarised and communicated to higher levels for further coordination and management (i.e. from Local, to District, to State, to Commonwealth levels). The template used in Exercise Reef Breaker is the situational report form designed by the Queensland Government State Disaster Management Group (SDMG). The form begins with general information, such as the name of the event, the sender and recipient of the situational report, the issue date and time, and the Sitrep No. The main content of the situational report form comprises twelve sections: Weather, Transport, Communications, Power, Damage Reports, Population, Evacuations, Industry, Local

Arrangements, Ongoing Activities, Projected Operations, and General Overview of the situation. Figure 2 shows part of the situational report form.

Figure 2. The Situational Report Form

Currently, the generation of situational reports at the Local level is almost wholly manual; integration at the District level simply copies relevant content from Local reports into another document; and the transmission methods are mainly fax and email. All of this inevitably raises the following issues:

- the omission of information,
- the low efficiency of manipulation,
- the difficulty of tracking statuses, and
- the missed opportunity for automated decision support.

3. Information Modeling
The suddenness and impact of emergencies place stringent demands on the generation and processing of situational reports. To address the above issues, formally modeling situational reports and using structured information formats is an ideal solution, as it can improve the performance of DMS in four ways:

- The introduction of a formal model can assist in the automatic generation of situational reports at the Local level;
- With structured information formats, situational reports become machine-readable, enabling innovative merging and integration algorithms for levels higher than Local (e.g. merging information from the Local to the District level);
- The XML schema basis allows the situational report model to be used in conjunction with new standards that support the exchange of information in emergency situations, including the Emergency Data Exchange Language (EDXL) Distribution Element (OASIS, 2006);
- The gathered incident information is more reliable and better summarised, which forms the foundation of decision support and situational awareness systems for the emergency management sector.
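As a sketch of what a machine-readable situational report might look like, the fragment below builds a minimal XML report with Python's standard library. The element names, sitrep number, and centre name are illustrative assumptions only; a real schema would be derived from the SDMG form and aligned with EDXL.

```python
import xml.etree.ElementTree as ET

# Illustrative structure only - not the actual SDMG schema.
report = ET.Element("SituationReport",
                    event="Exercise Reef Breaker", sitrepNo="3")
ET.SubElement(report, "Sender").text = "Hinchinbrook LDMG"
ET.SubElement(report, "Recipient").text = "District Disaster Coordination Centre"

evacuations = ET.SubElement(report, "Evacuations")
centre = ET.SubElement(evacuations, "EvacuationCentreActivated")
ET.SubElement(centre, "Name").text = "Community Hall (hypothetical)"

print(ET.tostring(report, encoding="unicode"))
```

Because the report is now structured, a District-level system could parse it and merge the `Evacuations` elements from several LDMG reports mechanically, rather than copying text between documents.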

Modeling situational reports is a non-trivial task, as the contents of situational reports cover a wide scope, which requires a flexible and extendable model. At the same time, a robust situational report model should take into account the integration requirements of higher levels. In the following section, we discuss the principles of modeling situational reports, using the evacuation centre as our study example.

3.1 Evacuation Centre Example

In the current situational report form, only one sub-item (Evacuations/Evacuation Centres Activated) refers to evacuation centres; that is, only a list of activated evacuation centres is given in the form. Figure 3(a) shows the Evacuations section of a situational report submitted by the Hinchinbrook LDMG during Exercise Reef Breaker. Figure 3(b) shows the corresponding integrated situational report at the District level, which includes a number of LDMG reports.

Figure 3(a). Situational Report/Evacuation Local Level

3.2 Modeling Principles

The Evacuation Centre is an essential element of DMS during incidents. Based on the status of evacuation centres, the incident management team decides how and when to evacuate people and distribute catering and other resources. Obviously, the

information provided in the above Local and District situational reports is too limited in some cases (for example, without the numbers of current and potential people in evacuation centres, it is difficult to distribute catering). In this subsection, we discuss our modeling principles, followed by the UML models for the evacuation centre information.

Figure 3(b). Integrated Situational Report/Evacuation District Level

3.2.1 Models should be designed for different government levels

In Exercise Reef Breaker, the LDMG, DDMG and SDMG use the same form to generate situational reports. This is not suitable for fully implementing the functions of emergency sectors at different government levels, given their different roles in emergency incidents. As introduced in Subsection 2.1, the Local level is the fundamental level of ADMS: it reacts to emergencies and is in charge of executing most concrete tasks. Thus, as much information as possible should be collected at the Local level, and the situational reports there should be much more detailed than reports at higher levels. For higher levels, the information should be integrated and summarised according to the needs of each government level; overly detailed information may cost unnecessary processing time and delay responses. Additionally, situational reports at higher levels should quote and reference the corresponding reports at lower levels, which ensures that details can be checked if needed and provides accountability.


3.2.2 Using uniform measurement units

Using uniform measurement units helps not only in integrating and processing situational reports but also in managing resources. Here, "uniform" covers three aspects:
1. The units for the same purpose should be uniform (e.g. during disasters, catering is better accounted for per person per day than separately as individual food items);
2. The units used in different Local areas should be uniform (e.g. the units used in Local area A should match those used in Local area B);
3. The units used at different government levels should be uniform (i.e. the units used at the Local, District, State, and Commonwealth levels should be uniform).

3.2.3 The information model should support special cases

Owing to the complexity of emergency situations, it is difficult to enumerate all current and future possibilities. For example, an evacuation normally requires food, water, etc. for the evacuees, but a specific evacuation may also require special medical supplies, translators, and so on. To keep the information model concise yet extendable, predefining some elements to describe special cases is an appropriate method (e.g. the definition of Evacuation Centre.special requirement in Subsection 4.1).

4 Evacuation Centre Model


Based on the principles proposed in Subsection 3.2, we design our UML models for evacuation centres as follows.


Figure 4. UML Model Evacuation Centre Local Level

4.1 Evacuation Centre Model at the Local Level

Figure 4 shows the UML model at the Local level. Here, Evacuation Centre Status is the top-level container element and Evacuation Centre Status.summary indicates the general status. As one LDMG may have more than one evacuation centre, the relation between Evacuation Centre Status and Evacuation Centre is one-to-many. Road damage is common during disasters, so providing two types of Position (i.e. Address and Coordinate) ensures the position can be located in most cases. The Traffic element indicates the status of traffic into and out of the evacuation centre, which relates to the ability of the centre to receive evacuees. Finally, Population and Logistics are the two most important elements, describing the capacity of the centre, the current and potential population, and the logistics status. All of the above information is essential for the LDMG to execute operations.

4.2 Evacuation Centre Model at the District Level

The main responsibilities of Disaster Management Groups at higher levels are developing incident activity plans, allocating tasks to different organizations, and managing resources for lower-level LDMGs. Thus, some detailed information should be removed from situational reports at these levels. Figure 5 depicts the UML model of the evacuation centre at the District level. To some extent, the DDMG can be regarded as middleware, providing coordinated State Government support when requested by Local Governments.

Figure 5. UML Model Evacuation Centre District Level

4.3 Information Integration

With well-formed information models, integration is a mechanical process of amalgamating parts of the model with a well-defined vocabulary. Considering the above evacuation centre model at the District level, all sub-elements in Population and Logistics can simply be accumulated from the corresponding sub-elements in the Local situational reports. The SDMG then just needs to provide logistics according to the integrated numbers, and it is the DDMG that decides how to assign the resources to different LDMGs. For more complex scenarios, database support and innovative integration algorithms may be required. For example, the traffic status of a District is not the simple collection of the status of each Local group: since some roads pass through more than one Local area, the integration needs the support of spatial databases across the State. Ontology support will also be required for more complex integration across sectors to support different vocabularies (Little, E. & Rogova, G. 2005).
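The simple accumulation case can be sketched with plain data structures. The field names below are illustrative assumptions, not the actual elements of the UML models:

```python
from dataclasses import dataclass

@dataclass
class Population:
    capacity: int      # how many evacuees the centre can hold
    current: int       # evacuees currently at the centre
    potential: int     # evacuees expected to arrive

@dataclass
class Logistics:
    meals_per_day: int     # uniform unit: meals per person per day
    water_litres: int

@dataclass
class EvacuationCentre:
    name: str
    population: Population
    logistics: Logistics

def integrate_district(local_reports):
    """Accumulate Population and Logistics sub-elements from Local
    situational reports into a single District-level summary."""
    pop, log = Population(0, 0, 0), Logistics(0, 0)
    for centre in local_reports:
        pop.capacity += centre.population.capacity
        pop.current += centre.population.current
        pop.potential += centre.population.potential
        log.meals_per_day += centre.logistics.meals_per_day
        log.water_litres += centre.logistics.water_litres
    return pop, log
```

Summing works only because the modeling principles above mandate uniform units across Local areas; the non-additive cases (such as traffic status) are exactly those needing richer integration support.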

5. Summary
This paper has discussed the development of a new information model for disaster situational reports, based on an analysis of the hierarchy of ADMS and the situational report form used in Exercise Reef Breaker. The benefits of a machine-readable situational report for information processing and integration are clear. Future work on the situational report models will involve completing the models for other aspects and designing detailed integration algorithms. The goal is to improve the performance of processing situational reports and thereby increase the efficiency of DMS.

Acknowledgements

National ICT Australia (NICTA) is funded by the Australian Government's Department of Communications, Information Technology, and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Research Centre of Excellence programs, and by the Queensland Government.

References
Iannella, R. & Henricksen, K. 2007. Managing Information in the Disaster Coordination Centre: Lessons and Opportunities. In Proceedings of the 4th International Conference on Information Systems for Crisis Response and Management (ISCRAM 2007), pages 581-590, Delft, May 2007.
Iannella, R., Robinson, K. & Rinta-Koski, O. 2007. Towards a Framework for Crisis Information Management Systems (CIMS). In 14th Annual TIEMS Conference, Trogir, Croatia, June 2007.
Little, E. & Rogova, G. 2005. Ontology Meta-Model for Building a Situational Picture of Catastrophic Events. The Eighth International Conference on Information Fusion, Philadelphia, USA, 25-29 July 2005.
OASIS Emergency Management Technical Committee. 2006. Emergency Data Exchange Language (EDXL) Distribution Element, v1.0. OASIS Standard EDXL-DE v1.0, 1 May 2006. <http://docs.oasis-open.org/emergency/edxl-de/v1.0/EDXL-DE_Spec_v1.0.pdf>
QSDMG. 2007. Queensland State Disaster Management Group, Department of Emergency Services, Queensland. <http://www.disaster.qld.gov.au/about/>


21
Managing security effects of WLAN deployments in critical infrastructure control systems
R. Gill, J. Smith and M. Branagan
Information Security Institute, Queensland University of Technology, Brisbane, Australia
Abstract
Driven by incentives such as low cost, convenience and ease of connectivity, high assurance environments, such as critical infrastructure control systems (CSs), are increasingly considering the adoption of IEEE 802.11 wireless local area networks (WLANs). This trend is of concern given the relative immaturity of techniques for securely managing deployments of WLANs. Despite security enhancements in the IEEE 802.11i standard, WLANs still suffer from a number of serious vulnerabilities and hence can negatively impact the risk environment for CSs. An adversary could potentially exploit WLAN vulnerabilities to attack the monitoring and control functions of the associated control system. The existence of implicit trust between CS components and the highly interconnected nature of critical infrastructure information networks, further exacerbates this problem. In this paper, we explore the vulnerabilities introduced by WLAN deployments in high assurance environments and identify policy based strategies for risk mitigation. We highlight the exposure of CSs to WLAN based attacks and stress the requirement of a WLAN security policy to effectively manage these risks. The paper also discusses detecting policy violations by the use of a security policy compliance monitoring system. The requirements of such a monitoring system are identified and a possible implementation of such a system using current wireless intrusion detection systems is also discussed.

Biography
Rupinder Gill is a PhD candidate with the Queensland University of Technology (QUT). His research interests include wireless intrusion detection and wireless
security. Jason Smith is a research fellow and security consultant with QUT. His research interests are in the areas of network security and intrusion detection with a specific focus on denial of service resistance, wireless communications and control systems environments. Mark Branagan is a PhD candidate with QUT. His research interests include information security risk assessment and management of critical infrastructure protection.

1 Introduction
Critical Infrastructure (CI) extends across multiple industry sectors such as banking and finance, transport and distribution, energy, utilities, health, food supply, communications and key government services. Protecting the national CI is essential to any nation's security, public health and safety, economic vitality and way of life. Failures in the CI can significantly disrupt the functioning of both government and business and can produce catastrophic cascading effects. A major component of CI is made up of Supervisory Control and Data Acquisition (SCADA) systems, distributed control systems, industrial automation systems and other process control systems (hereafter collectively referred to as Control Systems (CSs)). CSs utilize information systems to aid in command and control operations for many CI components. While many CI sectors have traditionally been sheltered from market forces, increasing levels of privatisation and expectations of profitable operation have meant that cost savings and perceived efficiency gains have become necessities. CSs for many CI sectors have traditionally employed proprietary, dedicated communications and networking technologies and infrastructures. There is, however, increasing migration to newer standardized communications protocols and architectures to realise cost and efficiency benefits. Wireless Local Area Networks (WLANs) have enjoyed wide acceptance since the ratification of the 802.11 standard by the IEEE (IEEE 1999). WLANs offer great flexibility along with considerable economic savings: they eliminate the cost and time required for installing and maintaining much previously required physical infrastructure, and they provide enhanced mobility and ease of deployment, a key consideration in a widely dispersed infrastructure.
WLAN technology is becoming prevalent not only in commercial and home user environments, but also in security-sensitive deployments like critical infrastructure and government agencies (GAO 2004, Koumpis et al. 2005). There are clearly potential benefits of WLANs in both convenience and improved cost management. However, the implications of deploying WLANs in security-sensitive environments such as CSs must be carefully considered. The rapid take-up of wireless networks has resulted in the widespread deployment of a relatively immature and insecure technology [41]. Since their first release, wireless technologies have been plagued with security problems. Recent enhancements to the IEEE 802.11 standard (IEEE 2004) undoubtedly improve the level of security provided by preventative techniques in wireless network deployments. However, not all the enhancements and their combinations are secure
41 http://www.gartner.com/press_release/asset_88267_11.html
and satisfy the strict security requirements42 of CS information networks (Stanley et al. 2005).

Introduction of WLANs will clearly have some impact on the risk environment of any organisation utilising such networks. Security policies and standards are designed to minimise the exposure of an organisation to unacceptable levels of risk. It is therefore essential that existing security principles be applied to WLANs. These principles should be tailored specifically to WLAN deployment and the needs of an organisation through comprehensive risk assessments in combination with existing security guidelines, standards and best practices. The result of this process should be a set of policies, standards and configuration guides that minimise the risk exposure of an organisation deploying wireless technologies. However, establishing a security policy by itself is not enough: monitoring is required to ensure the WLAN implements and complies with the policy.

In this paper, we consider whether the adoption of WLANs in CS information networks has increased the vulnerability of those networks to attack and, if so, what policy-based mitigation strategies may be adopted. The paper highlights the vulnerability of CS information networks to attack via WLAN components and stresses the need for well-informed wireless security policies, identifying the important components of such a policy. The paper also points out the importance of detecting policy violations through a security policy compliance monitoring system; the requirements of such a monitoring system are identified and a possible implementation using current wireless intrusion detection systems is discussed.

The remainder of the paper is structured as follows. The next section discusses related work. Sections 3 and 4 discuss WLAN security issues and the impact of WLAN deployments on CS security.
Section 5 discusses the need for a wireless security policy to manage the risks posed to CS information networks by WLAN deployments. Section 6 presents the requirements of an ideal security policy compliance monitoring system for CS WLANs and conclusions and future directions are presented in Section 7.

2. Related Work
Studies have been conducted on the suitability of using wireless networks in industrial control system networks; however, none of these studies was security- or 802.11-specific (Willig et al. 2005, Egea-Lopez et al. 2005). Risley and Roberts studied the electronic security risks associated with the use of 802.11 point-to-point communications in the electric power industry (Risley & Roberts 2003), but did not explore any security policy compliance issues. The US-CERT Control Systems Security Center released a case study on identifying backdoors in the network perimeters of CS networks (Nash 2005). Recognizing the threat WLANs pose to CS networks, guidelines are becoming available for securely deploying WLANs in industrial networks (Masica 2007), and some work has also commenced on using honeynets to monitor
42 http://www.isd.mel.nist.gov/projects/processcontrol/SPP-ICSv1.0.pdf
Managing security effects of WLAN deployments in critical infrastructure control systems

activity of hackers over WLANs in CS networks43. NIST also provides general guidelines on WLAN threats and their management (Karygiannis & Owens 2002). However, to the best of our knowledge, ours is the only work that explores the effective management of security effects related to WLAN deployments in critical infrastructure control systems using a policy-based approach.

3. Security Issues in WLANs


Similar to wired networks, WLANs need to support the security objectives of confidentiality, integrity and availability. These security objectives underline the major threats faced by WLANs. The threats to wireless networks can be divided into the following main categories (NIST 2006):

Masquerading: An adversary impersonates an authorized entity and gains certain unauthorized privileges.
Message Modification: An adversary modifies a legitimate message by appending, changing, deleting, or reordering it.
Message Replay: An adversary passively captures legitimate network transmissions and reinjects them into the network at a later stage, assuming the identity of a legitimate entity.
Traffic Analysis: Techniques used by an adversary to collect information about network communication patterns by monitoring the size, number and frequency of packets being transmitted.
Eavesdropping: Passive monitoring and capture of WLAN traffic, allowing for offline analysis of captured transmissions. May lead to information leakage and brute-force key discovery attacks.
Denial of Service: Techniques that make WLAN resources unavailable to their intended users. These attacks can be launched at both the medium access control (MAC) layer (e.g. spoofed management frames) and the physical (PHY) layer (e.g. radio frequency jamming).
Man-in-the-Middle: Interception of the communication between legitimate network nodes in order to assume the identity of a legitimate node to each party, passing data between them with neither party aware of the adversary's presence.
Session Hijacking: Similar to Man-in-the-Middle attacks, but the adversary forces one party to disconnect and takes over its session without the knowledge of the other.
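Several of these threats leave directly observable signatures on the air. As a minimal, self-contained sketch (not from the paper: the frame-record format, window and threshold are illustrative assumptions, and a real monitor would parse frames captured in monitor mode), a spoofed-management-frame denial of service such as a deauthentication flood can be flagged by rate-counting deauthentication frames per transmitter:

```python
from collections import defaultdict

# Simplified 802.11 frame records: (timestamp, subtype, source MAC).
DEAUTH = "deauth"

def detect_deauth_flood(frames, window=1.0, threshold=10):
    """Flag source MACs sending more than `threshold` deauthentication
    frames within any `window`-second interval -- a common spoofed
    management-frame DoS pattern."""
    times = defaultdict(list)
    flagged = set()
    for ts, subtype, src in frames:
        if subtype != DEAUTH:
            continue
        q = times[src]
        q.append(ts)
        # Drop timestamps that have fallen out of the sliding window.
        while q and ts - q[0] > window:
            q.pop(0)
        if len(q) > threshold:
            flagged.add(src)
    return flagged
```

The window and threshold would in practice be tuned from a baseline of legitimate deauthentication traffic on the monitored WLAN.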

Early implementations of 802.11 attempted to address security objectives by using shared-key based encryption and authentication (WEP). However, numerous security weaknesses existed, both in the 802.11 standard itself and in its implementations (NIST 2006). More recent 802.11 enhancements address many of the original weaknesses. 802.11i introduces the notion of a robust security network association
43 http://www.digitalbond.com/index.php/2006/10/14/scada-honeynet-deployment-scenarios-and-early-results
(RSNA), providing mutual authentication, key management, data encryption and integrity protocols. An RSNA may use the 802.1X port-based authentication scheme to further improve MAC layer security by forcing all wireless clients (STAs) to use an authentication server (AS) to gain access to the network. 802.1X itself and its reliance on the EAP framework for provisioning authentication is problematic, as neither was designed specifically for wireless networks (Stanley et al. 2005).

The 802.11i standard allows RSNA and pre-RSNA equipment to co-exist in what is referred to as a transitional security network (TSN), allowing STAs to connect to both types of network. In this case, a security roll-back attack may be employed by an adversary to trick an STA into using pre-RSNA security by impersonating association frames from an RSNA-configured wireless access point (AP). Such an attack is particularly potent, as it may result in keying material utilised in the RSNA becoming known if the adversary is able to exploit the weaknesses in the pre-RSNA algorithms (He & Mitchell 2005).

The lack of authentication for MAC layer management frames remains a significant problem, even in 802.11i networks. Neither the original 802.11 standards nor the recent IEEE 802.11i standard specifies mechanisms for protecting the integrity of management frames, leaving 802.11-based WLANs vulnerable to management frame spoofing and associated denial of service (DoS) attacks (Bellardo & Savage 2003). Even the EAP frames used for authentication in 802.11i networks are unprotected and can easily be used as a means to launch similar attacks against WLANs (Mishra & Arbaugh 2003). These WLAN vulnerabilities, if not managed carefully, can have adverse impacts on the security posture of CS networks, as discussed in the following section.
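To make the roll-back risk concrete, a monitoring sketch can flag any AP that is expected to operate as an RSNA but is observed beaconing without a robust-security advertisement. This is illustrative only: the field names and BSSIDs are hypothetical, and a real monitor would parse the RSN information element from captured beacon frames.

```python
def detect_rollback(beacons, rsna_bssids):
    """beacons: iterable of (bssid, has_rsn_ie) pairs parsed from
    observed beacon frames. Any BSSID that policy says is RSNA-capable
    but is seen advertising pre-RSNA security (no RSN information
    element) is a candidate roll-back / AP impersonation attack."""
    return sorted({bssid for bssid, has_rsn_ie in beacons
                   if bssid in rsna_bssids and not has_rsn_ie})
```

Because the attack works by impersonating the legitimate AP's frames, the alert cannot by itself distinguish an adversary from a genuine misconfiguration; both, however, are policy violations worth investigating.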

4. Exposure of CS Networks to Attacks via WLANs


CSs are highly dependent on information networks for both critical and non-critical functions (Koumpis et al. 2005). Increased proliferation of WLAN technology in these networks has changed the risk environment for CSs and has exposed them to a new set of threats, vulnerabilities and threat sources that need to be understood and managed to mitigate any negative security effects.

4.1 CS network architectures and WLAN deployments

CS information networks rely on a hierarchical network structure which typically comprises a corporate business network, an operations control or management network and multiple remote station or field networks (see Figure 1) (Nash 2005). The business network is used for normal business operations, as for any organisation. This network is often connected to the Internet, usually through a firewall, and is increasingly also connected to the management network. The management network consists of hardware/software components for controlling the operations of distributed control equipment at remote field sites. The field networks contain the actual control equipment.

Figure 1: Control Systems Network Architecture

While WLANs are most often deployed in the business network section of the CS, they are increasingly being deployed in all parts of the CS due to the ease of connectivity and the need to support an increasingly mobile workforce. Therefore the security concerns presented by the use of WLANs do not simply arise because of interconnections between business and other networks; they potentially exist in all sections of the CS with WLANs (Masica 2007).

4.2 Negative Impacts on Traditional CS Risk Environment

Traditionally, the implicit trust placed in components and commands within CS networks has limited their level of security (INL 2006). The use of access control devices between CS networks and the Internet acknowledges the problems presented by the lack of security in CS networks, but wrongly assumes the Internet is the single attack path (Nash 2005). However, the presence of WLAN APs on various CS network segments bypasses such measures. Successful compromise of an AP would permit an adversary to circumvent the firewall and obtain direct access to the CS network, providing potential access to other physically and logically connected networks. For instance, compromising a business network AP would not only expose the business network to attacks, but would also potentially provide access to the management and field networks. Similar situations exist for access to any of the CS network components.

Historically, the isolated nature of CS networks meant unauthorized access was highly unlikely, reducing the requirement for additional security measures (INL 2006). However, WLAN deployment has weakened this network isolation, making access to the CS as trivial as obtaining a WLAN adapter. Secure configuration of WLANs is therefore essential to reduce the CS's exposure to potential attack.
There have been reported incidents where individuals have successfully exploited unprotected WLANs in the field networks to compromise the security of the entire CS network. In 2002, a security consultant was able to use an

unprotected AP in a remote substation to gain access to the core management and business networks of a control system utility44.

4.3 Increased Exposure to Network Reconnaissance

A significant disadvantage of WLANs compared to wired networks is that it is trivial to detect their presence using off-the-shelf WLAN equipment and simple active and passive methods. Thus, adversaries can gain valuable reconnaissance data about the network without any significant effort or special equipment. This information is harder to obtain for wired networks, where physical access can be restricted. For CS networks, this information is sensitive and could be used to launch targeted attacks against sections of the network.

4.4 Blurred Defense Perimeter

The broadcast nature of WLANs, permitting ready access to the physical medium, has the effect of blurring the perimeter of CS networks. STAs can access a WLAN from anywhere within the radio range of the AP or peer. This means it is no longer feasible to control all interconnectivity on the network via a single perimeter access control device. STAs might be associated with an external network without the knowledge of the network perimeter firewall, making access control even more difficult.

Exposure of CS networks to WLAN-based threats is further exacerbated by the widespread availability of WLAN adapters in most computing equipment. Where equipment connected to the CS by a wired connection has an active wireless adapter, it could expose the network to attacks by establishing connections with unauthorized networks. An adversary can also exploit vulnerabilities in WLAN client software to cause clients to connect to a hostile wireless network without the interaction or permission of the user (Dai Zovi & Macaulay 2005). Since the adversary controls the hostile WLAN, it could be used to obtain unauthorised access to the CS network through the installation of malicious software or the direct theft of passwords or encryption keys.
4.5 Reduced Resource/Capability Requirements of Threat Sources

A wide variety of WLAN attack and reconnaissance tools (e.g. Kismet45, aircrack-ng46) are readily available from the Internet and can potentially be used by an adversary to initiate any of the previously mentioned threats (see Section 3). The availability of such pre-packaged tools significantly reduces the capability and resources required of threat sources to launch WLAN attacks.

44 http://www.memagazine.org/backissues/dec02/features/scadavs/scadavs.html
45 http://www.kismetwireless.net/
46 http://tinyshell.be/aircrackng/wiki/index.php?title=Aircrack-ng

4.6 Increased Exposure to Availability Threats

Due to the nature of the physical medium (RF) and design faults in the MAC layer protocol (e.g. unauthenticated management frames), WLANs are inherently vulnerable to jamming and other denial of service attacks. Hence the use of WLANs in CSs is of grave concern where availability is a crucial security objective (Dzung et al. 2005). Any attacks on the availability of such networks would be potentially catastrophic. The ease with which such jamming and denial of service attacks can be launched against WLANs exacerbates the level of threat presented to the CSs and the CI.

The risk environment for CS information networks has worsened as a result of the vulnerabilities introduced by WLAN deployments. Control measures need to be initiated in order to mitigate the increase in overall risk created by WLAN deployment in CS networks. The next section discusses how establishing and implementing a WLAN security policy can assist in effectively managing these negative security impacts.

5. Establishing WLAN Security Policy in CS Networks


Security policies in CSs often do not exist or are poorly enforced (ISA 2006). Sections 3 and 4 demonstrate the need to effectively manage the security risks associated with WLANs in CS networks. An important requirement for managing these risks is the application of a well-designed, comprehensive security policy. An effective WLAN security policy helps create a proactive environment where tools, techniques and procedures are in place to mitigate risks by addressing threats to, and vulnerabilities in, CS networks. The details of a security policy represent the results of a risk analysis of the network, where risks are prioritised according to their severity and resources are allocated accordingly for risk mitigation. Absence of an explicit policy can lead to wasteful resource allocations. Most importantly, the security policy should not impact productivity and should be practical and enforceable.

To mitigate the threats and vulnerabilities identified in Sections 3 and 4, a WLAN security policy should, at minimum:

1. Provide guidance for carrying out risk assessments and their frequency.
2. Identify what client devices are permitted to be on the WLAN.
3. Identify what APs are permitted on the WLAN and state their physical locations.
4. Describe the nature of information that may be transmitted over the WLAN.
5. Define security settings for all access points and WLAN clients, i.e. the methods permitted for encryption, key management, authentication and access control. The choices for these methods should be based on the security requirements and risk assessment of the WLAN.


6. Require logical and physical segregation of wired and wireless network segments so that compromise of one segment does not threaten the security of the rest of the network.
7. Require detection of attacks on the confidentiality, availability and integrity of WLAN communications.
8. Establish clear delegation of authority and responsibility.
9. Define response and recovery procedures for compromise of WLAN security.
10. Provide guidance on protecting wireless clients against malicious attacks and misconfigurations.

Governing bodies around the world have recognised the threat WLANs present to the security and reliability of CSs and have taken steps towards mitigating these threats. The United States National Institute of Standards and Technology (NIST) has released guidelines for secure deployment of WLANs (NIST 2006). Similarly, the Trusted Information Sharing Network (TISN) of the Australian government has released informational papers highlighting areas of concern in wireless security for Chief Executive Officers and Chief Information Officers47. A good WLAN security policy should also be informed by such guidelines and security best practices.
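The machine-enforceable subset of such a policy (permitted clients, permitted APs and their locations, and required security settings) can be captured as structured data that a monitoring system consumes. The sketch below is purely illustrative: every MAC address, SSID and setting is a made-up placeholder, not drawn from any real deployment.

```python
# Hypothetical, minimal machine-readable WLAN policy; all values are
# placeholders for illustration only.
WLAN_POLICY = {
    # Permitted APs and their physical locations.
    "authorized_aps": {
        "00:11:22:33:44:55": {"ssid": "CS-OPS", "location": "control room"},
    },
    # Permitted client devices.
    "authorized_clients": {"66:77:88:99:aa:bb"},
    # Required security settings: AES-CCMP encryption with 802.1X
    # authentication, as provided for by 802.11i.
    "required_security": {"encryption": "CCMP", "authentication": "802.1X"},
}

def ap_is_authorized(bssid, policy=WLAN_POLICY):
    """True only for APs the policy explicitly permits."""
    return bssid in policy["authorized_aps"]
```

Rendering the policy in this form makes violations mechanically checkable, which is a prerequisite for the compliance monitoring discussed in the next section.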

6. WLAN Security Policy Compliance Monitoring


Establishing a security policy is only the first step towards comprehensive security management. Continuous monitoring of WLAN traffic is required to ensure that the policy is implemented and complied with. As such, a Security Policy Compliance Monitoring System (SPCMS) is required to detect violations of the wireless security policy by any network entity. Violations represent either misconfigurations or malicious activity. Hence an SPCMS can not only detect malicious attacks but also reduce the vulnerability of the WLAN to attack by detecting misconfigured nodes. The requirements of an ideal SPCMS for monitoring security in CS WLANs are:

Passive: It should not require modification of the wireless network nodes or the WLAN protocols. It should operate in receive-only monitor mode so as not to announce its presence or affect the performance of the network.
Accurate and sensitive: The system should be accurate enough that alerts are only raised as a result of a security policy violation, and sensitive enough to ensure that all policy violations are detected.
Maintainable: The system, once deployed, should not require extensive reconfiguration and maintenance. Additionally, the system should be flexible and extensible enough to accommodate changes in the site security policy or practice.

47 http://www.dcita.gov.au/ie/critical_infrastructure_security

Flexible: The monitoring system should be capable of operating in both online and offline modes, and be deployable in an autonomous distributed fashion or support centralised analysis of alerts.
Attack resistant: The monitoring system should be designed to resist attacks, providing assurance that the monitoring capability is not easily disabled.
Robust: The system must not be easily spoofed or evaded by intruders.

In reality, a Wireless Intrusion Detection System (WIDS) would be used to implement such an SPCMS. As a result of widely publicised wireless security vulnerabilities, a number of commercial, research and open source WIDSs (e.g. Snort-Wireless48, WIDZ49) have appeared in the last few years. However, none of these fulfils the requirements of an SPCMS as identified above. We envisage building such a system using a WIDS and implementing the security policy compliance monitoring functionality using a specification-based intrusion detection approach rather than traditional misuse-based or anomaly-based techniques, as it is easier to represent a security policy as a specification than as attack signatures or expected behaviour patterns (Debar & Viinikka 2005). The specification used by this system would be formed by combining a model of the underlying protocol state machines with the constraints imposed by the security policy of the system. A Snort-Wireless-based SPCMS has already been implemented by the authors (Gill et al. 2006). Architecturally, such a system would be distributed-sensor based, with sensors passing captured traffic streams to a central processing/correlation server for evaluation.
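A toy illustration of the specification-based idea (this is not the authors' Snort-Wireless implementation; observation fields, MAC addresses and cipher names are assumptions for the example): the policy is reduced to a specification of permitted APs, permitted clients and the required cipher, and each observed association is checked against that specification rather than against attack signatures.

```python
def check_compliance(observations, authorized_aps, authorized_clients,
                     required_cipher="CCMP"):
    """Each observation describes one association seen on the air:
    {"bssid": ..., "client": ..., "cipher": ...}. Anything outside the
    specification derived from the security policy becomes an alert,
    covering both misconfigurations and rogue devices."""
    alerts = []
    for obs in observations:
        if obs["bssid"] not in authorized_aps:
            alerts.append(("rogue-ap", obs["bssid"]))
        elif obs["cipher"] != required_cipher:
            alerts.append(("weak-cipher", obs["bssid"]))
        if obs["client"] not in authorized_clients:
            alerts.append(("unauthorized-client", obs["client"]))
    return alerts
```

Note the asymmetry with signature-based detection: nothing here describes an attack, so previously unseen violations are still caught, at the cost of having to keep the specification synchronised with the policy.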

7. Conclusions and Future Work


This paper has explored the effects of WLAN deployment on the vulnerability of CS networks and the policy-based strategies that might be used to mitigate any resultant risks. Our conclusion is that WLANs have adversely impacted the security risk environment of CS networks and hence the CI in general. In CS networks, there is a pressing need to ensure WLAN deployments are configured to minimise vulnerabilities and deter any potential threat sources.

Wireless security policies are essential to adequately manage the risks posed to CSs via WLANs. Such policies should mandate regular risk assessments and dictate the security measures, techniques and processes to be complied with in the WLAN. A policy should not only be based on a thorough and regular evaluation of risks, but should also draw on guidelines, recommendations and best practice documents provided by industry and various government bodies.

Besides developing appropriate security policies to protect CSs, it is absolutely essential to constantly monitor for any violations of the policy. These violations could occur as a result of malicious attacks or misconfigurations. A wireless security policy compliance monitoring system is required which not only passively monitors for

48 http://snort-wireless.org/
49 http://www.loud-fat-bloke.co.uk/w80211.html

violations of the security policy but is also robust, accurate, flexible and easy to maintain without regular reconfiguration and updates. In future work, we plan to investigate the usefulness of a proactive WLAN Security Policy Compliance Enforcing System in CS networks, on both the wireless client and the AP side. Such a system would implement proactive measures to force the STAs and the APs to comply with the security policy. We also plan to investigate the impacts of various wireless intrusion response mechanisms on CS network security.

References
Bellardo, J. & Savage, S. 2003. 802.11 denial-of-service attacks: real vulnerabilities and practical solutions. In Proceedings of the USENIX Security Symposium, Washington, D.C., USA.
Dai Zovi, D. & Macaulay, S. 2005. Attacking automatic wireless network selection. In Proceedings of the Sixth Annual IEEE Systems, Man and Cybernetics (SMC) Information Assurance Workshop, 365-372.
Debar, H. & Viinikka, J. 2005. Intrusion detection: introduction to intrusion detection and security information management. In FOSAD 2004/2005.
Dzung, D., Naedele, M., Von Hoff, T. & Crevatin, M. 2005. Security for industrial communication systems. Proceedings of the IEEE 93(6): 1152-1177.
Egea-Lopez, E., Martinez-Sala, A., Vales-Alonso, J., Garcia-Haro, J. & Malgosa-Sanahuja, J. 2005. Wireless communications deployment in industry: a review of issues, options and technologies. Elsevier Science Publishers BV, Amsterdam, The Netherlands. 56: 29-53.
GAO. 2004. Technology Assessment: Cybersecurity for Critical Infrastructure Protection. United States General Accounting Office.
Gill, R., Smith, J. & Clark, A. 2006. Specification-based intrusion detection in WLANs. In ACSAC '06: Proceedings of the 22nd Annual Computer Security Applications Conference, 141-152. Washington, DC, USA: IEEE Computer Society.
He, C. & Mitchell, J.C. 2005. Security analysis and improvements for IEEE 802.11i. In Proceedings of the 12th Annual Network and Distributed System Security Symposium.
IEEE. 1999. IEEE Standard 802.11-1999. Institute of Electrical and Electronics Engineers.
IEEE. 2004. IEEE Std 802.11i-2004. Institute of Electrical and Electronics Engineers.
INL. 2006. Control Systems Cyber Security: Defense in Depth Strategies. Idaho National Laboratory, Control Systems Security Center.
ISA. 2006. Mitigations for Security Vulnerabilities Found in Control System Networks. The Instrumentation, Systems and Automation Society. Presented at the 16th Annual Joint ISA POWID/EPRI Controls and Instrumentation Conference.
Karygiannis, T. & Owens, L. 2002. Wireless Network Security: 802.11, Bluetooth and Handheld Devices (Draft). Special Publication 800-48, US National Institute of Standards and Technology.
Koumpis, K., Hanna, L., Andersson, M. & Johansson, M. 2005. Wireless industrial control and monitoring beyond cable replacement. In PROFIBUS International Conference, Warwickshire, UK.

Masica, K. 2007. Securing WLANs using 802.11i: Draft Recommended Practice. Lawrence Livermore National Laboratory.
Mishra, A. & Arbaugh, W. 2003. An Initial Security Analysis of the IEEE 802.1X Standard. Technical report.
Nash, T. 2005. Backdoors and Holes in Network Perimeters: A Case Study for Improving Your Control System Security. Vulnerability and Risk Assessment Program (VRAP), Lawrence Livermore National Laboratory.
NIST. 2006. Guide to IEEE 802.11i: Establishing Robust Security Networks. National Institute of Standards and Technology, Special Publication 800-97 (Draft).
Risley, A. & Roberts, J. 2003. Electronic security risks associated with use of wireless, point-to-point communications in the electric power industry. In Proceedings of the DistribuTECH Conference and Exhibition, Las Vegas, NV, February 2003.
Stanley, D., Walker, J. & Aboba, B. 2005. Extensible Authentication Protocol (EAP) method requirements for wireless LANs. IETF RFC 4017.
Willig, A., Matheus, K. & Wolisz, A. 2005. Wireless technology in industrial networks. Proceedings of the IEEE 93: 1130-1151.


22
Behavioural responses to the terrorism threat: Applications of the Metric of Fear
Anne Aly, Mark Balnaves and Christopher Chalon
Edith Cowan University
Abstract
In Australia, terrorism is defined by the Australian Defence Force as the use or threatened use of violence for political ends or for the purpose of putting the public or any section of the public in fear (Martyn 2002). Among the various definitions of terrorism that exist is the universal notion that terrorism uses violence, targets noncombatants, is intended to intimidate and creates a state of terror. Importantly, all definitions agree that fear is the ultimate aim of terrorism. Following the terrorist attacks on the World Trade Centre and the Pentagon in 2001, Australian polls indicated heightened levels of fear and anxiety about a possible terrorist attack in Australia, despite the fact that risk assessment studies underline that the actual risk of a terrorist attack is marginal in comparison to many other mortality risks such as smoking and car accidents (Mueller 2004; Viscusi 2003). This paper reports on a national project at Edith Cowan University funded by an Australian Research Council Discovery Grant (Safeguarding Australia). This project examines the nature and extent of the fear of terrorism operating within the Australian community since the September 11 terrorist attacks. The project incorporates a qualitative study for the development of a fear scale. As the first of its kind, the Metric of Fear measures the extent to which Australians are restricting their behaviours and adopting protective behaviours in response to the fear of terrorism.


Introduction
Since the terrorist attacks on the United States in September 2001, Australians have witnessed the introduction of counter-terrorism measures on an unprecedented scale, including border surveillance strategies, legislative amendments, communication strategies, the introduction of a citizenship test and a range of approaches aimed at addressing radicalization and promoting social inclusion, all of which suggest that the threat of terrorism continues to rate highly as a matter of political and public concern. Concern over the threat of an imminent terrorist attack is captured in the National Security Information Campaign, Let's Look Out For Australia, first launched in December 2002. In September 2004, a new phase of the campaign, entitled Help Protect Australia from Terrorism, was launched. The campaign includes television, press, transit and outdoor advertising urging Australians to report possible signs of terrorism to the National Security Hotline. The use of both visual and print media ensures that the campaign is highly visible to Australians and communicates a message that Australians need to be consistently vigilant about the threat of terrorism (Aly and Balnaves 2007). This paper reports on Australia's first study eliciting empirical data on fear of terrorism. The research project at Edith Cowan University, Australian responses to the images and discourses of terrorism and the other: establishing a metric of fear, is a national, cross-methodological investigation of public opinion formation, interpersonal communication and media messages. Funded by an Australian Research Council Discovery Grant (Safeguarding Australia), the project interrogates the key media events and messages, as remembered and circulated by specific audiences, and analyses different constructions of terrorism and fear responses in Australian society.

Research Methodology
The first level of inquiry involved qualitative research, conducted by the project's PhD researcher Anne Aly. The purpose of this research was to examine how Australian audiences construct the media and political discourses on terrorism and how this impacts on community fears of terrorism, comparing responses from members of Western Australia's Muslim communities with those of the broader community. Ten focus groups were conducted with 85 participants from various ethnic backgrounds, religions and age groups. Of the ten focus groups, four were held exclusively with Australian Muslim participants in gender-specific groups. The focus groups discussed issues relating to the media discourse on terrorism and perceptions of the terrorist threat to Australia, the dominant messages in the media discourse on terrorism, and how information and opinions about terrorism are circulated. Initial analysis of the focus groups provided themes for further investigation through a series of 60 in-depth individual interviews with equal numbers of Muslim respondents and respondents from the broader Australian community. Prompts were used to explore respondents' constructions of media messages and the influence of the media on their opinions and perceptions.

Recent advances in security technology

Thematic analysis techniques were used to analyse the focus group transcripts with the aid of the NVivo data analysis tool. The broad theoretical approach was phenomenological. Asensio (2000) describes the outcome of phenomenological research as a set of categories of description which describe the variation in experiences of phenomena in ways that allow researchers to deepen their understanding of the phenomena. The constructs derived from the focus group analysis were used to inform the adaptation of rape and vulnerability inventories to create a fear of terrorism survey. The survey was administered by telephone to 750 households nationally. In order to obtain a statistically useful sample of Australian Muslims, the survey was administered to 105 Muslim households, an overrepresentative number in comparison to the demographic data, which place Muslim Australians at just 1.5% of the total Australian population50. Based on the findings from the focus groups, the Fear Survey included questions to test behavioural responses to the fear of terrorism and self-reported feelings of safety before and after the September 11 terrorist attacks, as well as questions on individual and community identity.

Developing the Fear Scale
The focus group discussions provided an insight into how people talk about terrorism, their perceptions of the terrorist threat and their fears in relation to terrorism. This kind of information was useful for constructing questions likely to produce responses that would most accurately reflect the level and nature of the fear of terrorism prevalent in Australian communities. The focus group discussions revealed that people were most likely to articulate feelings of fear in terms of safety and behaviour modifications in response to the perceived threat of a terrorist attack.
Behaviour modifications expressed by the participants included avoiding situations which resonated with media images of global terrorist attacks, such as public transport and crowded public places. They also included cognitive responses such as heightened awareness of surroundings and suspicion of strangers, particularly of young Arab or Muslim men whose physical appearance mirrored the media-perpetuated image of a terrorist. The survey sought comparative feelings of safety before and after the terrorist attacks of September 11. These self-report data are useful as they are indicative of current perceptions of safety compared to the past. However, like all self-report data, especially those involving lengthy periods of time, they should be viewed with some caution. Several scales have been developed that attempt to measure the fear of rape and the fear of crime (Liska 1988; Warr 1990; Senn 1996). For the most part, investigations into the fear of crime have focused on describing and explaining variations in fear among different genders, ages and social groups (Warr 1990). In terms of examining fear phenomenologically, in order to understand fear as a social force that impacts on
50. ABS data from the 2001 Census. Available from www.omi.wa.gov.au


Behavioural responses to the terrorism threat: Applications of the Metric of Fear

behaviour, two general patterns have emerged. One concerns preventative or restrictive behaviours, in which individuals will take measures to avoid places and situations perceived as dangerous. The other concerns protective or assertive behaviours, in which individuals will undertake protective measures in places and situations perceived as dangerous. In surveying the range of survey tools that could be modified to include the constructs extracted from the first stage of the project, the researchers found that there were no scales that measured both patterns of behavioural responses to fear. There are also no existing scales that measure personal perceptions of risk as well as community perceptions of risk. Of the existing scales, the Fear of Rape Scale developed by Gordon & Riger (1979) provided a sound basis for developing the Fear Scale. Modifications to this scale included the omission of some questions specific to the context of rape and the inclusion of questions derived from the constructs that evolved from the phenomenological analysis of the focus group findings. The Fear Scale consists of three sections. One section was designed to collect demographic data. Another section tested self-reported feelings of safety before and after the September 11 terrorist attacks on the United States, at both the personal and community level, and perceptions of the terrorist risk. A third section tested restrictive and protective behavioural responses to fear, to gain a sense of how safe or unsafe people felt within their own neighbourhoods or in situations that resonated with images of terrorism.

Findings
A statistical analysis of the results of the Fear Scale revealed certain characteristics about the prevalence and nature of the fear of terrorism in the Australian community.51 Notably, the findings confirm heightened levels of fear after the September 11 terrorist attacks on the United States and behavioural modifications in response to feelings of fear. These findings are substantiated by empirical evidence observed in the qualitative analysis of the focus groups and individual interviews. Table 1 presents reported feelings of safety prior to and after the September 11 terrorist attacks. On a four-point scale ranging from very safe (a score of 1) to very unsafe (a score of 4), the mean for both the Muslim communities and the broader community is substantially higher after the September 11 attacks. The higher means for Muslim respondents (both before and after 9/11) are supported by qualitative data in which Muslim participants expressed high levels of fear of the

51. Of the sample observed (n = 750), 44% were male and 66% were female; 24% were Muslim and 76% were not; 54% had a school education, 18% had a technical education and 28% had a tertiary education.


possible repercussions of a terrorist attack and the impact on themselves, their families and the Muslim communities in Australia.
                    Class               N     Mean
Safe before 9/11    Broader Community   569   1.46
                    Muslims             177   1.58
Safe after 9/11     Broader Community   571   2.12
                    Muslims             177   2.45

Table 1. Feelings of safety before and after 9/11 on a four-point scale (higher mean scores indicate lower levels of felt safety)

The elevated levels of fear in the Muslim population in comparison to the broader community may, in part, be due to perceptions among the Muslim communities that they are viewed negatively and portrayed negatively in the popular media. In response to the question Do you feel that you belong to a community that is viewed negatively by others? 59% of Muslims responded in the affirmative, compared to only 17% of respondents from the broader community. In response to the question Do you feel that the media portrays you or the community you belong to negatively? 67% of the Muslims surveyed responded in the affirmative, compared to only 19% of the broader community. The Chi-Square tests for these associations are significant (p < .001), and the results can be generalised to the rest of the population.

Fear Scale
The focus groups revealed that those who expressed fear of terrorism were likely to express their fear in terms of changes in behavioural patterns. Some participants in the focus groups had adopted preventative behaviours such as avoiding public transport. Others were unaware of their own anxieties until they were placed in situations in which their fear motivated them to take on assertive or precautionary behaviours, such as increased awareness of their surroundings. The fear scale incorporated these findings into 23 questions designed to test whether respondents had adopted a range of behavioural modifications in response to a perceived terrorist threat. The sub-scales that emerged from the analysis of the responses to the 23 questions relating to fear (fear of being alone, wariness of others, fear in immediate proximity and fear in public places) represent dimensions associated with the two main constructs of interest in this study52, namely restrictive and protective behaviours.
Cronbach alphas for fear of being alone (α = .79), wariness of others (α = .79) and fear in immediate proximity (α = .74) all meet Nunnally's (1978) minimum requirement of 0.7. While fear in public places (α = .63) does not meet this requirement, Hair et al. (1998) suggest that a Cronbach alpha can be as low as 0.6 if the research is exploratory in nature.

52. Of the original scale, five questions were deleted as they either had poor factor loadings or loaded onto more than one factor.

FEAR Sub-Scales
Factor 1 - Fear of Being Alone (α = .79)                                  Loading
B4.  I ask friends to walk me to my car in public car parks.                0.80
B14. If I had to walk to my car, I would make sure I was accompanied
     by someone I trusted.                                                  0.75
B7.  When I am walking alone I think about where I would run to if in
     trouble.                                                               0.64
B10. If I was waiting for an elevator and it arrived with one person
     alone inside, I would wait for the next one.                           0.57
B3.  I avoid going out alone.                                               0.57

Factor 2 - Wariness of Others (α = .79)
B13. In general, I am suspicious of people.                                 0.80
B11. I am wary of people generally.                                         0.76
B17. In general, I am afraid of people.                                     0.62
B12. If I have to walk outside I take precautions.                          0.60
B9.  I am especially careful of wearing clothes that do not draw
     attention to me.                                                       0.49

Factor 3 - Fear in Immediate Proximity (α = .74)
B21. How safe do you feel being out alone in your neighbourhood?            0.76
B16. How safe do you feel in your own house when you are by yourself?       0.72
B6.  In general how safe do you feel?                                       0.69
B8.  I feel confident walking alone in my neighbourhood.                    0.58

Factor 4 - Fear in Public Places (α = .63)
B1.  I think twice before going to a crowded shopping centre.               0.77
B2.  If I have to take the train, tram or bus I feel anxious.               0.74
B22. How safe do you feel travelling by airline?                            0.56

Table 2. Fear sub-scales
Extraction Method: Principal Component Analysis; Rotation Method: Varimax with Kaiser Normalization.
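The internal-consistency check applied to these sub-scales can be sketched as follows. The Cronbach alpha formula is standard; the five-item response matrix below is synthetic, purely for illustration, since the study's actual survey data are not reproduced here.

```python
# Sketch of the Cronbach alpha computation used to assess the sub-scales,
# against Nunnally's 0.7 minimum (0.6 acceptable for exploratory work,
# per Hair et al. 1998). The response matrix is synthetic.
def cronbach_alpha(items):
    """items: one list of responses per scale item, all of equal length."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def var(xs):                        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Synthetic four-point responses for a 5-item sub-scale, 6 respondents:
items = [
    [1, 2, 3, 4, 2, 3],
    [1, 2, 3, 4, 3, 3],
    [2, 2, 3, 4, 2, 4],
    [1, 3, 3, 4, 2, 3],
    [1, 2, 2, 4, 2, 3],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}, meets 0.7 threshold: {alpha >= 0.7}")
```

For strongly correlated synthetic items like these, alpha comes out well above the 0.7 threshold; real survey items typically correlate less tightly.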

While it is, of course, unlikely that a 20-30 minute interview will yield detailed insights into a person's psychology, the fear scale does provide an indicative measure of fear at both the individual and community level. The scale ranges from 1 to 5, where a mean score of 2.0 or over indicates that the level of community fear is significant enough to warrant behavioural modifications that are either restrictive or assertive. A mean score of 4.0-5.0 is indicative of extreme levels of community fear. The kinds of behaviours that may be expected with this level of fear include social and economic isolation induced by the fear of being the victim of a terrorist attack. It is to be expected that such extreme restrictive and protective behaviours would have a significantly adverse impact on the social and economic health and well-being of the community. Consistent with patterns reflected in fear of crime surveys, there were statistically significant differences in feelings of fear and safety across demographic variables such as gender, income and education level. The sample of Muslim respondents to the fear survey also demonstrated significantly higher levels of fear in comparison to respondents from the broader community, as indicated in Table 3. Responses from the Muslim population showed higher means across all four fear sub-scales, indicating responses across the spectrum of protective and restrictive behaviours. The qualitative exploration suggests that, unlike the broader community, members of Australia's Muslim communities are adopting such behaviours in response to the perceived impact (both personal and community) of a terrorist attack, as opposed to the perceived risk of a terrorist attack occurring.
Sub-scale            Class               N     Mean
Alone                Broader Community   505   1.6966
                     Muslims             155   2.0929
Others               Broader Community   551   1.6163
                     Muslims             171   2.1205
Immediate Proximity  Broader Community   564   1.5554
                     Muslims             173   2.0332
Public Places        Broader Community   456   1.7617
                     Muslims             157   2.1571

Table 3. Fear Scale Means - Broader Community and Muslims

Conclusion: Applications of the Metric of Fear


Researchers have for some time used fear of crime and fear of rape scales in order to gauge perceived safety among individuals and communities, and to inform appropriate policy responses. The present study has revealed the presence of heightened levels of fear, particularly among Australian Muslim communities. These trends require regular monitoring, as increased levels of community fear can impact adversely on health and wellbeing and, by extension, involve substantial social and economic costs to Australia. The Metric of Fear can be used to inform communication strategies around the threat of terrorism and the impact of strategies such as the National Security Information Campaign. At another level, the Metric may have some useful applications to risk assessment and contingency planning by offering researchers a tool for predicting behavioural modifications in response to heightened perceptions of threat.


Fear has a legitimate role in nature and in human societies. Anxiety over terrorism has a legitimate basis and should play a significant role in shaping policy responses for countering terrorism.

References
Aly, A. and M. Balnaves (2007). "They Want Us To Be Afraid: Developing metrics of the fear of terrorism." International Journal of Diversity in Organisations, Communities and Nations 6.
Asensio, M. (2000). Choosing NVivo to support phenomenographic research in networked learning. Second International Conference on Networked Learning, Lancaster, England.
Gordon, M. T., & Riger, S. (1979). "Fear and avoidance: A link between attitudes and behaviour." Victimology 4: 395-402.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis. Upper Saddle River, New Jersey: Prentice-Hall Inc.
Liska, A. E., Sanchirico, A., & Reed, M. D. (1988). "Fear of crime and constrained behavior: specifying and estimating a reciprocal effects model." Social Forces 66(3): 827-837.
Martyn, A. (2002). "The right of self-defence under international law: the response to the terrorist attacks of 11 September." From http://www.aph.gov.au/LIBRARY/Pubs/cib/200102/02cib08.htm#international.
Mueller, J. (2004). "A false sense of insecurity?" Regulation 27(3): 42-47.
Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.
Senn, C. Y., & Dzinas, K. (1996). "Measuring fear of rape: A new scale." Canadian Journal of Behavioural Sciences 28(2): 141-144.
Warr, M. (1990). "Dangerous situations: Social context and fear of victimization." Social Forces 68(3): 891-907.



23
Mechanical Output of Contact Explosive Charges
Gregory Szuladzinski
Analytical Service Pty Ltd, Australia
Abstract
Explosive charges are often placed on or near surfaces of objects that can be moved or damaged as a result of a detonation. Quantification of such events must include the peak force and the reflected impulse experienced by a surface. Although perfectly rigid surfaces don't exist, they are often approximated by real media, and therefore this condition is assumed here in order to provide an upper bound on the variables of interest. The work is based on experimental results and theoretical investigation, and is augmented by finite-element simulation of surface charges. In the latter work, two different methods for implementing the explosive action are used. The output of four explosive shapes is established with regard to peak reactions and the maximum impulse. Some useful conclusions related to finite-element modeling of contact charges are presented.

Biography
Gregory received his Master's Degree in Mechanical Engineering from Warsaw University of Technology in 1965 and his Doctoral Degree in Structural Mechanics from the University of Southern California in 1973. From 1966 to 1980 he worked in the United States and then in Australia in a range of industries and applications. He has a number of publications to his credit in stress analysis, dynamics and plasticity. His book entitled Dynamics of Structures and Machinery: Problems and Solutions was published worldwide by John Wiley Interscience in 1982. He is a Fellow of the Institution of Engineers, Australia and a member of the ASME.



1. Modeling of exploding charges


There are essentially two ways of simulating the explosive event. One is to imitate explosive burning, or a detonation process within the substance, prior to these effects being transferred to the surrounding medium. This is called a progressive burn, or Burn in brief. This approach is most frequently used. An alternative, simpler way is to replace the explosive by a volume of strongly compressed gas. When an initial confinement is removed, the gas begins to violently expand, thus imitating the action of the explosive. The latter model describes an instantaneous explosion and will be referred to as the compressed gas or γ-law method. The Burn approach makes it necessary to specify the ignition point of the explosive. Igniting the charge, placed on a surface, as far as possible from that surface magnifies the effect, in that the force and impulse applied to the surface will typically be larger. This happens because the detonation wave spreading within the body of explosive will impact the surface with its full speed. For this reason, the overall effect of the Burn approach has a chance of being more realistic, especially at a close range, as considered here. Still, the simplicity of the γ-law makes it a very useful and attractive tool. This writer has conducted a study of fly-off velocity in bursting steel containers using both methods, Szuladzinski (1999), and concluded that the compressed gas approach gave better results. In that instance the ignition and burn process were perfectly symmetric with regard to the container, and the detonation wave effect was negligible. In Zukas (1998) an outline of the γ-law is presented. A more complete, energy-consistent formulation is given below.

2. Instantaneous Explosion
The initial volume of gas is assumed to expand according to the polytropic curve. When pressure p and volume V change, the product pV^n remains constant. If the change is from condition 1 to 2, then

p1·V1^n = p2·V2^n    (1)

The energy content of the gas is

Q = pV/(n − 1)    (2)

The polytropic exponent n is analogous to γ in the exact theory. For an idealized instantaneous explosion the initial pressure is



po = ρo·D^2 / (2(n + 1))    (3)

where ρo is the initial density and D is the velocity of detonation. (One should also note here that pCJ, the pressure often used in explosive theory, is 2po.) The explosive energy q per unit mass can be calculated from Eq.2:

q = po / (ρo(n − 1))    (4)

Usually the basic explosive data provided by a manufacturer are ρo, q and D. The equivalent polytropic exponent n can be determined by equating po/ρo from Eqs.3 and 4:

n^2 = D^2/(2q) + 1    (5)

The volume of a sphere is 4πr^3/3, which converts Eq.1 to


p1·r1^(3n) = p2·r2^(3n)    (6)

which gives the relation between the initial pressure and the pressure in the expanding spherical field as

p2 = p1·(rs/(rs + u))^(3n)    (7)

The expanded radius is r2 = rs + u, where rs is the initial charge radius and u is the radial movement of the outer surface. The following example will illustrate the use of the equations for TNT, employing consistent units of mm-g-ms. One has ρo = 0.0016 g/mm^3, q = 4.61x10^6 N-mm/g and D = 6900 mm/ms. From Eq.5 one obtains n = 2.483 and from Eq.3, po = 10940 MPa. The above formulation was developed by this writer based on an idea expressed by Henrych (1979). In Meyers (1994), a listing of polytropic exponents is quoted for a large number of explosives, using an equation for γ analogous to Eq.5.
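The TNT example above can be reproduced in a few lines; the sketch below simply evaluates Eqs.3 and 5 in the paper's mm-g-ms unit system, in which pressure comes out directly in MPa.

```python
# Sketch of the gamma-law setup of Section 2 for TNT, in mm-g-ms units.
import math

rho0 = 0.0016      # initial density, g/mm^3
q = 4.61e6         # explosive energy per unit mass, N-mm/g (= mm^2/ms^2)
D = 6900.0         # detonation velocity, mm/ms

# Eq.5: equivalent polytropic exponent
n = math.sqrt(D**2 / (2.0 * q) + 1.0)

# Eq.3: initial pressure of the equivalent compressed gas (MPa)
p0 = rho0 * D**2 / (2.0 * (n + 1.0))

print(f"n  = {n:.3f}")        # ≈ 2.483, as in the text
print(f"p0 = {p0:.0f} MPa")   # ≈ 10940 MPa, as in the text
```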

3. Experimental Evidence, Contact Charges


One can't expect to be able to perform an experiment conforming strictly to our scenario, as the requirement of a perfectly rigid surface would not be fulfilled. However, some of the experiments on metal propelling carried out by Gurney (1943) bear a resemblance to our problem. In particular, there is a case of a flat explosive layer pushing a flat metal sheet, both materials being unconstrained. The fly-off velocity of the metal sheet is


vf = √(2qG) · [((1 + 2μ)^3 + 1)/(6(1 + μ)) + μ]^(-1/2),  with μ = Mm/Me    (8)

where qG is the specific Gurney energy, Mm is the metal mass and Me is the explosive charge mass. In our case, referred to as an open sandwich, with a large metal-to-charge mass ratio μ, the whole expression can be shown to attain the limit value of

vf → √(2qG) · √(3/4) · Me/Mm    (10)

When this is multiplied by Mm, the result is the impulse Ss = Mm·vf applied to the metal. Applying this to a column with a unit-area base, cut out of the explosive layer:

Ss = M·vf = 0.866·√(2qG)·ρe·he    (11a,b)

where he is the thickness of the explosive layer and the impulse is per unit surface area. The above relates to a continuous layer of explosive. For any charge of a compact shape, one can write, in a more general fashion,

S = η·Me·√(2qG)    (12)

where the coefficient η < 1 depends on the geometry of the charge. If, for an arbitrary shape, the fly-off velocity vf is experimentally found, then the impulse S = Mm·vf becomes known and the coefficient η can be found from Eq.12. One can easily establish, by using Eq.8 with the same Me but different values of Mm, that using a larger metal mass with the same charge decreases vf, but increases the impulse S applied to the metal. The term √(2qG) = 2676 m/s for TNT is sometimes referred to as the average outflow velocity of explosive gas. The other part of the evidence is that of a cylindrical charge used to propel a metal plate. There are several references to such experiments, among others in Cooper (1996). The effective part of a cylindrical charge standing on a flat plate was found to be the cone with a 60° apex angle, as shown in Figure 1d, this being determined for initiation at the base point of the axis. The ratio of the mass within the cone to the total mass of explosive is then our η per Eq.12:

η = 0.5774·r/H  for H > 1.73r    (13)

For a shallow cylinder, such as in Figure 1e, the effective explosive becomes a truncated cone and again η may be found by taking the effective mass from only that cone. With regard to a cylindrical charge, the above estimate is of historical interest only. In his recent work, Cooper (2002) examined a large amount of experimental data for plates driven by cylindrical contact charges. As it turned out, the above


approximation overestimates the fly-off velocity, especially for a relatively large explosive-to-metal mass ratio. A good fit to experimental data is obtained by

η = 0.22·d/H    (14)

with d = cylinder diameter. For a case with d = H, for example, one finds η = 0.22 from Eq.14, while η = 0.289 from Eq.13, a substantial over-estimate.
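The comparison above, together with the open-sandwich limit of Eq.10, can be checked numerically. The sketch below evaluates Eqs.13 and 14 for a cylinder with H = d, and confirms that the bracket in Eq.8 tends to (4/3)μ^2 for large μ, so that vf·μ/√(2qG) approaches √3/2 = 0.866.

```python
# Sketch comparing the two shape-coefficient estimates for a cylindrical
# contact charge with height equal to diameter (H = d), and checking the
# large-mass-ratio limit of the Gurney open-sandwich formula (Eq.8 -> Eq.10).

def eta_cone(r, H):
    """Eq.13: effective 60-degree cone estimate (valid for H > 1.73 r)."""
    return 0.5774 * r / H

def eta_cooper(d, H):
    """Eq.14: Cooper's (2002) fit to flyer-plate data."""
    return 0.22 * d / H

def vf_over_gurney(mu):
    """Eq.8 divided by sqrt(2 qG), as a function of mu = Mm/Me."""
    return (((1.0 + 2.0 * mu) ** 3 + 1.0) / (6.0 * (1.0 + mu)) + mu) ** -0.5

d = H = 1.0                       # only the ratio d/H matters here
print(eta_cone(d / 2.0, H))       # ≈ 0.289 (Eq.13)
print(eta_cooper(d, H))           # 0.22   (Eq.14)
print(vf_over_gurney(100.0) * 100.0)  # ≈ 0.866 for large mu (Eq.10)
```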

4. Experimental Evidence, Close Proximity Charges


Equation 12 tells us that in an idealized case of an explosive acting on a rigid surface, with the entire energy content being effective, we may set η = 1 and replace qG with q, obtaining S = Me·√(2q). Using a similar form, Baker (1983) proposed that the specific reflected impulse ir on a rigid surface surrounding a spherical charge be approximated by

ir = [2(Me + Ma)·Me·q]^(1/2) / (4πr^2) = (Me/(4πr^2)) · [(1 + Ma/Me)·2q]^(1/2)    (15)

where r is the radius of the surrounding surface and Ma is the mass of air enclosed within r. As long as r > rs, where rs is the charge radius, then, with the density of air ρa, one has Ma ≈ (4π/3)r^3·ρa. Equation 15 was experimentally demonstrated to work well up to about 20 charge radii.
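Baker's approximation is easy to evaluate. The sketch below does so in SI units for a 1 kg spherical TNT charge; the TNT density (1600 kg/m^3) and sea-level air density (1.2 kg/m^3) are assumptions of the sketch, not values from the text.

```python
# Illustrative evaluation of Baker's approximation (Eq.15) for the specific
# reflected impulse around a spherical TNT charge, in SI units.
import math

def reflected_impulse(Me, r, q=4.61e6, rho_a=1.2):
    """Eq.15: specific reflected impulse (Pa·s) at radius r (m), charge Me (kg)."""
    Ma = (4.0 / 3.0) * math.pi * r**3 * rho_a      # enclosed air mass, kg
    return Me / (4.0 * math.pi * r**2) * math.sqrt((1.0 + Ma / Me) * 2.0 * q)

Me = 1.0                                               # 1 kg of TNT
rs = (3.0 * Me / (4.0 * math.pi * 1600.0)) ** (1 / 3)  # charge radius, m
for k in (2, 5, 10, 20):    # Eq.15 was shown to hold to about 20 charge radii
    print(f"r = {k:2d} rs:  ir = {reflected_impulse(Me, k * rs):.0f} Pa·s")
```

Close to the charge the air term is negligible and ir reduces to Me·√(2q)/(4πr^2), the idealized form noted before Eq.15.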

5. Basic relations for selected contact charges


The following shapes of charges are taken into account: flat, hemispherical, cubic and cylindrical, as illustrated in Figure 1. The pressure within the charge provides the force R acting on the surface. The elasticity of the charge acts as a spring, which pushes the mass of the charge away from the surface (Figure 2). In this way, the impulse S is applied to the surface. An FEA simulation produces pressure as a function of time. The reaction of the exploding gas on the adjacent rigid surface can be expressed as



R(t) = ∫A p(t) dA    (16)

The impulse applied to the surface of interest is the integral of the above reaction:

S = ∫0^∞ R(t) dt    (17)
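In practice Eq.17 is evaluated numerically from the discrete reaction history an FEA run produces. The sketch below uses simple trapezoidal integration; the exponentially decaying pulse is synthetic, standing in for an actual simulation output, and its peak and decay time are assumptions of the sketch.

```python
# Minimal numerical sketch of Eq.17: integrating a surface reaction history
# R(t) into an impulse S. The pulse below is synthetic (illustrative only).
import math

def impulse(R, t_end, steps=10000):
    """Trapezoidal integration of R(t) from 0 to t_end (Eq.17)."""
    dt = t_end / steps
    S = 0.0
    for i in range(steps):
        S += 0.5 * (R(i * dt) + R((i + 1) * dt)) * dt
    return S

R0, tau = 10940.0, 0.01                   # assumed peak (kN) and decay time (ms)
R = lambda t: R0 * math.exp(-t / tau)     # stand-in for an FEA reaction history

# A fraction of a millisecond suffices, as noted in the text.
S = impulse(R, 0.5)
print(f"S = {S:.1f} kN·ms")               # ≈ R0 * tau for this pulse
```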

While the upper integration limit is infinity, one rarely needs to integrate for more than a fraction of a millisecond for moderately-sized charges, as in our examples. In space, the integration rarely needs to extend farther than a few charge diameters. Using the γ-law, the initial surface force Ro = Ao·po, where Ao is the contact area of the surface charge, is a convenient reference value. This needs some deliberation for the shapes involved.

5.1 Flat Charge
To have a good picture of the essential part of the action within a flat charge, one may concentrate attention on a small column, cut out of the charge somewhere away from the edges, having a unit-area base and a height h. The presence of the rest of the material around it, which behaves in the same manner, allows us to think of this block as expanding away from the surface, in a channel with rigid walls. The initial pressure po, given by Eq.3, persists for some time near the base. This pressure is the largest the hard surface will experience. Near the edges of the charge the situation is less favorable, as there is a sideways flow associated with expansion, which decreases gas pressure. For this reason, one can assume the peak explosive force applied to the surface as

Ro = (b − h)(l − h)·po    (18)

5.2 Hemispherical Charge
Our half-sphere can be thought of as half of a fictitious sphere. As long as the dividing surface is frictionless, i.e. it does not interfere with the radial expansion, one can use the spherical relationship for each of the half-spheres. If the radius changes from rs to r, then the pressure changes from po to p. The nominal force, due to pressure po, is

Ro = π·rs^2·po    (19)


For this charge one can easily establish how the contact force changes as the charge expands. Using Eq.6 one has

R = π·r^2·p = π·r^2·po·(rs/r)^(3n)    (20)

The expanded radius is r = rs + u, where u is the radial movement of the outer surface. The above gives

R = π·rs^2·po·(rs/r)^(3n−2) = Ro·(rs/r)^(3n−2)    (21)

From this, one can see that Ro is indeed the peak force and that the contact force decreases as the radius of the charge grows, because the increase in area does not offset the pressure drop.

5.3 Cubic Charge and Cylindrical Charge
Here the physical relationships are different, because much of the flow is directed horizontally, therefore imposing less reaction force on the surface. Still, the best reference value seems to be the nominal force

Ro = a^2·po    (22)

where a is the side of the cube, and

Ro = π·a^2·po    (23)

for the case of a cylinder with radius a.
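The rapidity of the contact-force decay in Eq.21 can be illustrated numerically, using n = 2.483 from the TNT example of Section 2.

```python
# Sketch of the contact-force decay of Eq.21 for an expanding hemispherical
# TNT charge, with n = 2.483 from the worked example of Section 2.
n = 2.483

def R_over_R0(expansion):
    """Eq.21: contact force relative to the peak, for r/rs = expansion."""
    return expansion ** -(3.0 * n - 2.0)

for x in (1.0, 1.5, 2.0, 3.0):
    print(f"r/rs = {x}: R/Ro = {R_over_R0(x):.4f}")
```

Doubling the radius already drops the contact force to roughly 2% of the peak, which is why only a short integration time is needed for Eq.17.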



Fig.1 Charge shapes under consideration: (a) Flat, (b) Hemisphere, (c) Cube, (d) Cylinder and (e) Low cylinder

Fig.2 The charge represented as a grounded mass-spring system

6. FEA Simulation
The behavior of the selected shapes is simulated using the finite element method. Both the Lagrangian approach (in conjunction with the γ-law) and the Eulerian method (in conjunction with Burn, using a designated ignition point) are employed. The JWL equation of state is used in the latter case. The JWL data for TNT are as follows: A = 373800, B = 3747, R1 = 4.15, R2 = 0.9, ω = 0.35, e = 7376, consistent with Ref.7 notation and keeping in mind that MPa is the unit of pressure.


Equation (12) will be employed to calculate the equivalent explosive mass. The response will be ignition-dependent when the Burn method is used. Unfortunately, in most real cases, especially those motivated by malicious intent, the actual ignition point is not known. (Strange as it may sound, this is also true of many otherwise well-documented engineering tests.) For this reason, the attempt will be to postulate the most probable values. Typically, Burn will produce the largest response when the ignition is farthest away from the surface on which the charge is positioned. The mid-point seems to be statistically most likely and this will be considered as a reference value. The γ-law derived value will also be considered, as appropriate to the case in question. The models used in simulation will now be briefly described.
Flat Charge: The simulation was performed using a column segment, as described before, consisting of 99 cubic elements, with a side length of 1 mm and a total mass of 0.1584 g.
Hemispherical Charge: The model consisted of 4000 brick elements, with an average length of 15 mm along the radius. The model represented one quarter of the hemisphere, had a 300 mm radius and a mass of 22.62 kg.
Cubic Charge: The model had 2000 cubic elements, with an edge length of 10 mm. The model represented one quarter of the cube, had a 200 mm height and a mass of 3200 g.
Cylindrical Charge: The simulation was performed using a quarter-cylinder model, consisting of 12000 elements, with a radius of 41 mm and a height of 80 mm. The model had a mass of 169 g.
The simulation results are shown in Tables I and II.
Model        S0 (N-s)   R0 (kN)   Rn/R0
Flat         0.411      10.94     1.0
Hemisphere   30,060     773,300   0.958
Cube         1953       109,400   0.98
Cylinder     115.1      14,440    0.99

Table I. Instantaneous explosion results. R0 = A0·p0; Rn is the output value; S0 is the output impulse.
Model        R1/R0   R2/R0   R3/R0   S1/S0   S2/S0   S3/S0
Flat         0.544   1.492   1.884   0.85    0.85    0.85
Hemisphere   0.690   0.920   1.310   0.778   0.815   0.875
Cube         0.596   1.144   1.637   0.952   1.238   1.331
Cylinder     0.664   1.508   2.035   1.270   1.206   1.218

Table II. Burn model results. Indices 1, 2 and 3 refer to a low, medium and high point initiation, respectively. The values are relative to those of Table I. The underlined values from this table were used in typical output calculations.


The summary Table III shows the representative values of reaction and impulse. The typical reaction Rt is selected as follows: it is the larger of the two values R0 (according to the γ-law) and R2. The typical impulse St is taken to be S2, provided S2 > S0; if not, St = (S2 + S0)/2. To recover the coefficients η from the above data, one needs to use Eq.12. However, that was written for physical tests, while ours is a virtual test, which uses the total energy content q instead of qG, the smaller value. For this reason the coefficients are found from

η = St / (3036·Me)    (24)

This gives η = 0.855 for a flat charge and η = 0.271 for the cylinder. The first is practically the same as the 0.866 obtained before, but the second is substantially in excess of the 0.22 from Cooper's results. There could be several reasons for the discrepancy. The most prominent is the fact that the latter figure may have been relevant for flyer plates not sufficiently thick to approach a rigid surface. The reaction values were left unchanged in the final Table III, as perhaps conservatism should rule in the face of doubt. The literature data for contact explosions of the type discussed are scarce. Henrych (1979) obtained impulse results referring to geometric reasoning, but did not elaborate on his method. His coefficients are shown in Table III as (H). It is seen that the differences are significant. It is interesting that even such frequently quoted sources as TM5-855-1 (1986) and TM5-1300 (1992) do not give information on contact explosion. It may be related to the fact that a rigid surface, while an important reference concept, is not a real-world object.
Shape          Reaction   λ       λ (H)
Flat           1.49 R0    0.87    1.0
Hemispherical  1.00 R0    0.44    0.5
Cubic          1.14 R0    0.25    0.167
Cylindrical    1.51 R0    0.27*   0.167

Table III. Summary of reactions and impulse coefficients. The cylindrical result is marked (*) as it is valid only for a cylinder with h = d. For other proportions, refer to Eq. 14.
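The selection rules for the typical reaction and impulse stated above lend themselves to a short sketch. The snippet below is illustrative only: it applies Rt = max(R0, R2) and the St rule to the Table II ratios (values relative to R0 and S0).

```python
# Typical-output selection rules from the text, applied to Table II ratios.
# All values are expressed relative to R0 and S0.

TABLE_II = {
    # shape: (R2/R0, S2/S0)
    "flat":        (1.492, 0.85),
    "hemisphere":  (0.920, 0.815),
    "cube":        (1.144, 1.238),
    "cylinder":    (1.508, 1.206),
}

def typical_reaction(r2_over_r0: float) -> float:
    """Typical reaction Rt/R0: the larger of R0 and R2."""
    return max(1.0, r2_over_r0)

def typical_impulse(s2_over_s0: float) -> float:
    """Typical impulse St/S0: S2 if it exceeds S0, else the mean of S2 and S0."""
    return s2_over_s0 if s2_over_s0 > 1.0 else (s2_over_s0 + 1.0) / 2.0

for shape, (r2, s2) in TABLE_II.items():
    print(f"{shape:11s} Rt = {typical_reaction(r2):.2f} R0, "
          f"St = {typical_impulse(s2):.3f} S0")
```

For the flat charge this reproduces the 1.49 R0 reaction listed in Table III.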


Recent advances in security technology

Fig. 3 Hemispherical charge exploding, one quarter shown (MPa).

Fig. 4 Cubic charge exploding, top initiated, one quarter shown (MPa).

Fig. 5 Cylindrical charge exploding, top initiated, one-half shown (MPa).

7. Summary
An explosive charge placed on a surface and detonated applies an impulse to that surface, owing to the fact that it contains not only energy but also inertia. If the value of the impulse is known from experiments, it leads to the determination of a shape coefficient λ, valid for all charges of the same shape. The values of that coefficient were established using literature sources as well as FEA simulation. Both the instantaneous explosion method and the conventional, progressive burn method were employed. The results depend, to some extent, on where the detonation is initiated, but this is often unknown. The final coefficients selected here were based on the most probable anticipated scenario.


References
Baker, W.E. 1983. Explosions in Air. Wilfred Baker Engineering, San Antonio, Texas.
Bulson, P.S. 1997. Explosive Loading of Engineering Structures. E&FN Spon.
Cooper, P.W. 1996. Explosives Engineering (pp. 394-399). Wiley-VCH.
Cooper, P. 2002. Explosive weight correction for untamped flyer-plates. Proceedings, 20th International Symposium on Ballistics, Orlando, Florida.
Gurney, R.W. 1943. The initial velocity of fragments of bombs, shells, and grenades. US Army Ballistic Research Lab., Aberdeen Proving Ground, MD. Report No. 405.
Henrych, J. 1979. The Dynamics of Explosion and Its Use. Elsevier.
LS-DYNA Keyword User's Manual. 2001. Livermore Software Technology Corporation, Livermore, California. Version 960.
Meyers, M.A. 1994. Dynamic Behavior of Materials. Wiley.
Persson, P.A., Holmberg, R. and Lee, J. 1994. Rock Blasting and Explosives Engineering. CRC Press.
Szuladzinski, G. 1999. Computer simulation of propelling of solid pieces by explosive action. Proceedings of the 3rd Asia-Pacific Conference on Shock and Impact Loads on Structures, Singapore.
TM5-1300. 1992. Structures to Resist the Effects of Accidental Explosions. Explosives Safety Board, Dept. of Defense, USA.
TM5-855-1. 1986. Fundamentals of Protective Design for Conventional Weapons. Dept. of the Army, Washington DC, USA.
Zukas, J.A. and Walters, P.W. (eds) 1998. Explosive Effects and Applications. Springer.


24
On Ensuring Continuity of Mobile Communications in a Disaster Environment
J. Ring, E. Foo and M. Looi
Queensland University of Technology, Australia
Abstract
Natural disasters and deliberate, wilful damage to telecommunication infrastructure can result in a loss of critical voice and data services. This loss of service hinders efficient emergency response and can cause delays leading to loss of life. Current mobile devices are generally tied to one network operator. When a disaster has significant impact, that network operator cannot be relied upon to provide the service and coverage levels that would normally exist. While some operators have agreements with other operators to share resources (such as network roaming), these agreements are contractual in nature and cannot be activated quickly in an emergency. This paper introduces Fourth Generation (4G) wireless networks. 4G networks are highly mobile and heterogeneous, which makes them highly resilient in times of disaster.

Biographies
Jared Ring is a PhD Candidate with the Information Security Institute and the Faculty of Information Technology at the Queensland University of Technology (QUT). Mr Ring's research interests include 4G networks and billing systems, mobile networks, secure protocols, network management and administration. Ernest Foo was awarded a Doctor of Philosophy degree in the area of computer security from the Queensland University of Technology (QUT) in 2000. Since then he has been employed as a lecturer in the Faculty of Information Technology at QUT. He is an active researcher in the Information Security Institute which is also based at QUT. Dr Foo's research interests include secure electronic commerce protocols for

electronic payments, tendering and contracting. He also has research interests in wireless sensor network security protocols, RFID security, identity management and secure billing in 4G networks. Mark Looi is currently a Professor in, and Head of, the School of Software Engineering and Data Communications, which forms part of the Faculty of Information Technology of the Queensland University of Technology. Professor Looi's research interests include mobile and wireless security, smart cards, network security and authentication, and security auditing.

1. Introduction
Natural disasters, accidents and malicious actions (herein referred to as disruptive events) can have devastating effects on critical telecommunication infrastructure. Unfortunately, protecting telecommunication assets against these threats is difficult, as the acts are random and not necessarily directed at the telecommunication equipment. Fourth Generation (4G) mobile networks are currently under development and will provide greater resilience in times of disaster. This paper highlights some of the advantages in disaster scenarios that can be obtained by the introduction of 4G networks. The loss of wireless communications in a disaster is a loss of critical infrastructure that hinders authorities and aid organisations providing assistance, as well as the general population (Manoj & Baker 2007). While the effect on command and control is obvious, the isolation felt by the populace may be difficult to quantify. The development and gradual introduction of 4G networks can mitigate the loss-of-service and network-meltdown problems that disruptive events cause in traditional mobile networks. A 4G network has no single network operator or provider, but rather makes use of any available Radio Access Network (RAN), called an Access Provider (AP). A 4G user need not have a pre-existing relationship with an AP before being able to make use of it.

2. Current State of Mobile Communications


If a disruptive event were to occur today, first responders (emergency services, aid organisations, etc.) would likely use the P25 (also known as APCO-25) network. However, as the scope of the event increases, the need to coordinate with agencies and services that are not part of the P25 network becomes apparent. In this case, the use of existing public telecommunication infrastructure would become necessary, as the P25 network has limited inter-networking capability. The general public would also begin to make use of the publicly available networks as individuals make contact with their own social networks, contributing to network meltdown. These networks are likely to be GSM-based 2/2.5G services, CDMA, and 3G networks. The increasing penetration of Voice over IP (VoIP) networks may also carry such traffic out of the geographic area of the event.

Project 25 (P25) Network
The P25 network is a digital, encrypted network capable of providing voice and limited data services in 12.5 kHz channels. This network has the advantages of being physically separate from the communication networks of the general public, being flexible enough to operate in both digital and analog modes, and being standardised to allow interoperability amongst different vendors and emergency services. Perhaps the greatest advantage of P25 is that it is rapidly deployable to a disaster site and totally isolated from the public networks.
However, as the size of the disaster grows geographically, the network needs to rely on an established (or rapidly deployable) infrastructure of repeaters and base stations. Furthermore, there is a finite number of stations that are possible on a given P25 RAN. Unfortunately, P25 relies on additional infrastructure to provide full telecommunication functionality, i.e., communication outside the immediate P25 network. As the need for rapid access to information, and rapid dissemination of information from a disaster site, becomes more and more pressing, the advantage of being isolated becomes a challenge for P25 networks. Damage to fixed infrastructure repeaters could also be so extensive that not enough mobile repeaters exist to maintain coverage. It is likely, too, that P25 users will need to cross the network boundary onto the PSTN or the Internet. Presently, P25 base stations can be deployed with satellite or microwave backhaul, which can then connect with off-site equipment to interface with other networks. However, this equipment takes time to deploy and align, and may be limited in number or difficult to distribute to a particular site. Measured against modern telecommunication expectations, the P25 network lacks the ability to cross networks and provide geographically diverse communication. Furthermore, P25 is reliant on a specially deployed RAN where redundancy must be provided within the P25 network.
Common Public Access Networks
A number of wide area network technologies are available to the general public, with multiple network operators servicing the same geographic location. A brief outline of these technologies is:
1. GSM
GSM is (globally) the most widely deployed and adopted mobile voice technology. In Australia a number of GSM networks exist, with coverage of most populated regions. Additionally, of the available technologies, GSM has the highest number of subscribers. A GSM network is made up of cells, with each cell servicing a given geographic area. It is possible (and a common deployment practice in densely populated areas) for multiple cells of the same network operator to cover the same geographic area but be

differentiated by frequency. Unfortunately, as tower space is at a premium, multiple cells may be physically located at the same site. This increases available capacity, but does not provide resilience in the case of physical damage to that site.
2. CDMA
The CDMA network in Australia operates in a similar fashion to the GSM network; however, its deployment is considerably smaller. Furthermore, Australia's largest CDMA network (operated by Telstra) is being decommissioned and replaced with the NextG network.
3. UMTS and NextG Networks
Wideband CDMA (W-CDMA) or Universal Mobile Telecommunication System (UMTS) networks, also known as 3G, are rapidly replacing the existing GSM and CDMA networks. The Telstra NextG network is a UMTS network operating at 850 MHz, while other 3G offerings in Australia operate in the typical 3G frequency band of 2100 MHz. UMTS was developed by the Third Generation Partnership Project (3GPP) and extends the GSM standard to meet the requirements of the ITU's IMT-2000 network. As UMTS is an extension of GSM, the same concept of cells exists as in GSM.
4. Wireless Voice over IP (WVoIP)
Due to the increasing number of 802.11 Wireless LAN (WLAN) hot spots and an increase in the number of mobile handsets providing both GSM/3G and 802.11 radios as standard, the use of Wireless Voice over IP (WVoIP) is becoming a cost-effective alternative to traditional mobile voice calls. At present, WLANs are not deployed with the intention of providing global, carrier-grade access, but due to the low hardware cost and simple setup (compared to traditional technologies) the number of available access points can easily blanket most CBD locations.
Vulnerabilities of Current Mobile Technologies
All of the above mobile technologies are vulnerable to some degree of denial of service due to disruptive events. These disruptive events can impact communication systems in a number of different ways:
1. Destruction of Infrastructure
The most obvious disruptive event is the accidental (man-made or natural) or wilful destruction of deployed telecommunication infrastructure. An example of this was the 9/11 terrorist attacks on the World Trade Center, where a large portion of the GSM switching equipment servicing lower Manhattan was located in the twin towers. The terrorist attack had the supposedly unexpected impact of causing a mass outage of mobile communication systems. Another extreme situation may be the physical

destruction, or reduced functionality, of an extremely critical piece of infrastructure such as the GSM Home Location Register (HLR).
2. Radio Interference
Interference in the radio spectrum can impact mobile communication. While this interference is unlikely to be caused by a destructive event, it is worth considering loss of service in any form as a potentially life-threatening situation. For example, during the 2006 Australian Tennis Open in Melbourne, IBM was conducting an RFID demonstration. This demonstration interfered with Vodafone's GSM network, affecting customers of that network in the vicinity of the demonstration (Woodhead 2007). There was a potential for tragedy in this case if someone had required the ability to contact emergency services.
3. Unmaintainable Infrastructure
While a disruptive event may not damage infrastructure directly, the consequences of an event may cause the loss of a critical dependency (such as fuel or gas), or render infrastructure unmaintainable due to lack of physical access. A quantity of communication equipment survived the initial impact of Hurricane Katrina in New Orleans, but as generators ran out of fuel, and with no physical access to transport additional fuel, service was eventually lost.
4. Network Isolation and Upstream Faults
The core of most voice networks is shared amongst a number of different types of traffic. Furthermore, as network operators use IP transit in the core for voice traffic, routing problems can affect the network in different ways. This affects WVoIP services most commonly, as these are typically deployed over consumer-grade broadband connections. For example, in 2006 a hardware failure at a Sydney data centre affected VoIP customers in at least three states and two service providers (Sweeney 2006).
5. Lack of Capacity
Another problem faced by communication networks, especially during a disaster, is lack of capacity. This lack of capacity is commonly referred to as network meltdown.
Network meltdown occurs when the sheer volume of traffic far exceeds the available capacity. The problem is further compounded because, in GSM for example, considerably less capacity exists for establishing a call than for carrying it. So while a cell may have spare capacity for carrying established calls, the Paging and Access Grant Channel (PAGCH) may become overwhelmed by the volume of users trying to establish new calls. While emergency services can be issued with special SIM cards and handsets which cause the Base Transceiver Station (BTS) to give priority to them when issuing

Traffic Channels (TCH), the PAGCH cannot be prioritised, which can lead to life-threatening delays.
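The meltdown effect can be illustrated with the classic Erlang B formula for the blocking probability of a channel group offered a given load. The channel count and loads below are purely illustrative, not taken from any real network:

```python
def erlang_b(channels: int, offered_load: float) -> float:
    """Blocking probability for `channels` servers offered `offered_load`
    erlangs, computed with the standard Erlang B recurrence."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# A small call-setup channel group saturates quickly when attempt
# volume explodes, even though it copes comfortably in normal times.
setup_channels = 4
for label, load in [("normal load", 2.0), ("disaster load", 50.0)]:
    print(f"{label}: blocking probability = {erlang_b(setup_channels, load):.2f}")
```

With these illustrative numbers, blocking rises from under 10% under normal load to over 90% under disaster load: almost every call-setup attempt fails even if traffic channels remain free.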

3. Research Problem
Mobile networks, and to a lesser extent fixed-line networks, are vulnerable to a number of possible disruptive events. It would be theoretically impossible to build a network that is protected from all the above-mentioned points of failure, as the lowest common denominator, physical destruction of infrastructure, cannot be prevented. What is needed is a mobile telecommunications system that is resilient in the face of the different types of vulnerability. Such a network would be able to withstand a reasonable degree of physical damage, interference, extended periods of unmaintained operation, network splits or routing failures, and rapid explosions in demand.

Figure 1. Typical 4G Network


4. 4G Networks
4G networks are designed to allow ease of mobility between network providers and technologies. The intention is to reduce costs to consumers and provide greater capacity and network heterogeneity. However, a side effect of this design is increased resilience to failure and effective redundancy without extra cost.
What is a 4G Network?
Fourth Generation (4G) mobile networks (Ohmori et al. 2000), or Beyond 3G (Steer 2007) networks, will be extremely fast (up to 100 Mbit/s in motion and 1 Gbit/s while stationary) packet-switched networks providing end-to-end IP connectivity via any available wireless provider (Hui & Yeung 2003, Varshney & Jain 2001). The network will utilise Internet-standard Voice over IP (VoIP) protocols to provide rich multimedia telephony services and seamless mobility between networks, and give access to an unlimited number of providers. Users will no longer have a relationship with a single monolithic provider, but will maintain a relationship with a Biller or Agent which enables them to access any provider in the 4G world (Kim & Prasad 2006). As shown in Figure 1, a 4G network can utilise many different RANs, where commonly multiple RANs service the same geographic region, giving the quality of redundancy.
Resilient Networks
As 4G networks were designed to fully realise the benefits of heterogeneous networks, their ability to be resilient in the face of disruptive events is apparent. This resilience is possible due to the high level of mobility afforded by 4G networks. However, the resilient nature of 4G networks is based on the assumption that some radio network remains after a disruptive event. The rationale behind this assumption is that it would appear unlikely for the same event to affect all providers. The obvious exception to this would be large-scale environmental events that would likely affect all providers in some way.
A more detailed analysis of how 4G networks and heterogeneity can provide greater resilience is given in the following section. In the theoretical event that all infrastructure from all providers is rendered unusable, for whatever reason, a new RAN can be rapidly deployed through the use of mobile base stations which, if 4G-aware, will become accessible to all 4G handsets. In this way, communication is restored to the greatest number of users in the shortest period of time.
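The core idea, that a 4G terminal may attach to whichever RAN survives regardless of operator, can be sketched in a few lines. The class and network names below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class RAN:
    name: str
    technology: str
    available: bool

def select_ran(rans: List[RAN]) -> Optional[RAN]:
    """Return the first operational RAN, regardless of operator.

    A terminal locked to one operator fails when that operator's RAN
    fails; a 4G terminal keeps service while any RAN survives."""
    for ran in rans:
        if ran.available:
            return ran
    return None  # total outage: every RAN is down

rans = [
    RAN("Operator A GSM", "GSM", available=False),    # cell site destroyed
    RAN("Operator B UMTS", "UMTS", available=False),  # upstream backhaul fault
    RAN("Public hotspot", "802.11", available=True),  # unaffected by the event
]
survivor = select_ran(rans)
print(survivor.name if survivor else "no service")
```

In this toy scenario both cellular operators are down, yet the terminal retains service through the surviving 802.11 RAN.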

5. Analysis
The benefits of the heterogeneous model of 4G networks can mitigate the various disruptive events described earlier. Unfortunately, 4G networks are still in the design phase and are not available right now. The standardisation process has

begun, and some small research networks exist in Beijing and Tokyo. The usefulness of 4G networks in the various disruptive scenarios described earlier is discussed below:
1. Destruction of Infrastructure
As most physical damage is localised, the likelihood of an event affecting all network operators in the area serviced by the affected infrastructure is small. The alternative situation is widespread events, such as flooding and cyclones, where wide geographic areas are all impacted by the same event. Even in these extreme cases it is plausible that some RANs survive while others do not: not all areas will experience flooding to the same depth, and not all areas affected by a cyclone will have received the same destructive wind speeds.
2. Radio Interference
Radio frequency interference can occur for a number of reasons, both accidental and intentional, from natural or man-made causes. The advantage of 4G networks is that the different RANs operate at different frequencies. This means that while one network, say an 802.11 WLAN, may become inoperable, GSM and UMTS may be unaffected by the interference and continue to provide service.
3. Unmaintainable Infrastructure
Unfortunately 4G networks may not provide an effective solution for unmaintainable infrastructure, as typically events of this nature affect very large geographic areas, as described earlier. Eventually, after a lengthy unmaintained period, all RANs will fail. However, it is possible that each individual RAN may remain in operation for a different length of time. Therefore, the 4G approach will provide availability for the longest possible period of time, albeit with dwindling capacity. In this case, the deployment of emergency mobile equipment to form a new RAN is likely to offer the simplest solution, as infrastructure could be deployed in areas where maintenance is possible, or where existing infrastructure is operational but lacks a critical piece of equipment elsewhere in the network.
An example of this is GSM base stations that may still be operational while the Base Station Controller (BSC) has been damaged. In this situation, it may be simple enough to deploy a new BSC and reactivate the existing undamaged BTSs.
4. Network Isolation and Upstream Faults
Infrastructure that has become unusable due to network isolation or a fault with an upstream provider will also benefit from the 4G approach. While it is likely that some cell sites may share a common medium for backhaul, different network operators are unlikely to use the same backbone providers. Specifically, large national telecommunication carriers will have built their own backhaul networks for their

exclusive use. In this case a loss of backhaul for Provider A is not going to impact Provider B's ability to provide service. The loss of backhaul becomes more apparent, however, when considering 802.11 WLAN RANs, which may be connected to the Internet through a service provider that wholesales backhaul from a national telecommunications carrier. In this case, mobile service and Internet connectivity will both be affected, rendering both the 802.11 networks and the GSM/UMTS network unavailable. However, again the outage should be localised to one particular carrier, with others still able to provide service.
5. Insufficient Capacity
Explosive demand for network resources is better handled by 4G networks, which by design share the demand across a greater number of access points and more frequency space than would be possible if users were locked to one provider. Furthermore, as 4G networks are highly mobile and heterogeneous, it is easier to expand capacity, as any additional RAN will automatically become part of the 4G network and be accessible to users.
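A toy Monte Carlo model makes the heterogeneity argument concrete. It assumes, a strong assumption that large-scale environmental events would violate, that each RAN fails independently with the same probability; all numbers are illustrative:

```python
import random

def p_some_service(n_rans: int, p_fail: float,
                   trials: int = 100_000, seed: int = 1) -> float:
    """Estimate the probability that at least one of n independently
    failing RANs survives a disruptive event (toy model)."""
    rng = random.Random(seed)
    survived = sum(
        any(rng.random() > p_fail for _ in range(n_rans))
        for _ in range(trials)
    )
    return survived / trials

single_provider = p_some_service(1, 0.4)   # terminal locked to one RAN
heterogeneous_4g = p_some_service(4, 0.4)  # any of four RANs will do
print(f"single: {single_provider:.2f}  4G: {heterogeneous_4g:.2f}")
```

Analytically the heterogeneous figure is 1 - 0.4^4, roughly 0.97, against 0.6 for the single-provider case: access to several independent RANs sharply raises the chance that some service survives.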

6. Conclusions
Natural disasters, accidents, and wilful damage to telecommunication infrastructure, or unintended collateral damage to infrastructure from an act against a third party, can disrupt critical mobile telecommunication services. Furthermore, building resiliency and redundancy into isolated mobile networks is expensive and time consuming. 4G networks will provide technology agnosticism and provider independence, which will mitigate outages or disruptions in any one technology or operator. The impacts of physical destruction, radio interference, difficulties in maintaining infrastructure, problems with upstream service providers, and insufficient capacity can mostly be addressed by the use of 4G networks. However, as shown in the analysis, prolonged periods without maintenance over a large geographic area cannot be routed around, and the 4G network will eventually become unavailable. The benefit, though, is that availability will be maintained longer than that of independent operators. A noted challenge is what occurs when a provider, especially one with a large number of users, goes offline and, potentially, all those users migrate to a smaller provider's network. In this case the loss of a large RAN will cause congestion on the smaller RANs, and the difficulties of insufficient capacity will continue to exist. This can, however, be mitigated by deploying the emergency mobile equipment that most large providers keep in reserve for such events. The future of 4G networks promises strong reliability through increased redundancy and resiliency. While these networks are unfortunately not being actively deployed yet, as the standardisation process is just beginning, their eventual deployment will provide increased public safety, as mobile communications will be more available.

Furthermore, the benefits to emergency services will also further enhance public safety.

Acknowledgments
The authors wish to thank the anonymous referees for their invaluable feedback and for the support provided by the Information Security Institute and the Queensland University of Technology.

References
Hui, S. Y. & Yeung, K. H. 2003. Challenges in the Migration to 4G Mobile Systems. IEEE Communications Magazine 41(12):54-59.
Kim, Y. K. & Prasad, R. 2006. 4G Roadmap and Emerging Communication Technologies. Artech House Publishers.
Manoj, B. S. & Baker, A. H. 2007. Communication Challenges in Emergency Response. Communications of the ACM 50(3):51-53.
Ohmori, S., Yamao, Y. & Nakajima, N. 2000. The Future Generations of Mobile Communications Based on Broadband Access Technologies. IEEE Communications Magazine 38(12):134-142.
Steer, M. 2007. Beyond 3G. IEEE Microwave Magazine 8(1):76-82.
Sweeney, P. 2006. Major Outage for iiNet. Whirlpool News, 16 July. Available from: http://whirlpool.net.au/article.cfm/1649
Varshney, U. & Jain, R. 2001. Issues in Emerging 4G Wireless Networks. IEEE Computer 34(6):94-96.
Woodhead, B. 2007. Tennis Open fault hits demonstration. The Australian: IT Business, 19 June.


25
Decision Support Tools for National Security 'Capacity' Problems - A Decontamination Case Study
Dion Grieger and Rick Nunes-Vaz
Defence Science and Technology Organisation, Australia
Abstract
In common with the approaches of other nations, Australia uses a National Counter Terrorism Exercise Program to inform and develop its abilities to prevent, prepare for, respond to and recover from terrorist incidents. The Exercise Program is used to assess Australia's national coordination arrangements and provides opportunities for relevant individual agencies to evaluate their specific counter-terrorism capabilities. One of the common limitations of such programs is that activities are generally designed not to stress the system beyond its ability to cope, and thus capacity boundaries remain untested and potentially undefined. Therefore other tools and techniques need to be considered in order to help agencies, and their commanders, gain insights into these capacity-related problems. Such tools could help provide decision-making support on issues such as resource allocation and could help to identify trigger points at which to call on interstate or federal resources. This paper outlines how one such technique, Coloured Petri Net modelling, has been applied to a chemical decontamination scenario in order to gain insights into both the capacity, and the optimal resource allocation combinations, of a hypothetical local fire service.

Biographies
Dion Grieger joined the Defence Science and Technology Organisation in 2001 as an Industry Based Learning Student. In 2002 he completed a Bachelor of Science (Mathematical and Computer Sciences) majoring in Applied Mathematics. He

returned to DSTO in December 2002 and is now working in the Concept Studies and Analysis discipline. His current areas of research relate primarily to counter-terrorism, but also to crowd behaviour and control, non-lethal weapons and agent-based distillations. Dr Rick Nunes-Vaz has a BSc in physics and an MSc and PhD in physical oceanography. He also holds postgraduate qualifications in systems engineering and tertiary education. Following a successful academic career in oceanography with the University of Wales, the Flinders University of South Australia, and the University of New South Wales, which included launching an international Elsevier marine science journal as its founder and Editor, Rick switched careers in 2000 to join the Defence Science and Technology Organisation. Since that time, Rick has worked to support enhancement of the methodology of Defence Experimentation, including co-authorship of the Guide for Understanding and Implementing Defense Experimentation (GUIDEx, 2006). More recently Rick has managed work in support of national security in both the Defence and civilian sectors. He is currently Head of Counter-Terrorism and Security Analysis in the National Security Branch of DSTO.

Introduction
In common with the approaches of other nations, Australia uses a National Counter Terrorism Exercise Program to inform and develop its abilities to prevent, prepare for, respond to and recover from terrorist incidents. The Exercise Program is used to assess Australia's national coordination arrangements and provides opportunities for relevant individual agencies to evaluate their specific counter-terrorism capabilities. Exercises are commonly required to meet training objectives, which dictates that they should not stress the system beyond its ability to cope; this means that capacity boundaries remain untested and potentially undefined. Therefore other tools and techniques need to be considered in order to help agencies, and their commanders, gain insights into these capacity-related problems. Such tools could help provide decision-making support on issues such as resource allocation and could help to identify trigger points at which to call on interstate or federal resources. This paper outlines how one such technique, Coloured Petri Net modelling, has been applied to a chemical decontamination scenario in order to gain insights into both the capacity, and the optimal resource allocation combinations, of a hypothetical local fire service. The purpose of the paper is to demonstrate how this type of modelling could be used to enhance the National Counter Terrorism Exercise Program. As such, the emphasis should be not on the results themselves, but on how these types of studies have the potential to offer insights into questions that cannot be addressed during a field exercise.

Case Study
This particular case study uses a Coloured Petri-Net (CPN) (Jensen, 1997) model to investigate the boundaries and limitations of a hypothetical local fire service response to a chemical terrorist attack. CPNs have been chosen because they provide a framework for the construction and analysis of systems and assist the user in

modelling through flow problems and in identifying bottlenecks or weaknesses in system design. A CPN model describes a system in terms of the states which the system may be in, and the transitions between these states. CPNs have been used to investigate a wide range of systems including communication protocols (Kristensen et al, 1998) and military operations planning (Zhang et al, 2002) among others. CPN models are dynamic, which makes it possible to investigate the behaviour of a system through visual inspection. They also allow the user to quickly and easily alter variable values and to repeat scenarios, which facilitates the exploration of many what if..? type scenarios. The benefits of this sort of analysis approach are discussed in more detail in the concluding comments of this paper. CPNs also include a time component, which allows the time taken for discrete events to occur to be captured and analysed. The scenario for this case study (illustrated in Figure 1) has been simplified in order to focus on the methodology rather than the details of the scenario. Initially the focus for this case study is on a potential response effort by the state fire service. Whilst this only represents one small component of what may be a larger whole of government response effort, the methodology used is still the same. It is assumed that the area has been cleared of any explosive devices. The scenario begins when the fire service is instructed to rescue all non-ambulatory casualties from the site and bring them out ready for decontamination. It is assumed that the chemical has been identified and that the highest level protective suits are required in the hot zone53. Through discussions with subject matter experts at the South Australian Metropolitan Fire Service a list of the relevant data for this scenario was compiled. 
The data represent aspects of the system such as the contiguous time that decontamination personnel are expected to remain in protective suits and active in their roles, the time it takes to decontaminate an individual, and so on. These data were then used to construct a CPN model of the chemical incident using the software package CPN Tools. A screen capture of the baseline CPN model is shown in Figure 2. The model contains a number of oval-shaped places (or nodes), and transitions, which are represented by a series of arrows and rectangular boxes (labelled t1 to t8).
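The state/transition semantics can be made concrete with a minimal uncoloured, untimed Petri net in plain Python. This is an illustrative sketch only: the place and transition names echo the baseline model, but the arc weights are assumptions, and CPN Tools adds typed tokens, time stamps and much more.

```python
# Minimal Petri net: places hold token counts, and a transition fires by
# consuming tokens from its input places and producing tokens in its outputs.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)       # place name -> token count
        self.transitions = {}              # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Toy rescue cycle: a fire fighter takes a suit, rescues one casualty into the
# decontamination queue, then fire fighter and suit are returned.
net = PetriNet({"FF Pool": 4, "Suits": 2, "Casualties": 3,
                "Working": 0, "Queue": 0})
net.add_transition("t5", {"FF Pool": 1, "Suits": 1}, {"Working": 1})
net.add_transition("t9", {"Working": 1, "Casualties": 1},
                   {"Working": 1, "Queue": 1})
net.add_transition("t7", {"Working": 1}, {"FF Pool": 1, "Suits": 1})
for name in ("t5", "t9", "t7"):
    net.fire(name)
print(net.marking["Queue"], net.marking["Casualties"])   # prints: 1 2
```

A CPN additionally attaches typed ("coloured") data and timestamps to tokens, which is what allows a model like the one described here to track, for example, each fire fighter's remaining air supply and shift time.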

[53] Where hazardous materials may be involved a three-tier system is used to set up boundaries adjacent to the incident site. The hot zone is an area around the incident site that is typically restricted to rescuers and explosive ordnance personnel only. The warm zone extends upwind from the hot zone and is where decontamination and initial triage typically occur. The cold zone is upwind from the warm zone and is where fully decontaminated casualties are further triaged and sent for appropriate treatment.


Decision Support Tools for National Security 'Capacity' Problems - A Decontamination Case Study

Figure 1 Schematic diagram of scenario used in case study (modelled components are shaded)

The scenario begins with user-defined numbers of fire fighters/rescuers, chemical protection suits and non-ambulatory casualties that require rescuing. Rescuers may be required to either tend the decontamination stations or perform rescues. The scenario ends when all casualties have been rescued. Table 1, which describes the function of each place and transition in the CPN, and Table 2, which describes the user-defined variables and their default values for the baseline scenario, help to provide an overview of how the model plays out the scenario.


Figure 2 Screen Capture of Baseline CPN Model of Chemical Incident


Place / Transition: Function

Decon Manning: Holding place for fire fighters currently tending the decontamination unit.
FF Pool: Holding place for fire fighters waiting to be assigned a task (either decontamination manning or rescue).
Rehab: Holding place for fire fighters to recuperate after they have finished their assigned task.
Suits: Holding place for the number of available chemical protection suits.
Working: Holding place for fire fighters currently performing rescues.
FF Decon: Holding place for fire fighters currently being decontaminated.
Rescues: This place indicates the number of rescues that have been performed.
Queue: Holding place for casualties once they have been rescued.
Casualties: Holding place for casualties who are waiting to be rescued.
t1: Transition for fire fighters from Rehab back to FF Pool.
t2, t4: Transitions for fire fighters from Decon Manning to Rehab.
t3: Transition for fire fighters from FF Pool to Decon Manning. Only enabled when there is an insufficient number of fire fighters manning the decontamination unit and whilst there are casualties still remaining to be rescued.
t5: Transition for fire fighters from FF Pool, and for the corresponding number of chemical suits from Suits, to Working. Only enabled when there are fewer rescue teams in the working area than casualties requiring rescue.
t6, t8: Transitions from FF Decon to Rehab for the fire fighters, and back to Suits for the chemical suits.
t7: Transition from Working to FF Decon. Only enabled when a rescuer has performed the maximum number of rescues (defined by the length of time allowed in the chemical suit) OR there are no more rescues to be performed.
t9: Internal transition for rescuers within Working that effectively increases the counter in the Rescues place to indicate that another rescue has been performed. Only enabled when there are still casualties to be rescued and the rescuer has not already completed their maximum number of rescues.
t10: Transition from Casualties to the Queue (effectively a decontamination queue). Only enabled when there is a rescue counter available in Rescues.

Table 1 Description of Place and Transition Functions in the Baseline CPN Model
Variable: Description (value in baseline model in brackets)

DeconSize: The maximum number of fire fighters required at the decontamination unit at any one time.
DeconManTime: The maximum length of time a fire fighter can spend at the decontamination unit.
FFDeconTime: The length of time for a fire fighter, and their chemical protection suit, to be decontaminated.
BreatheTime: The amount of breathing time a fire fighter has whilst in their chemical protection suit.
EvacTime: The length of time for a rescue team to perform a rescue. (5 mins)
EvacRehabTime: The minimum length of time a fire fighter needs to recuperate after undertaking rescues.
DeconRehabTime: The minimum length of time a fire fighter needs to recuperate after manning the decontamination unit.
WorkTime: The maximum length of time a fire fighter can spend performing rescues. (BreatheTime − DeconRehabTime)
ShiftTime: The maximum shift time for a fire fighter.
MinTime: The minimum time a fire fighter needs left in their shift in order to continue to rehab and perform another duty. (BreatheTime + EvacRehabTime)
Evacs: The maximum number of evacuations a fire fighter can perform before moving to decontamination. (WorkTime / EvacTime)
NACas: The initial number of non-ambulatory casualties requiring rescue.
EvacSize: The number of fire fighters in an evacuation team.

Table 2 Description of Variables Used in the Baseline CPN Model


Figure 3 Total Rescue Time for Varying Numbers of Fire Fighters (arbitrary scales)

Figure 4 Total Rescue Time for Varying Numbers of Casualties

Results from the Baseline Scenario

Some results from the execution of the baseline scenario until termination (all casualties are rescued) are presented in the graphs below. Figure 3 shows how the total rescue time varies as the number of fire fighters available for duty varies, whilst Figure 4 shows how the total rescue time is affected by the number of non-ambulatory casualties that require rescuing. The number of chemical suits available is fixed for all scenarios.

The first graph highlights a high payoff for increasing the number of fire fighters until that number is approximately twice the number of protective suits. At this point all suits can be used continuously; anything less means that some suits must wait until fire fighters recover in rehab. After a number of shifts, there is a recovery period during which most fire fighters are resting at the same time. In this case a ratio of fire fighters to suits of approximately 3.75 overcomes this down-time, but there is no benefit in raising the ratio above 3.75. It is important to stress, however, that these numbers apply only to the particular scenario and assumptions used in this model, and should not be interpreted to have any general meaning.

The second graph accords with intuition in suggesting that the total time taken for all rescues is approximately linearly proportional to the number of casualties. The graph may also assist a commander in making decisions about whether extra resources and support are required. For instance, if the chemical agent causes fatalities after victims are exposed for a given time, then the number of required fire fighters and suits could be inferred.

The analysis of the results has been deliberately kept brief in order to ensure the focus is on the types of results that can be obtained from this type of modelling, rather than the actual raw numbers. These results are intended to provide examples of the many 'what if...?' scenarios that could be explored further with the model and refined through other techniques such as field trials and discussion exercises.
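The suit-utilisation argument above can be restated as a back-of-envelope steady-state calculation. This is an illustrative simplification with assumed cycle times, not values or logic taken from the CPN model:

```python
# Hypothetical steady-state estimate: fire fighters needed per suit so that
# every suit is continuously in use. A suit is occupied while its wearer works
# and is decontaminated; the wearer must then also rest, during which a fresh
# fire fighter can take over the suit.

def ff_per_suit(work_time, decon_time, rehab_time):
    suit_cycle = work_time + decon_time                 # minutes a suit is busy per use
    ff_cycle = work_time + decon_time + rehab_time      # minutes a fire fighter is busy
    return ff_cycle / suit_cycle

# Assumed times (minutes): 20 working, 10 in decontamination, 30 in rehab.
print(ff_per_suit(20, 10, 30))    # prints: 2.0
```

With longer rest periods relative to suit time the required ratio grows, which is the kind of mechanism behind the roughly two-to-one payoff threshold and the 3.75 ratio observed for this particular scenario.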
The results from this type of modelling also need to be considered in conjunction with the limitations and assumptions of the model. The fidelity of the output data is dependent on the confidence in the input data, and it is likely that many values in such a model will be estimates only. Hence, the models should be used as decision support tools and not decision making tools, and the results should be considered as insights and not hard evidence.

Possible Extensions to the Scenario

The model described above, in its current state, can be used to explore a myriad of different combinations of variables that may help inform a commander's decision making process. To help provide a greater understanding of what other situations and components of terrorism response might be modelled using CPNs, this section describes some possible extensions to the current model. Whilst the actual implementation of these extensions is beyond the scope of this report, it is hoped that such examples convey the potential of this modelling approach.

The simplest extensions to the model may just involve an increase in the fidelity of the model. For instance, other specific data could be introduced, including the preparation time for fire fighters putting on suits (including an extra fire fighter to assist) or a diminishing number of suits over time to reflect possible failures.

A more complex extension to the model might involve the introduction of other agencies that can also provide a chemical rescue, and/or decontamination, capability. For instance, another local fire service, an interstate fire service or the Australian Defence Force's Incident Response Regiment could be included. This may involve the introduction of a national pool of rescuers, each with different response times depending on their location and operating procedures. There may also be interoperability issues that could be examined with the introduction of other agencies.

Any of the potential extensions to the baseline scenario discussed so far could also be applied to situations where there is more than one incident site. Multiple incidents may be simultaneous or staggered and may be located either in the same vicinity or greatly separated. Perhaps the biggest challenge in extending the CPN modelling is being able to model a number of different capabilities, each provided by different agencies, at the same, or multiple, site(s). The chemical incident example presented in this report is very specific and ignores the response effort relevant to the scenario from other agencies (police and ambulance, for example).

Discussion
This paper has identified one technique that could be used to complement the National Counter Terrorism Exercise Program and assist in providing some insights into the suite of problems that live exercises cannot investigate. In general, modelling and simulation can be used to help decision makers determine system capacities at any particular level, from a single agency up to a national response. As demonstrated here, CPN models are useful in identifying potential system bottlenecks or areas lacking resilience. Appropriate use of such techniques permits exploration of different scenarios and a range of 'what if...?' type questions. Initially, analyses of this type can be used to inform key decision makers of the limits and boundaries of their capabilities and illustrate the signs and symptoms that should trigger decisions to escalate a response (e.g., bringing in fire fighters from a neighbouring jurisdiction). Ultimately, such models might be used to support decision-making in an operational context.

References
Jensen, K., 1997, Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use, Volume 1. Monographs in Theoretical Computer Science, Springer-Verlag.

Kristensen, L., Christensen, S. and Jensen, K., 1998, The practitioner's guide to coloured Petri nets. International Journal on Software Tools for Technology Transfer.

Zhang, L. et al, 2002, A Coloured Petri Net based Tool for Course of Action Development and Analysis. In Proceedings of Workshop on Formal Methods Applied to Defence Systems, pp. 125-134, Volume 12 of Conferences in Research and Practice in Information Technology, Australian Computer Society.

26
Nonlinear dynamic analysis of beam-column subjected to impact of blast loading
H.R. Vali Pour, Luan Huynh and S.J. Foster
The University of New South Wales, Australia
Abstract
Presented in this paper is the development of a reinforced concrete one-dimensional frame element for modelling of distant blast loading on reinforced concrete frames. The model is developed in the framework of a flexibility fibre element formulation. The effect of strain rate at the fibre level is taken into account by a rate dependent damage model. The potential of the formulation is demonstrated with two examples.

Biographies
Hamid Valipour is a research PhD student at the School of Civil and Environmental Engineering, The University of New South Wales.

Luan Huynh is a research PhD student at the School of Civil and Environmental Engineering, The University of New South Wales.

Dr Stephen Foster has an established history of research into the behaviour of reinforced concrete structures using high level modelling techniques and has published widely in the field. The movement into modelling of blast and impulse loadings is an extension of this work.


1. Introduction
Research on the behaviour of structures subjected to explosion, and on mitigating the destructive effects, has received considerable attention in recent years with the heightened awareness of the threat of terrorist attacks. Over the last decades, the local response of shells and plates (i.e. slabs) under the impact of projectiles and blast has been studied widely, but less attention has been paid to the global response of frames.

A wide variety of FE models are available that take account of fluid-structure interaction through a combined Eulerian-Lagrangian mesh, which is well suited to nonlinear analysis of structures subjected to air blast. In such an approach, the propagation of pressure waves in the medium and the blast wave-structure interaction, as well as the response of the structure, are obtained concurrently by solving the equations of state. However, such an analysis can be costly and results are ambiguous (Luccioni et al., 2004). In another approach, the air blast pressure caused by detonation of explosives is obtained using a numerical or empirical method and a time history of the pressure acting on the surface of the structure. This pressure time history is then imposed on the FE model as an external load and analysis of the structural model proceeds. This latter approach neglects the fluid-structure interaction; however, this defect has little influence on the global response of the structure. This study is a first step toward providing a versatile analytical tool to assess the vulnerability of reinforced concrete frames under impact or blast using 1D discrete frame elements.

2. Element formulation
Generally the stiffness and flexibility formulations have the same degree of intrinsic approximation; however, in discrete frame elements, by adopting some kinematic assumptions, the flexibility formulation leads to more efficient solutions. This study takes advantage of the flexibility formulation within the modified fibre element framework, which offers good potential for future development of the model. Figure 1a shows a cantilever with 6 DOFs at each nodal end point. Ignoring the torsional freedoms and satisfying the equilibrium equations for this configuration yields

ΔD(x) = b(x) ΔQA + ΔD*(x)    (1)

       | 1  0  0  0  0 |
b(x) = | 0  x  0  1  0 |    (2)
       | 0  0  x  0  1 |

where ΔQA and ΔD(x) = [ΔN(x) ΔMz(x) ΔMy(x)]T denote the element generalized nodal force increment vector at end A and the section generalized force increment, respectively, b(x) is the force interpolation matrix and ΔD*(x) is the sectional internal force vector increment due to the increment of the distributed loads wz(x) and wy(x).

Figure 1. (a) Cantilever element and DOFs, (b) cross-section sampling points.
Equilibrium of internal stresses and section internal tractions, in total form, gives

D(x) = [N(x) Mz(x) My(x)]T = [ ∫A σ dA   ∫A σ y dA   ∫A σ z dA ]T    (3)

where σ denotes the stress at the cross-section sampling points. Adopting the Navier-Bernoulli hypothesis, the incremental form of compatibility is obtained as

Δε(x) = Δεr(x) + y Δχz(x) + z Δχy(x)    (4)

where Δε(x) is the axial strain increment at the material points, Δεr(x) is the average axial strain increment through the section, and Δχz(x) and Δχy(x) are the increments of section curvature about the z- and y-axes. Substituting Equation 4 and the constitutive law in Equation 3 yields

ΔD(x) = (t)ks(x) Δd(x) = [(t)fs(x)]⁻¹ Δd(x)    (5)

                          | ∫A E(t) dA     ∫A y E(t) dA     ∫A z E(t) dA   |
(t)ks(x) = [(t)fs(x)]⁻¹ = | ∫A y E(t) dA   ∫A y² E(t) dA    ∫A y z E(t) dA |    (6)
                          | ∫A z E(t) dA   ∫A y z E(t) dA   ∫A z² E(t) dA  |


where (t)ks(x) and (t)fs(x) are the section stiffness and flexibility matrices, respectively, and E(t) is the tangent modulus at the integration point. A composite Simpson's scheme is used for estimating the section integrals rather than discretising the section into discrete fibres (Figure 1b).

Using the principle of virtual work for the cantilever configuration AB shown in Figure 1a and subjected to a load vector ΔQA at end A, together with Equations 1, 5 and 6, and then repeating the same procedure for the cantilever clamped at end A and subjected to the load vector ΔQB at end B, gives the element stiffness matrix, (t)Ke (Valipour & Foster 2007), as

            | F⁻¹  F⁻¹ |
(t)Ke = ΓT  |          | Γ,   with F = ∫0..l bT(x) e(t)fs(x) b(x) dx    (7)
            | F⁻¹  F⁻¹ |

where Γ is a transformation matrix. While different solution schemes can be used within the present flexibility formulation, a modified nested iterative algorithm developed by Neuenhofer & Filippou (1997) is adopted in this study.
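The composite Simpson's scheme used for the section integrals in Equation 6 can be sketched as follows. This is an illustrative stand-alone version; the constant-modulus rectangular section is an assumption chosen so the result can be checked against the closed-form bending stiffness:

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule over [a, b] with n sampling points (n odd)."""
    if n < 3 or n % 2 == 0:
        raise ValueError("n must be an odd number >= 3")
    h = (b - a) / (n - 1)
    total = f(a) + f(b)
    for i in range(1, n - 1):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# Bending-stiffness integral (int of y^2 * E over the area) for a rectangular
# section with constant modulus, using 17 sampling points through the depth.
E, width, depth = 24000.0, 127.0, 381.0           # MPa, mm
EI = composite_simpson(lambda y: E * width * y**2, -depth / 2, depth / 2, 17)
exact = E * width * depth**3 / 12.0
print(abs(EI - exact) / exact < 1e-9)             # prints: True
```

Because Simpson's rule is exact for polynomials up to cubic order, the y² and yz integrals in Equation 6 are integrated exactly when E is constant over the section; the dense sampling pays off once E(t) varies with the damage state through the depth.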

3. Constitutive law and rate dependency


The strength of both concrete and steel reinforcing bars increases with loading rate. This rate dependent behaviour of the materials affects the response of reinforced concrete members under impact/blast and has to be considered by an appropriate method. It has been common practice to use a dynamic increase factor (DIF) to transform the material strength and stress-strain relationship to their dynamic counterparts, and this approach is partly adopted for taking account of strain rate in this study. However, for materials such as concrete, due to the effect of rate history, the adequacy of this approach is questionable in some aspects, and in this study a rate dependent damage model, which does not require any DIF, is adopted for the concrete material.

3.1. Concrete

For the current level of development, the effect of shear on the nonlinear response of the material is neglected and shear failure is checked at a section level. In this way, the non-linear behaviour of the concrete can be modelled using a uni-axial material law. The strain rate effect on the concrete behaviour is taken into account by a rheological bi-mass element which separates at a specific strain. In this model, the rate independent stress-strain law is transformed to a rate dependent one by reducing the damage parameter with respect to a delay function, g(t). Taking the rate independent constitutive law as

σ = [1 − D(ε)] E ε + fr    (8)

then the rate dependent counterpart of this stress-strain relationship is obtained by

σ = [1 − Ddyn(ε, t)] E ε + fr    (9)

Ddyn(ε, t) = D(ε) − ∫₀ᵗ (dD(ε)/dε)(dε/dτ) g(t − τ) dτ    (10)

The first term on the right side of Equations 8 and 9 represents the damage, and fr denotes the stress of the frictional unit in the rheological model, which corresponds to irreversible (plastic) deformations. The delay weight function, g(t), can take any form which leads to a reasonable correlation with the experimental results. In this study the following linear function is used, which shows reasonable performance (Zheng et al. 1999).

g(t) = 1 − t/t0  for t ≤ t0;  g(t) = 0 otherwise    (11)

For the sake of simplicity, in the presented work a linear elastic stress-strain relationship is adopted for concrete in tension, whereas a solely damage model is employed for concrete in compression and the irreversible part of the constitutive law, fr, is neglected. The damage parameter for compressive concrete is calculated from the Mander et al. (1988) constitutive law

D(ε) = (ε/ε0)^r / [r − 1 + (ε/ε0)^r]    (12)
where ε0 is the strain corresponding to the compressive strength and r = Eo/(Eo − Esp), in which Eo is the initial modulus of the concrete and Esp is the secant modulus at the point of maximum compressive stress (fcp). In this model the effect of confinement on the ductility and compressive strength is taken into account by modifying fcp and r.

3.2. Reinforcing steel

In this study, the DIF (dynamic increase factor) approach is used to account for the strain rate effect on the yield (fy) and ultimate strength (fu) of the reinforcing bars. Malvar and Crawford (1998) suggest that for steels with 290 ≤ fy ≤ 710 MPa and 10⁻⁴ ≤ ε̇ ≤ 225 sec⁻¹,

DIFfy = (ε̇ / 10⁻⁴)^(0.074 − 0.040 fy/414);   DIFfu = (ε̇ / 10⁻⁴)^(0.019 − 0.009 fy/414)    (13)

where ε̇ is the strain rate (sec⁻¹) and fy is in MPa.

3.3. Shear cap

In the present formulation a shear cap is used to take account of shear failure at the section level. In this study the ultimate shear resistance of the section, Vr, is taken from ACI 318 (2005) as

Vr = 0.17 √fcp b d + Av fyh d / sh    (14)

As Equation 14 lacks consideration of the strain rate effect, it is conservative.
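Equation 13 translates directly into a small helper. This is an illustrative transcription of the formula; the function name is mine, and the validity ranges quoted in the text are enforced:

```python
# Dynamic increase factors for reinforcing steel, Equation 13
# (Malvar & Crawford 1998). Valid for 290 <= fy <= 710 MPa and
# 1e-4 <= strain_rate <= 225 1/s; fy in MPa, strain_rate in 1/s.

def dif_steel(strain_rate, fy, ultimate=False):
    """DIF for yield strength (default) or ultimate strength."""
    if not 290.0 <= fy <= 710.0:
        raise ValueError("fy outside the 290-710 MPa validity range")
    if not 1e-4 <= strain_rate <= 225.0:
        raise ValueError("strain rate outside the validity range")
    if ultimate:
        alpha = 0.019 - 0.009 * fy / 414.0
    else:
        alpha = 0.074 - 0.040 * fy / 414.0
    return (strain_rate / 1e-4) ** alpha

print(dif_steel(1e-4, 480.0))    # quasi-static reference rate, prints: 1.0
```

At the quasi-static reference rate the factor is 1; at higher rates the yield strength is enhanced more strongly than the ultimate strength, so the apparent hardening ratio fu/fy reduces under dynamic loading.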

4. Blast loading
The nature of an above-ground explosion may be defined by considering typical characteristics of an idealized explosion of a spherical charge. At the moment of the shock front arrival, the intensity of air pressure goes from zero to a peak nearly instantaneously. It is, therefore, reasonable to ignore the pressure rise time, Tr (Figure 2). The intensity of overpressure then decreases exponentially with distance and time.

Figure 2. Overpressure-time relationship (positive and negative phases; Pso is the peak overpressure, Pa the atmospheric pressure, Tr the rise time and To the positive phase duration).


Due to centrifugal forces, gaseous particles keep moving and cause pressure to drop below the atmospheric value (see Figure 2) until, eventually, equilibrium is restored.

The time from the explosion until atmospheric pressure is restored is considered the positive phase, To. The empirical approximation of the peak overpressure is given by Henrych (1979) as

pso = a1/Z + a2/Z² + a3/Z³ + a4/Z⁴  MPa    (15)

where Z is the scaled distance (Z = R/W^(1/3)), R is the distance from the centre of the explosion to the point of measurement (in metres) and W is the equivalent TNT charge weight (in kg). The coefficients are: a1 = 1.407, a2 = 0.554, a3 = −0.357, a4 = 0.000625 for 0.05 ≤ Z < 0.3; a1 = 0.619, a2 = −0.0326, a3 = 0.213, a4 = 0 for 0.3 ≤ Z < 1; and a1 = 0.0662, a2 = 0.405, a3 = 0.329, a4 = 0 for 1 ≤ Z ≤ 10. The relationship of overpressure with time is expressed by

ps(t) = pso (1 − t/To) e^(−kt/To)    (16)

where k depends on characteristics of the explosion. Henrych (1979) proposed that

k = 1/2 + 10 pso  for pso ≤ 0.1 MPa    (17a)

k = (1/2 + 10 pso)[1.1 − (0.13 + 0.2 pso)(t/To)]  for 0.1 ≤ pso ≤ 0.3 MPa    (17b)

The negative phase will not be taken into account due to its small maximum negative pressure (Horoschun 2007).

The shock waves expand from the blast source at supersonic speed and are reflected when obstructed by any objects in their passage. The reflected pressure on a flat surface has a magnitude several times larger than the incident shock pressure, and its intensity is governed by the intensity of the incident wave and the angle of impingement. The amplified magnitude of a normal reflected shock pressure is estimated as (Henrych 1979)

pr = 2 pso + 6 pso² / (pso + 7.2)  for pso ≤ 4 MPa    (18)
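The relations in Equations 15 to 18 can be collected into a small set of helpers. This is a sketch: the function names are mine, the coefficients are transcribed from the text, and only the low-pressure branch (17a) of k is implemented.

```python
import math

# Henrych (1979) blast relations, Equations 15-18.
# Pressures in MPa, distance R in m, charge weight W in kg of TNT.

def peak_overpressure(R, W):
    """Peak incident overpressure p_so (MPa), Equation 15."""
    Z = R / W ** (1.0 / 3.0)                      # scaled distance
    if 0.05 <= Z < 0.3:
        a = (1.407, 0.554, -0.357, 0.000625)
    elif 0.3 <= Z < 1.0:
        a = (0.619, -0.0326, 0.213, 0.0)
    elif 1.0 <= Z <= 10.0:
        a = (0.0662, 0.405, 0.329, 0.0)
    else:
        raise ValueError("scaled distance outside 0.05 <= Z <= 10")
    return sum(ai / Z ** (i + 1) for i, ai in enumerate(a))

def overpressure_history(t, pso, To):
    """Incident overpressure at time t after arrival, Equations 16 and 17a."""
    k = 0.5 + 10.0 * pso
    return pso * (1.0 - t / To) * math.exp(-k * t / To)

def reflected_pressure(pso):
    """Normal reflected pressure, Equation 18, valid for p_so <= 4 MPa."""
    return 2.0 * pso + 6.0 * pso ** 2 / (pso + 7.2)
```

As a check against Section 5.2 below, a 560 kg TNT surface burst at 16 m, with the 1.7W surface-burst factor applied, gives peak_overpressure(16, 1.7 * 560) of about 0.27 MPa and a normal reflected pressure of about 0.6 MPa, consistent with the 600 kPa quoted there.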

5. Numerical examples
5.1. Simply supported beam subjected to simulated blast pressure

In this example a simply supported beam (specimen WE5), tested by Seabold (1967) under simulated blast pressure, is analysed. The geometry of the beam, section detail and pressure time history are shown in Figure 3. The material properties are: fcp = 27 MPa, Eo = 24,000 MPa, ε0 = 0.003 (concrete crushing strain), ft = 2.7 MPa (concrete tensile strength), fy = 480 MPa, Es = 200,000 MPa (steel modulus of elasticity), γc = 21 kN/m³ (concrete weight density) and γs = 69 kN/m³ (steel weight density). One half of the beam is modelled by 3 elements using the flexibility method (Figure 3a).

In the flexibility formulation a composite Simpson's integration scheme with 17 integration points through the section depth is used (see Valipour & Foster, 2007). The mid span deflection and velocity histories are shown in Figures 4 and 5, respectively. In this example, the damping effect is neglected and a Newmark direct integration technique with a maximum time step of 0.5 ms is adopted for solving the incremental dynamic equilibrium equations. A lumped mass matrix approach is adopted, with a diagonal component scaling method used for constructing the matrix.
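The implicit Newmark stepping mentioned above can be sketched for a single-degree-of-freedom linear oscillator (average-acceleration variant, gamma = 1/2, beta = 1/4). This is a generic illustration of the integrator, not the paper's nonlinear multi-DOF implementation:

```python
# Implicit (average-acceleration) Newmark scheme for m*u'' + k*u = p(t),
# starting from rest. The effective stiffness is constant for a linear system.

def newmark_sdof(m, k, p, dt, n_steps, beta=0.25, gamma=0.5):
    u, v = 0.0, 0.0
    a = p(0.0) / m                                 # initial acceleration (u = v = 0)
    k_eff = m / (beta * dt ** 2) + k               # effective stiffness
    hist = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = p(t) + m * (u / (beta * dt ** 2) + v / (beta * dt)
                          + (1.0 / (2.0 * beta) - 1.0) * a)
        u_new = rhs / k_eff                        # dynamic equilibrium at time t
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (1.0 / (2.0 * beta) - 1.0) * a)
        v += dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        hist.append(u)
    return hist

# Step load on an undamped unit oscillator: the response oscillates between
# 0 and roughly twice the static deflection.
u_hist = newmark_sdof(1.0, 1.0, lambda t: 1.0, 0.05, 200)
```

The average-acceleration variant is unconditionally stable, which is why implicit Newmark schemes tolerate the relatively coarse 0.5 ms step used in the analyses here.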
Figure 3. (a) Geometry of beam and section, (b) pressure time history (Seabold 1967).

Figure 4. Mid span deflection time history (experiment (Seabold 1967); ADINA (Farag & Leach 1996); this study; linear elastic; rate dependent (Beshara 1993); ABAQUS (Rong & Li 2006)).


Figure 5. Mid span velocity time history (experiment (Seabold 1967); this study; linear elastic; ABAQUS (Rong & Li 2006)).


The total analysis time for constructing the dynamic equilibrium path using the flexibility approach was 0.57 seconds, which shows the computational efficiency of the method. The analysis was performed on a Pentium-M notebook computer with a 1.6 GHz processor.

5.2. Two storey frame subjected to air blast pressure

In the second example, the behaviour of a frame uniformly loaded from a surface explosion of 560 kg TNT at a distance of 16 m is examined. The front face of the column will experience a reflected pressure of 600 kPa at the time of shock front arrival. After time T1 = 1.64 milliseconds (ms), the pressure acting on the front face is reduced and is given by the sum of the incident pressure and the dynamic wind load. At instant T2 = 1.18 ms, the shock front reaches the rear face; the pressure on it begins to increase and attains its maximum at time (T2 + T3) = 3.91 ms. The loading of the column is shown in Figure 6 for drag coefficients of Cf = 1 and Cr = 0.8 for the front face and rear face, respectively.


Figure 6. Blast loading on frame: (a) front face, (b) rear face and (c) pressure time history.
The TNT explosion parameters are determined from the equations for an above-ground explosion (above), with a charge weight of 1.7W substituted for W due to the fact that the explosive energy attacks a smaller area (Mays & Smith 1995). The geometry of the frame, section details, lumped masses and loading scenario are shown in Figure 7. The material properties are: fcp = 25 MPa, Eo = 25,000 MPa, ε0 = 0.002, ft = 2.0 MPa, fy = 400 MPa, Es = 190,000 MPa, γc = 22 kN/m³ and γs = 69 kN/m³. The right face of the frame is subjected to the air blast pressure history shown in Figure 6c.
Figure 7. Geometry of frame, section details and loading scenario.



In the flexibility formulation a Gauss-Lobatto integration scheme with 8 integration points along the element and a composite Simpson's method with 17 points through the section depth are used (Valipour & Foster 2007). The lateral displacement history of node 8 is shown in Figure 8. The frame is modelled with 10 elements corresponding to the beam and column members. A Rayleigh proportional damping with mass factor α = 0.8 and stiffness factor β = 0.0 has been adopted. The time step size is limited to 0.5 ms within the implicit Newmark method. The total analysis time using the flexibility method was 6.05 seconds.
Figure 8. Lateral displacement of node 8 due to air blast pressure (linear elastic and non-linear analyses, with damping neglected and included).

6. Conclusions
A flexibility-based frame element has been developed for nonlinear dynamic analysis of frames subjected to blast loading. The efficiency of the solution, with reasonable accuracy, makes it an appropriate tool for nonlinear dynamic analysis of multi-storey, multi-bay frames. The effect of strain rate is considered by a rate dependent damage model for the concrete and a dynamic increase factor for the reinforcing steel. The results obtained from the new formulation correlate well with experimental results and with other, more expensive, numerical approaches. However, at the present stage of this research, the shear effect at the material level is neglected. Developing the formulation to take account of the shear effect at the material level will be undertaken in the next stage of this research.


7. References
Beshara, F.B.A. 1991. Nonlinear finite element analysis of reinforced concrete structures subjected to blast loading. PhD thesis, City University London.

Brode, H.L. 1959. Blast wave from a spherical charge. The Physics of Fluids 2: 217-229.

Henrych, J. 1979. The dynamics of explosion and its use. Amsterdam: Elsevier.

Horoschun, G. 2007. Design for blast and terrorist attack. Concrete in Australia 32(3): 42-48.

Kaewkulchai, G. & Williamson, E.B. 2004. Beam element formulation and solution procedure for dynamic progressive collapse. Computers and Structures 82(7-8): 639-651.

Luccioni, B.M., Ambrosini, R.D. & Danesi, R.F. 2004. Analysis of building collapse under blast loads. Engineering Structures 26: 63-71.

Malvar, L.J. & Crawford, J.E. 1998. Dynamic increase factors for steel reinforcing bars. In Proceedings of the 28th DDESB Seminar, Orlando, USA.

Mander, J.B., Priestly, M.J.N. & Park, R. 1988. Theoretical stress-strain model for confined concrete. Journal of Structural Engineering ASCE 114(8): 1804-1826.

Mays, G.C. & Smith, P.D. (eds) 1995. Blast effects on buildings. Thomas Telford.

Neuenhofer, A. & Filippou, F.C. 1997. Evaluation of nonlinear frame finite element models. Journal of Structural Engineering ASCE 123(7): 958-966.

Rong, H.C. & Li, B. 2007. Probabilistic response evaluation for RC flexural members subjected to blast loadings. Structural Safety 29: 146-163.

Seabold, R.H. 1967. Dynamic shear strength of reinforced concrete beams, Part 2. Naval Civil Engineering Laboratory, Port Hueneme CA.

Valipour, H.R. & Foster, S.J.F. 2007. A novel flexibility based beam-element for nonlinear analysis of reinforced concrete frames. Uniciv Report, University of New South Wales.

Wu, C. & Hao, H. 2005. Modelling of simultaneous ground shock and air blast pressure on nearby structures from surface explosions. International Journal of Impact Engineering 31: 699-717.


27
Behaviour and Modelling of Glazing Components Subjected to Full-Scale Blast Tests in Woomera
R. Lumantarna, T. Ngo, P. Mendis and N. Lam
University of Melbourne, Australia
Abstract
In March 2007, full-scale blast trials involving a 5000kg TNT charge were conducted in Woomera, Australia. In the trial, blast resistant glazing units were tested against the blast load with satisfactory results. This paper presents a description of the trial and the preliminary numerical modelling, performed with the finite element analysis code LS-DYNA. In addition, the in-house testing carried out to determine the mechanical properties of the monolithic glazing material is presented. A comparison of the results from the numerical model and the experiment will confirm the reliability of the available finite element code in modelling the blast resistant glazing unit.

Biography
Raymond Lumantarna is a PhD Candidate in the Department of Civil and Environmental Engineering, the University of Melbourne, and is a member of the APTES group. Currently, he is investigating the behavior of glazing under blast load. Priyan Mendis is an Associate Professor and Reader at the University of Melbourne. He is the Convener of the ARC Research Network for a Secure Australia and the head of the APTES group. Tuan Ngo is a lecturer and research fellow in the Department of Civil and Environmental Engineering at the University of Melbourne. His research interests include extreme loading (blast, impact and fires) and computational fluid dynamics. Nelson Lam is an Associate Professor and Reader at the University of Melbourne. He is an expert in earthquake engineering and impact dynamics.


1 Introduction
The glazing panel is the weakest part of a building façade, which is designed to protect the occupants and contents of the building from external hazards. In extreme situations, these hazards can include strong wind, blast waves, and acts of terrorism intended to damage or penetrate the building. Under such extreme loads, an untreated monolithic glazing panel will break into hazardous shards, which can be a major source of injuries. In light of recent acts of terrorism involving the use of explosives, such as the Oklahoma City bombing, the Jakarta Australian Embassy bombing, the Bali bombing, and the London Underground bombings, security and government agencies have prioritized the development of blast resistant glazing units. This has led to the development of the glass-clad polycarbonate glazing system. However, the development of this glazing system has been predominantly empirical, and numerous costly trials are required to arrive at an optimal composite system satisfying the required protection level. The full-scale blast trials in Woomera, Australia, provided an opportunity to test the system. The use of computer modelling and numerical simulation has become a reliable and practical method for investigating blast effects on structural components. However, full-scale trials, which require a significant input of both time and money, have not been modelled extensively through numerical simulations. The well-established analytical approach of Air3D and the empirical model provided by CONWEP have been shown to be reliable tools for blast wave quantification (Gupta et al., 2006; Gupta et al., 2007). The analytical approach to the glazing unit response has veered in the direction of probabilistic risk analysis based on single-degree-of-freedom idealization (Stewart & Netherton, 2005).
This publication focuses on the computer modeling and performance of the blast resistant glazing component tested in the 2007 Woomera trial. An overview and observation of the trial are presented in this paper. Several small-scale experiments have been carried out to determine the mechanical properties of the glass element in the composite, the results of which are presented here as well. Finally, Non-Linear Finite Element analyses were performed using LS-DYNA to simulate the response and performance of the blast resistant glazing unit.

2 Woomera Blast Trial 2007


On 6 May 2006 and 23 March 2007, full-scale blast trials took place in the Woomera Prohibited Area, South Australia. The blast trials were conducted in accordance with the regulations stipulated by the UK Ministry of Defence, and were managed by the Australian Department of Defence. The trials attracted participants from the UK, USA and Singapore, as well as Australian-based researchers and consultants from various organizations who were coordinated by the APTES group. There were also a number of commercial participants who took the opportunity to assess the behavior of their products when exposed to blast loading.

The blast trials attempted to simulate as closely as possible the hemispherical blast wave produced by 5000kg of TNT. In both trials, the APTES group tested two identical single storey concrete modules with four openings on the front face. This publication will focus on the March 2007 trial; details of the May 2006 trial can be found in Gupta et al. (2006) and Gupta et al. (2007).

2.1 Module details
Several industry participants provided fitments for the openings with the intention of assessing their respective product performance. Module 1 was located at a 40m stand-off distance from ground zero, and the fitments are summarized as follows:
- Opening 1A was fitted with a 140mm thick concrete panel with N16-120 reinforcement.
- Opening 1B was fitted with a glass-polycarbonate composite.
- Opening 1C was fitted with blast resistant laminated glass:
  - Upper window: 25mm thick blast rated glass-clad polycarbonate with 45mm edge bite.
  - Lower window: 45mm thick ballistic rated (R2) polycarbonate composite with 45mm edge bite.
- Opening 1D was fitted with a 140mm thick concrete panel with wire mesh reinforcement.

Module 2 was located at a 68m stand-off distance from ground zero, and the fitments are summarized as follows:

- Opening 2A was fitted with a blast door incorporating a laminated glass specimen.
- Opening 2B was fitted with a double glazed panel with an air gap between the layers.
- Opening 2C was fitted with a blast resistant glass-clad polycarbonate.
- Opening 2D was fitted with a blast door incorporating glass-clad polycarbonate and polycarbonate composite specimens.

Figure 1. Fully fitted module 1


Figure 2. Fully fitted module 2

Figure 3. Cross section of the glazing unit


The APTES group has several publications that focus on the performance of the fitments on the modules from the 2006 and 2007 trials (Gupta et al., 2006; Gupta et al., 2007). The focus of this paper is the glass-clad polycarbonate fitted into the upper window of opening 1C in the March 2007 trial. The glazing panel used in the window unit and its cross-section are shown in Figure 3.

2.2 Instrumentation
The measurements in this trial were limited to pressure measurements and maximum deflection measurements. Pressure transducers, located on the front and side faces of the modules, were used to capture the peak reflected overpressure and peak incident overpressure of the blast wave respectively. Figure 4 shows the pressure transducer locations on the module. The pressure data and blast modelling are published in Gupta et al. (2006) and Gupta et al. (2007). The maximum deflection measurements were made with mechanical devices designed to record the maximum deflection of the panels, as shown in Figure 5. These devices were fitted on openings 1A, 1C and 1D for the March 2007 trial. They were not fitted on module 2, as the predicted blast pressures were not expected to induce substantial damage to that module and its fitments. However, due to a glitch during the trial, the value recorded on opening 1C was the permanent deflection rather than the maximum deflection. The device consists of a vertical steel hollow section bolted to the base of the cubicle. Embedded in the steel hollow section is a measurement rod located at the desired measurement height (i.e. at the panel mid-point).

Figure 4. Pressure transducers as installed in the May 2006 trial modules

Figure 5. Displacement measurement device


2.3 Observation
Figures 6 and 7 show the damage to module 1 and opening 1C respectively. The concrete module itself was heavily damaged, with major cracks and spalling. However, the module maintained its structural integrity throughout the blast, which suggests that the full reflected pressure would have developed on the specimens. The upper window on opening 1C was damaged, but there was no breach and the entire panel remained attached to the glazing unit frame. The final displacement recorded at the upper panel was 25mm.


Figure 6. Damage to the concrete module

Figure 7. Damaged glazing panel on opening 1C

3 Experimental program
3.1 Experimental setup
In addition to the full-scale blast trials, several in-house experiments were carried out to establish the basic mechanical properties of the glass. The experimental program was divided into two phases. Phase 1 was a static test phase, which included one-way and two-way bending tests; this phase is described in detail in Lumantarna et al. (2006). The second phase was an impact test, used to obtain the dynamic flexural tensile strength of the glazing element; it is briefly described in this publication. In the impact test, 350mm by 50mm glazing specimens of 5mm thickness were subjected to a dropped steel ball bearing. The specimens were placed above a timber frame and clamped at the supports. A preliminary parametric study showed that the frame clamps provide rotational restraint close to a fully fixed condition. A steel ball bearing with a mass of 0.67kg was dropped onto the midpoint of each specimen through a PVC tube, from heights varying from 100mm to 200mm. For measurement purposes, a DYTRAN high-sensitivity Low Impedance Voltage Mode (LIVM) model 3192A seismic accelerometer was attached to the steel ball bearing in order to measure the impact force on the glazing panel. Figures 8 and 9 show the frame setup and the accelerometer setup respectively.


Figure 8. Frame setup

Figure 9. Steel ball bearing with attached accelerometer


3.2 Experimental results
In the static tests, the monolithic specimens exhibited linear elastic and very brittle behavior as expected, while the laminated glass specimens showed limited ductile behavior. A Young's modulus in the order of 60GPa can be inferred from the tests. The ultimate tensile strain of the extreme fibre at fracture was approximately 0.001, which translates to a maximum tensile stress of 40-60MPa (Lumantarna et al., 2006). The impact force in the experiment can be calculated by multiplying the measured acceleration by the mass of the steel ball bearing. The moment distribution on the specimen is shown in Figure 10, where P is the applied load and L is the length of the specimen. In this case, the contribution of the beam inertia forces to the moment distribution can be ignored, as the mass of the impacting object is much greater than the mass of the glass beam. The concept of the inertia contribution is discussed in detail in Lumantarna et al. (2006).


Figure 10. Moment distribution on the specimens (maximum moments of PL/8 at the supports and at midspan)
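The force and stress calculation described above can be sketched as follows. This is a minimal illustration only: the specimen geometry and ball mass are taken from the test description, but the example acceleration value is an assumption chosen for demonstration, not a measured trial value.

```python
# Impact-test post-processing sketch: force from the accelerometer reading,
# then peak flexural stress from the fixed-fixed moment distribution (Figure 10).
# Geometry and ball mass are from the paper; the acceleration value is assumed.

M_BALL = 0.67          # mass of steel ball bearing (kg)
L = 0.350              # specimen span (m)
B = 0.050              # specimen width (m)
T = 0.005              # specimen thickness (m)

def impact_force(accel_ms2):
    """P = m * a, with a taken from the attached accelerometer."""
    return M_BALL * accel_ms2

def flexural_stress(force_n):
    """Peak bending stress for a fixed-fixed beam under a midspan point load:
    M = P*L/8 (Figure 10), sigma = 6*M / (b*t^2) for a rectangular section."""
    moment = force_n * L / 8.0
    return 6.0 * moment / (B * T**2)

if __name__ == "__main__":
    a = 820.0                      # assumed accelerometer reading (m/s^2), ~84 g
    p = impact_force(a)
    print(f"impact force = {p:.0f} N")
    print(f"peak stress  = {flexural_stress(p)/1e6:.0f} MPa")
```

With this assumed acceleration the computed stress comes out in the same range as the ~115 MPa average reported in Table 1, which illustrates the order of magnitude of the accelerations seen in the drops.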


From the results summarized in Table 1, it can be seen that the failure stresses of the specimens were spread almost randomly across a very wide range, with an average maximum stress at failure of approximately 115MPa. This is much higher than the failure stress under quasi-static conditions. The spread of the results was caused by the random nature of the surface flaw distribution on the glazing panel surface. It must be noted that the current sample size would need to be significantly increased to fully capture the variability in the material properties, so the results obtained can only be used as an initial indication. Another research project at the University of Melbourne is currently investigating new testing methods intended to reduce the scatter in the experimental results.
Specimen  Dimension      Drop height (mm)  Maximum stress at failure (MPa)
1         50mm x 350mm   150               68
2         50mm x 350mm   150               90
3         50mm x 350mm   200               175
4         50mm x 350mm   150               107
5         50mm x 350mm   150               90
6         50mm x 350mm   150               152

Table 1: Test Summary
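The scatter in Table 1 can be quantified with a quick calculation; the stress values below are exactly those listed in the table.

```python
import statistics

# Maximum stress at failure (MPa) for the six specimens in Table 1
stresses = [68, 90, 175, 107, 90, 152]

mean = statistics.mean(stresses)     # close to the ~115 MPa quoted in the text
stdev = statistics.stdev(stresses)   # sample standard deviation
cov = stdev / mean                   # coefficient of variation

print(f"mean = {mean:.1f} MPa, stdev = {stdev:.1f} MPa, CoV = {cov:.2f}")
```

A coefficient of variation above 0.3 makes concrete the "very wide range" attributed in the text to the random surface flaw distribution, and underlines why a larger sample would be needed for design values.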

4 Finite Element Modelling


The analysis and design of glazing components to withstand the effects of blast loading is a technical challenge for structural engineers, with many additional considerations on top of the standard design issues. Because of these technical challenges, the design approach has leaned towards an over-reliance on field-testing, which has a high cost in terms of resources. The primary purpose of this project is to build an accurate and reliable analytical model to predict the glazing panel response to blast loading and hence reduce the over-reliance on field-testing. Recent advances in the finite element approach enable the behavior of the glazing panel, including its post-cracking behavior, to be estimated. The analytical model of the glazing panel is shown in Figure 11, and the analysis was carried out using the finite element code LS-DYNA (Hallquist, 2001). In the finite element model, several assumptions had to be made. From visual observation in the field, it can safely be assumed that the frame connections to the concrete cubicle were sufficient to prevent any significant inward deflection and rotation. Considering the post-blast observation (Figure 7), it can be seen that all the shards remained adhered to the polycarbonate and its framing system. Therefore, it is also safe to assume that there is no slip between the glazing components and the frame. Material model 2, *Material_Plastic_Kinematic, and material model 12, *Material_Isotropic_Plastic, were used to model the aluminium alloy frame and the polycarbonate layer respectively. To simulate the behaviour of the glass, material model 110, *Material_Johnson_Holmquist_Ceramics, was used.

4.1 Blast load application
The recorded blast pressures have been analysed and compared against numerical analysis (Gupta et al., 2006; Gupta et al., 2007). The comparison showed that CONWEP (Hyde, 1993) and Air3D (Rose, 2005) computational fluid dynamics simulations could accurately predict the experimental blast pressures. Thus, the blast load in this model is applied using the built-in CONWEP function (*Load_Blast) with input parameters of a 5000kg TNT charge weight, a hemispherical burst, and a 40m stand-off distance.
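As a rough independent sanity check on the applied load, the peak incident overpressure at this scaled distance can be estimated with Brode's classic medium/far-field fit. This is a sketch only: the free-air formula and the surface-burst enhancement factor of roughly 1.8 are textbook assumptions, not values taken from the trial data, and the paper itself applies the load through the more accurate CONWEP model.

```python
# Brode's medium/far-field fit for peak incident overpressure (in bar),
# valid roughly for 0.1 < p_s < 10 bar; Z is the scaled distance in m/kg^(1/3).
def brode_overpressure_bar(z):
    return 0.975 / z + 1.455 / z**2 + 5.85 / z**3 - 0.019

W = 5000.0                 # TNT charge mass (kg)
R = 40.0                   # stand-off distance to module 1 (m)
W_eff = 1.8 * W            # assumed hemispherical surface-burst factor (textbook value)

Z = R / W_eff ** (1.0 / 3.0)
p_bar = brode_overpressure_bar(Z)
print(f"Z = {Z:.2f} m/kg^(1/3), peak incident overpressure ~ {p_bar * 100:.0f} kPa")
```

The result is an order-of-magnitude figure of a few hundred kPa, which is the right regime for heavy damage to unprotected glazing at this stand-off; the CONWEP *Load_Blast input remains the authoritative load description for the model.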

Figure 11. Finite Element Model


4.2 Results
A permanent deflection of 22mm was obtained from the analytical model. As shown in Figure 12, this slightly underestimates the permanent deflection measured in the experiment. The underestimation can be attributed to possible slip between the lamination layers and to a possible small rotation at the framing system's connection to the concrete module in the actual blast trial.

Figure 12. Displacement observation: LS-DYNA model prediction and measured displacement (mm) versus time (msec)

5 Summary and Concluding Remarks


The full-scale blast trials in Woomera, Australia, in March 2007 provided useful insights into the performance of façades and structural components subjected to blast pressures. The glazing units performed extremely well, surviving the blast without any fragmentation at the rear face of the panels, while the framing system held the glazing panel in place throughout the blast duration. The LS-DYNA finite element model described in this paper can predict the response of the glazing panel with reasonable accuracy. Although several input parameters, such as the bulk modulus, shear modulus, pressure at the HEL state, and dynamic tensile strength, were obtained or inferred from experimental results, the model still relies heavily on published recommended constitutive parameters (Holmquist et al., 1995; Holmquist et al., 2001; Cronin et al., 2003). Analytical and experimental investigations to refine these constitutive parameters and to further develop the model are currently underway.

6 Acknowledgements
The authors would like to thank the Trial Manager, Major Darren Mattison, and Deputy Trial Manager, Major John Bishop, for their time and for the information they provided. The authors would also like to acknowledge the support of Mr Graeme Coppock of KBR in the construction of the module. Many thanks are also extended to the participants for their involvement in the trials.

References
Cronin, D. S., Bui, K., Kaufmann, C., McIntosh, G. and Berstad, T. 2003. Implementation and Validation of the Johnson-Holmquist Ceramic Material Model in LS-DYNA. 4th European LS-DYNA Users Conference, Ulm, Germany. D-I:47-59.
Gupta, A., Mendis, P., Lumantarna, R. and Ngo, T. 2006. 'Full Scale Explosive Test in Woomera, Australia.' Transactions of Tianjin University, 12: 56-60.
Gupta, A., Mendis, P., Ngo, T. and Lumantarna, R. 2007. An Investigation on the Performance of Structural Components Subjected to Full Scale Blast Test in Woomera, Australia. Performance, Protection, and Strengthening of Structures under Extreme Loading, Whistler, Canada.
Hallquist, J. O. 2001. LS-DYNA Keyword User's Manual - Volume II. Livermore Software Technology Corp., Livermore.
Holmquist, T. J., Johnson, G. R., Grady, D. E., Lopatin, C. M. and Hertel, E. S. Jr 1995. High Strain Rate Properties and Constitutive Modeling of Glass. 15th International Symposium on Ballistics, Jerusalem, Israel. 237-244.
Holmquist, T. J., Templeton, D. W. and Bishnoi, K. D. 2001. 'Constitutive Modeling of Aluminum Nitride for Large Strain, High-strain Rate, and High-pressure Application.' International Journal of Impact Engineering, 25: 211-231.
Hyde, D. 1993. CONWEP. US Army Corps of Engineers, Vicksburg, USA.
Lumantarna, R., Lam, N., Mendis, P. and Gad, E. 2006. Analytical Model of Glazing Panel Subject to Impact Loading. 19th Australasian Conference on the Mechanics of Structures and Materials, Christchurch, New Zealand. 797-803.
Rose, T. 2005. Air3D v9.0. Cranfield University, UK.
Stewart, M. G. and Netherton, M. D. 2005. Blast Reliability Curves and Uncertainty Modelling for Glazing Subject to Explosive Blast Loading. Proceedings of the 6th Asia-Pacific Conference on Shock and Impact Loads on Structures, Perth, Australia.


28
Smart Cameras Enabling Automated Face Recognition in the Crowd for Intelligent Surveillance System
Y. M. Mustafah
National ICT Australia Ltd. (NICTA), School of ITEE, The University of Queensland

A. Bigdeli
National ICT Australia Ltd. (NICTA)

A. W. Azman
National ICT Australia Ltd. (NICTA), School of ITEE, The University of Queensland

B. C. Lovell
National ICT Australia Ltd. (NICTA), School of ITEE, The University of Queensland
Abstract
Smart cameras are rapidly finding their way into intelligent surveillance systems. Recognizing faces in a crowd in real time is one of the key features that will significantly enhance such systems. The main challenge is that the enormous volume of data generated by high-resolution sensors can be computationally infeasible to process on mainstream processors. In this paper we report on the prototype development of a smart camera for automated face recognition using very high resolution sensors. In the proposed technique, the smart camera extracts all the faces from the full-resolution frame and sends only the image information from these face areas to the main processing unit, vastly reducing data rates. Face recognition software running on the main processing unit then performs the required pattern recognition.

Biographies
Y. M. Mustafah is a PhD student at The University of Queensland. His research interests are in face detection and recognition and in embedded systems for computer vision. A. Bigdeli is a researcher in the SAFE Sensors group at National ICT Australia. His main research is in high-performance real-time embedded systems for computer vision applications. A. W. Azman is also a PhD student at The University of Queensland. Her research interests are in software-hardware partitioning on embedded systems and in embedded systems for computer vision. B. C. Lovell is a Professor in the School of ITEE, The University of Queensland. He is also the research leader of the SAFE Sensors group in National ICT Australia. His main interest is the analysis of video streams for human activity recognition.

Introduction
Video surveillance is becoming increasingly essential as society relies on it to improve security and safety. For security, such systems are usually installed in areas where crime can occur, such as banks and car parks. For safety, they are installed in areas where accidents are possible, such as roads, motorways and construction sites. Currently, surveillance video data is used predominantly as a forensic tool, thus losing its primary benefit as a proactive real-time alerting system. For example, the surveillance systems in London managed to track the movements of the four suicide bombers in the days prior to their attack on the London Underground in July 2005, but the footage was only reviewed after the attack had occurred. What is needed is continuous monitoring of all surveillance video to alert security personnel or to sound alarms while there is still time to prevent or mitigate injuries or damage to property. The fundamental problem is that while mounting more video cameras is relatively cheap, finding and funding human resources to observe the video feeds is very expensive. Moreover, human operators rapidly become tired and inattentive due to the dull and repetitive nature of surveillance monitoring. There is a strong case for automated surveillance systems in which powerful computers monitor the video feeds, even if they only help to keep human operators vigilant by sending relevant alarms. Smart cameras can improve video surveillance systems by making autonomous video surveillance possible. Instead of using surveillance cameras to solve a crime after the event, a smart camera could recognize suspicious activity or individual faces and raise an alert so that an unwanted event could be prevented or the damage lessened.
From another perspective, smart cameras reduce the need for human operators to continually monitor all the video feeds just to detect the activities of interest, thus reducing operating costs and increasing effectiveness.

Smart Cameras
Smart cameras are becoming increasingly popular with advances in both machine vision and semiconductor technology. In the past, a typical camera was only able to capture images. Now, with the smart camera concept, a camera has the ability to generate specific information from the images it captures. So far there does not seem to be a well-established definition of what exactly a smart camera is. In this paper, we define a smart camera as a vision system that can extract information from images and generate specific information for other devices, such as a PC or a surveillance system, without the need for an external processing unit. Figure 1 shows the basic structure of a smart camera. Just like a typical digital camera, a smart camera captures an image using an image sensor, stores the captured image in memory, and transfers it to another device or user over a communication interface. However, unlike the simple processor in a typical digital camera, the processor in a smart camera not only controls the camera functionality but is also able to analyse the captured images to obtain extra information.
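The capture-store-analyse-transmit data path just described can be sketched in a few lines. This is purely an illustrative skeleton, not the paper's firmware: the "analysis" stage is a placeholder that derives trivial information from a stand-in pixel list, standing in for the real on-camera face detection.

```python
# Minimal sketch of the smart-camera data path: capture -> store in memory
# -> analyse -> transmit only the derived information, not the raw frame.
# All stage implementations are illustrative placeholders.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    pixels: List[int]          # stand-in for image data

@dataclass
class SmartCamera:
    memory: List[Frame] = field(default_factory=list)

    def capture(self, pixels):
        frame = Frame(pixels)
        self.memory.append(frame)      # store the captured image in memory
        return frame

    def analyse(self, frame):
        # placeholder "analysis": report counts instead of raw pixel data
        return {"n_pixels": len(frame.pixels),
                "n_bright": sum(1 for p in frame.pixels if p > 200)}

    def transmit(self, frame):
        # a smart camera sends derived information, not the full frame
        return self.analyse(frame)

cam = SmartCamera()
f = cam.capture([10, 250, 130, 240])
print(cam.transmit(f))                 # {'n_pixels': 4, 'n_bright': 2}
```

The point of the sketch is the last step: what leaves the camera is a small derived record, which is the architectural difference from a conventional camera that streams every pixel.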

Figure 1: Basic Smart Camera Architecture.


There are many smart camera products available on the market today from a variety of manufacturers, such as Tattile, Cognex, Matrix Vision, Sony, Philips, EyeSpector, PPT Vision, and Vision Components. However, this is still a very active area of research because many smart camera capabilities can still be improved. One of the most popular works on smart cameras is by Wolf et al. (2006), who introduced a system that can build a complete model of the torso and recognize various gestures made by a person. The work started with research on a human activity recognition algorithm and soon evolved to the implementation of the software algorithm on hardware, including Hi8 cameras and Trimedia video capture boards. Another well-known research project is by Bramberger et al. (2006), who built a prototype camera called SmartCam, a fully embedded smart camera system targeted at various surveillance applications such as traffic control.

Improving Smart Camera Design


We propose a smart camera system that can be used as an aid for face recognition in crowd surveillance. The camera utilizes a high resolution CMOS image capture device and an FPGA-based processor for Region of Interest (ROI) extraction. The proposed system architecture is shown in Figure 2. The system has an internal processor that performs face detection to extract faces from the captured images in real-time. The main motivation for extracting faces inside the camera is to conserve as much bandwidth as possible and to save processing time and memory on the client processor, which performs the face recognition task. Note that even in the dense crowd of Figure 3, the faces suitable for recognition represent only a very small proportion of the image area; in many scenes, faces would represent less than 1% of the image. Thus the smart camera does not overload the client processor by transmitting huge amounts of high-resolution image data that would be discarded immediately after face detection. Such massive data reduction at source, by up to two orders of magnitude, is an immediate and significant benefit of this approach.
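The scale of this reduction can be shown with a back-of-the-envelope calculation. The frame size matches the 5 MP sensor format discussed later, but the face count and face-window size below are illustrative assumptions, not measurements from the paper.

```python
# Back-of-the-envelope data reduction from sending only face windows
# instead of full frames. Scene parameters are illustrative assumptions.

def reduction_factor(frame_pixels, n_faces, face_window_pixels):
    """Ratio of full-frame pixels to the pixels actually transmitted."""
    transmitted = n_faces * face_window_pixels
    return frame_pixels / transmitted

frame = 2592 * 1944            # ~5 MP sensor frame
faces = 10                     # assumed number of detectable faces in the scene
window = 64 * 64               # assumed face window transmitted per detection

print(f"reduction ~ {reduction_factor(frame, faces, window):.0f}x")
```

With these assumptions the factor is roughly 120x, consistent with the "up to two orders of magnitude" reduction claimed in the text; fewer or smaller faces push it higher still.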

Figure 2: Proposed System Architecture.


High Resolution Image Sensor
While there are many smart camera products already available on the market, most use a VGA resolution image sensor (640 x 480 pixels) and some use lower resolutions. Compared to existing image sensor technology, VGA can be considered low resolution and is quite unsuitable for many video surveillance applications, especially crowd surveillance. Crowd surveillance usually covers a wide area with many objects of interest in view, and thus requires a high resolution camera. High resolution images provide much more detailed information about the objects in view. For example, in applications such as face recognition, higher resolution images will help improve the recognition rate; indeed, a very high resolution camera could even read nametags and other insignia. Figure 3 shows an example of a region of interest (ROI) extracted from a scene image of a crowd of people. In this simple experiment, the image window containing the face is extracted manually from the scene image (a) at 5 different resolutions. The face (b) extracted from a 7 MP (megapixel) high resolution image is much more recognizable than (f), extracted from the lower resolution (VGA) scene. The extracted image windows were tested for suitability for automatic face detection using a Viola-Jones face detection module (Viola & Jones 2001) implemented in the OpenCV library. The images (b), (c), (d) and (e), extracted from the 7, 5, 3 and 1 MP images, were suitable for face detection. However, the face could not be correctly detected in image (f) because it does not contain enough detail for the face detection module to work correctly.

Figure 3: Overall scene (a), ROI extracted from the scene at resolutions of 7MP (b), 5MP (c), 3MP (d), 1MP (e) and VGA (f).
CMOS image sensors offer high resolution and low noise output. Due to the low power consumption and high speed of CMOS, it is expected that CMOS based image sensors will eventually outperform CCD based image sensors (Litwiller 2001). There are many CMOS image sensors on the market. Table 1 shows the highest resolution sensor products from three leading CMOS image sensor manufacturers: OmniVision, Micron and Kodak. It is noticeable that the frame rate is inversely proportional to the resolution of the camera. For wide angle surveillance, since no rapid movements of the objects of interest are expected, 5 high resolution frames per second can be considered an acceptable baseline performance for the prototype.
Manufacturer  CMOS Sensor  Resolution (pixels)  Frame Rate at Full Resolution (fps)
OmniVision    OV5620       2608 x 1952          7.5
Micron        MT9E001      3264 x 2448          10
Micron        MT9P001      2592 x 1944          14
Kodak         KAC-5000     2592 x 1944          6

Table 1: High Resolution CMOS Image Sensors
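The practical gap between VGA and a multi-megapixel sensor for face detection can be illustrated with simple pinhole geometry. The lens field of view, subject distance and face width below are illustrative assumptions; the 24 x 24 pixel minimum is the base window size of the standard Viola-Jones detector, used here only as a rule-of-thumb threshold.

```python
import math

# How many horizontal pixels land across a face for a given sensor,
# lens field of view and subject distance. Scene values are assumptions.

def pixels_across_face(h_res, fov_deg, distance_m, face_width_m=0.16):
    """Horizontal pixels covering a face at the given distance."""
    scene_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return h_res * face_width_m / scene_width

FOV = 60.0       # assumed horizontal field of view (degrees)
DIST = 12.0      # assumed subject distance (m)

for name, h_res in [("VGA", 640), ("5 MP", 2592)]:
    px = pixels_across_face(h_res, FOV, DIST)
    verdict = "meets" if px >= 24 else "below"
    print(f"{name}: {px:.0f} px across the face ({verdict} the 24 px detector window)")
```

Under these assumptions a VGA sensor delivers well under 24 pixels across a face at 12 m, while the 5 MP sensor clears the threshold, mirroring the Figure 3 result where only the higher-resolution crops were detectable.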


High Bandwidth Communication Interface
High resolution image sensors require a high data transfer rate. Although a smart camera can preprocess the captured images before they are sent to external devices, it is sometimes necessary for the camera to output the raw captured images, and for a high resolution camera a high bandwidth communication interface is then required. For example, a 5MP sensor working at 10 fps requires 50 megapixels of data to be transferred every second if no compression is performed on the image. Each pixel may represent several bits of data, depending on the color depth of the image. Therefore, without compression, a camera with 5MP high-color (24-bit color depth) frames working at 10 fps would need a communication link with a sustained transfer rate of 1200 megabits per second; no garden-variety PC could keep up with this. There are currently five commonly used high bandwidth video interface standards: FireWire 400 (IEEE 1394a), FireWire 800 (IEEE 1394b), USB 2, Gigabit Ethernet (GigE), and Camera Link. Table 2 shows the general specifications of the five interfaces. USB 2 and FireWire 400 can be considered unsuitable in terms of data transfer speed when compared with the current resolution of CMOS image sensor technology. While Camera Link is suitable for very fast data transfer, it only supports one-to-one device connection, which means a network of cameras could not be supported by this interface. GigE and FireWire 800 can be considered the most suitable interfaces for the proposed high resolution surveillance, as they both offer considerable data transfer speed and allow networking of the cameras. For our first prototype, we decided to use the FireWire 800 interface, because it is currently the more established interface. We will try to incorporate the GigE interface in a future prototype once it becomes more mature. With the introduction of the new GigE Vision camera interface standard, GigE is expected to become the dominant machine vision interface in the near future.
Interface              Data Transfer Rate (Mbps)   Max Cable Length (m)   Max Number of Devices
FireWire 400 (1394a)   400                         4.5                    63
FireWire 800 (1394b)   800                         100                    63
USB 2                  480                         5                      127
GigE                   1000                        100                    no limit
Camera Link            3600                        10                     1

Table 2: Video Interface Standards
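The bandwidth arithmetic above can be sketched as a short calculation (a minimal illustration; the function name and parameters are ours, not from the paper):

```python
def required_bandwidth_mbps(width, height, bit_depth, fps):
    """Sustained link rate (Mbps) needed for an uncompressed video stream."""
    return width * height * bit_depth * fps / 1e6

# 5MP sensor (2592 x 1944 pixels), 24-bit color, 10 frames per second
print(round(required_bandwidth_mbps(2592, 1944, 24, 10)))  # prints 1209
```

Against the figures in Table 2, only Camera Link exceeds this raw rate, which is one reason on-camera preprocessing (transmitting only the regions of interest) matters for a networked camera.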


Reconfigurable Platform for Hardware and Software Processors

Acquiring the appropriate target hardware for a smart camera processor is an important issue. While an Application Specific Integrated Circuit (ASIC) provides a high-performance and power-efficient platform, it lacks flexibility and can be very expensive due to the high non-recurring engineering (NRE) cost. A Digital Signal Processor (DSP), on the other hand, has only a single flow of control, which can make real-time constraints difficult to meet. The general-purpose processor (GPP) also struggles to meet real-time constraints due to poor execution-time predictability. Garcia et al. (2006) suggested that reconfigurable hardware is the best option and is quite cost-effective for an embedded system. Presently, Field Programmable Gate Arrays (FPGAs) are among the most widely used and competitive reconfigurable hardware platforms on the market.

One of the key aspects of an FPGA is its large array of parallel logic and registers, which enables designers to produce effective parallel architectures. Parallel processing is an important feature for embedded systems that require high-level computation in real time, for example face detection on a smart camera processor. Parallel processing allows information to be transferred effectively and end results to be obtained faster, since processing tasks are segregated and carried out concurrently. At the same time, parallel processing considerably reduces power consumption, especially in processes that involve back-to-back memory access. Additionally, an FPGA allows a microprocessor to be incorporated on the same chip. For our smart camera prototype, the Spartan-3 FPGA was chosen as the main processing platform. We believe that the Spartan-3 platform poses an interesting challenge in which hardware resources must be used optimally in every aspect of the design.

Robust Face Recognition System

The uncontrolled environment of crowd surveillance makes a robust face recognition system a necessity. Ideally, a robust face recognition system would recognize faces regardless of expression, angle, features and lighting conditions. A face recognition system consists of a face detection part and a face classification part. In order for the system to recognize a particular face, the face must first be detected and extracted from the captured scene image. The face is then normalized and forwarded to the face classification processor, where it can be recognized by comparing it to the faces stored in the database. The face recognition approach to be implemented on the system is that proposed by Shan et al. (2006). Their system comprises three major components: 1) a Viola-Jones face detection module (Viola and Jones, 2001) based on a cascade of simple binary features to rapidly detect and locate multiple faces, 2) a normalization module based on the eye locations, and finally 3) Adaptive Principal Component Analysis to recognize the faces. As stated earlier, the face detection part (1) will be implemented on the FPGA platform of the camera while the remaining modules will be implemented on the client PC. The face detection and face recognition processes usually require considerable computing power and can take significant time when running on a standard PC. In a hardware implementation, however, the processes can be decomposed and run in parallel so that less time is taken to execute them. FPGAs provide a flexible reconfigurable platform for realizing a suitable processor architecture. If sufficient parallelism is applied, it is possible for the overall process to run in real time.
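The speed of the Viola-Jones detector comes from evaluating rectangular binary features in constant time over an integral image (summed-area table). A minimal sketch of that core operation follows (the function names are ours, not from the implementation described above):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0  # running sum of the current row
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the rectangle [x0..x1] x [y0..y1], in O(1)."""
    s = ii[y1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s
```

Each binary feature is a difference of two or more such rectangle sums, so once the integral image is built, every feature costs only a handful of lookups regardless of its size — the property that makes a hardware-parallel cascade attractive on an FPGA.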

NICTA Smart Camera Prototype


Our smart camera platform was designed based on the principles outlined in the previous section. Table 3 summarizes the basic specifications of our prototype while Figure 4 shows our smart camera prototype.

Parameter            Value
Sensor Type          CMOS
Resolution           2592 x 1944
Processing Platform  Spartan-3 FPGA
Comm. Interface      FireWire 800
Dimensions           90 x 90 x 150 mm

Table 3: Specification of Smart Camera

Figure 4: Smart Camera Prototype.

Conclusion and Future Work


Smart cameras are slowly being introduced in emerging surveillance systems. They usually perform a set of low-level image processing operations on the input frames at the sensor end. This paper reported on our prototype development of a smart camera for automated face recognition using a high resolution (5MP) sensor. In the proposed technique, the smart camera extracts all the faces from the full-resolution frame and sends only the pixel information from these face areas to the main processing unit. Face recognition software running on the main processing unit then performs the required pattern recognition algorithm. The main challenge in this project is to build a stand-alone, low-power smart camera system that integrates real-time face detection for crowd surveillance. Our future work will involve implementing a robust face detection algorithm on the camera.
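The bandwidth saving behind transmitting only face regions can be illustrated with assumed numbers (the face count and crop size below are hypothetical examples, not measurements from the prototype):

```python
BIT_DEPTH, FPS = 24, 10  # 24-bit color at 10 frames per second

def stream_mbps(pixels_per_frame):
    """Raw stream rate in Mbps for a given per-frame pixel count."""
    return pixels_per_frame * BIT_DEPTH * FPS / 1e6

full_frame = stream_mbps(2592 * 1944)     # entire 5MP frame every frame
faces_only = stream_mbps(20 * 128 * 128)  # e.g. 20 faces cropped at 128x128
print(f"{full_frame:.0f} Mbps vs {faces_only:.0f} Mbps")
```

Under these assumptions the link load drops from roughly 1200 Mbps to under 80 Mbps, which is what makes a standard camera interface and PC back end feasible.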


Acknowledgements
NICTA is funded by the Australian Government's Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Research Centre of Excellence programs, and by the Queensland State Government.

References
D. Litwiller. 2001. CCD vs. CMOS: Facts and Fiction. Photonics Spectra.
M. Bramberger, A. Doblander, A. Maier, B. Rinner, and H. Schwabach. 2006. Distributed embedded smart cameras for surveillance applications. Computer. 39: 68-75.
P. Garcia, K. Compton, M. Schulte, E. Blem, and W. Fu. 2006. An Overview of Reconfigurable Hardware in Embedded Systems. EURASIP Journal on Embedded Systems. 1-19.
P. Viola and M. Jones. 2001. Rapid object detection using a boosted cascade of simple features. IEEE Conference on Computer Vision and Pattern Recognition. 511-518.
T. Shan, B. C. Lovell, S. Chen, and A. Bigdeli. 2006. Reliable Face Recognition for Intelligent CCTV. 2006 RNSA Security Technology Conference, Canberra. 356-364.
W. Wolf, B. Ozer, and T. Lv. 2002. Smart cameras as embedded systems. Computer. 35: 48-53.


29
Corporate Counter-Terrorism: The Key Role for Business in the War on Terror
Luke Howie
Global Terrorism Research Centre, Department of Behavioural Studies, Monash University
Abstract
Business organisations have long been a preferred target for terrorists. As such, it is not surprising that businesses play an important role in countering terrorism. Many business leaders and managers acknowledge their obligation to provide effective security to employees, clients, customers, and the general public, but how far does this obligation extend? Should businesses be compelled to be counter-terrorists? What role do businesses already play? In this paper I address some of these important concerns in an examination of the role of business in countering terrorism. Many business leaders are in the unenviable position of juggling an insecure operating environment, particularly in cities, whilst trying to maintain efficiency and effectiveness in the company's core business. These forces are often in conflict and require that businesses exert enormous effort to counter terrorism. This focus on terrorism as an organisational threat, a threat that is not likely to be realised, has perhaps created unreasonable obligations and expectations for businesses. I suggest that complete, no-gaps security is almost never a goal. I argue here that business leaders seek to create pseudo-certainty: a simulation of security. Simulated security acknowledges that most terrorism will be difficult to stop but that some tangible steps can still be taken. Since terrorists seek to cause fear and anxiety in an audience, businesses can best fight in the War on Terror by making people feel safer. I present in this paper a crucial sample of research conducted in Melbourne in 2005 with managers in organisations that are potentially vulnerable to terrorism, operating in the major events and public transport sectors. These informants describe their efforts in creating an image of safety even as no-gaps security proves elusive.


Introduction
Business leaders and managers play an important role in Australian counter-terrorism. There are many forces that demand that leaders and managers close the gaps in security that inevitably appear when operating in uncertain and unstable environments. These forces include legislation that compels organisations to provide safe conditions for workers and the public, moral obligations and ethical imperatives to ensure safety and security, and crucial financial decisions that ensure the protection of assets and resources. In the post-9/11 world, this has increasingly become a challenging task. It has proved exceedingly difficult for organisations to provide stringent security without damaging company image and bottom-line profits. Giorgio Agamben (2001) wrote shortly after 9/11 that we face "extreme and most dangerous developments in the thought of security" and that while discipline produces order, security seeks to regulate disorder. I argue that this type of security was once the domain of the state but now operates from decentralised stages in cities, where organisations are asked to close the gaps in security to play their part in counter-terrorism in Australia. My data and analysis suggest that the reluctance some business leaders and managers exhibit in devoting resources to countering terrorism, and their belief that terrorism poses only a minimal threat, mean that only images of safety (what I call simulated security) are produced in countering terrorism. For one of the informants I interviewed, securing organisational spaces from terrorism boiled down to one key question: what would I say to the Coroner's Court or the Royal Commission if I had to testify following an act of terrorism? In this paper I present two crucial dialogues taken from substantial research that I carried out with employees, business leaders and managers in inner city Melbourne in 2005.
These two dialogues were created through interviews conducted at crucial moments in the city of Melbourne, when terrorism was incessantly reported in the media shortly following the July 7, 2005 London bombings and the October 1, 2005 Bali bombings. These particular interviews were conducted with the security manager of a major events venue and the managing director of a public transport company. This paper is structured in the following way. First, I present some critical literature that explains what I mean by images and simulations. This literature is drawn from social theory and terrorism studies. Second, I present the methodologies I employed for conducting this research. This research is qualitative and, broadly speaking, was conducted and analysed as a constructivist phenomenology. Third, I present the two key dialogues from interviews that I conducted with two managers who were responsible for the security of people and places in Melbourne in 2005. Their stories highlight some of the difficulties associated with securing businesses from terrorism. I conclude by arguing that in confronting the threat of terrorism (a threat that arrives to Australians as an image and a simulation) business leaders and managers do not aim for complete, no-gaps security. Rather, they aim for simulated security through creating an image of safety. I suggest that this is an ironic yet effective method for combating the impact and affect of terrorism in cities where terrorists pose only a minimal threat.


Images and Simulations


Terrorism is witnessed by Australians almost exclusively through the media. In particular, the television news has delivered contemporary terrorism to the living rooms of millions of Australians. Australian witnesses viewed 9/11 through images transmitted to many distant locations even as it was occurring. For these distant witnesses terrorism was a moving picture on a screen. Jean Baudrillard (1987) argued that images are destructive of the event that the image represents. According to Baudrillard (1994: 2): "It is a question of substituting the signs of the real for the real... Never again will the real have the chance to produce itself." This argument can be used to suggest that the image is, in some respects, the most damaging and most real outcome of acts of terrorism. Whilst this certainly would not be the case for people injured in an act of terrorism or for those who have lost loved ones to terrorism, for witnesses in distant locations I suggest terrorism is seen only as an image. Terrorism is, after all, designed to have "a lot of people watching, not just a lot of people dead" (Jenkins, 1987). Consider for a moment a fundamental proposition of psychological studies of the function of the senses. In particular, consider the so-called visual association area of the brain. In this view, the eyes are neutral organs. They transmit stimuli that they do not understand to a part of the brain where visual interpretation occurs. Indeed, it is the brain, we can say, that truly does the seeing (Plotnik, 2005: 79). How then are we to distinguish between the real and the real as an image if it is little more than a matter of association, a matter of considering new information in light of information that we already possess? How does the witness distinguish between an image of fictitious violence and an image of real violence? How would witnesses distinguish clearly between images of disaster movies and images of terrorism?
For the great majority of the public, the WTC [World Trade Center] explosions were events on the TV screen, and when we watched the oft-repeated shot of frightened people running towards the camera ahead of the giant cloud of dust from the collapsing tower, was not the framing of the shot itself reminiscent of spectacular shots in catastrophe movies, a special effect which outdid all others, since... reality is the best appearance of itself (Žižek, 2002: 11).

The implication here is that fiction and reality are not easily distinguishable when they are both images. Baudrillard (2002) argued that on 9/11, images absorbed the energy of fiction. This is perhaps why some changed the channels on their televisions so they could be sure that 9/11 was really happening (Miller, n.d.). I argue that the images generated by terrorism are often comparable to images in catastrophe movies with one notable exception: terrorism is real. From this discussion of images the meaning of simulation emerges. Images that become the events that they represent are simulations. Since terrorism arrives as an image to witnesses of terrorism in Australia, the image is all that many Australian witnesses can see. This does not diminish the impact of terrorism, however. Quite the contrary: for even if terrorism does not occur on the streets of Australia's cities, it is still witnessed in images. Witnessing these images has real impacts and real consequences. I suggest that the simulation is powerful: simulated symptoms can present as real symptoms, and the impact of terrorism and the impact of its simulation are similar and often difficult to distinguish. Whilst the real and the image can appear to be similar, the image endures by existing independently of the event that created it, long after the event has ended. The image is cut away from its grounding as "a distinct totality that breaks with the continuity in life in general" (O'Connor, 2006: 58). As Jean-Luc Nancy argued (2000: 9, 12-13), the image is more than a mere representation of reality. Images of terrorism are real violence, but these images also look like movie fiction. For Baudrillard (1994: 3): "pretending (fiction)... leaves the principle of reality intact: the difference is always clear, it is simply masked, whereas simulation threatens the difference between the true and the false, the real and the imaginary." How different, for example, is 9/11 from the movies World Trade Center (2006) and United 93 (2006) that merely depict the events? In United 93 real flight crew and real pilots were used as extras. But even if images perfectly simulate events, images will still never be as real.

Methodologies
What I suggest with this literature is that witnesses of terrorism in places where terrorism does not generally occur (places such as Australia) do still witness terrorism, but they do so on television and in other media spaces. These images, I suggest, are more than images: they simulate terrorism, and simulations are terrifying and powerful. I endeavoured to study this phenomenon by interviewing some of these witnesses to images and simulations in Australia. For the broader research, from which the dialogues presented in this paper are drawn, I conducted interviews in inner city Melbourne between February and November 2005 with people who work in the central business district (CBD). Each interview was semi-structured, based on the typology of interview strategies presented in Silverman (2006: 110), and 40 to 60 minutes in length. In total 55 people were interviewed and another 50 completed written long-answer surveys, as they were unavailable to be interviewed during the research period. Respondents were recruited when they voluntarily responded to posters that I had placed on company notice boards. I had chosen organisations identified as particularly vulnerable to terrorism in the terrorism and business literature (see Alexander and Alexander, 2002; Alexander, 2004; Alexander and Kilmarx, 1979). Whilst a full account of the research I conducted is beyond the scope of the present discussion, I present in this paper two dialogues created through these interviews. These dialogues were crucial outcomes from the broader research and highlight the significance of the role of business in counter-terrorism. The interviews were conducted with reference to Silverman's (2006: 129-130) suggestions for conducting constructivist qualitative research. Constructivists dispute the possibility of uncovering facts, realities or truths, and argue that attempts to establish accuracy, reliability, or validity are problematic (Kitzinger in Silverman, 2006: 129).
I have adapted this constructivist thinking through three assumptions. The first assumption is that there is a context for the phenomenon being researched. To understand a particular phenomenon, researchers must interpret both the interview responses and the context in which the interviews take place. The key contextual features in this research were the global incidences of terrorism occurring in cities throughout the world, the ongoing threat of further terrorism, and working in industries and locations that may be targets for terrorism. Examples include tall buildings, oil companies, and identifiably Western businesses such as McDonald's and IBM. The second assumption is that there are influences and power flowing between researchers and respondents. As such, researchers influence the responses given by informants and the informants influence the researcher's questions and responses: it is unavoidable that this occurs, and when it does occur it should not be covered up or ignored. Where some natural sciences and social sciences attempt to make the researcher invisible in conducting and reporting research, no such attempt is made here. As such, these interviews are presented as dialogues. The final assumption is that the most important information may lie in the detail of the interview and the interview context. This detail may come not just from words but from body language, demeanour, attitude, pauses, humour, and the environment.

Simulated Security and the Image of Safety


In this section, I explore two dialogues with managers who work in Melbourne's CBD. First, I explore Sean's story. Sean was a security manager at a major sporting, cultural and social events venue in the CBD. He was directly responsible for the safety and security of public spaces, customers, and workers. Second, I explore Louis' story. Louis was the managing director of a niche public transport provider that operated in Melbourne and the surrounding suburbs. He too was directly responsible for the safety and security of public spaces in Melbourne and the people who use these spaces, including members of the public, customers, and workers.

Sean's Story

Sean was a security manager at a major sporting, cultural and social events venue. He believed that he was responsible for the lives of thousands, if not millions, of Melburnians. I interviewed Sean on October 4, 2005 in the security management office: an empty, windowless and sterile room underneath the venue. It was three days after bombs were detonated in beachfront restaurants and cafes in Bali, killing 26 people. I asked Sean what he believed terrorism was. Sean argued that terrorism described many acts of violence not limited to what the media labeled as terrorism. Sean noted in particular the Russell Street bombing that had occurred in Melbourne on March 27, 1986.

The Russell Street bombing which, although it was a long time before September 11th, my father-in-law was an ambulance officer who was at Russell Street. Now that can be classed as terrorism to me although locally, things like S11 happen in Iraq and Pakistan and that all the time, but we don't witness that all the time. So when it does happen here it is huge. Over there it is nearly everyday life. I think that's where Australia is so shocked when something does happen. Although they are starting to get used to it (Sean, October 4, 2005).

I attempted to direct the conversation towards how terrorism affected witnesses in Melbourne, but Sean preferred to discuss the vulnerability of locations in areas outside of the CBD.

[Terrorists would target] More outer Melbourne, which in turn affects the inner city. Your reservoirs up Dandenong way, your Sylvan dams, they are probably not as big and your Upper Yarra dam and the hydro electrical station up there. One of the major water infrastructures that supply Melbourne with their clean water. I really wouldn't think they would hit [major events] venues, cause everyone [attends major events], and they haven't hit one yet and I don't think they would. [Major events are attended] all over the world and everyone gets involved... It is not for one culture or one religion. Take cricket for example. It is played everywhere. It is played by Muslim countries, Christian countries, Catholic countries. So I don't think they would hit a sporting event or a sporting stadium anywhere in the world. It would be political, for example, S11 they hit the Pentagon. It might be infrastructure such as London (Sean, October 4, 2005).

I was surprised to hear that a man in charge of security at a major social, cultural and sporting venue did not consider it likely that a sporting event would be targeted. Sean held the firm belief that terrorists would rather target the power or water infrastructure: targets that Sean considered to be the holy grail for terrorists. For Sean it was the nature and structure of a particular city that would determine the target that terrorists would select. As such, in New York it was the financial sector and large iconic buildings.
In London, targeting the public transport network had the effect of interrupting the normal function of the city. Sean believed that terrorists would target Melbourne where it would most disrupt the city. Sean argued that an indirect attack on the water or power infrastructure might be the most effective method. I asked Sean whether he believed the Commonwealth Games (which were held in March 2006, some five months after I interviewed Sean) would be a target for terrorists, given that an attack at the Games would be likely to significantly disrupt Melbourne.

I don't believe so. Obviously, things are going to be stepped up by the authorities. Your Vicpols, your army, defence forces. But I just think that because there are so many, because it is the Commonwealth Games and not the Olympic Games, there is a big difference between the two although a lot of people don't see it. Commonwealth Games is the Commonwealth Games for a reason. I still don't think that they would target something like that. It is always possible though, you never know.

It's that all terrorisms are irregular, erratic. So the response is all reactive. You can only be proactive to a certain extent. Then deal with it as it happens (Sean, October 4, 2005).

Sean would be unpopular in some circles for downplaying the possibility of terrorism at the Commonwealth Games. Similarly, his belief that counter-terrorism security was a reactive endeavour could make many attendees at this major social, cultural and sporting events venue feel unsafe. For Sean's part, he could offer little reassurance for people attending major events in Melbourne:

Infrastructure is the target. The scaffolding that supports the system. They [terrorists] are trying to make a statement. With yourselves [suicide bombing], we can do some damage. We need to rely on our intel and our government. There is not a lot your average Joe on the street can do much about (Sean, October 4, 2005).

It was difficult for Sean to imagine a scenario where security could be guaranteed. This was perhaps especially so when picturing the archetype of contemporary terrorism: the suicide bomber. It was suicide bombers that caused the destruction in New York and Washington on 9/11, on rail networks in Madrid and London, and twice in Bali. This, added to daily reports of suicide terrorism in occupied Iraq, paints a picture of terror that has had a lasting impact. A determined suicide bomber is an organic, thinking, reacting, improvising killer. Sean believed that, most likely, she/he could not be stopped. Sean was, however, obliged by legal and moral imperatives to create a secure environment for employees and patrons who attend major events at this venue. Since complete security was, in Sean's view, not possible, an appearance of security was Sean's goal: this, I argue, was simulated security through creating an image of safety.
Sean and his colleagues worked tirelessly to simulate security in preparing for major events:

A lot of people are asking did we do anything extra [in preparing for a major event]? We did heaps. Probably more so for peace of mind for the general public. Also to stop anything untoward coming into the [venue]. We did things like 100% bag searches which usually we would only do for [very large major events]. We do it in [other major events] because we have more time to do it. It's very hard to do something like that during [very large major events]. You haven't got a lot of time to get people in. At [very large major events] you have [many thousands of] people and to get them into the [venue] it is a matter of 45 minutes to an hour at the most. To do a 100% bag search is mind-boggling to say the least. We can never be sure if stuff is getting through (Sean, October 4, 2005).

The importance of peace of mind in understanding terrorism, and the attempts of terrorists to shatter this peace of mind, has long been a topic for theoretical debate. Horgan (2005: 3) argued that terrorism was designed to create levels of heightened arousal "disproportionate to the actual or intended future threat" posed by the terrorist. In a similar vein, Friedland and Merari (1985) suggested that terrorists sought to create perceptions of terrorism disproportionate to any actual danger posed, and that terrorism has the ability to affect witnesses far beyond the initial destruction and death that it causes. It was therefore important to consider and manage public perceptions when delivering security for major events. Since terrorists seek to psychologically influence a population through a targeted and symbolic act of violence, it would seem appropriate to respond with security measures that create an image of safety and deliver perceptions of a secure venue. In many respects this is symbolic, simulated security. Witnesses to 9/11 and other terrorism need also to witness image-security in order to defy terrorists' goals and to go about their daily activities feeling safe, secure, confident and comfortable.

Louis' Story

The second interviewee was Louis: the managing director of a privately owned public transport company that provides public transport to a niche market in Melbourne and the surrounding suburbs. Our conversation occurred in Melbourne as fine weather returned to the city on November 20, 2005. We met in the boardroom of the company's head office in a historic four-storey building in Melbourne's central business district. I had met Louis at a security conference the previous year and had learnt that he was deeply concerned for the security of his company. He believed that participating in this research provided him with the opportunity to learn something that would enable him to better pursue safety for his workforce, customers and infrastructure. I asked Louis how he would describe the terrorist threat to Melbourne:

I think there is a real threat of terrorism in Melbourne in that Australia has been cited as being on the list of US allies, but we are still participants in the coalition in Iraq and our government is strongly supportive of the US actions in Iraq.
We have got a significant Muslim population including some more radical elements. So I think there is the opportunity there and a reason for potentially choosing Melbourne as a target. Geographically we are actually a long way away. There are some bits of the world that are maybe more exciting for terrorists to attack. And I suppose also if I was looking at Australia just like a tourist, I would probably choose Sydney rather than Melbourne as a terrorist target (Louis, November 20, 2005).

Believing that terrorism was more likely in Sydney than in Melbourne was not, in Louis' opinion, an acceptable position to adopt, given the responsibility he had to the company, its employees, customers and the general public. As the managing director of a public transport organisation, responsible for the security of places and people, Louis felt it was necessary to engage in a variety of activities that lessened the possibility and impact of terrorism. I asked Louis whether he knew how people who worked in his organisation felt about terrorism and whether the pro-active position he adopted towards counter-terrorism had had an impact on others in the organisation.

My impression is that they perceive that there is an increase in threat in the current environment just because there is probably an increase in terrorism throughout the world. But my belief is that I have seen from the way they operate, that they don't feel that it is something that they [pause], that makes them change the way they live and work. Either the way they approach their work or the willingness for coming to work in a transport operation (Louis, November 20, 2005).

Louis and the staff in his organisation had undertaken a variety of activities and measures to protect the organisation. Louis described at length his management security level plan, which incorporated risk management methods into a counter-terrorism protection and mitigation strategy. An evaluation of the terrorism-related risks was undertaken and the possible impacts of terrorism that could affect the organisation were identified. At the management level, detailed plans were formulated for terrorism prevention, recovery and response. At the operational level, an awareness program was developed and scheduled for implementation in the weeks following our interview. This program involved communicating the management level plan and identifying and establishing better staff awareness of terrorism threats. Louis pointed out that the awareness program did not involve sitting people down in classrooms, but rather a comprehensive sharing of the risk analysis planning undertaken at the management level and the encouragement of a full and frank dialogue on the status of the terrorist threat to Melbourne and what it meant for the organisation:

Which means we understand that Australia is on a medium level security alert and we don't believe that there is a significant risk of attack against [company name] (Louis, November 20, 2005).
Staff would then undergo training in identifying potential threats to the organisation during operating times. Staff would be trained to look out for suspicious packages and passenger behaviour, foster security awareness, and develop techniques for responding to an incident or a threat. This would involve liaising with authorities, informing management and emergency services, and looking after themselves, passengers, customers and members of the public. These extensive measures, however, were not, according to Louis, primarily about physically securing the organisation, its people and its infrastructure. Rather, these measures were designed to create an image of safety, security, and terrorism awareness to satisfy employees, government regulators, customers and the public.

I think it is important from both sides. That management demonstrate that we are interested, concerned and are doing something. But also that management demonstrate that we don't believe it is a big risk. That we are doing something, not because we are particularly concerned but we are doing something because we think it is the prudent thing to make sure we are at an appropriate level of alertness. But similarly if we are to be
effective staff need to be aware of what they need to do, they need to participate, they need to be alert and have that interest in participation (Louis, November 20, 2005).

In this way, counter-terrorism techniques may prove to be a method to build organisational cohesion and unity towards security. Cynicism about the threat of terrorism must make way for being seen to be doing something to protect the organisation and the city from terrorism. Furthermore, Louis believed it was important for staff to see that the organisation was taking the threat seriously and was willing to do something about it. Louis has responded in kind with a program to mitigate and respond to a threat that he did not believe would materialise. Much like it was for Sean, terrorism represented a source of extra work for Louis, rather than physical danger. Public transport organisations have been targeted many times. Organisations have responded by spending money on counter-terrorism and security. Governments have encouraged the public to be vigilant and report anything suspicious. Yet Louis believed that little could realistically be done to stop terrorism occurring. Louis' concerns stem less from his inability to fully secure his organisation, to close all of the gaps, than from the added burden that creating simulated security (creating an image of safety) involves:

For me, I've got concerns on two levels. One is the real concern about doing something to address a threat for the obvious reasons that you don't want a threat to eventuate. The other one is from a liability perspective, which I don't like to be the reason to do something, but in a way it is almost more real. That I know that as someone in charge of a company's operations, there are a certain amount of things that you need to do to cover yourself and in a lot of ways that is driving me more than concern of a threat actually eventuating (Louis, November 20, 2005).
Louis abided by a simple rule, and it is the golden rule, for security simulators in distant cities eager to create an image of safety: What would I say to the Coroner's Court or the Royal Commission to defend the company's and my actions? (Louis, November 20, 2005).

In protecting his organisation from terrorism, Louis adopted methods for simulating security by creating an image of safety. Rather than trying to provide complete, no-gaps security (a goal that Louis believed was not really attainable), he created security to a point where he could answer questions in front of a court, a coroner's inquest or a royal commission. I argue that Louis perceived a moral imperative, but it was an imperative that extended only to appearances, images, and simulations. The company had to appear as though it was doing all that it could to prevent terrorism and mitigate its impacts and, in a way, Louis was achieving this. It is unsurprising that images of terrorism are perhaps best countered by more images: images that paint a picture of security and safety.

Corporate Counter-Terrorism: The Key Role for Business in the War on Terror

Conclusion: The Security-Appearance Paradox


I will conclude by suggesting that there would seem to be an inherent paradox between security and appearance. Securitising spaces and people for Sean and Louis was not about creating powerful, impenetrable, no-gaps security. Rather, it was about simulating security by creating an image of safety. I suggest that spaces and people may only need to appear secure to maintain effective protection from terrorism. I argue that what occurred in New York on 9/11 was violence, but to witnesses in distant locations it was terrorism. As such, those who are charged with protecting people and places from terrorism should not necessarily be trying to prevent violence; they should be trying to prevent terrorism: that emotional feeling of terror, fear and dread. This is not to say that there should not be real security: real security can prevent ordinary crime and routine security threats. But this security should not hope to stop terrorism. Preventing terrorism with security can be seen as a metaphor much like the metaphor of the War on Drugs: police are not shooting syringes, pills and bottles on a battlefield, but they are trying to prevent people from using drugs. Security will not protect the witness against the emotion of terror. But it can put their minds at ease. Creating images of safety achieves this goal: it may be the only true counter-terrorism tactic.

Business leaders and managers who are responsible for spaces and people are compelled to close the gaps in security lest they find themselves before a coroner's inquest or a royal commission. Yet their response of creating an image of safety by simulating security will do little to prevent violence, but it will prevent and mitigate terrorism. Images of safety foster faith and trust in the city's institutions and allow people to engage in insecure and risky activities with peace of mind.

References
Agamben, G. (2001), 'On Security and Terror', European Graduate School Homepage, available at http://egs.edu/faculty/agamben/agamben-on-securtiy-and-terror.html, retrieved on April 30, 2007.
Alexander, D. and Alexander, Y. (2002), Terrorism and Business: The Impact of September 11, 2001, Transnational Publishers, Ardsley.
Alexander, D. (2004), Business Confronts Terrorism: Risks and Responses, The University of Wisconsin Press, Madison.
Alexander, Y. and Kilmarx, R. (Eds) (1979), Political Terrorism and Business: The Threat and Response, Praeger, New York.
Baudrillard, J. (1987), The Evil Demon of Images, The Power Institute of Fine Arts, New South Wales.
Baudrillard, J. (1994), Simulacra and Simulation, The University of Michigan Press, Ann Arbor.
Baudrillard, J. (2002), The Spirit of Terrorism and Requiem for the Twin Towers, Verso, London.
Friedland, N. and Merari, A. (1985), 'The Psychological Impact of Terrorism: A Double-Edged Sword', Political Psychology, Vol. 6, No. 4, pp. 591-604.
Horgan, J. (2005), The Psychology of Terrorism, Routledge, London.
Jenkins, B. (1987), 'The Future Course of International Terrorism', in P. Wilkinson and A. Stewart (eds), Contemporary Research on Terrorism, Aberdeen University Press, Aberdeen.
Miller, T. (no date), 'Being Ignorant: Living in Manhattan', available at www.portalcomunicacion.com/bcn2002/n_eng/contents/11/miller.pdf, retrieved on February 1, 2007.
Nancy, J. (2000), 'Of Being Singular Plural', in Being Singular Plural, Stanford University Press, Stanford, pp. 1-99.
O'Connor, C. (2006), 'Cut Together', Film Philosophy, 10.2, September, pp. 55-66.
Plotnik, R. (2005), Introduction to Psychology, 7th Edn, Thomson Wadsworth, Belmont.
Silverman, D. (2006), Interpreting Qualitative Data: Methods for Analyzing Talk, Text and Interaction, 3rd Edn, Sage, London.
Žižek, S. (2002), Welcome to the Desert of the Real, Verso, London.


30
Robust Face Recognition Techniques for Smart Camera Based Surveillance
Ting Shan, Abbas Bigdeli, Brian C. Lovell, Conrad Sanderson, Shaokang Chen and Erik Berglund
NICTA and School of ITEE, The University of Queensland
Abstract
With the growing interest in security and surveillance, Smart Cameras are finding their way into intelligent surveillance systems. Many cameras perform a set of low-level image processing operations, such as motion detection, on the input frames at the image sensor. In this work we are attempting to aid the transition of face recognition to a smart camera in intelligent surveillance systems and also handheld embedded devices such as mobile phones. In 2004, we proposed Adaptive Principal Component Analysis (APCA) and Rotated APCA, which both perform well against lighting and expression variations. Extending these methods in 2006, we developed a face feature model which enables the synthesis of realistic frontal face images from non-frontal face images over a wide range of pose angles. The resulting recognition system can achieve remarkable recognition rates on face images despite large variations in lighting, expression, and pose. The proposed recognition techniques are not computationally intensive and are as such well suited to smart camera environments.

Biographies
Dr Ting Shan joined NICTA Queensland Lab as a researcher in the Safeguarding Australia Program (Smart Sensors Project) in January 2007, after completing his PhD degree with The University of Queensland and NICTA. Ting Shan's research interest is in the field of computer vision and pattern recognition, particularly in face detection, pose variation compensation, multi-view face recognition, and still-image and
video-based face recognition. He holds 2 patents in the area of face recognition and has published 10 papers in book chapters and international conferences.

Dr. Abbas Bigdeli has more than 6 years' experience in consultancy, scientific research and technology leadership in the areas of digital signal and image processing, computer architecture, and information security. Abbas is a member of the IEEE and has published over 40 papers in journals, book chapters and international conferences. He has 3 patents in the areas of information security and computer vision. He has acted as a technical reviewer for several journals and conferences, including the Journal of Microprocessors and Microsystems, the European Signal Processing Association, the Australian Journal of Research and Practice in IT, as well as conferences such as FPL, IEEE VLSI and EUSIPCO.

Professor Brian C. Lovell was born in Brisbane, Australia in 1960. He received the BE in electrical engineering in 1982, the BSc in computer science in 1983, and the PhD in signal processing in 1991, all from the University of Queensland (UQ). Professor Lovell is Research Leader in National ICT Australia and Research Director of the Intelligent Real-Time Imaging and Sensing Research Group in the School of ITEE, UQ. He was President of the Australian Pattern Recognition Society 1995-2005, and is a Senior Member of the IEEE, a Fellow of the World Innovation Forum, a Fellow of the IEAust, and voting member for Australia on the Governing Board of the International Association for Pattern Recognition since 1998. Professor Lovell was Technical Co-chair of ICPR2006 in Hong Kong (Computer Vision and Image Analysis), and is Program Co-chair of ICPR2008 in Tampa, Florida. His research interests are currently focused on optimal image segmentation, real-time video analysis, and face recognition.

Dr. Shaokang Chen received his BE degree in Automatic Control Engineering from the South China University of Technology in 1999.
In 2005, he received his PhD in Electrical Engineering from The University of Queensland (UQ), Australia. He has worked as a researcher at UQ and NICTA since 2006. Shaokang has done research in real-time image processing, face recognition and medical image processing. He currently focuses on robust real-time face recognition at NICTA Queensland.

Dr. Conrad Sanderson is a researcher with the Sensor Group at NICTA. He received the PhD degree in 2003 from Griffith University (Queensland, Australia), in the area of information fusion adapted to biometric person recognition. He has worked on a number of applied research projects, including robust speech recognition at ATR Laboratories (Japan), audio-visual biometrics at the IDIAP Research Institute (Switzerland), ship classification in infra-red images at CSSIP, as well as natural language processing (author identity deduction) and bioinformatics (feature selection for cancer classification) at NICTA. His current research interests are in the areas of machine learning and computer vision, with applications such as intelligent surveillance.

Dr. Erik Berglund received the equivalent of a BEng from Østfold University College, Norway, in 2000 and a PhD from the University of Queensland, Australia, in 2006. He is currently a senior research fellow at UQ and a researcher at NICTA, Australia. His current research interests include computer vision, chaotic properties of self-organising maps, parameter-less self-organising maps and implicit information processing.
Introduction
Although face recognition has been researched for many years, promising laboratory systems developed with off-line face databases have not fared well when deployed in the real world. This is largely because of the deleterious effect of image acquisition problems such as lighting angle, facial expression, and head pose. Accuracy may drop to 10% or even lower under uncontrolled image acquisition conditions. Such conditions are often encountered in automatic identity capture for video surveillance and for identification of faces from a mobile camera phone.

Indeed, the mobile phone is a very interesting target platform for advanced pattern recognition. Many modern phones can reliably recognize speech in noisy environments. Some recognize handwriting, including Chinese characters. Newer phones are equipped with fingerprint reading sensors and software. At first glance, it seems strange that such advanced pattern recognition algorithms representing decades of research are found on garden variety mobile handsets. Yet the small size of the mobile phone encourages the development of pattern recognition interfaces rather than bulky keyboards, and the high cost of licensing these algorithms is defrayed by the sheer volume of mobile phones being sold every year. With this in mind, the authors have implemented an early prototype of a face recognition module on a mobile camera phone so that the camera can be used to identify the person holding the phone and unlock the keyboard.

Our recent research has led to recognition systems that are much less sensitive to uncontrolled image capture. These methods are described below. The challenge now is to incorporate these new algorithms into embedded platforms to help realize our dream of ubiquitous low-cost face recognition.

Robust Face Recognition


Illumination and expression invariant face recognition

In 2004, we developed Adaptive Principal Component Analysis (APCA) and Rotated APCA (Chen and Lovell 2004), which inherit merits from both PCA and FLD (Fisher Linear Discriminant) (Belhumeur, Hespanha et al. 1997) by warping the face subspace according to the within-class and between-class covariance to compensate for illumination and facial expression variations. We tested RAPCA, APCA and PCA on the Asian Face Image Database (I.M.Lab), which consists of 535 facial images under 5 different standardized illuminations and 428 images with 4 different facial expressions, corresponding to 107 subjects. The test results show that APCA and RAPCA perform well against both lighting variation and expression change, but only with frontal face images as registered in the database. In the mobile camera phone or surveillance scenario, many face images captured by the cameras are not perfectly frontal, so the need for a pose-invariant face recognizer becomes absolutely crucial.
Head pose compensation

Facial Feature Interpretation

First introduced by Cootes et al. (2001), Active Appearance Models are a powerful tool for describing deformable object images. Given a collection of training images for a certain object class where the feature points have been manually marked, the shape and texture can be represented by applying PCA to the sample shape and texture distributions as:

$x = \bar{x} + P_s c$  (1)

$g = \bar{g} + P_g c$  (2)
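The linear shape/texture models above can be sketched in a few lines of NumPy. This is an illustration only, not the authors' implementation; the function names (`build_pca_model`, `to_params`, `from_params`) are ours:

```python
import numpy as np

def build_pca_model(X, var_keep=0.98):
    """Build a linear statistical model  x ~ x_mean + P c  from training
    samples (one flattened shape or texture vector per row of X)."""
    x_mean = X.mean(axis=0)
    # PCA via SVD of the mean-centred data
    _, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    var = s ** 2                      # proportional to the mode variances
    # Keep enough modes to explain var_keep of the total variance
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    P = Vt[:k].T                      # columns are the principal modes
    return x_mean, P

def to_params(x, x_mean, P):
    """Project a sample onto the model: c = P'(x - x_mean)."""
    return P.T @ (x - x_mean)

def from_params(c, x_mean, P):
    """Reconstruct a sample from its parameters: x = x_mean + P c."""
    return x_mean + P @ c
```

In the full AAM, a further PCA on the concatenated shape and texture parameters yields the single parameter vector c that appears in equations (1) and (2).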

where $\bar{x}$ is the mean shape, $\bar{g}$ is the mean texture and $P_s$, $P_g$ are matrices describing the respective shape and texture variations learned from the training sets. The parameters $c$ are used to control the shape and texture change.

Pose Estimation

The model vector $c$ is related to the viewing angle, $\theta$, approximately by a correlation model:

$c = c_0 + c_c \cos(\theta) + c_s \sin(\theta)$  (3)

where $c_0$, $c_c$ and $c_s$ are vectors which are learned from the training data. (Here we consider only head turning; head nodding can be dealt with in a similar way.) For each image labeled with pose $\theta_i$ in the training set, we perform an Active Appearance Model search to find the best-fitting model parameters $c_i$; then $c_0$, $c_c$ and $c_s$ can be learned via regression from the vectors $\{c_i\}$ and the vectors $\{(1, \cos\theta_i, \sin\theta_i)'\}$.
Given a new face image with parameters $c$, we can estimate its orientation as follows. We first transform equation (3) to:

$c - c_0 = (c_c \,|\, c_s)\,(\cos\theta, \sin\theta)'$  (4)

Using $R_c^{-1}$, the left pseudo-inverse of the matrix $(c_c \,|\, c_s)$, this becomes:

$R_c^{-1}(c - c_0) = (\cos\theta, \sin\theta)'$  (5)

Letting $(x_a, y_a)' = R_c^{-1}(c - c_0)$, the best estimate of the face orientation is $\theta = \tan^{-1}(y_a / x_a)$.
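The regression and pseudo-inverse steps above translate directly into a few lines of linear algebra. The sketch below is our illustration (hypothetical names, not the authors' code): the correlation model is fitted by least squares and the pose recovered via the pseudo-inverse of $(c_c\,|\,c_s)$:

```python
import numpy as np

def fit_correlation_model(C, thetas):
    """Learn c0, cc, cs by regressing the model parameters c_i (rows of C)
    against (1, cos(theta_i), sin(theta_i)), as in equation (3)."""
    # Design matrix: one row (1, cos(theta_i), sin(theta_i)) per training image
    A = np.column_stack([np.ones_like(thetas), np.cos(thetas), np.sin(thetas)])
    # Least-squares solution of A @ [c0; cc; cs] ~ C
    coeffs, *_ = np.linalg.lstsq(A, C, rcond=None)
    c0, cc, cs = coeffs
    return c0, cc, cs

def estimate_pose(c, c0, cc, cs):
    """Estimate the viewing angle theta for a new parameter vector c
    via the left pseudo-inverse of (cc | cs), equations (4)-(5)."""
    R = np.column_stack([cc, cs])           # the matrix (cc | cs)
    xy = np.linalg.pinv(R) @ (c - c0)       # estimate of (cos theta, sin theta)
    return np.arctan2(xy[1], xy[0])         # theta = atan2(y_a, x_a)
```

`arctan2` is used rather than a plain `tan^{-1}(y/x)` so the quadrant is resolved correctly.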
Frontal View Synthesis

After we estimate the angle $\theta$, we can use the face model to synthesize new views. Here we will synthesize a frontal view face image, which will be used for face recognition. Let $c_{res}$ be the residual vector which is not explained by the correlation model:

$c_{res} = c - (c_0 + c_c \cos(\theta) + c_s \sin(\theta))$  (6)

To reconstruct at a new angle, $\alpha$, we simply use the parameters:

$c(\alpha) = c_0 + c_c \cos(\alpha) + c_s \sin(\alpha) + c_{res}$  (7)

As we want to synthesize a frontal view face image, $\alpha$ is 0, so this becomes:

$c(0) = c_0 + c_c + c_{res}$  (8)

The shape and texture at angle 0 can then be calculated by:

$x(0) = \bar{x} + Q_s c(0)$  (9)

$g(0) = \bar{g} + Q_g c(0)$  (10)

The new frontal face image can then be reconstructed. A frontal view synthesized from the frontal correlation model can be seen in Figure 1.

Figure 1. Frontal view, turned view, and synthesized frontal view from turned view using frontal rotation model on a face image from the Feret Database (NIST 2001)
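The synthesis step amounts to removing the pose term from c and re-inserting it at angle 0. A minimal sketch (ours, under our own naming, not the published implementation):

```python
import numpy as np

def synthesize_frontal_params(c, theta, c0, cc, cs):
    """Map the parameters of a rotated face to frontal pose:
    eq (6) removes the pose-dependent part, eq (8) re-adds it at 0 degrees."""
    c_res = c - (c0 + cc * np.cos(theta) + cs * np.sin(theta))  # eq (6)
    return c0 + cc + c_res                       # eq (8): cos(0)=1, sin(0)=0

def reconstruct_shape_texture(c_front, x_mean, Qs, g_mean, Qg):
    """Eqs (9)-(10): frontal shape and texture from the frontal parameters."""
    return x_mean + Qs @ c_front, g_mean + Qg @ c_front
```

As a sanity check, a face whose parameters exactly follow the correlation model at angle theta (zero residual) should map to the parameters of the same face at 0 degrees, i.e. c0 + cc.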

Results from experiments


We trained our APCA face recognition model using the Asian Face Database (I.M.Lab). We selected face images from 46 persons with good AAM search results; each person has 7 pose angles, ranging from left 25° and 15°, through frontal 0°, to right 15° and
25°. We then formed two datasets: one is the original image set and the other is the synthesized frontal image set; each contains 322 images. We register only the frontal view images into the gallery and apply both PCA and APCA to the high pose angle images for testing (276 images in each dataset). The overall recognition results from the 3-fold cross-validated trials are shown in Figure 2. It can be seen from Figure 2 that the recognition rates of PCA and APCA on synthesized images are much higher than on the original high pose angle images. The recognition rate increases by more than a factor of 6, from 9% to 57%, for images with a view angle of 25°. Even for smaller rotation angles of less than 15°, the accuracy increases by more than 50%, from 50% to 79%.

[Figure 2: bar chart of recognition rates (0-100%) at 25° left, 15° left, 15° right and 25° right, for PCA and APCA on original and synthesized images]

Figure 2. Recognition rate for PCA and APCA on original and synthesized images with small rotation angles.

Real-time automatic face recognition systems


To date, the majority of the research work on Automated Face Recognition (AFR) has focused on developing novel algorithms and/or improving the efficiency and accuracy of existing algorithms. As a result, most solutions developed (similar to the examples given in previous sections) are typically high-level software programs targeted at general purpose processors that are expensive and usually non-real-time solutions. Since face detection is typically the first step and is frequently a bottleneck in most solutions, due to the large search space and computationally intensive operations, it is reasonable to suggest an embedded implementation specifically optimized to detect faces and recognize them. An embedded solution would entail many advantages, including cost and miniaturization, as only a subset of the hardware components is required compared to the general computer-based
solutions. The resulting solution can then be integrated with other technologies such as security cameras to create smart devices. Now that reliable, accurate, and efficient face recognition algorithms are available, low-cost implementations of robust real-time face detectors can be explored. The following section discusses the common embedded technologies and known embedded implementations of automatic face recognition (AFR) systems. The most common target technologies are: pure hardware, embedded microprocessors, and configurable hardware.

Target embedded technologies


Pure hardware systems are typically based on very large scale integration (VLSI) semiconductor technology implemented as application specific integrated circuits (ASIC). Compared to the other technologies, ASICs have a high operating frequency resulting in better performance, low power consumption, a high degree of parallelism, and well established design tools. However, a large amount of development time is required to optimize and implement the designs. Also, due to the fixed nature of this technology the resulting solutions are not flexible and cannot be easily changed, resulting in high development costs and risk. Theocharides et al. (2004) investigated the implementation of a neural network based face detection algorithm in 160 nm VLSI technology, based on the algorithm proposed by Rowley et al. (1998), which has a high degree of parallelism.

On the other hand, software programs implemented on general purpose processors (GPP) offer a great deal of flexibility, coupled with very well established design tools that can automatically optimize the designs with little development time and cost. GPPs are ideally suited to applications that are primarily made up of control processing. However, they are disadvantaged because minimal or no special instructions are available to assist with data processing (B.D.T. Inc., 2004). Digital signal processors (DSP) extend GPPs in the direction of increasing parallelism and providing additional support for applications requiring large amounts of data processing. The drawbacks of microprocessors (both GPPs and DSPs) are high power consumption and inferior performance compared to an ASIC; the performance of the final solution is limited by the selected processor.

Finally, configurable platforms such as field programmable gate arrays (FPGA) combine some of the advantages of both pure hardware and pure software solutions.
More specifically, they offer the high parallelism and computational speed of hardware, and the flexibility and short design time of software. By inheriting characteristics from both hardware and software solutions, the design space for FPGAs is extended to allow better trade-offs between performance and cost, far superior to those of pure hardware or software solutions alone. From an efficiency point of view, the performance measures for FPGAs (operating frequency, power consumption, and so on) generally fall halfway between the corresponding hardware and software measures.
Several configurable hardware based implementations exist, including those by McCready (2000) and Sadri et al. (2004). McCready designed a novel face detection algorithm specifically for the Transmogrifier-2 (TM-2) configurable platform, a multi-board FPGA based architecture proposed by Lewis et al. (1998). The algorithm was intentionally designed with minimal mathematical operations that could execute in parallel, and engineering effort was put into reducing the search time. The implemented system required nine boards of the TM-2 system, using 31,500 logic cells (LC). The system can process 30 images per second with a detection accuracy of 87%; the hardware implementation is said to be 1,000 times faster than the equivalent software implementation. Sadri et al. (2004), on the other hand, implemented the neural network based algorithm proposed by Rowley et al. (1998) on the Xilinx Virtex-II Pro XC2VP20 FPGA. Skin color filtering and edge detection are incorporated to reduce the search space. The solution is partitioned such that all regular operations are implemented in hardware while all irregular control based operations are implemented on Xilinx's embedded hardcore PowerPC processor. This partitioning allows the advantages of both hardware and software to be exploited simultaneously. The system operates at 200 MHz and can process up to nine images per second.

The examples presented illustrate the obvious compromises between accuracy and algorithm robustness versus the amount of resources required. That is, to improve the performance of the face detection algorithms, we must either increase the embedded design complexity, which generally results in higher power consumption and hardware costs, or settle for a lesser solution.

Design Considerations

As discussed in previous sections, in practice most cameras, such as those used in CCTV, are positioned so that capturing frontal images is not possible.
Furthermore, in real-world applications, lighting conditions are usually not ideal. In order to achieve real-time performance, a combination of optimization techniques (low-resolution images, fixed-point arithmetic, conversion of key functions to low-level code and, most importantly, custom instructions) is applied to improve the overall system speed and performance.

Conclusions and future work


In this chapter, we described the face recognition algorithms Adaptive Principal Component Analysis (APCA) and Rotated Adaptive Principal Component Analysis (RAPCA), which are insensitive to illumination and expression variations. We then extended our previous work to multi-view face recognition by interpreting facial features and synthesizing realistic frontal face images given a single novel face image. The experimental results show that after frontal pose synthesis the recognition rate increases significantly, especially for larger rotation angles.
Furthermore, we examined how an Automated Face Recognition system can be implemented on embedded systems and explored various design approaches. We currently have two prototype systems for real-time Automated Face Recognition. The first prototype was implemented entirely on an Analog Devices Blackfin DSP processor, capable of verifying a face against a database of 16 faces in under a second. This was done as a replacement for PIN identification on a Nokia mobile phone. The second prototype was developed using a hardware-software approach on a NIOS II processor with extended instructions; the NIOS II processor was configured on an Altera FPGA. In our continuing work, we will extend the work described in this chapter to pose angle changes larger than 25°. We will also use the face models and correlation models developed in this chapter to synthesize virtual views under different lighting conditions, facial expressions and poses given a frontal face image. These synthesized virtual images can be used as training samples for face recognition algorithms such as Support Vector Machines (SVM) or neural networks. Thus, we can form a face recognition system using dual algorithms, one being Adaptive Principal Component Analysis and the other an SVM or neural network based algorithm, which would enhance the system performance.

Acknowledgements
This project is supported by a grant from the Australian Government Department of the Prime Minister and Cabinet. NICTA is funded by the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.

References
Aaraj, N., S. Ravi, A. Raghunathan and N. K. Jha (2006), "Architectures for Efficient Face Authentication in Embedded Systems", in Proceedings of the Design, Automation and Test in Europe Conference, Vol. 2, pages 1-6.
Belhumeur, P. N., J. P. Hespanha and D. J. Kriegman (1997), "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19(7), pages 711-720.
Bigdeli, A., M. Biglari-Abhari, S. H. S. Leung and K. I. K. Wang (2004), "Multimedia extensions for a reconfigurable processor", in Proceedings of the 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, pages 426-429.
Chellappa, R., C. L. Wilson and S. Sirohey (1995), "Human and machine recognition of faces: a survey", Proceedings of the IEEE, pages 705-741.
Chen, S. and B. C. Lovell (2004), "Illumination and expression invariant face recognition with one sample image", in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 1, pages 300-303.
Cootes, T. F., G. J. Edwards and C. J. Taylor (2001), "Active appearance models", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23(6), pages 681-685.
I.M.Lab, "Asian face image database", http://nova.postech.ac.kr/
Lewis, D. M., D. R. Galloway, M. Van Ierssel, J. Rose and P. Chow (1998), "The Transmogrifier-2: a 1 million gate rapid-prototyping system", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 6, pages 188-198.
McCready, R. (2000), "Real-Time Face Detection on a Configurable Hardware System", in Proceedings of The Roadmap to Reconfigurable Computing, 10th International Workshop on Field-Programmable Logic and Applications, pages 157-162.
NIST (2001), "Feret Database", http://www.itl.nist.gov/iad/humanid/feret/
Rowley, H. A., S. Baluja and T. Kanade (1998), "Neural network-based face detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, pages 23-38.
Sadri, M. S., N. Shams, M. Rahmaty, I. Hosseini, R. Changiz, S. Mortazavian, S. Kheradmand and R. Jafari (2004), "An FPGA Based Fast Face Detector", in Global Signal Processing Expo and Conference.
Theocharides, T., G. Link, N. Vijaykrishnan, M. J. Irwin and W. Wolf (2004), "Embedded hardware face detection", in Proceedings of the 17th International Conference on VLSI Design, pages 133-138.

31
A face recognition approach using Zernike Moments for video surveillance
Arnold Wiliem, Vamsi Krishna Madasu, Wageeh Boles and Prasad Yarlagadda
School of Engineering Systems Queensland University of Technology, Australia
Abstract
In this paper, a face recognition approach using Zernike moments is presented, with the main purpose of recognizing faces captured by surveillance cameras. Zernike moments are invariant to rotation and scale, and these properties make them an appropriate feature for automatic face recognition. A Viola-Jones detector based on the Adaboost algorithm is employed for detecting the face within an image sequence. Preprocessing is carried out wherever it is needed, and a fuzzy enhancement algorithm is applied to achieve uniform illumination. Zernike moments are then computed from each detected facial image, and the final classification is achieved using a kNN classifier. The performance of the proposed methodology is evaluated on three different benchmark datasets. The results illustrate the efficacy of Zernike moments for the face recognition problem in video surveillance.

Biographies
ARNOLD WILIEM is a PhD student within the Smart Systems research theme in the School of Engineering Systems at the Queensland University of Technology (QUT). He received a BS degree in Computer Science from the University of Indonesia in 2007 and is now pursuing his research in the field of Image Processing and Computer Vision with an emphasis on smart video surveillance systems.

VAMSI MADASU is a Research Fellow in the School of Engineering Systems at QUT. His current research theme is Smart Systems, where he is developing new Image Processing and Computer Vision technologies for diverse applications. Vamsi has a PhD in Computer Science and Electrical Engineering from the University of Queensland and a Bachelor of Technology degree with distinction in Electronics & Communication Engineering from JNTU, India. He is deeply involved in security research through his work in the field of behavioural and physical biometrics for identity verification. Vamsi is a member of IEEE.

WAGEEH BOLES is an Associate Professor within the School of Engineering Systems at QUT. He has many years of experience in a range of computer vision research problems, including text and object recognition, biometrics and computer vision-based sensing. He also maintains a strong interest in engineering education and has previously served as the Assistant Dean of Teaching and Learning in the Faculty of Built Environment and Engineering at QUT.

PRASAD YARLAGADDA is a Professor in the School of Engineering Systems at QUT. He serves as the Director of the Smart Systems research theme in the Faculty of Built Environment and Engineering. He is a Fellow of the Institution of Engineers, Australia, Fellow of the World Academy of Manufacturing and Materials, Fellow of the Institution of Engineers, India, Senior Member of the Society of Manufacturing Engineers, USA, Member of the Institution of Mechanical Engineers, UK, Member of the American Society of Mechanical Engineers, USA, and a number of other professional organisations around the world. Over the last two decades, he has published over 200 research publications in various international journals and conference proceedings, and has edited a number of conference proceedings and special issues of international journals.

1. Introduction
In recent years, face recognition research has gained prominence owing to the heightened security situation across the western world. Face recognition software has been incorporated into a wide variety of biometrics-based security systems for the purposes of identification, authentication and tracking. Unlike humans, who have an outstanding capability for recognizing different patterns and faces in varying conditions, machines are still dependent on ideal face images, and their performance suffers when there are variations in illumination, background, pose angle, obstacles, etc. The problem of automatic face recognition is therefore a complex and challenging task.

Face recognition methods can be classified into two broad classes: structural and statistical approaches. The structural approaches are based on extracting structural or geometrical features of a face, for example, the shapes of the eyes, nose, lips and mouth. These methods deal with local rather than global data and, because they are heavily dependent on local facial features, suffer from the unpredictability of face appearance and environmental conditions. An example of this type of approach is Elastic Bunch Graph Matching (Wiskott et al. 1999). The statistical approaches, on the other hand, extract features from the whole image. Since the global data of an image is used to determine the feature vectors, irrelevant data pertaining to the facial region, such as hair, shoulders and background, may contribute to these vectors, thus adversely
affecting the recognition results. The most important examples of the statistical approach are Principal Component Analysis (Turk & Pentland 1991) and Linear Discriminant Analysis (Etemad & Chellappa 1997). Other examples include Gabor filters (Wang & Chua 2005), wavelets and Independent Component Analysis (Bartlett et al. 1998).

Statistical approaches to feature extraction based on moment invariants have been utilised for classification and recognition applications because of their invariance properties (Belkasim et al. 1991). An image feature is considered invariant if it remains unaffected by changes in size (scale), position (translation), orientation (rotation) and/or reflection in an image. Haddadnia et al. (2002) and Pang et al. (2004) were among the first researchers to explore the use of pseudo Zernike moment invariants as facial features for face recognition. Pseudo Zernike moments are a good feature representation: they provide more information about the facial image while reducing the dimension of the feature vector, leading to improved results. This is in stark contrast to LDA, the most dominant method in face recognition, which is computationally intensive and requires high-speed processing and a large memory.

Face recognition cameras have been successfully employed in various places for diverse applications, but one area where face recognition has failed to make an impact is surveillance. Face recognition for surveillance is considered extremely difficult because CCTV cameras photograph people at tilted angles or in weak light, both of which create poor images. The goal of this paper is to develop a face recognition system for recognizing facial images obtained from surveillance cameras by comparing them with a database of digital photographs. In the proposed system, a pre-processing step is used to detect and normalize the static face image obtained from the noisy video camera feed.
Feature extraction is carried out via Zernike moments and the final classification is achieved using a simple nearest-neighbour classifier. The proposed method has the advantages of geometrical invariance, robustness to noise, optimal feature representation, fast convergence and high recognition rates.

2. Pre-processing
Pre-processing of a facial image consists of face detection and normalization. Face detection (Yang et al. 2002) is the process of localizing and extracting the face region from the background. Face detection and normalization are crucial for the success of a face recognition system, as features can be computed from a face only when it has been detected properly and segmented from all other irrelevant data. The detected faces vary in scale, rotation, brightness, size, etc. in different images, even for the same individual. Since the features extracted from images depend on the detected faces, it is pertinent that there be some kind of uniformity within the detected images. One way of solving this problem is by normalizing the detected faces. The objective of face normalization is to reduce the effect of extraneous and redundant information, such as background, hair, caps and scarves, so as to enhance recognition. The different sample images of an individual's face are normalized to a uniform face orientation, size, rotation and illumination. Hence, to ensure more robust and accurate face recognition performance, the exact location of the face is extracted from the two-dimensional video image and then normalized.
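The size normalization step can be illustrated with a simple resampling routine. The paper does not specify which interpolation scheme is used, so the sketch below assumes nearest-neighbour resampling; the function name is ours.

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resampling sketch for normalising detected faces to a
    common size (illustrative only; the paper does not state its interpolation
    method). img is a list of rows of grey levels."""
    h, w = len(img), len(img[0])
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]
```

In practice, each detected face crop would be passed through such a routine so that every sample of a subject has identical dimensions before feature extraction.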
2.1 Face detection
Several methods are given in the literature for face localization. In this paper, we have employed the most successful of all face detection methods, the Viola-Jones detector, which is based on the Adaboost algorithm (Viola & Jones 2001). The Viola-Jones face detector is capable of processing images extremely rapidly while achieving high detection rates. Its strength lies in three critical components. The first is the integral image representation, which allows features to be computed rapidly. The second is a simple and efficient classifier based on the AdaBoost learning algorithm (Freund & Schapire 1997), which selects a small number of critical visual features from a very large set of potential features. The third is a cascaded combination of several classifiers, which allows background regions of the image to be discarded quickly so that computation is concentrated on potentially face-like regions. The results of the face detection procedure are illustrated in Figure 1 below.

Figure 1. Face detection on two sample images
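The integral image idea underlying the first of these components can be sketched in a few lines. This is an illustrative summed-area-table implementation, not code from the paper; once the table is built, the sum of any rectangular region (and hence any Haar-like feature, which is a difference of such sums) costs only four lookups.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y and cols < x.
    One extra zero row/column avoids edge cases in lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of any rectangle in O(1) via four table lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])
```

A two-rectangle Haar feature, for example, is simply `rect_sum` of one region minus `rect_sum` of the adjacent region.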


2.2 Illumination correction
The performance of a face recognition system is severely affected if the facial image is poorly illuminated. Thus, illumination correction is a necessary step before feature extraction. Although there are many well-known image enhancement algorithms, such as histogram equalization, most of them adopt a global approach, treating the image as a whole, and therefore perform unsatisfactorily over local regions. In this work, we have applied a fuzzy logic-based image enhancement method (Hanmandlu & Jha 2006) for illuminating dark regions of a face. The colour intensity property of the image is fuzzified using a Gaussian membership function, which is well suited to under-exposed images. The fuzzified image is then enhanced using a sigmoid-type general intensification operator which depends on the crossover point and the intensification parameter. The optimum values of these two
parameters are obtained by the constrained fuzzy optimization using a modified univariate method which involves learning by gradient descent. The results of face image enhancement using this method are found to be far better than those obtained by histogram equalization and are presented in Figure 2.

Figure 2. Illumination correction using Fuzzy Image Enhancement
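The fuzzify-intensify-defuzzify pipeline can be sketched as follows. This is a simplified illustration of the general idea, not the exact Hanmandlu & Jha operator: the Gaussian membership form, the parameter names `crossover` and `t`, and the defuzzification step are all our assumptions.

```python
import math

def fuzzy_enhance(intensities, crossover=0.5, t=5.0, max_level=255):
    """Illustrative fuzzy intensity enhancement (a sketch, not the method of
    Hanmandlu & Jha 2006): fuzzify each grey level with a Gaussian membership
    function, intensify the membership with a sigmoid centred on the crossover
    point, then defuzzify back to the grey-level range."""
    enhanced = []
    for g in intensities:
        # Gaussian membership: bright pixels map near 1, dark pixels lower
        mu = math.exp(-((max_level - g) / max_level) ** 2)
        # Sigmoid intensification governed by crossover point and parameter t
        mu_e = 1.0 / (1.0 + math.exp(-t * (mu - crossover)))
        enhanced.append(round(mu_e * max_level))
    return enhanced
```

The intensification pushes memberships above the crossover point towards 1 and those below it towards 0, which is what lifts the dark regions relative to a plain linear mapping.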

3. Feature extraction using Zernike Moments


Moment features are a set of non-linear functions of the regular geometric moments of an image. Moment invariants extracted from an image function can be used directly as features and scaled to the appropriate range. The moment features of the compressed image representations describe the shape and geometric properties of the image under consideration, which in this case is the detected face image.

3.1. Geometric moments
Geometric (or regular) moments map the image function f(x, y) onto the monomial x^p y^q. The (p + q)th order geometric moment of an N × M image function f(x, y) is defined as,

$$ m_{pq} = \sum_{x=1}^{N} \sum_{y=1}^{M} x^{p} y^{q} f(x, y) \qquad (1) $$

where $p, q \in \mathbb{Z}^{+} \cup \{0\}$.

Furthermore, the (p + q)th central geometric moment is defined so as to normalize regular moment calculations with respect to the image centroid, thus yielding moments invariant to object translation:
$$ \mu_{pq} = \sum_{x=1}^{N} \sum_{y=1}^{M} (x - \bar{x})^{p} (y - \bar{y})^{q} f(x, y) \qquad (2) $$

where $\bar{x}, \bar{y}$ are the coordinates of the image centroid.
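Equations (1) and (2) translate directly into code. The sketch below (function names ours) follows the paper's 1-based coordinate convention and demonstrates that central moments are unchanged when the object is translated, while raw moments are not.

```python
def raw_moment(img, p, q):
    """(p+q)-th order geometric moment of Eq. (1); img[y][x] holds f(x, y)
    with coordinates running from 1 to N (resp. M), as in the paper."""
    return sum((x + 1) ** p * (y + 1) ** q * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def central_moment(img, p, q):
    """(p+q)-th order central moment of Eq. (2), taken about the centroid."""
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00  # centroid x-coordinate
    yc = raw_moment(img, 0, 1) / m00  # centroid y-coordinate
    return sum((x + 1 - xc) ** p * (y + 1 - yc) ** q * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))
```

Shifting a bright patch within the frame changes m_10 but leaves every central moment intact, which is exactly the translation invariance the text describes.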

3.2. Zernike moments
Unfortunately, the basis set for geometric moments is not orthogonal and the resulting moments therefore lack many desirable properties in the context of feature selection. The complex-valued Zernike polynomials, however, form an orthogonal basis set over the unit circle, $x^2 + y^2 \le 1$. Orthogonal Zernike moments are defined by the projection of the image function f(x, y) within the unit circle onto the complex Zernike polynomials. The Zernike invariants are the magnitudes (the hypotenuse) of the real and imaginary components of the resulting moments. When images are normalized in terms of scale and translation, or in terms of regular low-level central geometric moments, the derived Zernike invariants will be invariant to rotation, translation and scale, given an image function of sufficient resolution within the unit disc.

The set of Zernike polynomials is denoted by $\{Z_{nm}(x, y)\}$, or equivalently in polar form by $\{Z_{nm}(\rho, \theta)\}$. The general form of the polynomials is:

$$ Z_{nm}(x, y) = Z_{nm}(\rho, \theta) = R_{nm}(\rho) e^{jm\theta} \qquad (3) $$

where $x, y$ and $\rho, \theta$ correspond to Cartesian and polar coordinates respectively, and $n \in \mathbb{Z}^{+} \cup \{0\}$, $m \in \mathbb{Z}$, constrained to $n - |m|$ even and $|m| \le n$.

$R_{nm}(\rho)$ is a radial polynomial given by,

$$ R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s} (n-s)!}{s!\, \left(\frac{n+|m|}{2} - s\right)!\, \left(\frac{n-|m|}{2} - s\right)!} \, \rho^{\,n-2s} \qquad (4) $$

The complex, orthogonal Zernike moments are defined by,

$$ A_{nm} = \frac{n+1}{\pi} \sum_{x=1}^{N} \sum_{y=1}^{M} f(x, y) Z^{*}_{nm}(\rho, \theta), \quad x^2 + y^2 \le 1 $$

or,

$$ A_{nm} = \frac{n+1}{\pi} \iint_{x^2 + y^2 \le 1} f(x, y) Z^{*}_{nm}(\rho, \theta)\, dx\, dy \qquad (5) $$
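The radial polynomial of Eq. (4) is straightforward to compute directly from the factorial formula. The sketch below (function name ours) can be checked against the well-known closed forms for low orders, e.g. $R_{20}(\rho) = 2\rho^2 - 1$ and $R_{40}(\rho) = 6\rho^4 - 6\rho^2 + 1$.

```python
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho) of Eq. (4).
    Requires |m| <= n and n - |m| even."""
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s)
           * factorial((n + m) // 2 - s)
           * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1))
```

A full moment $A_{nm}$ then follows Eq. (5) by summing $f \cdot R_{nm}(\rho) e^{-jm\theta}$ over the pixels inside the unit disc and scaling by $(n+1)/\pi$.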

3.3. Properties of Zernike moments
Among their noteworthy properties, Zernike moments are rotationally invariant and orthogonal, besides being translation invariant. Assume $A'_{nm}$ is the moment of order n and repetition m associated with $f'(x, y)$, obtained by rotating the original image $f(x, y)$ by an angle $\phi$ with respect to the x-axis. Then,

$$ A'_{nm} = \iint f'(x, y) Z^{*}_{nm}(\rho, \theta)\, dx\, dy = A_{nm} e^{-jm\phi} \qquad (6) $$

This shows that Zernike moments merely acquire a phase shift on rotation, while their magnitude remains constant. Also, it can be shown that

$$ \iint_{x^2 + y^2 \le 1} Z_{nm}(x, y) Z^{*}_{pq}(x, y)\, dy\, dx = 0 \qquad (7) $$

if $n \ne p$ or $m \ne q$. Thus, Zernike moments are orthogonal by definition. Because of their orthogonality, it is expected that a small set of moments can be used to estimate parameters associated with different models. In fact, the orthogonality of the Zernike moment form ensures that mutually independent shape information underlying the intensity surface of an image is captured by Zernike moments of different orders. For example, the zeroth-order moment represents the mean intensity value in an image neighbourhood and first-order moments are related to the centre of gravity of the intensity surface, whereas the second-order moment captures the variance of the intensity levels present in the local neighbourhood. Thus, a discontinuity in local intensities results in a high first-order moment, a discontinuity in local gradients results in a high second-order moment, and so on.

3.4. Computation of Zernike moments
Zernike moments provide substantial mutually independent shape information along with the facial image intensity information, both of which are important cues for automatic face recognition. The system diagram (Figure 3) below illustrates all the pre-processing and feature extraction steps leading up to the computation of circular Zernike moments from each detected face.


Figure 3. Pre-processing and feature extraction


We calculated Zernike moments up to the 12th order, which gave rise to 49 moments in total. Table 1 depicts the first 12 orders of the moments with their repetitions. The size of the circular disc for the calculation of Zernike moments is set to the image size and the centre of the circle is taken as the origin. The moments are computed using Equations 3-5.
Order | Dimensionality | Zernike moments
------|----------------|----------------
0     | 1              | A0,0
1     | 2              | A1,1
2     | 4              | A2,0, A2,2
3     | 6              | A3,1, A3,3
4     | 9              | A4,0, A4,2, A4,4
5     | 12             | A5,1, A5,3, A5,5
6     | 16             | A6,0, A6,2, A6,4, A6,6
7     | 20             | A7,1, A7,3, A7,5, A7,7
8     | 25             | A8,0, A8,2, A8,4, A8,6, A8,8
9     | 30             | A9,1, A9,3, A9,5, A9,7, A9,9
10    | 36             | A10,0, A10,2, A10,4, A10,6, A10,8, A10,10
11    | 42             | A11,1, A11,3, A11,5, A11,7, A11,9, A11,11
12    | 49             | A12,0, A12,2, A12,4, A12,6, A12,8, A12,10, A12,12

Table 1. List of the first 12 orders of Zernike moments
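The index pairs and cumulative counts in Table 1 can be reproduced by enumerating the valid (n, m) combinations; only m >= 0 is kept because the magnitudes of A(n, m) and A(n, -m) coincide. The function name below is ours.

```python
def zernike_indices(max_order):
    """Enumerate (n, m) pairs with 0 <= m <= n and n - m even, as in Table 1.
    Negative repetitions are omitted since |A(n, -m)| = |A(n, m)|."""
    return [(n, m) for n in range(max_order + 1)
            for m in range(n % 2, n + 1, 2)]
```

Enumerating up to order 12 yields exactly the 49 moments quoted in the text, and truncating at order 4 gives the cumulative dimensionality of 9 shown in the table.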


4. Implementation & Results


This section presents the implementation of the proposed face recognition methodology and documents the results obtained. A brief comparative study is also given to show the superiority of the proposed method over those given in the literature.

4.1. Databases
The performance of our method was evaluated on three different face recognition databases. The CMU AMP and Essex Faces94 databases were chosen for their significant facial expression variations, whereas the Yale database has considerable illumination variations. The details of these databases are as follows:

The CMU AMP face expression database (Liu et al. 2001) consists of 13 individuals with 75 images of each subject showing different expressions. These samples were taken under the same lighting conditions using a CCD camera, and all of them have been well registered by eye locations.

The Essex Faces94 database (Spacek 2000) contains face images from 153 subjects (20 females and 133 males) in total. 20 colour photographs were obtained from each subject using an S-VHS camcorder. The database has considerable variations in expression but very minor variations in head turn, tilt and slant. Each subject sits at a fixed distance from the camera and is asked to speak whilst a sequence of images is taken. In this experiment, the samples were transformed into grayscale images before they were processed.

The Yale Face Database (Yale Face Database) consists of 15 individual sets, each containing 11 grayscale image samples. The samples have variations in lighting condition (left-light, centre-light, right-light), facial expression (normal, happy, sad, sleepy, surprised and wink), and with/without glasses. If a subject wears glasses, then there is at least one sample image of that subject without glasses.

Table 2 summarises the key points of the three face datasets used in this study. Sample images are presented in Figure 4.
Database name           | Image size | Number of subjects | Samples per subject
------------------------|------------|--------------------|--------------------
CMU AMP face expression | 64 × 64    | 13                 | 75
Essex Faces94           | 180 × 200  | 153                | 20
Yale                    | 320 × 243  | 15                 | 11

Table 2. Face recognition databases


Figure 4. Sample images from CMU, ESSEX Face 94 and Yale face databases
4.2. Implementation
We carried out extensive experiments to test the effectiveness of Zernike moments for the face recognition problem. It would be safe to say that Zernike moments perform well over facial images with uniform pose angles. Pre-processing steps were applied wherever they were needed. No pre-processing steps were applied to the CMU AMP face database because its samples are well registered and do not have any lighting variations. Face detection and resizing steps were applied to both the Essex and Yale face databases. The resizing step is important because the size of the detected faces is not always the same; all images were resized to the smallest detected face image for each subject. Illumination correction was applied to the Yale database using the fuzzy enhancement method described in Section 2, with the enhancement parameters t and c set to 5 and 0.50 respectively. In each experiment we randomly selected k images for training, with the remaining images constituting the testing set. This process was repeated 50 times until we achieved convergence with the best possible training set.

4.3. Recognition Results
Face recognition is carried out using the nearest-neighbour (kNN) classifier. This classifier relies on a metric or distance function between different patterns. The distances between a test set and all the training sets are computed and the set of k nearest distances is selected. The decision as to which class a particular test element

belongs is taken by choosing the class with the most vectors in the k nearest distances set. In our experiments, k was set to 1. A Probabilistic Neural Network (PNN) and Linear Discriminant Analysis (LDA) were also employed for comparison purposes. The PNN is an extension of the Parzen-window classifier; one of its advantages is its learning speed, since the training vectors need to be processed only once. The Gaussian width for the PNN was set to 0.1 based on experimentation.
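The kNN decision rule described above can be sketched in a few lines. This is an illustration of the rule, not the code used in the experiments; the function names are ours.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_classify(test_vec, train_vecs, train_labels, k=1):
    """Rank the training vectors by distance to the test vector and vote
    among the k closest labels (k = 1 in the paper's experiments)."""
    ranked = sorted(range(len(train_vecs)),
                    key=lambda i: euclidean(test_vec, train_vecs[i]))
    votes = {}
    for i in ranked[:k]:
        votes[train_labels[i]] = votes.get(train_labels[i], 0) + 1
    return max(votes, key=votes.get)
```

With k = 1 the vote degenerates to simply returning the label of the single nearest training vector.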
Figure 5. Recognition rate versus Zernike moment order for the CMU, Yale and Essex databases


Figure 5 depicts the best recognition rate for each order of Zernike moments used. Pre-processing steps were applied in this experiment. It is observed that the Yale database has a lower recognition rate than the other two databases; this is because the Yale database possesses more variation in its image samples. Table 3 quantifies the recognition rates obtained on the three databases. The Yale database shows the largest increase in recognition rate after the pre-processing steps have been applied. In addition, since we have not dealt with pose variations in detail, the recognition accuracy is lower than expected. The relationship between the number of training sets and the recognition rate is illustrated in Figure 6. This confirms the accepted fact that the more training sets we have, the better the recognition rate.
Database      | Order | Recognition rate (no pre-processing) | Recognition rate (pre-processing)
--------------|-------|--------------------------------------|----------------------------------
CMU AMP       | 9     | 100.00 %                             | N/A
Essex Faces94 | 11    | 99.08 %                              | 99.80 %
Yale          | 12    | 80.83 %                              | 88.33 %

Table 3. Best recognition accuracy on different databases


Figure 6. Recognition accuracy versus the percentage of samples used for training


The relationship between the order of the Zernike moments and the recognition rate is explored in Table 4, which covers the first ten orders. It is noted that as the order rises, the recognition rate also improves. Furthermore, the computation time (including the pre-processing, feature extraction and recognition steps) also increases as the number of dimensions increases.
Order | Dimensionality | Recognition rate | Computation time (s)
------|----------------|------------------|---------------------
1     | 2              | 36.67 %          | 33.60
2     | 4              | 60.83 %          | 37.47
3     | 6              | 66.67 %          | 46.91
4     | 9              | 68.33 %          | 58.69
5     | 12             | 74.17 %          | 75.67
6     | 16             | 75.00 %          | 97.50
7     | 20             | 79.17 %          | 125.28
8     | 25             | 82.50 %          | 157.64
9     | 30             | 86.67 %          | 194.08
10    | 36             | 87.50 %          | 243.47

Table 4. Recognition rate for the first ten orders


4.4. Comparative Analysis
The performance of Zernike moments as a feature descriptor is evaluated via three different classifiers on the Yale database and the results obtained are enumerated in Table 5. It is quite apparent that the difference in recognition rates between classifiers is negligible after the pre-processing steps are applied. The PNN achieves the highest recognition rate both before and after pre-processing. However, for the PNN we have used a higher order of Zernike moments, which implies that the PNN uses more

dimensions and hence takes more time to converge. Before pre-processing is applied, kNN has the lowest recognition rate of the three classifiers. This is because the decision in kNN depends on each of the training vectors. The pre-processing steps change the vectors and group them differently, minimising the distance between vectors from the same subject.
Classifier | Order | Recognition rate (no pre-processing) | Recognition rate (pre-processing)
-----------|-------|--------------------------------------|----------------------------------
kNN        | 12    | 80.83 %                              | 88.33 %
LDA        | 12    | 81.67 %                              | 88.33 %
PNN        | 15    | 88.33 %                              | 89.17 %

Table 5. Best recognition accuracy on the Yale database using different classifiers


We have also compared Zernike moments with other statistical approaches such as Eigenfaces and Fisherfaces. Principal Component Analysis (PCA), or the Eigenfaces technique, is a popular unsupervised method which aims to extract a subspace in which the variance of the projected data is maximized. Eigenvectors and eigenvalues of all training sets in each subject are calculated, and all test sets are projected onto the subspaces; within these subspaces the distances are calculated to determine test vector membership. Fisherface, or so-called Linear Discriminant Analysis (LDA), on the other hand, aims to maximize the variance between classes while minimizing the variance within each class. LDA-based algorithms often perform better than PCA-based ones. However, LDA suffers from the small-sample-size problem, which is problematic in face recognition applications (Wang et al. 2007). It is clearly noticeable from Table 6 that Zernike moments outperform the other two popular face recognition methods, although they use more dimensions.
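The variance-maximising step at the heart of PCA can be illustrated on toy 2-D data with power iteration. This is a didactic sketch of the subspace-extraction idea, not an Eigenfaces implementation (which would operate on vectorised face images of much higher dimension); the function name is ours.

```python
def top_principal_direction(points, iters=200):
    """Power-iteration sketch of PCA's core step on 2-D points: find the unit
    direction of maximum variance (for Eigenfaces, points would be vectorised
    face images and such directions the eigenfaces)."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(2)]
    centred = [(p[0] - mean[0], p[1] - mean[1]) for p in points]
    # Entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Multiply by the covariance matrix and renormalise
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

For points lying along the line y = x, the iteration converges to the diagonal direction (1, 1) / sqrt(2), the axis along which the projected variance is largest.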
Method          | Dimensionality | Recognition result
----------------|----------------|-------------------
Eigenfaces      | 35             | 78.56 %
Fisherface      | 14             | 83.22 %
Zernike moments | 49             | 88.33 %

Table 6. Comparison between Zernike moments and other methods on the Yale database

Conclusions
This paper presents an efficient statistical approach based on Zernike moments for obtaining an optimum feature vector set for the purpose of face recognition in video surveillance. The face within each 2D digital image captured from a video sequence is first detected using the Adaboost detector and then normalized to take care of variations in scale, size and illumination. Zernike moments are computed from the detected face image by enclosing a circular disc within the image. We show that higher order moments carry more information but the use of an optimum number of Zernike Moments feature vector set gives better accuracy. The validity of the proposed method is demonstrated by means of several experiments.

The results show that by using Zernike moments as face features, a face image can be represented using fewer dimensions than with other popular approaches. In addition, the recognition rate is higher, processing is faster and less memory is required. Pre-processing using the fuzzy enhancement method further improves the recognition rate. Finally, a comparative study with other approaches demonstrates the superiority of the proposed approach. The proposed face recognition approach is robust to rotation, scale and illumination variations.

Acknowledgments
We would like to acknowledge the effort of Dr. Libor Spacek. The Essex face database used in this paper is part of Spacek's 1994 collection of facial images.

References
Bartlett, M.S., Lades, H.M. & Sejnowski, T. 1998. Independent component representations for face recognition. Proc. SPIE Symposium on Electronic Imaging: Science & Technology 528-539.
Belkasim, S.O., Shridhar, M. & Ahmadi, M. 1991. Pattern recognition with moment invariants: a comparative study and new results. Pattern Recognition 24(12): 1117-1138.
Duda, R.O., Hart, P.E. & Stork, D.G. 2001. Pattern Classification. New York: Wiley.
Etemad, K. & Chellappa, R. 1997. Discriminant analysis for recognition of human face images. Journal of the Optical Society of America A 14(8): 1724-1733.
Freund, Y. & Schapire, R.E. 1997. A decision theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1): 119-139.
Gabor, D. 1946. Theory of communication. Journal of the IEE 93: 429-459.
Haddadnia, J., Ahmadi, M. & Faez, K. 2002. An efficient method for recognition of human faces using higher order pseudo Zernike moment invariants. Proceedings of the 5th IEEE Intl. Conf. on Automatic Face and Gesture Recognition 315-320.
Hanmandlu, M. & Jha, D. 2006. An optimal fuzzy system for colour image enhancement. IEEE Trans. Image Processing 15(10): 2956-2966.
Khotanzad, A. & Hong, Y.H. 1990. Invariant image recognition by Zernike moments. IEEE Trans. Pattern Analysis and Machine Intelligence 12: 489-497.
Liu, X., Chen, T. & Vijaya Kumar, B.V.K. 2001. Robust face authentication for multiple subjects using Eigenflow. Carnegie Mellon Technical Report AMP01-05.
Pang, Y.H., Andrew, T.B.J. & David, N.C.L. 2004. Face authentication system using pseudo Zernike moments on wavelet subband. IEICE Electronics Express 1(10): 275-280.
Spacek, L. 2000. Face Recognition Data. http://cswww.essex.ac.uk/mv/allfaces/index.html
Turk, M. & Pentland, A. 1991. Face recognition using eigenfaces. Proc. Intl. Conference on Computer Vision & Pattern Recognition 84-91.
Viola, P. & Jones, M. 2001. Rapid object detection using a boosted cascade of simple features. Proc. IEEE Conference on Computer Vision and Pattern Recognition 511-516.

Wang, F., Wang, J., Zhang, C. & Kwok, J. 2007. Face recognition using spectral features. Pattern Recognition 40(10): 2786-2797.
Wang, Y. & Chua, C.S. 2005. Face recognition from 2D and 3D images using 3D Gabor filters. Image and Vision Computing 11(1): 1018-1028.
Wiskott, L., Fellous, J.M., Krüger, N. & von der Malsburg, C. 1999. Face recognition by elastic bunch graph matching. In Jain, L.C. et al. (eds), Intelligent Biometric Techniques in Fingerprint and Face Recognition. Boca Raton: CRC Press.
Yale Face Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html
Yang, M.H., Kriegman, D. & Ahuja, N. 2002. Detecting faces in images: a survey. IEEE Trans. Pattern Analysis and Machine Intelligence 24(1): 34-58.


32
Forensic Challenges in Service Oriented Architectures
Andrew Marrington, Mark Branagan and Jason Smith
Information Security Institute, Queensland University of Technology, Australia
Abstract
Digital forensics relates to the investigation of a crime or other suspect behaviour using digital evidence. Previous work has dealt with the forensic reconstruction of computer-based activity on single hosts, but with the additional complexity involved with a distributed environment, a Web services-centric approach is required. A framework for this type of forensic examination needs to allow for the reconstruction of transactions spanning multiple hosts, platforms and applications. A tool implementing such an approach could be used by an investigator to identify scenarios of Web services being misused, exploited, or otherwise compromised. This information could be used to redesign Web services in order to mitigate identified risks. This paper explores the requirements of a framework for performing effective forensic examinations in a Web services environment. This framework will be necessary in order to develop forensic tools and techniques for use in service oriented architectures.

Biographies
Andrew Marrington is a PhD candidate in the Information Security Institute at Queensland University of Technology, where he studies computer forensics. The title of his thesis is Computer Profiling for Forensic Purposes. Mark Branagan is a PhD candidate and researcher at the ISI. His research interests include risk assessment and management for information security, information infrastructure protection and general information security issues. Jason Smith is a research fellow with the Queensland University of Technology. His research interests are in the areas of network security and intrusion detection with a specific focus on denial of service resistance, wireless communications and control systems environments.

Introduction
Service oriented architectures and Web services facilitate the integration of enterprise applications between businesses and government organisations. The cost of this integration and enhanced flexibility is increased complexity. As more organisations adopt Web services for increasingly sensitive, mission-critical data, the potential impact of breaches of Web services increases for both individuals and organisations. Increasing impacts can result in a worsening of the risk environment for all parties. Web services security and auditing is therefore an important concern.

The service oriented architecture paradigm presents a number of significant challenges with respect to the auditing and monitoring of transactions. The need for tools which can aid in the forensic investigation of breaches of security, deliberate or accidental, in such an environment is obvious. Such techniques increase the possibility of detection and apprehension of criminal actors and aid in assurance of the transaction process for all involved. An increased level of assurance in such systems should ease concerns about the use of Web services technologies, thus opening opportunities for government, business and individuals.

In this paper we explain how digital forensics can contribute to the security and assurance of service oriented architectures, improving the confidence of stakeholders and reducing the confidence of potential attackers that they may act undetected and unidentified. We discuss challenges in forensic investigations involving Web services, describe how they can be overcome, and identify the need for a framework for developing Web services which record enough evidence to support post-hoc investigation. Finally, we discuss directions for future forensics research in the field of service oriented architectures and draw conclusions.

Related Work
Service Oriented Architecture and Web Services
Service oriented architecture describes a paradigm for the development, deployment and use of online software systems in which a service provider publishes a description of the services it can provide in some form of registry, which is queried by clients in order to discover and then dynamically invoke the desired services (Hashimi 2003). In this paper we use the abbreviation SOA both for the paradigm and for specific systems implementing it. This paper focuses on Web services, the best known examples of SOAs, in which the mechanism of publication, discovery and invocation is facilitated through the use of standard Web formats and protocols (World Wide Web Consortium 2004). There are at least two participants in any SOA transaction: the service provider and the service requester. Both are software agents, representing different individuals or organisations (or perhaps different sections of the same organisation). A given service requester may not know which service provider has the desired service; it simply knows which service it requires, and interrogates the registries of known service
providers to find the service. The service requester can then select its desired service and invoke it. Web services use Internet-standardised technologies such as XML to implement a platform-independent and interoperable SOA. A Web service has an interface described in a machine-processable format called the Web Service Description Language (WSDL). This WSDL interface defines the message formats, datatypes, transport protocols, and serialisation formats which a Web service requester should use when it interacts with the Web service. It is, in essence, an agreement not dissimilar to the contract programming model of agreed specifications of APIs, except that it is machine-processable and thus machine-enforceable (Jones 2005). In practice, many Web service clients are configured with pointers to the WSDL describing the services a company wishes to provide. The initial vision of the SOA community was that Web service requesters would obtain the WSDL for a Web service by querying the Web service provider's Universal Description, Discovery and Integration (UDDI) registry (OASIS 2004). The Simple Object Access Protocol (SOAP) is used as the message format for messages between the Web service requester and the provider, consisting of formatted XML requests and XML responses. Through the use of these standard formats for the registry, the interface and the messages, any conforming software agent, no matter the language in which it was written or the platform for which it was written, can take the place of the provider or the requester.
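To make the message layer concrete, the following sketch builds and inspects a minimal SOAP 1.1 envelope using Python's standard XML library. The GetQuote operation and the example.org namespace are hypothetical illustrations, standing in for whatever operations a provider's WSDL actually advertises.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical target namespace for an illustrative "stock quote" service.
SVC_NS = "http://example.org/stockquote"

def build_request(symbol: str) -> bytes:
    """Build a minimal SOAP 1.1 request for a hypothetical GetQuote operation."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

def extract_operation(message: bytes) -> str:
    """Return the local name of the first element in the SOAP Body,
    i.e. the operation a forensic log entry would be indexed by."""
    root = ET.fromstring(message)
    body = root.find(f"{{{SOAP_NS}}}Body")
    first = next(iter(body))
    return first.tag.split("}")[-1]

request = build_request("RNSA")
print(extract_operation(request))  # GetQuote
```

Because the operation name is recoverable from the message alone, a monitoring component can classify captured traffic without any knowledge of the provider's implementation platform.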
Authentication
Management Infrastructure
Cryptographic
Misconfiguration
Denial of Service
Race Condition
Hijacking
Input Manipulation
Information Disclosure
Other

Table 1. Categories of vulnerability to attack in Web services


Web services (and SOAs more generally) are vulnerable to a variety of attacks. In the table above we list some broad categories of vulnerability to attack, adapted from (Yu, Aravind and Supthaweesuk 2006). While a detailed discussion of each of these categories is outside the scope of this work, the need to facilitate the investigation of Web services is clear.

Computer Forensics
The term computer forensics describes the discovery, examination and analysis of digital evidence, typically stored on or generated by a computer or computer system. Computer forensics is the investigation of situations where there is computer-based (digital) or electronic evidence of a crime or suspicious behaviour (Mohay, Anderson, Collie, de Vel and McKemmish 2003). Investigations of breaches of security or suspicious events in SOAs, or transaction auditing of SOAs, would employ digital evidence in an effort to reconstruct the events under investigation. The distributed nature of SOAs poses particular challenges to forensic investigations, but the standards-driven nature of SOAs also provides an opportunity to address those challenges.

The challenges faced in forensic investigations of SOAs include those faced in forensic investigations of any distributed network system. In conventional computer forensic investigations of stand-alone computer systems, there is one primary source of digital evidence: the computer's hard disk. In network forensics, there are a number of different potential sources of digital evidence. However, the technical difficulty and expense involved in recording large volumes of network data, coupled with the lack of economic incentive to collect such information, means that the wider variety of potential sources does not translate into a larger volume of digital evidence. In fact, most network forensic systems are highly ad hoc in nature, depending on network eavesdropping tools such as packet capture software to monitor key points in the network (Shanmugasundaram, Brönnimann and Memon 2005). There are significant technical challenges in accurately reconstructing network traffic from the recordings of such eavesdropping tools, even in ideal circumstances. Eavesdropping tools are also vulnerable to simple confusion techniques, making it easy for an attacker to deliberately obfuscate their actions (Cronin, Sherr and Blaze 2006). Regardless of the difficulty of undertaking forensic investigations in a distributed network environment, it is nevertheless desirable to have the capability. The following section outlines one set of rationales for building such capabilities.

The Necessity of Forensics


The relationship between forensics and overall system security is harder to see than the direct relationship between, for example, a firewall and network security. No security system is perfect, however, and herein lies one of the important roles of forensics. Robust and accurate forensic techniques increase the likelihood both of detection of malfeasance and of final attribution of the illicit actions to the perpetrator. There is no suggestion at present that the use of Web services provides a new set of actual criminal aims. It may, however, provide a new set of ways in which criminal acts may be committed. The set of influences that may contribute to an adversary's decision to act is complex. Once a target is defined for any attack, an adversary will require some set of capabilities and resources to undertake the attack (Parker, Shaw, Strotz, Devost and Sachs 2004). The nature of the system itself and the security measures in place will to a large extent determine these requirements. Simply having the capability and resources to act does not make the action inevitable. A combination of factors such as perceived benefit, level of potential punishment, and so on will come into play before any actor will take action. It may not necessarily follow that an adversary will perceive a system with a high degree of security measures in place as a higher risk target. Ideally, however, the aim is both to make the system difficult to attack and to increase the attacker's perception of risk in attacking the system. Various studies of risk perception have identified the affect heuristic as a factor in determining the level of perceived risk for some action or event. If the potential benefit is high then risk is inferred to be lower, and if risk is perceived as low then the benefit is seen as higher. If, however, the benefit is seen as
low then the risk is perceived as higher, and if risk is perceived as higher then benefit is perceived as lower (Peters, Burraston and Mertz 2004). Perceptions of the likelihood of detection and consequent identification have also been identified as possible modifiers of the behaviour of potential adversaries (Parker, Shaw, Strotz, Devost and Sachs 2004). The role of digital forensics is therefore to increase the perceived risk for an actor: the likelihood of detection and identification is increased, potentially reducing the possibility of attack. The use of forensics is not limited, however, to deliberately criminal acts. The ability to reconstruct some set of transactions allows for an increase in trust for all parties. Primarily, it allows a reasonable expectation that disputes over transactions may be resolved in something other than an arbitrary manner. Digital forensics provides a set of tools to produce information which can, to some degree of accuracy, reconstruct the sequence of events involved in a transaction. This level of surety is obviously useful for civil dispute resolution. While the behaviour of actors is complex and factors other than those discussed here will clearly come into play, this section has outlined one set of advantages to providing the ability to undertake forensic investigations in a Web services environment. The following section discusses the challenges faced in undertaking such investigations.

Investigative Challenges in Service Oriented Architectures


Given the documented difficulties facing forensic investigations of networks, it seems desirable to avoid similar difficulties in forensic investigations of SOAs. A greater commonality of interest exists between a service requester and a service provider in an SOA transaction than exists in a generic network transaction. This commonality of interest should make it easier for both parties to work together to introduce forensic systems which will allow them to improve security in service oriented architectures. The major issue facing forensic investigations of network systems is the lack of relevant evidence collected for the specific purpose of such an investigation. The collection of forensic data in networks is, for the most part, an ad hoc process, dependent on the likes of firewall logs, intrusion detection system logs, network eavesdropper logs, and so on. The scope of data collected from such sources is too narrow for many purposes (Shanmugasundaram, Brönnimann and Memon 2005). In SOAs, however, all major stakeholders have an interest in facilitating the post hoc forensic investigation and audit of SOA transactions. There are, nevertheless, a number of challenges which confront both the collection of adequate digital evidence to facilitate post hoc investigation and the actual post hoc forensic investigation of SOAs. These challenges originate from either social or technical considerations. Challenges to the actual forensic investigation are mostly technical in nature. Challenges to developing the ability to collect adequate data to conduct such an investigation can be both technical and social.

Technical Challenges
Web services are platform independent, which is to say that they are completely interoperable irrespective of the network configuration, hardware and software employed by the provider and requester. It is this platform independence which poses the most obvious technical challenge to a forensic investigation. Each platform involved will require a particular set of tools and techniques to be used in evidence recovery. This will be especially true of any data collected by an operating system or runtime environment specific tool, such as system logs, or by a network monitoring tool specific to a certain network configuration, such as firewall logs. Each platform also has its own inherent issues which can further complicate matters for a forensic investigator. For example, the amount of detail in system event logs on Windows systems is highly dependent on the auditing configuration of the Windows host involved, and the procedure is different altogether on Unix-like systems. Difficulties for forensic investigations dependent on the general logging and audit tools provided for particular operating systems or platforms are likely to persist while Web services transactions take place between disparate hosts, configurations and platforms. Likewise, the disparity between the information found in the firewall logs, IDS logs, and other sensor logs of two different networks is unlikely to be resolved while forensic investigations of Web services are dependent on this sort of generic network sensor information. Forensic systems dependent on eavesdropping tools face a number of difficulties, and are vulnerable to deliberate confusion techniques used by attackers with an obvious interest in obscuring their actions. A forensic data collection system dependent on traffic interception must be sufficiently sensitive, which is to say that it receives all messages exchanged between the service provider and requester.
It must also be selective, meaning it rejects spurious data which can make it difficult for investigators to recognise data relevant to the investigation. Whilst sensitivity is a well-understood requirement, selectivity of traffic sensors in network forensics is often misunderstood and thought to be easily achieved through only a token evaluation of traffic metadata (Cronin, Sherr and Blaze 2006). The sheer quantity of data collected in forensic investigations has been recognised as a challenge which confronts researchers and investigators alike. Excessive volumes of data can make the search for relevant digital evidence somewhat like searching for a needle in a haystack. Solutions to excessive datasets in stand-alone computer forensics include data mining (Beebe and Clark 2005) and automated profiling to help narrow the field of search (Marrington, Mohay, Clark and Morarji 2007), but it seems that by focussing on traffic sensor selectivity, this problem could be largely avoided in SOA forensic investigations. Web services are generally stateless from one invocation to the next (Zimmerman, Tomlinson and Peuser 2003), meaning that it may not always be necessary for SOAP traffic monitoring systems to attempt to keep a record of the state of a Web service provider. In the case of certain complex Web services, however, it may still be necessary to track the state of each invocation. The technical standards which specify the Web services architecture themselves pose a challenge to the introduction of forensic data collection into a Web services
environment. Within the standards which apply to Web services there is a lack of consideration for the collection and storage of digital evidence for the purposes of post hoc investigation or auditing. The storage of raw network data is impractical due to the high volume of data which would be recorded. The high storage capacity requirements of raw network data would introduce excessive expense, or longevity concerns due to the need to overwrite old data to conserve space (Shanmugasundaram, Brönnimann and Memon 2005). Storing higher-level data would reduce the required storage capacity, thereby allowing the record of a longer time period to be maintained. Given the standardised nature of the technologies employed in Web services, it should be possible to collect a narrower set of data rather than simply collecting all network data. As an example, the SOAP request for a service invocation and the Web service's SOAP response could be stored, providing investigators with the requester's input and the service's output. This would allow investigators to reconstruct the SOA transaction. The solutions to the range of technical challenges described in this section are tractable, if daunting. The SOA is not a purely technical system, however; social influences will play their part. While the ability to gather the required information may exist, the actual will to do so may not. In the next section we briefly discuss possible social challenges which may need to be considered.

Social Challenges
For the purposes of this paper we consider social challenges to be problems that arise not from a direct technical difficulty but from the parties involved in the use or development of the SOA. We consider social problems to be those where a technical solution may exist or be capable of development, but there is resistance to such deployment or development. The possible ways in which social challenges may manifest are many, and space considerations prevent a comprehensive review.
Some possible considerations are outlined in this section. One difficulty in deciding how to approach the problem of making provision for forensic investigation in SOAs is the difficulty of defining exactly what it is they will do. By their very nature, SOAs form webs of applications which are put together on an ad hoc, as-needed basis. Deciding in advance exactly what information will be involved in an exchange is therefore almost impossible. The unknown nature of the exact transactions makes it difficult to decide what may be safe to store or not to store, or indeed what may be required to reconstruct the transaction. It is therefore difficult to answer privacy and confidentiality concerns before the actual transaction takes place. A company, for example, may wish their use of a certain service to remain confidential. The requirement of a forensic investigation that actions be attributable to a fixed party may then become problematic. A challenge, then, is to balance the competing needs of forensic investigations and the concerns of the parties involved in the transactions. Social problems also exist for the developers themselves. Security measures in general have been considered an impediment to product development. Add to this the
possibility that extra resources may be required to fulfil the requirement, and forensics may then weaken the business case for the use of SOAs in the first place. In concert with these factors, developers may be overconfident in their ability to produce a completely secure system, reducing the impetus to facilitate investigation. Consider, for example, Oracle's declaration of an 'unbreakable' system, which proved to be optimistic (Reuters 2001; ComputerWire 2002)54. It is difficult to conceive of a system so secure that there exists no possibility of a breach, and where that possibility exists, then so does the requirement for investigation. Finally, there is a potential for conflict between requesters and providers of Web services. The use to which information can be put is a topic of growing concern, as witnessed in Australia by measures such as the Do Not Call Register (Australian Government 2007) and the activities of the Office of the Privacy Commissioner (Office of the Privacy Commissioner). The need that different parties see for forensic information may differ, and could require different kinds of information. This has the potential to cause conflict between what the parties are willing to provide and what it is necessary to provide. Any definition of information requirements must therefore include a means of conflict resolution. This section has outlined potential social impediments to implementing a comprehensive system for carrying out forensic examinations in SOAs. Having described the technical and social challenges to such a system, and defined, at least in part, its desirability, the next section sets out what the requirements for a framework are likely to be.

The Need for a Framework


The standards-driven nature of Web services in particular, and service oriented architectures generally, provides an opportunity to incorporate forensic data collection standards as an intrinsic part of SOAs. Providing for forensic data collection in a standardised form has benefits for an investigation. These benefits include greater efficiency through similarity in the process of discovering digital evidence, and greater confidence in the quality of that digital evidence. We therefore propose that a framework to support forensic investigations be incorporated into the standards which govern SOAs. Standardising forensic data collection in SOAs would address the challenges discussed earlier through the process of establishing an industry standard, and by providing a mechanism for the parties in an SOA transaction to negotiate what information would be recorded. The process of expanding existing SOA standards, or writing new ones, to accommodate forensic data collection would necessitate enumerating the concerns of legitimate SOA participants. These concerns could be evaluated from the perspectives of all involved parties, and a middle ground in the common interest could be determined. Such a standard could also incorporate

54 There are many such cases; the Oracle example is simply a high-profile one.

a handshaking stage, at which both the service requester and the service provider would agree to the amount of information stored about a service invocation. If the parties could not agree on a required level of identifying information, the invocation request could be rejected. The introduction of such a stage would mean that both parties had to find a mutually agreeable level of information to be recorded about the transaction before that transaction took place. If forensic data collection were to become part of the standards-mandated framework for SOAs, any potential competitive disadvantage to conducting such data collection would be eliminated. While building the capacity to collect forensic data about SOA transactions remains optional, providers who choose not to implement such measures may enjoy a competitive advantage, especially in time-to-market terms. Such an advantage is gained at the expense of the overall security posture of the system, which could have repercussions for the service provider's clients. Without the capacity for forensic data collection being included in an accepted standard, potential customers have no capacity to build confidence in the security of a given service. The incorporation of forensic data collection systems into SOA standards would level the playing field between all service providers. A forensic data collection system for SOAs must include a sensor and a log for the monitoring and storage of messages. As investigations into SOAs will primarily concern higher-level application logic (e.g. the details of a service invocation) rather than lower-level network traffic, not every piece of network traffic need be monitored and recorded. It may be desirable to allow configuration of which messages are logged, in line with privacy or other concerns. The sensor must be placed logically within the SOA to intercept incoming SOAP messages prior to their processing, as well as outgoing SOAP messages.
The sensor should not process message payloads; it should merely record them in the log. Many attacks on SOAs consist of messages with payloads containing attack code, or which are oversized and take an excessively long time to parse. For example, Web services are vulnerable to attacks which cause a denial of service by providing a service with well-formed XML documents which are oversized or contain excessive nesting of elements (Singhal and Winograd 2006). Yu categorises just under 60% of attacks against Web services and applications as input manipulation attacks, which prey on the processing of an attacker's input (Yu, Aravind and Supthaweesuk 2006). To avoid the forensic data collection system failing to record, or even being brought down by, the very sorts of attacks it is supposed to help investigate, its sensor must record messages in its log prior to any processing of the messages' content. A framework to support the post hoc forensic investigation of service oriented architectures can be established through the incorporation of forensic data collection into SOA standards. Such a framework needs to ensure that the forensic data collection system provides continuous service even during attacks on the SOA. It can provide a mechanism for the parties involved in an SOA transaction to negotiate the level of information to be stored about the transaction. A standardised framework would make forensic investigations more efficient, and raise consumer
confidence in SOA security. We propose the adoption of a standards-driven framework for data collection to facilitate forensic investigations of SOAs.
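As an illustration of the sensor behaviour proposed above — record first, process later — the following Python sketch logs each raw message before applying cheap size and nesting-depth checks, and chains a hash over the log entries so that later tampering with the log is detectable. The specific limits and the in-memory log are assumptions for illustration only; a real deployment would use durable, append-only storage and policy-driven thresholds.

```python
import hashlib
import io
import xml.etree.ElementTree as ET

MAX_BYTES = 1_000_000  # assumed policy limits, not drawn from any SOA standard
MAX_DEPTH = 20

class ForensicSensor:
    """Sketch of a logging sensor placed in front of a SOAP endpoint."""

    def __init__(self):
        self.log = []    # in a real system: append-only durable storage
        self._prev = b""

    def record(self, direction: str, message: bytes) -> None:
        # Hash-chain each entry to the previous one, so an altered or
        # deleted entry breaks every subsequent digest.
        digest = hashlib.sha256(self._prev + message).hexdigest()
        self.log.append({"dir": direction, "raw": message, "hash": digest})
        self._prev = digest.encode()

    def accept(self, message: bytes) -> bool:
        """Record the raw message, then apply cheap sanity checks before
        it is handed to the real XML processor."""
        self.record("in", message)
        if len(message) > MAX_BYTES:
            return False
        depth = 0
        try:
            for event, _ in ET.iterparse(io.BytesIO(message), events=("start", "end")):
                depth += 1 if event == "start" else -1
                if depth > MAX_DEPTH:
                    return False   # nesting bomb: logged, but rejected
        except ET.ParseError:
            return False           # malformed payload: logged, but rejected
        return True

sensor = ForensicSensor()
bomb = b"<a>" * 100 + b"</a>" * 100         # deeply nested payload
print(sensor.accept(bomb), len(sensor.log))  # False 1
```

Note the ordering: the oversized or deeply nested payload is already safely in the log before any parsing is attempted, so the very message that crashes the application layer remains available to investigators.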

Conclusion
This paper has discussed the value of facilitating post hoc forensic investigations of service oriented architectures. The challenges, both technical and social, which hold back the integration of data collection mechanisms to facilitate such investigations have been identified. It is clear that, in some form, these challenges will need to be overcome. It will be necessary to undertake investigations of service oriented architectures; the question is how effective, in cost and accuracy, such investigations can be. By including in the standards provision for effective methods of collecting and maintaining forensically significant information, the extra effort required will be the same for all parties. It will also allow the implications of such requirements to be considered before the urgency of an immediate need makes for ill-considered decisions. The integration of forensic requirements in the underlying standards will also allow the most efficient means to be defined and implemented. Future considerations include: the finer-grained details of the actual information requirements, how these requirements may best be integrated with SOAs in general, and the mechanisms that may be put in place to secure such information. As the SOA space matures and develops, it is essential that our ability to provide investigative and assurance services expands with it.

References
Australian Government. (2007). "Do Not Call Register." Retrieved 31 July 2007, from https://www.donotcall.gov.au/.
Beebe, N. L. and J. G. Clark (2005). Dealing with Terabyte Datasets in Digital Investigations. Research Advances in Digital Forensics. M. Pollitt and S. Shenoi. Norwell, Springer: 3-16.
ComputerWire. (2002). "Oracle posts fix - servers 'unbreakable' again?" Retrieved 31 July 2007, from http://www.theregister.co.uk/2002/02/08/oracle_posts_fix_servers_unbreakable/.
Cronin, E., M. Sherr, et al. (2006). On the Reliability of Network Eavesdropping Tools. Advances in Digital Forensics II. M. Olivier and S. Shenoi. Orlando, Springer: 199-213.
Hashimi, S. (2003, 18 August 2003). "Service-Oriented Architecture Explained." Retrieved 28 July 2007, from http://www.ondotnet.com/pub/a/dotnet/2003/08/18/soa_explained.html.
Jones, S. (2005). "Toward an acceptable definition of service [service-oriented architecture]." Software, IEEE 22(3): 87-93.
Marrington, A., G. Mohay, et al. (2007). Event-based Computer Profiling for the Forensic Reconstruction of Computer Activity. AusCERT Asia Pacific Information Technology Security Conference 2007, Gold Coast, AusCERT.
Mohay, G., A. Anderson, et al. (2003). Computer and Intrusion Forensics. Norwood, Artech House.

OASIS. (2004, 19 October 2004). "UDDI Version 3.0.2: UDDI Spec Technical Committee Draft." Retrieved 28 July 2007, from http://www.oasis-open.org/committees/uddispec/doc/spec/v3/uddi-v3.0.2-20041019.htm.
Office of the Privacy Commissioner. "Office of the Privacy Commissioner." Retrieved 31 July 2007, from http://www.privacy.gov.au/.
Parker, T., E. Shaw, et al. (2004). Cyber Adversary Characterization: Auditing the Hacker Mind. Rockland, Syngress Publishing Inc.
Peters, E. M., B. Burraston, et al. (2004). "An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma." Risk Analysis 24(5): 1349-1367.
Reuters. (2001, 12/10/2001). "Hackers take up Larry Ellison's challenge." Retrieved 31 July 2007, from http://www.usatoday.com/tech/news/2001/12/10/oracle-hackerschallenge.htm.
Shanmugasundaram, K., H. Brönnimann, et al. (2005). Integrating Digital Forensics in Network Infrastructure. Research Advances in Digital Forensics. M. Pollitt and S. Shenoi. Norwell, Springer: 127-140.
Singhal, A. and T. Winograd (2006). Guide to Secure Web Services (DRAFT). Recommendations of the National Institute of Standards and Technology, National Institute of Standards and Technology: 140.
World Wide Web Consortium. (2004, 11 February 2004). "Web Services Architecture - W3C Working Group Note 11 February 2004." Retrieved 27 July 2007, from http://www.w3.org/TR/ws-arch/.
Yu, W. D., D. Aravind, et al. (2006). Software Vulnerability Analysis for Web Services Software Systems. Computers and Communications, 2006. ISCC '06. Proceedings. 11th IEEE Symposium on.
Zimmerman, O., M. Tomlinson, et al. (2003). Perspectives on Web Services: Applying SOAP, WSDL and UDDI to Real-World Projects. Berlin, Springer-Verlag.



33
A Comparison of Media Response to Recent National Security Events
Holly Tootell
University of Wollongong, Australia
Abstract
The media coverage of national security events has a significant impact on the shaping of public opinion. This study has analysed a series of national security events and the subsequent newspaper coverage at specified periods after the time of the event. The comparison of coverage at these intervals uncovered interesting reflections on the dominant themes shaping public opinion. The results were analysed using qualitative content analysis software which created maps of the key concepts. It is the comparison of these maps that gives insight into the shifting attitude towards events of national security.

Biography
Holly Tootell is a Lecturer in the School of Information Technology and Computer Science at the University of Wollongong. Holly's primary research interest is the social implications of technology, with a focus on issues relating to national security. Prior to joining the university, Holly worked with ETC, Electronic Trading Concepts, as an e-commerce and privacy consultant. Holly is the Secretary of the Australian chapter of the IEEE Society on Social Implications of Technology (SSIT).

Introduction
It has been widely perceived that the media coverage of national security events has had a significant impact on the shaping of public opinion. This research examines September 11, the Bali bombing and the Jakarta embassy bombing in order to investigate the impact the media response to these events has had on public understanding and awareness of privacy, security and liberty. Each case begins with a description of the background to set the context of the event. The content analysis addresses the relationship between the event and the lifeworld shapers: privacy, security, liberty and terrorism. The idea of lifeworld can be understood as a world view created from what has happened in the past (Tootell, 2006). The results were analysed using qualitative content analysis software which created maps of the key concepts. It is the comparison of these maps that gives insight into the shifting attitude towards events of national security. The final section draws together the findings across the cases.

Justification of Study Selection


Terrorism is an issue that has been high in public consciousness since the September 11 attacks. One of the peculiarities of researching in this area is that subject selection is dependent upon the work of terrorists, and selection for inclusion is based on how successful the effort was, measured by the impact of the event in terms of casualties. The terrorist events selected for the study were chosen because of their focus on western targets. In each of the three cases, one of the aims was to devastate westerners, either on home soil or at symbolic western tourist locations.

Data Collection
The data used in this study comprises news articles retrieved from the Factiva database. The articles retrieved were published within specific time periods from the date of the event. This investigation is looking for influencing factors contributing to the definition of lifeworld. The quarterly intervals from the date of the event until the first anniversary provide the necessary breadth to see how reactions are changing and/or being shaped, through the collection of numeric counts of headlines related to the search issues. The newspapers were constrained to high-circulation papers from Australia, the United States of America and the United Kingdom. The papers included: The Australian, The Daily Telegraph (AUS), The Sydney Morning Herald, The New York Times, USA Today, The Wall Street Journal, The Daily Telegraph (UK), The Times (UK) and The Guardian. The selection of headlines was considered sufficient for this part of the study. Previous studies (Bauer and Bauer 1960; Marshall 1979; McQuail 1979; Martin 1995) have shown that headlines are specifically designed to attract the attention of readers, and so are suitable to use as an indicator of shaping opinions. The data collection is described in Table 1.


A Comparison of Media Response to Recent National Security Events

Event          Date        0-3 months               3-6 months               6-9 months              9-12 months
September 11   11/9/2001   11/9/2001 - 11/12/2001   12/12/2001 - 12/3/2002   13/3/2002 - 13/6/2002   14/6/2002 - 14/9/2002
Bali Bombing   12/10/2002  12/10/2002 - 12/1/2003   13/1/2003 - 13/4/2003    14/4/2003 - 14/7/2003   15/7/2003 - 15/10/2003
Jakarta        9/9/2004    9/9/2004 - 9/12/2004     10/12/2004 - 10/3/2005   11/3/2005 - 11/6/2005   12/6/2005 - 12/9/2005

Table 1. Data Collection Periods for Quarterly Headline Analysis

September 11, 2001


The September 11 terrorist attacks were a series of coordinated suicide attacks by Islamic extremists on the United States on 11 September 2001. Nineteen al-Qaeda terrorists hijacked four commercial passenger jet airliners. Two of the airliners (United Airlines Flight 175 and American Airlines Flight 11) were crashed into the towers of the World Trade Centre in New York City. A third airliner (American Airlines Flight 77) was crashed into the Pentagon in Arlington County, Virginia. The final airliner (United Airlines Flight 93) crashed into a field in Shanksville, Pennsylvania after passengers and flight crew attempted to take control of the plane from the hijackers. The attacks killed 2,973 victims in addition to the 19 hijackers, with a further 24 people missing and presumed dead (National Commission on Terrorist Attacks Upon the United States 2004).

Trends in media coverage

The initial impact of September 11 was understandably focused on the issue of terrorism, as illustrated in Figure 1. In the immediate aftermath, the severity and surprise of the attack dominated media focus as this unexpected new form of terrorism was realised. Second to this was the issue of security, whether in the form of homeland security, national security or generic references to the concept. The strong focus on these two issues indicates that the greatest initial effect was at a societal level. Terrorists had attacked the world's symbol of democracy, and in grappling with that, the impact of a terrorist attack and its implications for all aspects of security were foremost in the public consciousness. The more personal values of privacy and liberty were, in comparison, hardly addressed. From three months on, the focus across the four issues of terrorism, security, privacy and liberty stabilizes. Terrorism was still the strongest focus which, given the unexpected nature of the attack and its impact, is understandable. At the 9-12 month mark, the issue of security climbs slightly.
The increased focus on this issue reflects the stabilizing of responses to the event, where motivation is shifting away from revenge to the strong need for acceptance and a way to move forward. Throughout the 12 month period, the issues of privacy and liberty remain steady, however at relatively insignificant percentages in comparison to the focus on terrorism.


Recent advances in security technology

[Chart: "Proportion of Articles by Interval - September 11" - percentage of total articles plotted against time interval (months) for the Sept 11 privacy, security, terrorism and liberty headline series.]
Figure 1. September 11 Headlines


What the newspapers were reporting

The following section looks in more detail at the key themes identified in the collected data. Figure 2 represents the key ideas in the media reports of the week of the event. The primary themes identified in the content analysis are attack and towers. These themes are representative of the initial emotional reaction to the event. Many of the media reports at this time examined the mechanics of the event, and the factual accounting detailed times and related events over the day. The subject of a culture of fear was present, as was the threat to liberty and to all things linked to American and Western civilization. There was also recognition of a growing rhetoric of war and propaganda. Revenge is spoken of in terms of overcoming the threat to liberty and democracy. These reflect the use of a language of war, and align closely with the shift in political thinking caused by September 11. It should be noted that privacy does not feature as one of the concepts identified in the initial reaction.

Figure 2. September 11 Week of Event


Figure 3. September 11 6 Months After Event


The following data is from the six month anniversary of the event. There is a distinctly different feel to the sentiments being expressed. Figure 3 shows the major themes of US and life, which reflect the stage of the recovery. There is still a strong focus on war and weapons. The events of September 11 and the War on Terror have become synonymous. The concept of security is present in both Figure 2 and Figure 3. In Figure 2, security is connected to the attacks. In Figure 3 it is aligned with more personal terms: people, life, New York. This reflects the shift in understanding from the time of the event, when security was an immediate consideration in terms of infrastructure, to personal security. The media coverage points towards a need for justification and revenge for the attacks. There is significant debate on the morality of the decision to invade Iraq and the consequences it might have on the global stage. There is recognition that government initiatives may well be out of alignment with the personal lifeworld experience. The urgency with which the government focused on avenging the attack on home soil was not matched by the same urgency in public sentiment.

Figure 4. September 11 One Year After Event


The third series of data, explored in Figure 4, is the one year anniversary of the event. As in the six month data, there is a continued focus on the war, balanced with remembrance of the attacks. The media reports indicate an increasing move towards acceptance and self-awareness. The heat of the government rhetoric on war has cooled somewhat, giving people room for a more balanced perspective of their lifeworld to be both understood and lived.

Bali Bombing
The 2002 Bali Bombing occurred on 12 October 2002 in the tourist district of Kuta, Bali. It was the highest-fatality terrorist attack in the history of Indonesia, with 202 people killed. Of those, 164 were foreign nationals, including 88 Australians. Another 209 people were injured. The attack was made with two bombs: a backpack-mounted device carried by a suicide bomber and a large car bomb, both detonated near popular Kuta nightclubs. A third, smaller bomb was detonated outside the United States consulate in Denpasar (Australian Federal Police 2006).

Trends in media coverage

The Bali Bombing, occurring so close to the first anniversary of the September 11 attacks, stirred public emotion and reaction to terrorism to a very high level, as indicated by the 0-3 month data (see Figure 5). Again, terrorism and security were the issues of highest media focus. Of interest in the Bali Bombing data is the change in the reaction pattern over time. There is a much steeper decline in the awareness of terrorism and security to the 3-6 month mark, and then a steady increase over time in the focus on security and terrorism. The recovery time from the event was quicker than for September 11, and the increase in focus on terrorism and security over the later periods indicates that the issues rose in importance once the initial emotional reaction had dulled.
[Chart: "Proportion of Articles by Interval - Bali Bombing" - percentage of total articles plotted against time interval (months) for the Bali bomb privacy, security, terrorism and liberty headline series.]

Figure 5. Bali Bombing Headlines


The issues of privacy and liberty are negligible in comparison to security and terrorism. This suggests two positions: firstly, that the media are more interested in conflict than peace; or secondly, that by not being represented in the same way, privacy and security cannot be contemplated concurrently. The focus throughout this event and the following 12 months indicates, in terms of the lifeworld, that social consciousness was still centred on the impact of the event, rather than on implementation changes that could assist in coping with similar future events.

What the newspapers were reporting

Figure 6 represents the key ideas occurring in the media reports of the week of the event. The primary themes identified in the content analysis are Bali and life. In this context life refers to the raised awareness of the sanctity and frailty of human existence. The focus on Bali is event-driven, with most of the surrounding attention given to the attacks and talk of terrorism. The issue of life in this terrorist event is of significance because of the area targeted: a popular Western holiday town where families choose to holiday, a vibrant tourist destination. The primary reaction in the media collected is aligned with the reaction to September 11, in that most early reporting focused firstly on the mechanics of the attack and then on the victims. There is little reflective detail regarding the attack.

Figure 6. Bali Bombing Week of Event

Figure 7. Bali Bombing 6 Months After Event

The six month snapshot of lifeworld forces (Figure 7) shows a theme of Bali and people. Although war and Iraq are still high in public awareness, there is a strong connection with the people and the impact those people can have going forward.


Figure 8. Bali Bombing One Year after Event


The one year anniversary of the Bali Bombing reflects a sense of acceptance of the events and the due process that followed. Figure 8 highlights the focus on the event and its victims. Mubarok, one of the accused perpetrators, appears as a result of the court case that was occurring at the time.

Jakarta Bombing
The bombing of the Australian embassy in Jakarta took place on 9 September 2004. At 10:30am local time, a car bomb exploded outside the Australian embassy, killing 11 people including the suicide bomber. Over 140 others were wounded, mainly by broken glass from the blast. No Australian embassy staff were killed, but an embassy guard, Anton Sujarwo, and four Indonesian policemen were among the dead (Moore and Rompies 2004).

Trends in media coverage

The Jakarta Bombing was a much smaller terrorist attack compared to September 11 and the Bali Bombing, and the newspaper reaction is noticeably different from the previous two events (see Figure 9). In the initial three months after the attack, terrorism and security were again the two most reported issues. The graph shows that these plummet to negligible percentages in the following six months, with only a slight upturn in the final period. Without the same magnitude of disaster as the two previously analysed attacks, the public awareness through the media indicates that although the event had high impact at the time, there was a swift recovery, without it persisting as a long-term issue in the media, and therefore as a shaping factor in lifeworld building. The minor awareness of liberty and privacy is consistent with the previous terror events.


[Chart: "Proportion of Articles by Interval - Jakarta Bombing" - percentage of total articles plotted against time interval (months) for the Jakarta privacy, security, terrorism and liberty headline series.]

Figure 9. Jakarta Bombing Headlines


What the newspapers were reporting

The Jakarta Bombing, given the significantly fewer casualties, did not have the same level of emotional impact as the previous events. The two main themes identified in the media reports were the attack and Manny, a young girl injured in the attack who lost her mother. The reporting and significance of the attack are reflected in the focus on the event and the people involved, as shown in Figure 10.

Figure 10. Jakarta Week of Event

Figure 11. Jakarta 6 Months After Event

The six month analysis of media regarding the Jakarta Bombing (Figure 11) reveals a reaction similar to the Bali Bombing, in that emotion and anger toward the event are controlled and of little consequence to maintaining a sense of normality. The role of the police is significant in that it reinforces the notion of the justice system in action, recovering some sense of control over events that cannot be controlled.

The one year anniversary reflects a different side of the recovery process, leading to the trial of one of the accused attackers. The concept of terrorism appears more strongly in the one year anniversary data than it did in the previous periods. As the trial progressed, interest in the terrorist connections was reignited, as were the facts of the case and the issue of suicide attacks.

Figure 12. Jakarta One Year After Event

Conclusion
It is of interest that the theme of terrorism follows a similar response trend throughout each event. In the Jakarta Bombing, the return to a level of awareness in line with that before the event is sustained over a longer period of time. Awareness of security is identified throughout each of the cases as an issue of great importance. Throughout these cases, security has taken on two main meanings: national/homeland security and personal security. It is an issue that has shown a similar trend across all events, most notably an upturn at the point of the one year anniversary of each event. The content analysis showed that security was an issue that featured strongly in initial reactions, yet it has a longevity that comes with being considered an issue and/or solution as responses to events of national security are reviewed over time. Given that it is usually a breach in security measures which has allowed the event to occur in the first place, the initial heightened focus makes sense. This supports the idea of security being a measure that can be taken up in planning how to respond, should another event occur.

In terms of lifeworld impact, security is an issue that tends to change from a fear-based response to a confident assurance over the course of the reaction to an event. The more assured a person is of their security, on both a personal and a national level, the more positive the focus might be on adopting and implementing national security initiatives. The prime example of this is the reaction to September 11, where the focus on war detracted from the attainment of security by inflaming other political relationships, therefore putting society at a potentially higher level of risk. However, the alternate position is that if there is a strong feeling of security, there might be hesitation toward adopting an initiative that could be seen to be unnecessary.

Privacy and liberty, as concepts shaping the lifeworld, have had little impact in the cases described in this chapter. With the exception of the Jakarta Bombing, they remained in public consciousness, albeit at low levels, for the duration of the year after each event. The significance of these findings in relation to the power of the media can be seen in the way that privacy and liberty were disregarded by the press. McLuhan (2003) strongly argues that the press exists to give the public what they think they want to hear. It can be very difficult to filter fact from fiction in the media, especially when the topic is one that stirs up fear or other strong emotions that colour the reaction. By using media coverage as the input for this study, a basis for understanding public reaction to and perception of these events has been provided. In all events there have been noticeable trends in attitude toward various components of the event when looked at over a period of twelve months.

References
Australian Federal Police (2006). Bali Bombings 2002, accessed 1 May 2007, http://afp.gov.au/international/operations/previous_operations/bali_bombings_2002
Bauer, R. and Bauer, A. (1960). "America, mass society and mass media." Journal of Social Issues 16: 3-66.
Marshall, E. (1979). "Public attitudes to technological progress." Science 205: 281-285.
Martin, C. D. (1995). "ENIAC: press conference that shook the world." IEEE Technology and Society Magazine 14(4): 3-10.
McLuhan, M. (2003). Understanding Media: The Extensions of Man. Gingko Press, Corte Madera.
McQuail, D. (1979). "The influence and effects of mass media." In J. Curran, M. Gurevitch and J. Woollacott (eds), Mass Communication and Society. Beverly Hills: Sage, Ch. 3.
Michael, K. and Masters, A. (2006). "Realized applications of positioning technologies in defense intelligence." In H. Abbass and D. Ellam (eds), Applications of Information Systems to Homeland Security and Defense. USA: IDG Press: 196-220.
Moore, M. and Rompies, K. (2004). "9 dead, 161 hurt in Jakarta bombing." Sydney Morning Herald, 10 September.
National Commission on Terrorist Attacks Upon the United States (2004). The 9/11 Commission Report. U.S. Government Printing Office.
Tootell, H. (2006). "The application of critical social theory to national security research." Prometheus 24(4): 405-411.


34
Security Strategies for SCADA Systems
J. Wang and X. Yu
RMIT University, Australia
Abstract
The integration of traditional SCADA systems and modern corporate IP networks has brought many benefits, along with the security threats faced in the IT world. Sophisticated security mechanisms and security management are crucial for the secure and smooth operation of today's integrated SCADA systems. In this paper, security threats and vulnerabilities in SCADA systems are categorized and analysed. Security technologies such as firewall architectures in SCADA networks and Modbus/TCP filter firewall designs are presented. Also addressed are access control, authentication, intrusion detection and web-based SCADA system security. Future challenges and prospects for SCADA security strategy and management are discussed.

Biographies

Dr Jidong Wang (BE, ME, PhD) is a senior lecturer in the School of Electrical and Computer Engineering, RMIT University, Australia. He worked as a senior specialist at OmniVision Technology, USA, and at the Ericsson Asia Pacific Laboratory before joining RMIT in 2003. His current research interests include industrial information security, IP network QoS, biometrics and data compression. Dr Wang has published over 40 papers in international conferences and journals. He has also served as a PC member for a number of international conferences in the areas of multimedia, networks and web technologies in recent years.

Xinghuo Yu (BE, ME, PhD) is Professor of Information Systems Engineering and the Director of the Platform Technologies Institute at the Royal Melbourne Institute of Technology (RMIT), Australia. His research interests include intelligent systems and control, and industrial information technologies. He has published over 300 papers in journals, books and conference proceedings and co-edited 9 research books. Prof Yu has served as an Associate Editor of IEEE Transactions on Circuits and Systems Part I (2001-2004), IEEE Transactions on Industrial Informatics (2005-present), IEEE Transactions on Industrial Electronics (2007-present) and several other journals. He is a Fellow of the Institution of Engineers Australia.

1. Introduction
Protecting critical infrastructures such as power, food, gas and water systems is of paramount importance. Underlying the operation and management of these infrastructures are process control systems (PCS), which have increasingly been integrated with today's advanced computer networks. Along with benefits such as effective, efficient and flexible system operation, information processing, sharing and management, this integration has also brought in the security threats faced by the Internet and computer networks. Due to the sophistication and special features of industrial PCS, these security issues have not been fully addressed in general systems and network security research. However, there is an urgent need to address the problem due to its threat to the productivity and safety of industrial systems. For example, in 2005 the Zotob worm penetrated the firewall of the SCADA system at Holden's Australian car factory, resulting in a $6 million loss overnight. Studies on industrial infrastructure and information security have started appearing recently.

A supervisory control and data acquisition (SCADA) system is a process control system that monitors and controls devices in an industrial environment. Traditional SCADA systems and Process Control Networks (PCN) were designed for monitoring and controlling field devices or industrial processes using proprietary protocols and operating systems in stand-alone network environments. The monitored and controlled objects in a SCADA system could be switches, valves, the temperature of a component or the pressure of a pipe, to name a few. A SCADA system continuously monitors and collects field data from many points. An authorized operator of the system can send commands to a device, for example to shut down an automation process in a factory, or read the data collected from a specified device. Generally, the SCADA system involves two kinds of operations, namely monitoring and controlling.
On top of that there are information storage and access via databases. With network evolution, most SCADA systems are integrated with TCP/IP-based corporate IT networks, which are generally connected to the outside world via the Internet. Therefore, SCADA system operations and system information are subject to potential attacks from anywhere in the Internet world. The threats can be classified into active attacks, which include hijacking SCADA system control and modifying or damaging the SCADA information base, and passive attacks, such as eavesdropping on SCADA system control traffic and intercepting SCADA information access. Within the SCADA systems, the various communication links between field devices and control centres are also exposed to potential attacks. The existing SCADA systems represent huge investments accumulated across various industries over many years. It is neither practical nor economical to replace them with new secure SCADA systems, let alone that the security issues may not yet be fully addressed in new SCADA systems. The best approach is to add security mechanisms to the operation of existing SCADA systems to gradually improve their security. Our discussion of SCADA security strategies is based on that consideration.


2. SCADA Network Review


According to McClanahan (2003), SCADA networks have gone through three generations of evolution. The first generation concentrated on computing and communication between the central station and field devices or remote terminal units (RTUs) based on proprietary protocols. The system was virtually monolithic and standalone, with virtually no connectivity to the outside world. If there were external connections, they would be through simple interfaces such as RS-232, based on proprietary protocols. There were no network security problems in the first generation as these systems were essentially not networked.

Network security became an issue only when the second generation arrived. Local area networks (LANs) are the backbone of second generation SCADA systems. Multiple workstations are interconnected in a system to handle communication with RTUs and to provide database services and human-machine interfaces (HMI). As the LAN protocols and the communication protocols with RTUs are generally proprietary and the physical layout of the LAN is limited, the security concern is confined within the enterprise's premises.

The third generation has adopted open network standards for the communication among all the workstations in the SCADA systems. The operating systems are generally popular commercial systems and many off-the-shelf software products are used. Most importantly, the SCADA LAN and corporate general IT networks are connected using widely accepted WAN protocols, such as TCP/IP. As most corporate IT networks are connected to the outside world (e.g. the Internet), the SCADA systems are exposed to a certain degree even when security mechanisms are deployed. Our discussion surrounds the third generation, which represents most of today's SCADA systems in use and poses the greatest security challenge. The network context of a SCADA system has three parts, namely, the SCADA LAN, the corporate IT network and the global Internet.
In the SCADA LAN, the communications between control stations (or master stations) and RTUs mostly still follow client-server protocol models, while the communications among the control stations generally follow peer-to-peer models. There were hundreds of proprietary SCADA protocols developed by individual SCADA vendors. The number of protocols has dramatically reduced as open standards are gradually introduced to improve performance and interoperation. However, there is still a large number of protocols out there as different organizations develop and push their own standards; MODBUS TCP/IP and PROFIBUS are two of them. The variety of SCADA protocols has made it impossible to find a universal security mechanism solution. However, the security threats originating from the connection to the corporate IP networks are common.
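To make the discussion concrete, the sketch below builds a Modbus/TCP "Read Holding Registers" request from the published frame layout (MBAP header followed by the PDU). The transaction id, unit id, register address and quantity are arbitrary illustrative values, not taken from any particular deployment.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    Frame = MBAP header (transaction id, protocol id 0, length, unit id)
            followed by the PDU (function code, start address, quantity).
    All fields are big-endian per the Modbus/TCP specification.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    # The MBAP 'length' field counts the unit-id byte plus the PDU bytes.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu

# A 12-byte request: read 3 registers starting at 0x006B from unit 17.
frame = modbus_read_holding_registers(1, 17, 0x006B, 3)
```

Because such a request carries no authentication whatsoever, anyone who can deliver this frame to TCP port 502 of a device can read (and, with other function codes, write) its registers, which is the root of the security concerns discussed next.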

3. Security Threats in SCADA Networks


SCADA system threats can come from inside and outside. An unhappy insider within a corporation could conduct malicious actions and launch various attacks from the corporate LAN, and this is very difficult to prevent. However, a recent survey (Igure, 2006) shows that it is attacks from outside that are on the sharp rise. Based on statistics provided by the British Columbia Institute of Technology in Canada, since 2001, 70% of the reported incidents on industrial networks have been outside attacks. As long as the corporate networks are connected to the outside world, whether via a single-point or multi-point connection, through a phone line or broadband, the threat from outside is there, especially for critical infrastructures.

Similar to general network security (Stallings, 2003), SCADA security threats can be classified as passive attacks and active attacks. Passive attacks include eavesdropping, i.e. interception of message content, and traffic analysis, by which the nature of message contents can be guessed by observing the patterns of the messages. Passive attacks are generally very difficult to detect as they do not involve alteration of data and commands. Active attacks involve modifying message contents and creating fake messages (or commands). The four main categories of active attacks are masquerade, replay, modification of messages and denial of service (DOS). A masquerade attack takes place when one entity (or user) pretends to be a different entity to gain privileged access to some data or to initiate dangerous actions in the name of the hijacked entity. Replay involves the passive capture of messages (generally commands) and their later retransmission to produce an unauthorized effect. Modification of messages means that the messages have been altered and their integrity lost. A DOS attack generally targets a specific service in a network: the attack may suppress all messages to a particular destination, e.g. a database server, or disrupt a service either by disabling the network or by overloading it with messages to degrade performance.
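A standard countermeasure to the replay and modification attacks described above is message authentication. The following is a simplified illustrative sketch, not a mechanism from any deployed SCADA protocol: each command carries a monotonically increasing sequence number and an HMAC tag computed over the sequence number and payload with a pre-shared key. The key and command strings are hypothetical.

```python
import hmac, hashlib

SECRET = b"shared-key"  # hypothetical pre-shared key between master station and RTU

def sign(seq, command):
    """Frame a command as: 4-byte sequence number + payload + 32-byte HMAC-SHA256 tag."""
    msg = seq.to_bytes(4, "big") + command
    return msg + hmac.new(SECRET, msg, hashlib.sha256).digest()

def verify(frame, last_seq):
    """Return (seq, command) if the frame is authentic and fresh, else None."""
    msg, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, msg, hashlib.sha256).digest()):
        return None  # modified message: tag does not match
    seq = int.from_bytes(msg[:4], "big")
    if seq <= last_seq:
        return None  # replayed message: sequence number is not fresh
    return seq, msg[4:]

frame = sign(1, b"OPEN_VALVE_7")
assert verify(frame, 0) == (1, b"OPEN_VALVE_7")
assert verify(frame, 1) is None                          # replay rejected
tampered = frame[:4] + b"CLOSE_VALVE7" + frame[-32:]
assert verify(tampered, 0) is None                       # modification rejected
```

The HMAC tag defeats modification and masquerade (without the key, valid tags cannot be forged), while the sequence-number check defeats replay; eavesdropping would additionally require encryption, which this sketch does not attempt.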
Active attacks can be detected and the damage can be reduced, but they are very difficult to prevent absolutely unless the protected networks are physically disconnected from the global network. The adoption of standard network protocols and off-the-shelf commercial software products in SCADA networks has made it easy for attackers to gain in-depth knowledge about the operation of these SCADA networks. The security holes in these standards and products can be exploited by attackers to intrude into SCADA networks. Described in the following are some attack scenarios. If an attacker can intrude into the SCADA network, he can sniff the data transmitted across the network, as most SCADA protocols do not provide any kind of cryptography, and thereby learn about the data and commands. With the information intercepted, he could later send a control command to a device; the result could be disastrous. An attacker could gain unauthorized access to databases in a SCADA network, where changes to the data could lead to wrong configurations or settings of some RTU devices. An attacker, once having penetrated the network, could launch a DOS attack against database services in a SCADA network, which could lead to the collapse of network operations.

The SCADA network security issue can be addressed through security mechanisms and security strategy. The former includes access control and authentication. This is the first defense against attacks, especially intrusion from outside. Security mechanisms should also address the internal security of the network, so that an insider or an intruder will find it difficult to carry out any attack. This can be done by deploying security monitoring tools which can detect strange activities and suspicious behaviors; an intelligent control system could take action to alert system operators or stop suspicious operations. Security strategy and management covers broad areas, from good password practice to security regulations for all SCADA network users. Our discussion of SCADA security mechanisms starts with firewall deployment in the following section.

4. Firewalls in SCADA Networks


Firewalls are security mechanisms widely used in general IT networks. Their function is to control and monitor the traffic to and from the protected networks. Firewalls are generally implemented as hardware/software units, although they can also be software components installed in some network devices, such as gateways or workstations. From a functionality point of view, firewalls can be classified into the following classes:

Packet filter firewall: Static rules are applied in this kind of firewalls. The rules could specify what kind of network traffic can get through. The rules can apply to the network protocols, IP addresses and port numbers of packet sources and destinations. This is the simplest class of firewalls and can be deployed directly in layer-3 devices, such as switches and routers. Stateful Firewall: A firewall which can track the history of the traffic through the firewall. It is more sophisticated than the static packet filter firewall. With its memory on the history, it can form the context of outgoing or incoming packets. The control of the traffic through the firewall is based on the intelligent analysis of the state of the traffic. Application Proxy Firewall: A firewall can perform more specific control over certain application or protocols, such as HTTP. Application firewalls can be combined with stateful firewalls to form hybrid firewalls. Deep Packet Inspection (DPI) Firewall: A firewall can look into the application layer of the packets and make decision to control the traffic. DPI firewalls are more advanced than proxy firewalls, the later do not inspect deep into the application data.

Other firewalls may offer functions such as intrusion detection (IDS), anti-virus scanning, authentication, virtual private networking (VPN) or network address translation (NAT). Deployment of firewalls in SCADA networks requires choosing appropriate firewalls and an appropriate firewall architecture. The UK government's National Infrastructure Security Coordination Centre (NISCC, 2005) has published firewall deployment guidelines, which propose a number of firewall deployment architectures and present an evaluation of these architectures on security, scalability and manageability. From the security point of view, NISCC's recommendation favours a 3-zone architecture, in which the interconnection of the SCADA network (or PCN) and the Enterprise Network (EN) has a firewall deployed on each side. A Demilitarized Zone (DMZ), situated in between, has a switch connected to both firewalls and a critical component such as a database (Historian). As shown in Figure 1, the DMZ, an intermediate network, is also referred to as a Process Information Network (PIN). The firewall on the EN side will block arbitrary packets from proceeding to the PCN and the historian, and the firewall on the PCN side can prevent unwanted traffic from a compromised server from entering the PCN. If these two firewalls are from different manufacturers, the protection of the PCN against attacker penetration is naturally stronger.

Figure 1. Paired Firewalls with Demilitarized Zone for Shared Enterprise/PCN Assets (NISCC, 2005)
It would be desirable for the firewall on the PCN side to recognize SCADA protocols, but so far few SCADA-aware DPI-type firewalls have been reported. Cisco has developed a Modbus firewall, a Linux/Netfilter extension that can filter on Modbus/TCP, a TCP-based SCADA protocol (Pothamsetty, 2004). It is claimed to be the first implemented firewall for SCADA protocols; more SCADA protocol support is needed from firewall manufacturers. NISCC's guide recommends twelve general practice rules for the PCN-side firewall. Each SCADA system has a different network configuration and control circumstance, so the rules provide a reference only, and special rules are always needed to address special environments. Ideally, an Intrusion Detection System (IDS) combined with a PCN firewall could provide stronger protection of the PCN. The design of an IDS requires deep knowledge of the SCADA protocols and SCADA operations: the IDS should be able to detect suspicious behaviour even when the operations are legal within the protocols. Such an IDS is more than a combination of a stateful firewall and a DPI firewall; it should target one specific type of SCADA network, and its design requires expert knowledge of that network and its operation.
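A SCADA-aware DPI check in the spirit of the Modbus/TCP Netfilter extension can be sketched as follows. The frame layout follows the published Modbus/TCP framing (a 7-byte MBAP header followed by the function code); the choice of allowed function codes and the policy itself are illustrative assumptions, not taken from any product.

```python
import struct

# Sketch of a deep-packet-inspection check for Modbus/TCP. The MBAP header
# is: transaction id (2 bytes), protocol id (2), length (2), unit id (1),
# then the PDU starting with the function code. The read-only whitelist
# below is an illustrative policy choice.

READ_ONLY_FUNCTIONS = {1, 2, 3, 4}  # read coils/discrete inputs/registers

def inspect_modbus(payload):
    """Allow the frame only if it is well-formed Modbus/TCP and read-only."""
    if len(payload) < 8:
        return False
    transaction_id, protocol_id, length, unit_id = struct.unpack(">HHHB", payload[:7])
    if protocol_id != 0:            # Modbus/TCP mandates protocol identifier 0
        return False
    if length != len(payload) - 6:  # length field covers unit id + PDU
        return False
    function_code = payload[7]
    return function_code in READ_ONLY_FUNCTIONS

# Read Holding Registers (function 3), address 0, count 1:
read_frame = struct.pack(">HHHBBHH", 1, 0, 6, 1, 3, 0, 1)
print(inspect_modbus(read_frame))   # True

# Write Single Register (function 6) would be rejected by this policy:
write_frame = struct.pack(">HHHBBHH", 2, 0, 6, 1, 6, 0, 99)
print(inspect_modbus(write_frame))  # False
```

Blocking write function codes at the PCN boundary while permitting reads is one plausible rule for a monitoring-only path; a production deployment would need per-device and per-register policies.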

Recent advances in security technology

6. Security Strategy and Management


A firewall is an important SCADA security mechanism, but it is only part of the solution. Sound security for a SCADA network requires a complete security strategy and management regime to be adopted. General IT security strategy and practice can address most of the security concerns in SCADA networks; the remaining issues are mostly related to specific deployments and industrial sectors. Apart from the individual security strategies, good security management is equally important. The following discussion covers both security strategy and security management.

6.1 Access Control and Authentication

Access control is the primary security mechanism for ensuring that only authorized users can gain access to a SCADA network. A password system is generally the front line of access control in most SCADA systems, and advanced user authentication can strengthen that control. A general password system issues a user ID and password pair to a legitimate user, who uses it to gain access to a network or a service. Users should be classified into different classes, and a user should only be able to access certain information or carry out the permitted operations defined for his or her class. The vulnerability of a password system comes from the selection and protection of passwords. A poorly selected password can easily be guessed or found by exhaustive trial. Good practice in password selection, generation, delivery and storage is a big factor in any network's security: no matter how sophisticated or secure a network is from the design point of view, if there are loopholes on the users' side the network is at great risk of intrusion by an attacker using a hijacked ID and password. The protection of users' passwords in SCADA systems should also follow established encryption and authentication schemes such as the salted password scheme in UNIX systems (Stallings, 2003).
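The salting idea cited above can be sketched briefly. This is a minimal illustration, not the historical UNIX implementation: it uses an iterated hash (PBKDF2) with a random per-user salt, and the password string is of course invented.

```python
import hashlib, hmac, os

# Sketch of salted password storage. Only the salt and digest are kept,
# never the clear-text password; the random salt defeats precomputed
# (dictionary/rainbow) tables, and iteration slows brute-force guessing.

def store_password(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = store_password("operator-pass")     # hypothetical password
print(verify_password("operator-pass", salt, digest))  # True
print(verify_password("guess", salt, digest))          # False
```

Even if an attacker obtains the stored (salt, digest) pairs, each password must be attacked individually, which is exactly the property the text describes.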
These schemes can prevent an attacker from finding an individual user's password even if he has obtained the protected password file. SCADA networks should avoid keeping any passwords in clear text in any password file. User authentication based on passwords alone is a very basic approach, even though it is the most widely used method in practice: if an attacker has obtained a user's ID and password, he can enter the network under the stolen identity and carry out various attacks on the system. Using smart cards or biometrics, including fingerprint, facial recognition or speaker recognition, can enhance the authentication, although there is a cost associated with these technologies. The deployment of advanced authentication on current SCADA networks also faces many restrictions. It has to occur at the control station layer or above, as the field devices, such as RTUs, do not have enough computing power to support authentication of the individual users who communicate with them. This also reveals a vulnerability of RTUs: they receive and respond to commands regardless of where those commands come from. Smart RTUs should have the capability of access control at the field level, i.e. giving specific operation permissions to different individuals or groups of users. In that case, if the links between the control stations and field devices are secure, the control
station can authenticate the users on the RTUs' behalf. However, these links, in many cases (wireless links, for example), are not secure. Link security is therefore an important part of SCADA security.

6.2 Link Security

Traditional SCADA protocols do not support any cryptography, and it is very difficult to add encryption and authentication to most existing field devices: the link data rate is generally low, the system is real-time in nature and the RTUs' computing power is very restricted. There are moves towards developing a new generation of SCADA links. Igure et al. (2006) suggested adopting encryption protocols similar to those implemented in Wireless Sensor Networks (WSN) on SCADA links. The American Gas Association (AGA) is also developing a set of secure SCADA protocols including encryption and key management. These link security protocols target the integrity and confidentiality of messages or commands passed over the SCADA links. Key management is a difficult issue in secure SCADA protocols: as Igure et al. (2006) point out, in some SCADA networks, such as electricity distribution or gas pipelines, many field devices are left in the open without any physical protection, and keys stored in such devices are vulnerable to attackers. Research on secure SCADA protocols is a challenging task.

6.3 Intrusion Detection

Intruders into SCADA networks can be classified into three classes: the masquerader, the misfeasor and the clandestine user. A masquerader is an outside attacker who gains access to the system. A misfeasor is an insider who gains unauthorized services or privileges. A clandestine user is an outside or inside attacker who seizes supervisory control of the system and takes over control of auditing. Three approaches are widely discussed for general system intrusion detection (Stallings, 2003):

Statistical anomaly detection: A profile of activities for each user or class of users is defined and used to detect suspicious behaviour. Normally, thresholds are defined for the frequency of occurrence of various events, and exceeding these thresholds triggers a detection alarm.

Rule-based detection: A set of rules is defined that can be used to decide whether a given behaviour, or sequence of behaviours, comes from an intruder.

Honeypots: Honeypots are decoy systems designed to lure a potential attacker away from the critical systems. A honeypot can collect information about the attacker's activities and encourage the attacker to stay on the system long enough for administrators to respond.
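The threshold mechanism of statistical anomaly detection can be sketched as follows. The event names and profile values are invented for illustration; a real profile would be learned from observed operator behaviour.

```python
from collections import Counter

# Sketch of frequency-threshold anomaly detection. PROFILE holds the
# maximum expected hourly count per event type for a user class; any
# event type not in the profile is treated as always suspicious.

PROFILE = {"login_failure": 3, "config_write": 5, "setpoint_change": 20}

def detect_anomalies(events):
    """Return event types whose count exceeds the profile threshold."""
    counts = Counter(events)
    return sorted(e for e, n in counts.items() if n > PROFILE.get(e, 0))

events = ["login_failure"] * 5 + ["setpoint_change"] * 4
print(detect_anomalies(events))  # ['login_failure']
```

Five login failures exceed the hypothetical threshold of three and raise an alarm, while four setpoint changes stay within the normal profile.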

No general SCADA network intrusion detection products have been reported so far. The design of such a system would require a lot of in-depth information about the SCADA system and its operation. This should be an interesting research and development challenge.
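Returning to the link security protocols of section 6.2, their message-integrity goal can be sketched with a keyed message authentication code. The frame layout, key handling and command name here are all invented for illustration; real protocols such as the AGA work must also solve key distribution and replay protection in much greater detail.

```python
import hashlib, hmac, struct

# Sketch of per-message authentication for a low-bandwidth SCADA link:
# a 4-byte sequence number (to resist replay) and a truncated HMAC tag
# are wrapped around each command. Key and framing are hypothetical.

KEY = b"pre-shared-demo-key"  # in practice, securely provisioned per RTU

def frame_command(command, sequence):
    header = struct.pack(">I", sequence)
    tag = hmac.new(KEY, header + command, hashlib.sha256).digest()[:8]
    return header + command + tag  # truncated tag suits low link data rates

def verify_frame(frame, expected_sequence):
    header, body, tag = frame[:4], frame[4:-8], frame[-8:]
    if struct.unpack(">I", header)[0] != expected_sequence:
        return None  # stale or replayed frame
    good = hmac.new(KEY, header + body, hashlib.sha256).digest()[:8]
    return body if hmac.compare_digest(good, tag) else None

frame = frame_command(b"OPEN_VALVE_3", 42)
print(verify_frame(frame, 42))  # b'OPEN_VALVE_3'
print(verify_frame(frame, 43))  # None (wrong sequence number)
```

An RTU holding the shared key can thus reject forged or replayed commands, which addresses the integrity goal of the text; confidentiality would additionally require encryption.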
6.4 Web Based SCADA Security

Web technologies have been adopted in many SCADA systems for easy network access and operation (Fan, 2005). Generally, web interfaces are used for access to SCADA databases and can also be used for system operations such as sending commands to field devices. Apart from real-time considerations, however, security is a critical factor for a web based SCADA interface. The HTTP protocol itself does not provide any security mechanism. Fortunately, security protocols around HTTP have been developed and have proven successful in many web applications, such as e-commerce and Internet banking. The Secure Socket Layer (SSL) and Transport Layer Security (TLS) protocols are the dominant web security protocols using encryption; however, they address only the confidentiality and integrity of the connection between web clients and web servers. Security threats to a web interface can also come from other directions, such as authentication weaknesses, denial of service (DoS) and Trojan horse browsers. Security mechanisms on SCADA web interfaces should be integrated with other mechanisms to provide maximum protection for the system. The location of the web server is an important consideration in the SCADA firewall architecture design. In the dual firewall structure shown in Figure 1, the DMZ (or PIN) is a good candidate location for such a web server. Corporate network users can then access the service through the firewall, which will filter unauthorized visits based on rules, IP addresses and port-number checking. For visitors from outside, direct access to the web server should be avoided; a proxy server (or firewall) in the corporate network can be set up to provide maximum access control and authentication for any visit from outside. New web technologies such as XML, SOAP and web services are rapidly being adopted in web applications, and web based SCADA networks will follow this trend. Security around these new technologies needs fresh attention.
For example, XML could be used to transmit and store SCADA data, so secure storage and communication in XML format will be extremely important for XML based SCADA networks. Research work in these specific areas has not yet been reported. However, XML based IP network management is a hot topic, and products have already been released (Cisco). There is an analogy between SCADA networks and IP network management, as both involve device monitoring, device control and information (or data) base operation. Similar concepts would apply to their security considerations.
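A minimal sketch of representing SCADA telemetry in XML follows. The element and attribute names (`reading`, `rtu`, `quality`, and so on) are hypothetical, not drawn from any standard schema.

```python
import xml.etree.ElementTree as ET

# Sketch of serializing one telemetry reading to XML and parsing it back.
# In a secured deployment this document would additionally be signed
# and/or encrypted before storage or transmission.

def reading_to_xml(rtu_id, tag, value, quality):
    reading = ET.Element("reading", rtu=rtu_id, quality=quality)
    ET.SubElement(reading, "tag").text = tag
    ET.SubElement(reading, "value").text = str(value)
    return ET.tostring(reading, encoding="unicode")

doc = reading_to_xml("RTU-07", "pump.pressure", 4.2, "good")
print(doc)
parsed = ET.fromstring(doc)
print(parsed.find("value").text)  # 4.2
```

The round trip shows why XML appeals for SCADA data exchange: it is self-describing and tool-friendly, which is precisely what makes securing its storage and transport important.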

7. Conclusion
The picture of SCADA networks in the context of general IP networks is somewhat messy: old, new, simple and sophisticated technologies are twisted together. The security of industrial SCADA networks, especially in major infrastructures, is of paramount importance, and negligence in this area could lead to serious or disastrous consequences. Although professional organizations and industry bodies have put effort into standardization and directing practice, there are still technical and management challenges in many aspects. The reality is that SCADA
networks are in a process of evolution; upgrading a system to a homogeneous, advanced IP network overnight is not practical, and there is no simple solution. One day, when all field devices are IP capable, SCADA security will become a mainstream IP network security issue. In this paper, we have highlighted the major threats and vulnerabilities in SCADA networks, discussed security strategies and management problems, and identified challenges in various security mechanisms. Our investigation of the topic could form the basis for future research and network planning to improve current and future SCADA networks.

References
Fan L., Cheded L. & Toker O., 2005, Designing a SCADA system, IEE Computing & Control Engineering, Nov. 2005, pages 31-39
Igure V. M., Laughter S. A. & Williams R. D., 2006, Security issues in SCADA networks, Computers & Security, Vol. 25, pages 498-506
Kalam A., Amanulla M.T.O. & Zayegh A., 2005, Network security vulnerabilities in SCADA and EMS, IEEE/PES Transmission and Distribution Conference and Exhibition: Asia and Pacific, 2005, 6 pages
Kelapure S. M., Akella S.S.K. Sastry & Rao J. Gopala, 2006, Application of web services in SCADA systems, International Journal of Emerging Electric Power Systems, Vol. 6, Issue 1, pages 1-15
Kropp T., 2006, System threats and vulnerabilities, IEEE Power and Energy Magazine, Vol. 4, Issue 2, pages 46-50
McClanahan R. H., 2003, SCADA and IP: is network convergence really here?, IEEE Industry Applications Magazine, Vol. 9, Issue 2, pages 29-36
Miller A., 2005, Trends in process control system security, IEEE Security & Privacy, October 2005, pages 57-60
Naedele M., 2007, Addressing IT security for critical control systems, HICSS 2007, 40th Annual Hawaii International Conference on System Sciences, Jan. 2007, page 115
NISCC, 2005, Good Practice Guide on Firewall Deployment for SCADA and Process Control Networks [Online] http://www.niscc.gov.uk/niscc/docs/re-20050223-00157.pdf
Paukatong T., 2005, SCADA security: a new concerning issue of an in-house EGAT-SCADA, IEEE/PES Transmission and Distribution Conference and Exhibition: Asia and Pacific, 2005, 5 pages
Pollet J., 2002, Developing a solid SCADA security strategy, 2nd ISA/IEEE Sensors for Industry Conference, 19-21 Nov. 2002, pages 148-156
Pothamsetty V., 2004, Netfilter Extensions for Modbus/TCP, http://www.cisco.com/web/about/security/security_services/ciag/research/CRP_netfilter_extensions.html
Stallings W., 2003, Network Security Essentials: Applications and Standards, New Jersey, Pearson Education Inc.



35
The potential for using UAVs in Australian security applications
S. Russell
University of South Australia
Abstract
This paper provides an analysis of the utility of Unpiloted Aerial Vehicles (UAVs) globally and in Australia, the costs of development and operation, barriers to operation in a civil environment, and outlines the potential police and security applications of UAVs relevant to the Australian context. The objective of the paper is to understand the potential needs of police and security organisations for UAVs, and the many issues that must be addressed to allow relatively unrestricted use of UAVs in civil airspace, so that researchers can target the most critical needs. Research into satisfying the needs will show the way for future collaborations between academia, policing agencies, government and industry to form strategic alliances to reap the benefits of UAVs while mitigating the risks.

Biography
Dr Stephen Russell is Senior Lecturer in systems engineering at the Defence and Systems Institute (DASI) at the University of South Australia. He holds a PhD in astronomy, and worked as Assistant Professor of Astronomy at the Dublin Institute for Advanced Studies for eight years. Prior to joining DASI, Stephen was Senior Technology Coordinator at the Australasian Centre for Policing Research. Before that he was Senior Systems Engineer and Operations Manager on the Australian FedSat project. His current research interests include the development of small Unmanned Aerial Vehicles for policing and security applications, and applying systems engineering techniques to disaster recovery.



Introduction

The American Department of Defence defines a UAV as: a powered aerial vehicle that does not carry a human operator; can be land-, air-, or ship-launched; uses aerodynamic forces to provide lift; can be autonomously or remotely piloted; can be expendable or recoverable; and can carry a lethal or non-lethal payload, (Curtin, 2004). The advantages of UAVs for gathering real time intelligence while reducing pilot casualties in war zones have encouraged the US Department of Defence to dramatically increase funding for development and acquisition of UAVs, as shown in Figure 1 (Cambone et al., 2005). While the need for UAV capabilities was brought about by military contingencies, the satisfaction of those needs would not have been possible without major advances in a number of relevant technologies. These include: the steady increase in computing power; higher resolution sensors; improved detection of fixed and moving targets in a variety of environments; improved communications; faster image processing; better image exploitation; and the development of robust long endurance platforms.

There have been barriers to the introduction of UAV systems in the military context, and while these are being addressed, many still remain to be overcome. Some of the barriers relevant to the Australian context include:

- Competition for funds with legacy systems
- Significant costs involved in developing and operating complete UAV systems
- High accident rates and unreliable systems
- Bandwidth limitations


Figure 1. US Department of Defence annual funding for UAVs (from Cambone et al., 2005)


- Lack of interoperability
- Image processing and exploitation problems

In addition, many of the technologies that are developed for the military are not being passed on to civil industry. This indirectly impacts military UAV systems development, since the lack of certification to fly in civil airspace restricts freedom of movement for military UAVs and impedes the development of homeland security capabilities based on UAV platforms.

Australian Context

Australia enjoys two marked advantages over many areas of the world, including the US and Europe, for developing UAVs: testing is easier in its wide expanses of virtually uninhabited land, and the regulations imposed by Australia's Civil Aviation Safety Authority (CASA) are less restrictive. These advantages, and the reciprocal agreements in place between nations, mean that Australia has the opportunity to provide UAV testing facilities for civil applications for countries around the world. The Australian military have many years of experience operating UAVs, having flown Jindivik Unmanned Aerial Target Systems (UATS) from the maiden flight in 1950 until 1997. This role has only recently been taken over by Kalkara, one of the most successful UATSs in the world, which was awarded the first Australian Military Type Certificate for a UAV in August 2000. A great deal was learned through the operation and upgrades of Jindivik and Kalkara. The Human Machine Interface (HMI), for instance, was simplified so that trained pilot operators were no longer necessary and the system could be operated by a far smaller team (Stackhouse, 1999). Australian military forces are operating smaller craft, such as Scan Eagle and Skylark, in Iraq today, while others are under test. The AIR7000 North West Shelf Trial was a 2004 election promise of John Howard to provide the military with surveillance capabilities for the northern approaches, replacing the fleet of AP-3C Orion maritime patrol aircraft. The trial was intended to assist with the requirements definition of AIR7000 Phase 1 for a High-Altitude Long-Endurance (HALE) capability, with an overall budget of $1.5 billion. The outcomes of the trial were not wholly satisfactory as a surveillance solution, as the UAV needed to descend to low altitudes to identify and inspect targets of interest.
As a result, the government is waiting for the outcomes from the US Broad Area Maritime Surveillance trials before making a decision on how to continue (Thomas, 2006).

System cost

UAVs are only one component of a larger system. The overall system includes the flight control station, a means for transmitting and receiving information, a means for processing information, facilities for launch and retrieval of the vehicle, means for maintaining the vehicle, and often a means for transporting the vehicle. The wider
context includes the means for managing operations, for training and managing personnel, and for integrating the system into the operational environment. Figure 2 gives a generalised UAV scientific mission scenario (Wegener & Papadales, 2004).

Figure 2. Generalised UAV mission scenario (derived from Wegener & Papadales, 2004)
This illustrates that it is essential to consider cost per mission rather than cost per flying hour, since the majority of UAV mission time is devoted to planning and managing the mission and processing the data returned. More time is spent loading and unloading the UAV with mission-relevant hardware, moving the vehicle to the desired location and returning it to base. Only a small portion of the total mission time is spent flying, and a significant portion of flying time is spent reaching station. It should be noted, however, that the time spent on different elements of a mission changes with increasing experience: Wegener & Papadales (2004) show how a four-week scientific mission by NASA required eight weeks for upload and transit in year one, but only three weeks in years four and five. Wegener & Papadales (2004) also show, in Figure 3, that mission costs per kilogram per flight hour are between ten and a hundred times higher for UAVs than for piloted aircraft.
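The difference between the two cost views can be made concrete with a small worked example. All of the dollar figures and durations below are invented for illustration; they are not taken from Wegener & Papadales.

```python
# Illustrative cost-per-mission calculation with hypothetical figures,
# showing why cost per flying hour alone understates UAV mission cost.

def mission_cost(flight_hours, cost_per_flight_hour, support_days, cost_per_support_day):
    """Total mission cost = flying cost + planning/transit/processing cost."""
    return flight_hours * cost_per_flight_hour + support_days * cost_per_support_day

# Hypothetical mission: 40 flight hours at $5,000/hour, plus 24 days of
# planning, payload integration, transit and data processing at $8,000/day.
flying_only = 40 * 5000
total = mission_cost(40, 5000, 24, 8000)
print(flying_only, total)  # 200000 392000
```

Under these invented numbers, nearly half the mission cost sits outside the flying hours, which is the point the text makes about Figure 2.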




Figure 3. Aircraft mission costs per flight-hour (F-H) per kilogram of payload (from Wegener & Papadales, 2004)
It is still true, however, that unit costs are lower for UAVs, which means that units remain expendable relative to their more expensive piloted cousins. In the US, UAV unit costs range from US$30,000 for the small Dragon Eye up to US$57 million for Global Hawk (Bone & Bolkcom, 2003). This compares with US$37 million for the Joint Strike Fighter and US$1.2 billion for the B-2 Spirit bomber. However, the lower cost of UAV units is balanced somewhat by their need for more frequent replacement (discussed in the following section). While new sensor technologies are being developed, including multispectral (tens of bands) and hyperspectral (hundreds of bands) imagers, LIDAR (Light Detection and Ranging) devices and nuclear detection systems, the US military are increasingly concerned that sensors are starting to dominate the cost of UAV units. Much of the reason for this has been attributed to the lack of competition between rival companies trying to satisfy military demands for highly sensitive and capable instrumentation. This is pushing UAVs out of the market through erosion of their expendability (Roche, 2002), so the US military are seriously considering how to reduce the cost of sensors.

Mishap rates

It is widely believed that mishap rates for UAVs are many times those of military or civil aircraft. This simplistic picture is deceptive, as even piloted aircraft tend to have high mishap rates during the initial stages of deployment. A more useful picture is provided by the graph in Figure 4, showing the Class A or B mishap rate (significant aircraft damage or total loss) per 100,000 flying hours against cumulative flying hours. This shows that the more expensive and highly capable UAVs (Global Hawk and Predator) compare favourably with piloted military
aircraft, but the smaller and less capable UAVs have significantly higher mishap rates. As a result, the US military must contend with two schools of thought: either purchase numerous cheaper systems and put up with a higher attrition rate, or purchase fewer, more expensive systems, give them greater survival capabilities, and accept their decreased expendability. Another significant problem is that pilots are not motivated to sit in a control room rather than fly a plane, which translates into a shortage of skilled personnel for controlling UAVs. In addition, UAV controllers spend all the time between take-off and landing looking for something that might have changed or might indicate an emergency. Hopcroft et al. (2006) argue that this is a job that humans perform badly: instead, control should be automated for the jobs that machines do well, while human operators should be trained to operate multiple UAVs without the need for prior pilot training.

Figure 4. Mishap rate comparison (Cambone et al., 2005)
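The metric plotted in Figure 4 is straightforward to compute; the mishap and flying-hour figures below are invented for illustration, not taken from Cambone et al.

```python
# Class A/B mishap rate as used in Figure 4: mishaps per 100,000
# cumulative flying hours. Input figures here are hypothetical.

def mishap_rate(mishaps, flying_hours):
    return mishaps / flying_hours * 100_000

print(round(mishap_rate(3, 60_000), 1))  # 5.0
```

Because the denominator is cumulative flying hours, a young fleet with few hours can show an alarming rate even after a handful of mishaps, which is the distortion the text warns against.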

Accreditation

Currently, US military operators of UAVs are required to obtain a Certificate of Authorisation from the Federal Aviation Administration (FAA), a process that can take up to 60 days. Two of the most critical issues that the US military must address to avoid this process are to improve UAV reliability through aircraft airworthiness certification, and to implement aircrew qualification standards that satisfy air traffic management procedures and safety requirements. In other words, there is a critical need to demonstrate the safety and reliability of UAV systems. Schneider (2004)
outlines the American Department of Defence view of how to decrease mishap rates. Those relevant to police and security applications in the Australian context include:

1. Implement reliability specification standards
2. Implement reliability metrics to track reliability and mishaps
3. Feed mishap investigations back into reliability improvement programs
4. Include provision for reliability improvements in budgets
5. Improve bandwidth for data and communications (currently three satellites are required to service one Global Hawk mission in Afghanistan)
6. Conserve bandwidth needs (by increasing on-board processing and autonomy)
7. Develop a common video data link standard
8. Develop interoperability between systems and communications
9. Develop (and fund) a common mission management system for all UAVs

In summary, design specifications for UAV systems need to be constrained by requirements for acceptable failure rates under expected or likely operational environments. It is essential that standards are developed for building UAV platforms and technologies, and for common ground control systems that are capable of controlling diverse multi-unit UAV systems. While decreasing the mishap rate is important, there are other pressing needs to be addressed before the FAA will consider granting licences for UAVs to fly in civil airspace. The technologies that need particular development or