
Dawson RJ, Walsh CL and Kilsby CG (eds.) (2012) Earth Systems Engineering 2012: A technical symposium on systems engineering for sustainable adaptation to global change, Centre for Earth Systems Engineering Research, Newcastle University, UK.



The ESE2012 Symposium organisers gratefully acknowledge
support from a number of organisations:









Centre for Earth Systems
Engineering Research

Introduction
In 2000 Brad Allenby wrote a seminal article¹ that recognised that, in a rapidly changing world, it is "necessary to expand the definition of engineering, design, and management to the scale of the technological and cultural systems that are, in fact, now beginning to dominate the dynamics of many natural systems. This is a principal rationale for earth systems engineering". Earth Systems Engineering (ESE) is thus a new paradigm that seeks to analyse, design, engineer and manage coupled human, environmental and engineered systems. The notion that engineers are involved in changing the world is not at all new, but the conscious attempt to rationally and ethically use technology to manage, or engineer, Earth systems at all scales is new.
In the following decade a number of initiatives sought to advance these concepts: for example, the National Academy of Engineering programme on Earth Systems Engineering and Management, and the National Science Foundation-funded programme on the Dynamics of Coupled Natural and Human Systems, which develops interdisciplinary modelling approaches for analysing the complex interactions between these systems at a range of scales. ESE provides a counterpart to the already fast-developing field of Earth Systems Science, which has provided an interdisciplinary framework to help understand the Earth's major chemical, physical and biological interactions.
ESE has been associated with proposals for planetary-scale engineering of the climate, but an ESE approach explicitly recognises that focusing on a single system variable (e.g. atmospheric CO2, or global mean temperature) is a naïve optimisation that will ultimately lead to much wider disturbance of other systems. An ESE approach takes a much broader view than focusing on the climate and seeks to understand how the effects of engineering, social and economic interventions accumulate and propagate across scales and subsystems, including cities, river basins and coasts, as well as at a global scale.
As a subject, Earth Systems Engineering has taken root at several universities in the U.S.A., whilst centres such as the Centre for Earth Systems Engineering Research (CESER²) at Newcastle University in the UK and the Centre for Earth Systems Engineering and Management³ at Arizona State University have started specific research programmes to tackle some of the challenges of ESE. However, there are many other groups and centres around the world that are addressing ESE challenges from engineering, economic, social, physical and natural science backgrounds. Because of the great breadth of methodological and sectoral challenges that ESE poses, different groups have focused their attention in different ways.
¹ Allenby B. Earth Systems Engineering and Management, IEEE Technology and Society Magazine, Winter 2000: 10–24.
² http://www.ncl.ac.uk/ceser/
³ http://enpub.fulton.asu.edu/cesem/

At the Centre for Earth Systems Engineering Research at Newcastle, host of Earth Systems Engineering 2012, the focus of activity has been the development of a new generation of integrated methods for analysis and decision making for complex coupled technological, human and natural systems under conditions of long-term change and often severe uncertainty. The research is structured around a number of themes that form an iterative learning loop:

The Hazard theme provides quantified evidence of hazards (typically environmental, but recent work has seen consideration of man-made hazards) and the drivers of long-term change. These are combined with multi-scale monitoring activities (from crowd-sourcing and full-scale infrastructure through to satellite data) in the Observation & Monitoring theme. Informatics activities focus on the management and structuring of the large and complex spatial datasets generated by these first two themes and provide a platform for our coupled human-natural-engineered Systems Modelling activities. Evidence from these themes is communicated through Decision Support tools that seek to disentangle multiple sources of information and their uncertainties to present decision-relevant information to end-users. This subsequently forms a platform for Impact and Engagement with stakeholders, which have included outputs such as the UK Climate Projections (UKCP09) weather generator⁴; the National Flood Risk Assessment for England and Wales⁵; the Urban Integrated Assessment Facility and its application to London⁶; and the Eden Demonstration Test Catchment⁷. At the centre of these activities sit the Integrated Demonstrations of Infrastructure systems, Catchments and Cities. These represent the culmination of the research, interacting continuously with the other themes, so that testing and demonstration can stimulate new ideas and approaches.

⁴ http://ukclimateprojections.defra.gov.uk/ and Burton A, Kilsby CG, Fowler HJ, Cowpertwait PSP, O'Connell PE. RainSim: A spatial-temporal stochastic rainfall modelling system. Environmental Modelling & Software 2008, 23(12), 1356–1369.
⁵ Hall, J.W., Dawson, R.J., Sayers, P.B., Rosu, C., Chatterton, J.B. and Deakin, R. (2003) A methodology for national-scale flood risk assessment. Water and Maritime Engineering, ICE, 156(3), 235–247.
⁶ http://www.ncl.ac.uk/ceser/researchprogramme/outputs/launch%20brochure%202.5mb.pdf
⁷ http://www.edendtc.org.uk/
Papers presented in this symposium cover a number of key advances in ESE thinking since the
millennium Earth Systems Engineering conference, and highlight and discuss emerging methods
and issues. The symposium papers explore a number of important and inter-related themes:
Cities and Infrastructure: National infrastructure systems underpin the safety, health and wealth of societies. Increased complexity and interdependency, coupled with environmental and socio-economic changes, are making these systems harder to manage. This is particularly evident in urban areas, where interactions between people and infrastructure are most heavily concentrated.
Catchment and coastal management: Catchments and coasts are outstanding examples of coupled
human and natural systems. These systems incorporate management of flooding, erosion and the
ecosystem services they provide such as food, water resources and biodiversity.
Engineering within resource constraints: Despite uncertainties over the timescale of peak oil, the notion that oil reserves are finite is well established. Food scientists have also highlighted concerns about the long-term availability of nutrients such as phosphorus. However, many new engineering technologies, some of which are being developed to reduce our reliance on oil, place demands on our planet's resources because they require scarce elements or energy-intensive processes in their manufacture and deployment.
Geoengineering: Large-scale interventions in the climate system, or geoengineering, are increasingly being investigated. An ESE approach to climate manipulation explicitly acknowledges that our climate is already geoengineered, being the outcome of many years of engineering and socio-economic choices, and that a "silver bullet" approach misunderstands the nature of technology. Moreover, future geoengineering technologies may not primarily be focussed on climate manipulation but driven by other objectives. For example, cultivation of in vitro meat may be motivated by increasing food demands, but the resultant impact on land and energy systems will be enormous and will therefore ultimately affect the climate.
Modelling of coupled human, natural and technological systems: Sustainable management of
complex systems will not be achieved in practice unless interactions between coupled technological,
human and natural systems are better understood and represented. Qualitative methods and
quantitative approaches such as land use and agent-based modelling have been developed, but
remain in their infancy.
Foresight and understanding long term change: The first era of sustainability appraisals, which
used only simple extrapolations of climatic and socio-economic trends, is now drawing to a close.
Sophisticated approaches to downscaling and forecasting are delivering more information on how
climatic and socio-economic systems might change. These are complemented by advanced Earth
observation for field monitoring and remote data acquisition which provide essential information to
understand processes of change within the natural, human and built environment. However, nano-,
bio-, cognitive science and other technologies and infrastructure are rapidly developing, yet we are
often blind to the potential for these technologies to completely change the way society engages
with the planet.
Decision-making and management of complex systems: The many sources of evidence, the
interpretation of complex models, uncertainties as well as social and governance barriers can
obstruct the implementation of ESE principles and the planning of sustainable transitions.
Advanced approaches to support decision-making and implementation of ESE include methods for
analysis and treatment of uncertainty, application of decision theory, interfaces and visualisation as
well as stakeholder engagement.
The papers highlight a number of important challenges, but also point the way forward to help
ensure future engineering investment decisions are not made without sufficient understanding of the
long term drivers of change, their potential impacts, interactions and uncertainties. The engineering
community must work to address the above, and related, intellectual challenges by engaging with
scientists and representatives of other disciplines to better understand the dynamics of the economic,
environmental and social systems within which engineering is embedded. Engineers must design and deliver infrastructure systems, services, products and processes that enhance wellbeing and quality of life and ensure a healthy environment. Crucially, the engineering community needs to work closely with decision makers to co-develop policies that achieve these aims and so sustain society.

Richard Dawson
Symposium Chair






Contents
An Earth Systems Engineering critique of geoengineering
Brad Allenby 1
Changing the metabolism of coupled human–built–natural systems
Bruce Beck, Rodrigo Villarroel Walker and Michael Thompson 11
Complex adaptive systems engineering – improving our understanding of complex systems and reducing their risk
Theresa Brown, Stephen Conrad and Walter Beyeler 33
Analysis of infrastructure networks
Sarah Dunn, Sean Wilkinson, Gaihua Fu, Richard Dawson 41
Cities as geoengineering building blocks
Jonathan Fink 59
Tunnelling through the complexity of national
infrastructure planning
Jim Hall, Justin Henriques, Adrian Hickford, Robert Nicholls 65
Advancement of natural ventilation technologies for
sustainable development
Ben Hughes, John Calautit and Hassam Chaudhry 71
Using an urban futures tool to analyse complex long
term interactions between technological, human and
natural systems
Dexter Hunt, Ian Jefferson and Chris Rogers 85
Approach towards sustainability of growing cities: An
Indian case study
Mukesh Khare and Priyajit Pandit 105
Mapping the limits of knowledge in flood risk
assessment
Bruno Merz 115
Critical materials for low-carbon infrastructure: The analysis of local vs. global properties
Phil Purnell, David Dawson, Katy Roelich, Julia Steinberger and
Jonathan Busch 127
Modelling of evolving cities and urban water systems in
DAnCE4Water
Christian Urich, Peter Bach, Robert Sitzenfrei, Manfred Kleidorfer, David
McCarthy, Ana Deletic and Wolfgang Rauch 141
The challenges of assessing the cost of geoengineering
Naomi Vaughan 157
A spatiotemporal modelling framework for the
integrated assessment of cities
Claire Walsh, Alistair Ford, Stuart Barr and Richard Dawson 163
Eco-vulnerability assessment and urban eco-zoning for global climate change: A case study of Shanghai, China
Xiangrong Wang, Yuan Wang, Zhengqiu Fan and Yi Yong 175
The Loughborough University TEmperature Network
(LUTEN): Rationale and analysis of stream temperature
variations
Robert Wilby, Matthew Johnson and Julia Toone 187



An Earth Systems Engineering Critique of Geoengineering
Brad Allenby¹

¹ Lincoln Professor of Engineering and Ethics; President's Professor of Civil, Environmental, and Sustainable Engineering; Professor of Law; and founding director of the Center for Earth Systems Engineering and Management at Arizona State University in Tempe, Arizona.
Email: braden.allenby@asu.edu

Abstract
Growing concerns about anthropogenic climate change, and the lack of progress on responsive
policy initiatives such as the UN Framework Convention on Climate Change, have led to increasing
interest in geoengineering technologies. The costs and benefits of these options, however, have not
been determined, and the framework within which current analysis occurs is inadequate. Earth
systems engineering and management provides a basis not just for effective critique of current
approaches, but also principles that can advance rational, ethical and responsible management of
such complex adaptive systems.

1 Introduction
Continuing concern about anthropogenic global climate change, and a growing recognition of the inadequacies of the Kyoto Protocol and the on-going climate change negotiating process, have led to increasing interest in geoengineering as a response. This question has, if anything, become more pressing since the 2011 17th Conference of the Parties (COP17) to the United Nations Framework Convention on Climate Change (UNFCCC) and the 7th Session of the Conference of the Parties serving as the Meeting of the Parties (CMP7) to the Kyoto Protocol in Durban, South Africa (Allenby, 2012). While the United Nations concluded that the Conference had delivered a "breakthrough" on the future of the international community's response to climate change, noting that governments "decided to adopt a universal legal agreement on climate change as soon as possible, but not later than 2015" (United Nations, 2011), environmentalists strongly disagreed. Greenpeace (2011) claimed that, "today, vulnerable people are dying because of climate related impacts", and that the governments involved in the failed talks "should be ashamed". Marc Gunther in Greenbiz (2011) commented that "[m]ore interesting than parsing the texts is understanding why two decades of UN climate talks have produced so little progress", and concluded that "Durban and the UN process aren't getting us where we need to go. No way, no how." More actively, Canada registered its reaction by withdrawing from the Kyoto Protocol the day after the Durban talks ended, while the Russian Foreign Ministry publicly declared that the 1997 Kyoto Protocol "has lost its effectiveness in the context of the social and economic situation of the 21st century" (AP, 2011; Gillies, 2011).
If it were only the Durban conference that had faltered, one might view such developments as over-
reactions. Indeed, despite the almost universal conclusion of failure, the UN blithely announced
that the next major climate change conference, COP 18, would take place in 2012 in Qatar.
Nonetheless, the failure of the political process intended to address anthropogenic climate change
raises important questions from an earth systems engineering and management (ESEM) perspective.
How robust is geoengineering as an engineered approach to climate change, for example? Can
ESEM help reframe the geoengineering discourse in useful ways? Thinking about such questions is
particularly important because (perceived?) political failure will strengthen the arguments for
developing geoengineering, if for no other reason than to provide insurance in case unexpected
tipping points are encountered, and it is not premature to try to improve the geoengineering dialog if
possible (Bracmort et al., 2010).
2 Geoengineering
In order to effectively evaluate geoengineering from an ESEM perspective, it is critical to understand that it has come to have a very peculiar definition. Reflecting mainstream understanding, the Royal Society in an authoritative 2009 report defined geoengineering (at ix) as "deliberate large-scale intervention in the Earth's climate system, in order to moderate global warming". This definition is immediately notable because it defines a major technology system from the beginning only by its effects in one domain, the climate system. Geoengineering technologies can be further categorized as "carbon dioxide removal" techniques (CDR), which physically remove carbon dioxide from the atmosphere, or "solar radiation management" techniques (SRM), which reflect some incoming solar energy back into space before it can be directly or indirectly absorbed by the atmosphere (Royal Society, 2009).
CDR technologies can be further broken down into two categories. The first is biological, including
forest plantations, biofuels of various sorts, algae absorption systems, and more exotic alternatives
such as ocean fertilization schemes, where limiting resources such as iron are introduced into the
ocean, thereby causing plankton blooms which capture CO2 and sequester it when they die and sink
to the bottom (at least theoretically). The second is non-biological CDR, a category of technologies
that chemically capture atmospheric carbon dioxide for subsequent storage or industrial use. SRM
proposals include deployment of space-based reflector systems between the Earth and the Sun, or
placing reflective devices in the stratosphere to reflect incoming solar radiation back into space, or
injecting sulfate particles or salt particles into the upper atmosphere, where they help reflective
clouds form, again reflecting incoming solar radiation back into space.
SRM and CDR affect climate through different mechanisms, and therefore address different aspects
of climate change. Perhaps most notably, SRM, unlike CDR, does not mitigate perturbations that
arise from human-induced changes in atmospheric chemistry, which affect coupled systems such as the world's oceans (as atmospheric concentrations of CO2 rise, the oceans absorb more CO2 and
become more acidic, potentially affecting marine organisms such as mollusks that are dependent on
carbonate chemistry for shells and support structures).
From an ESEM perspective, geoengineering technologies are not necessarily bad or good as
technologies per se. Indeed, if climate degradation were to accelerate unexpectedly, or if political
gridlock were to prevent significant mitigation or adaptation, geoengineering technologies might
prove to be necessary, and perhaps the only available, insurance against unacceptable climate
change. As with other powerful technologies, however, many costs and benefits arise as a result of
the pace and scale of deployment; that is, the effect of the technology arises from social and political choices about how and at what scale the technologies are deployed, and how the disparate costs and benefits are valued and balanced, rather than from inherent characteristics of the technology
itself. Rational and ethical development of these technologies may be desirable given the challenge
of climate change, but deployment may be risky without a more sophisticated understanding of how
technologies should be used in the context of complex adaptive systems. Rational management of
these technologies will also require a more detailed understanding of the implications of specific
geoengineering technologies, especially if they are used at large scales as "silver bullet" solutions,
rather than as part of an integrated portfolio of responses.
2.1 Critiques of Geoengineering
Not surprisingly, proponents argue that geoengineering technologies, while risky, should at least be
explored as fallback options to be used if needed, while opponents emphasize the risks, potential
costs, and unknowns. Opponents also make what is known as the "moral hazard" argument,
claiming that any technology that significantly mitigates the effect of human contributions to global
climate change would also reduce pressure to force governments to mandate changes in individual
consumption, which is seen as necessary by many environmentalists, climate change activists, and
scientists (Brumfiel, 2012). This second argument is tricky from several perspectives. It is
undoubtedly the case that most people, and therefore governments, usually avoid disruptive change
unless it is clearly necessary. Changes in consumption mandated in an effort to reduce emissions
are therefore seen to be difficult to achieve unless there are significant pressures driving the change.
Climate change is such a pressure, and some people argue that it has therefore been used by
activists and some climate change scientists for social engineering rather than treated as a scientific
challenge. But it is one thing to argue that anthropogenic climate change requires that governments
force emissions reductions; another to design research programs, present scientific work, and
manipulate technological development so that potential options are never presented to the public for
fear the public might make the wrong choice (Allenby, 2010-2011).
But from a technology system perspective, there are more fundamental weaknesses in the current
state of the geoengineering dialog. These arise, ironically, because geoengineering embeds many of
the same assumptions as the UNFCCC process does. For example, in part because anthropogenic
climate change is seen as existentially catastrophic, both geoengineering and the Kyoto Protocol
approach to climate change assume that the systems dynamics involved are simple when the
underlying systems are, in fact, complex adaptive systems. Thus, one reason the Kyoto Protocol is
failing is overly simplistic political assumptions (e.g., that environmental values should dominate
other values; that countries will choose to eschew development to reduce climate change emissions);
one reason geoengineering as currently conceived is dangerous is that technologies which, by
definition, are powerful and foundational enough to change climate systems are being developed
and, perhaps in future, deployed based only on climate change implications (Allenby, 2010).
Underlying this assumption is another that goes to the heart of an ESEM critique of current climate change activities: both geoengineering and the Kyoto Protocol approach assume that climate change is a problem to be "fixed" rather than a condition to be managed. This leads to an emphasis on simple, "silver bullet" solutions, rather than developing management regimes that encourage the emergence of effective social, technological, and economic responses. If, rather than being a problem that can be easily fixed, climate change is one indicator of a shift from a planet where humans were one system among many to a terraformed planet characterized by highly complex integrated human/built/natural systems (a planet with seven billion people on it, all wanting a better life), then it is not something that is going to be "fixed". Perhaps managed, but fixed, no. As currently conceived, geoengineering is a "solution" for a specific "problem". This misunderstands both the condition to be addressed (climate change) and the implications of the "solution". Thus, geoengineering as framed assumes that climate change can be separated from other systems, and put back into a state where it poses little or no risk to humans, urban and built environments, agriculture, biodiversity, ecosystems, other natural systems such as the nitrogen or phosphorus cycles, or any other system. If climate change were that separable, actually, it would probably already have been fixed. If, on the other hand, climate change is simply one of a number of coupled emergent behaviors generated by seven billion people and their concomitant economic and technological systems, it can't be "fixed", certainly not by silver bullets, be they policy or technology.
Perhaps most importantly, the geoengineering discourse fundamentally fails to appreciate the
dynamics and characteristics of powerful technology systems; it is a discourse of natural scientists,
not engineers and social scientists. And it is therefore regrettably naïve. It strongly suggests that the implications of the proposed "fixes", such as CDR and SRM, are quite likely to be far more
profound than currently understood, simply because each technology is being designed, and
evaluated, primarily based on its climate change impact.
This is a profound category error. Any technology system powerful enough to affect global climate
will also inevitably have fundamental effects across many other domains. This point, in fact, is the
key to the concept, developed by economic historians, of long waves of innovation (also called
Kondratiev waves) and associated institutional and social change that develop around core
fundamental technologies. The Kondratiev wave framework was developed to help explain
business cycles in developed economies, and centers on the idea of technology clusters that build
around core technologies, and in turn co-evolve with new social, institutional, economic,
technology, and cultural patterns. Among the most commonly identified clusters are railroads and
steam engines from the 1840s to the 1890s, heavy engineering and electricity from the 1890s to
the 1920s, the automobile and aviation from the 1920s to the 1990s, and the emerging technology
cluster from the 1990s to the present (emerging technologies including nanotechnology,
biotechnology, robotics, information and communication technology, and applied cognitive science).
Each cluster is characterized by new and complex patterns that were inherently unpredictable until
they happened (Freeman and Louca, 2001).
Railroads, for example, did not appear as an isolated technology system, for among other things
they required other complementary technologies to ensure that rail networks could operate in real
time, especially telegraph networks for communications and, more subtly, a standardized time
system coextensive with railroad networks that replaced idiosyncratic local times with a uniform
global time. Railroad infrastructure also profoundly changed the economic structures, primarily
local, that had dominated up to that time by enabling national economies of scale, thereby leading to
the replacement of local businesses and industry with trusts and monopolies: Big Oil, Big Tobacco,
Big Sugar. Railroad firms and their capital demands were also larger than any that had gone before,
which created pressures for the evolution of modern financial markets and instruments. The scale
of the railroad firm was also far larger than any that had gone before, leading to the model of the
large hierarchical firm, with its differentiation of white collar work (corporate lawyers, accountants,
human resource experts, managers, and so forth) that characterizes industrial capitalism
(Schivelbusch, 1977). The automobile was a quite limited technology until Ford developed mass
production and, by paying his workers enough to buy the cars they produced, mass consumption
culture as well. But higher salaries weren't enough, so firms introduced individual credit as well,
an innovation with huge implications for economies, individual happiness, and the environment.
And each of these technology systems also profoundly changed the physical environment: railroads
opened up continental interiors so that, for example, the American Midwest shifted from predominantly riverine wetlands to agricultural monoculture; the environmental implications of
automobile technology at current scale are also profound and apparent (Freeman and Louca, 2001).
In short, powerful technology systems form new earth system engineering states; they ripple across regional and global human, built, and natural systems in complex and unpredictable ways. From an ESEM perspective, therefore, it is dangerously naïve to deploy technologies as potent as geoengineering in order to "solve" one particular "problem", regardless of how important that problem may be. Any such technology will change not just the climate and associated coupled natural systems (the oceans, the hydrologic cycle, and so on) unpredictably; it will also have social, cultural, political, institutional, economic, psychological, and technological implications that are unpredictable but that, inevitably, will be profound. Moreover, costs and benefits will never be evenly distributed across political entities and communities, and the means for balancing them in any fair way don't exist. Some have speculated, for example, that monsoon rain patterns could shift in Asia, with immediate major impacts, including famine and mass migration, and as a result political and social instability of huge proportions. How can one value that rationally? And how much climate change mitigation is worth such an outcome (always
remembering that such scenarios are speculative, not predictive)? Moreover, depending on how geoengineering techniques are deployed, it is likely that they may rapidly become critical to maintaining climate stability, and thus locked in and difficult to reverse. The cold reality is that geoengineering technologies will not "solve" the "climate change problem"; rather, they will redesign major Earth systems (not just natural, but human and built systems) powerfully and unpredictably.
2.2 Where are the Real Geoengineering Technologies?
Part of the reason that geoengineering has been defined in such an extraordinarily limited and somewhat naïve way is that it has been mischaracterized from the beginning as a "solution" to a "problem". To accept the current definition, one must pretend that geoengineering technologies do not have major effects on coupled systems, which in a case like this includes most large natural, human, and built systems. One must also make the flawed assumption that all other technological possibilities (those that do not fall into either the CDR or SRM categories) do not have major effects on climate systems. If that is not the case, after all, then it makes no sense to focus only on geoengineering technologies in assessing the possibilities. So is it reasonable to assume that there is something uniquely effective about the way CDR and SRM technologies as currently defined impact climate change? Probably not; it is difficult to defend limiting the consideration of geoengineering technologies to those which are intended to be "silver bullets" for climate change.
If nothing else, this approach is brought into question by the observation that the entire network of coupled complex adaptive systems of which technology is a part (social, cultural, environmental, economic, institutional, and technological) is already shifting rapidly to reflect a higher prioritization of climate issues. People and institutions talk about climate change more. More and more venture capital is being invested in green energy research, development, and deployment. Automobiles are being redesigned as more and more hybrids become available and plug-in vehicles near market. Natural gas is being substituted for coal in new-build power facilities, a development which is mightily helped by the rapidly increasing exploitation of shale gas reserves around the world. Building heating, ventilating, and air conditioning systems are being redesigned with an increased emphasis on efficiency and just-in-time delivery of services. LED lighting technology is rapidly diffusing from the laboratory to the market. Adolescents no longer drive to the mall to mingle; they do so on Facebook and other social networking sites. The real business of geoengineering (the agile adjustment to changing priorities and environments by the complex adaptive systems which characterize the terraformed Earth) is already happening. It isn't perceived, though, because it's not a simple, easily isolated, highly publicized major technology system that provides the dangerous illusion of control; moreover, it's not simple enough for the ideologies characteristic of the climate change discourse.
And there are other major technology systems which could have effects similar to even the dramatic CDR and SRM proposals. Consider, for example, the new, promising, and seriously under-funded technology of growing meat from stem cells in factories. Animals such as cows and pigs are really a very inefficient production methodology if what one wants is hamburger or pork chops; in fact, livestock today consumes about a third of global grain production (Holmes, 2010). Moreover, if one is interested in global climate change, it is not unimportant that a cow, for example, annually emits roughly 50 kilograms of methane, a greenhouse gas far more powerful on a per-molecule basis than CO2 (Mattick and Allenby, 2012). In fact, some people estimate that shifting away from
livestock agriculture could reduce greenhouse gas emissions by well over 15 percent. This is often used to argue that everyone should become vegan; but, since that would involve a highly contentious culture change, a technology that shifts production away from fields and animals is much more likely to be effective (Allenby and Sarewitz, 2011).
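As a rough illustration of the scale behind these figures, the sketch below (Python) converts the cited per-animal methane emission into CO2-equivalent terms and scales it to the global cattle herd. Only the 50 kg of methane per cow per year comes from the text above; the herd size, the global warming potential of methane, and the global emissions total used for comparison are illustrative assumptions introduced here, not figures from this paper.

```python
# Back-of-envelope estimate of enteric methane from cattle, in CO2-equivalent terms.
# Only CH4_PER_COW_KG comes from the text above; the remaining inputs are rough,
# illustrative assumptions and should be replaced with authoritative inventory
# data for any serious analysis.

CH4_PER_COW_KG = 50.0        # kg CH4 per animal per year (cited in the text)
GLOBAL_CATTLE = 1.4e9        # assumed global cattle population (head)
GWP_100_CH4 = 25.0           # assumed 100-year global warming potential of CH4
GLOBAL_GHG_GT_CO2E = 50.0    # assumed total global GHG emissions, Gt CO2e per year


def enteric_emissions_gt_co2e(cattle=GLOBAL_CATTLE,
                              ch4_per_head_kg=CH4_PER_COW_KG,
                              gwp=GWP_100_CH4):
    """Annual enteric CH4 from cattle, expressed in Gt CO2-equivalent."""
    kg_co2e = cattle * ch4_per_head_kg * gwp
    return kg_co2e / 1e12  # kg -> Gt


if __name__ == "__main__":
    total = enteric_emissions_gt_co2e()
    share = 100.0 * total / GLOBAL_GHG_GT_CO2E
    print(f"Enteric CH4 from cattle: ~{total:.1f} Gt CO2e/yr "
          f"(~{share:.0f}% of an assumed {GLOBAL_GHG_GT_CO2E:.0f} Gt CO2e/yr total)")
```

On these assumptions, enteric methane from cattle alone comes to roughly 1.75 Gt CO2e per year, a few percent of global emissions; the much larger figure of well over 15 percent quoted above is usually taken to cover the livestock sector as a whole, including feed production, manure and associated land-use change, not enteric fermentation alone.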
This example serves to reaffirm the point that any technology of sufficient power to affect global climate systems must be evaluated across its range of impacts, not just with regard to one particular perturbation (even if it is a major one, such as climate change) (Tollefson, 2010). Shifting away from raising food animals would not just significantly help directly to reduce climate change forcing; it would also reduce nitrogen loading on sensitive ecosystems; help manage the phosphorus cycle by reducing demand for fertilization of pasture and grain used for animal feed; dramatically reduce the land required for agricultural activities, helping to preserve biodiversity in the process (this is not a trivial effect; some estimates are that globally such a technology could free up an area of over 3 million square kilometers, about the size of India); and contribute to substantial avoidance of soil erosion (some estimates are that half the soil erosion in the United States is associated with livestock activity). It would create benefits across many important domains, some of which might not be obvious: some argue, for example, that factory meat might help provide additional protein to poor people around the globe (Mattick and Allenby, 2012).
But factory meat production would not be costless. In this case, major costs would accrue to those who currently make their living from livestock: not just the ranchers and farmers, but their suppliers, those who build and maintain the infrastructures involved in livestock production, the industries that produce equipment used in livestock sectors, and the towns and villages that have grown up in livestock areas. Some localities where the landscape is defined by pastoral archetypes might also be adversely affected; consider, for example, the Swiss tourist industry. This example also confirms, then, that all geoengineering technologies will have costs, because any major shift of Earth systems to a new, relatively stable state has costs. Moreover, the costs will not be distributed to the same people, communities, and nations as the benefits are.
Examples from other sectors could also be provided. How much, for example, could information and communication technology (ICT) be used to support large-scale substitution of information technology for unnecessary transportation? This might involve not just telework and virtual offices, an area that has already been explored in the literature, but serious implementation of virtual reality conferences and meetings at scale (a number of organizations and societies have held meetings in Second Life, for example; from experience, these are much cheaper but, because the technology is new and some people aren't comfortable with it, generally draw smaller audiences than real-life conferences). It also might involve creating systemic efficiencies, such as the use of ICT to dramatically reduce unnecessary inventory and the production and distribution of unwanted product, thus significantly reducing material waste (specialty books printed on demand are an example, rapidly being replaced by e-readers, which dispense with paper, shipping, storage and sales space, and other environmental costs associated with reading and, of course, put out of work the people who produced physical books).
Arguably, it is technologies such as these, and many others (manufacturing on demand, for example), that constitute serious geoengineering. They are, unquestionably, more complex and harder to understand, much less explain to the public. Yet that is simply reality, and part of their value comes from the fact that they are not oversimplified technological fantasies but must
necessarily recognize, and engage with, all the other systems to which they are coupled. Moreover,
they point the way towards generation of a rational, ESEM-compatible, approach to geoengineering,
and the technologies of climate change.
3 Reengineering Geoengineering
From a technology policy perspective, this brief analysis suggests two major additional general principles that should inform geoengineering efforts. The first is the need to drop the implicit "silver bullet" approach and adopt an explicit portfolio approach: that is, drop the over-simplistic and dangerous assumption that one, or a few, major technological interventions in the climate system are feasible and desirable, and substitute a palette of technologies, some already identified in the CDR and SRM categories, and others that have potential for helping manage various aspects of climate change. Put simply, when dealing with such a major, and highly complex and adaptive, Earth system, and such potent technologies, it is highly unlikely that any particular technology can be relied on as the sole, or even major, response, no matter how seemingly ideal: the associated costs and risks, most of them unknowable ab initio, are likely to be far too high. Moreover, such a single-solution response is highly unstable. If anything does go wrong, it is likely to be very hard to fix if you only have one option, especially if that option has become locked in because other systems have adapted themselves to it. A good, if lesser, example of this is the American ethanol fuel additive program: despite causing price changes in many food items, differences in crop acreage planted, huge investment in processing facilities, substantial additional demand for water, and political instability around the world, it is unlikely to be reversed: farmers are benefiting from the income, processors need to recover their investment in technology, and so forth. A portfolio of responses, in contrast, is much more flexible. It avoids the need to scale up new and unknown technologies to global levels in short time frames, and enables substitution of various fixes at the margin. If one technology should begin to generate undesirable costs, it is much easier to augment other mitigating technologies, and continue to move forward (Allenby, 2011). Especially in earth systems engineering of highly complex and fundamental systems such as climate, the ability to avoid lock-in, and to support development of a robust options space that enables agile responses to unforeseen shifts in system state, are not simply "good to have" characteristics but rather important design criteria (Allenby and Sarewitz, 2011).
The second need is for more research on CDR and SRM technologies. Few of the proposed geoengineering technologies are "bad"; it is just that they are highly risky if implemented with inadequate information, at large scale, and as a last resort. Continuing research is not only necessary to help identify potential costs and benefits of each option; it is equally important to better understand the scale, and speed of introduction, at which each technology can be implemented with some confidence that the costs and implications are manageable, and, conversely, to identify regimes within which non-linear effects can be anticipated. Consider again the U.S. policy to rapidly scale up corn-based ethanol as a gasoline component. Producing ethanol from corn is not, by itself, a "bad" technology: it has been practiced with notable success in the mountains of Appalachia for centuries (google "moonshine"). But ramped up quickly to large scale, it has generated costs, market distortions, redistribution of income, famine, and many other impacts: a high price indeed for a failure to understand systemic nonlinearities. It is highly likely that some environmentalists, such as the ETC Group in Canada, will object to this research on the (moral hazard) grounds that "the very presence of such an experiment may make politicians think that there's a way to wriggle out of emissions caps" (Brumfiel, 2012). But as their favored policy initiatives such as the
UNFCCC fail, and the effort to sub rosa socially engineer society to be more environmentally conscious raises ever stronger opposition, the single-voice approach to anthropogenic climate change becomes ever more brittle. The possibility that geoengineering technologies will become necessary looms ever closer; the key is to deploy them rationally, with redundancy and agility, so that costs and benefits can be monitored and managed, rather than to try to manipulate science to force the outcome desired by environmental elites. Deliberate ignorance is seldom a good policy.
4 Conclusions
The message of anthropogenic climate change is that humans are already in the era of geoengineering. That's what it means to live in the Anthropocene, on a terraformed planet. Until recently, it was perhaps excusable to fail to perceive the concomitant responsibilities, which are certainly daunting and difficult to understand, much less manage. But at some point it becomes not wisdom, but ideological hubris, to pretend that these large, complex systems are not being implicitly designed by humans and their economic, cultural, social, and institutional systems. Indeed, that is the realization behind ESEM. Perhaps unfortunately, this does not mean one should presume to suggest starting anew with a more sophisticated, less ideological perspective. Many people and institutions have given years to the current processes and discussions around climate change and geoengineering, and it is not realistic to expect that they will be able to change now. The problem-solving professions (engineering, law, business management) must work with the world they are given. This does not, however, mean that working for improvements (small-scale demonstrations of principle, development of portfolios of technologies, an emphasis on agility in the face of uncertainty) is infeasible.
This leads to the conclusion that it is premature to reject geoengineering technologies outright, or to
ignore them in hopes they will therefore disappear, or to oppose them because they make social
engineering harder and more complex. Geoengineering technologies, if properly scaled and
implemented as part of a portfolio of mitigating technologies, represent an important possible
source of insurance in a future which is highly unpredictable and in which major systems (climate, ocean circulation patterns, and the like) are contingent in poorly understood ways. This in fact
suggests a pressing need for continuing research, especially regarding their non-linear effects as
they are scaled up, and how they may most effectively be combined with other social, economic,
and technological responses to global climate change and, more broadly, the challenges of the
Anthropocene. Especially given the mixed track record of institutional or social adjustments, the
quick responses that technologies can provide in the face of significant system deterioration are a
factor that cannot be ignored.
It is not just research on various technologies that is required, however. Such research is important,
but it should be augmented with development of a far more sophisticated understanding of what
technology systems are, and what they do when deployed at large scale regionally and globally; in short, continued development of the theory and practice of ESEM. In particular, potent
technologies cannot be thought of only, or even primarily, in terms of particular domains, even if
they are perceived as critically important, as climate change is. Such a simplistic approach
fundamentally misunderstands the nature of technology. Additionally, the current conceptualization
of geoengineering is too limited in that it ignores virtually all current technological options, thereby
restricting decision makers to a few all-or-nothing choices instead of providing a portfolio of
options that enhance agility and adaptability in the face of an unpredictable world. It is entirely
feasible to generate far more ethical, responsible, and rational responses to a challenging future; it is
a tendency to rely on ideology rather than analysis, and a dangerous lack of imagination, that are the
real problems.
References
Allenby, B. R. (2010). We Can't Fix the Planet. Slate, http://www.slate.com/id/2268492/, posted September 24, 2010.
Allenby, B. R. (2010-2011). Climate Change Negotiations and Geoengineering: Is This Really the Best We Can Do?
Environmental Quality Management, Winter 2010-2011, pages 1-16.
Allenby, B. R. (2011). Geoengineering: A Critique. Proceedings of the 2011 IEEE Annual Symposium on Sustainable
Systems and Technology, Chicago, IL, May 16-18, 2011.
Allenby, B. R. (2012). Durban: Geoengineering as a Response to Cultural Lock-In. Proceedings of the IEEE 2012
Annual Symposium on Sustainable Systems and Technology.
Allenby, B. R. and D. Sarewitz. (2011). The Techno-Human Condition. Cambridge, MA, MIT Press.
AP. (2011). Russia Slams Kyoto Protocol.
http://www.google.com/hostednews/ap/article/ALeqM5hvs_QYwvGYexpDwEz20ax5cKNRxw?docId=89bcec9b70034
b3386428ac71f71f90e. Accessed May 2012.
Bijker, W. E., T. P. Hughes, and T. Pinch, eds. (1997). The Social Construction of Technological Systems. Cambridge:
MIT Press.
Bracmort, K., R. K. Lattanzio, and E. C. Barbour. (2010). Geoengineering: Governance and Technology Policy. U.S.
Congressional Research Service R41371.
Brumfiel, G. (2012). Good Science. Nature 484, 432-434.
Freeman, C. and F. Louca. (2001). As Time Goes By: From the Industrial Revolutions to the Information Revolution.
Oxford: Oxford University Press.
Gillies, R. (2011). Canada Pulls Out of Kyoto Protocol. http://news.yahoo.com/canada-pulls-kyoto-protocol-
224653838.html, accessed December 16, 2011.
Greenpeace. (2011). COP 17 Grinding to a Halt Greenpeace Response.
http://www.link2media.co.za/index.php?option=com_content&task=view&id=14565&Itemid=12, accessed December
15, 2011.
Gunther, M. (2011). What the Heck Happened in Durban? www.greenbiz.com/print/45353, accessed December 17,
2011.
Holmes, B. (2010). What's the beef with meat? New Scientist 207(2769), 28-31.
Mattick, C. S. and B. R. Allenby. (2012). Cultured Meat: The Systemic Implications of an Emerging Technology.
Proceedings of the IEEE Annual Symposium on Sustainable Systems and Technology, May 2012.
Rosenberg, N. and L. E. Birdzell, Jr. (1986). How the West Grew Rich: The Economic Transformation of the Industrial
World. New York: Basic Books.
Royal Society. (2009). Geoengineering the Climate: Science, Governance, and Uncertainty. London: Royal Society.
Schivelbusch, W. (1977). The Railway Journey: The Industrialization of Time and Space in the 19th Century. Berkeley: University of California Press.
Tollefson, J. (2010). Intensive farming may ease climate change. Nature 465, 853.
United Nations. (2011). Durban Conference Delivers Breakthrough in International Community's Response to Climate
Change. http://www.un.org/wcm/content/site/climatechange/gateway, accessed December 17, 2011.




Changing The Metabolism Of Coupled Human–Built–Natural Systems
M B Beck¹, R Villarroel Walker¹ and M Thompson²

¹ Warnell School of Forestry, University of Georgia, Athens, Georgia 30602-2152, USA. E-mail: mbbeck@uga.edu
² International Institute for Applied Systems Analysis (IIASA), A-2361 Laxenburg, Austria
Abstract
The archetypal metabolism of the city is defined by the flows of energy and materials (carbon (C), nitrogen (N), phosphorus (P), water) entering the city from the rest of the global economy, then circulating around and through its economic, social, and industrial life, before returning to the city's environment (and the global economy). The change we argue for is that of viewing nutrients not as pollutants (a perception entailed in the historic success of water-based systems for securing public health in cities) but as resources to be gainfully recovered. We expect such change to propagate from change in the "human" (local and very personal scale), through the "built" (city-wide scale), hence eventually to better stewardship of the "natural" (and global scale) component of coupled human–built–natural systems. We begin with a discussion of moral positions on the Man–Nature relationship and the material flows resulting therefrom. In theory, if material flows are to be changed, a map of the plural contending moral positions is first needed, in particular in respect of contending notions of fairness. Empirical evidence from the renewal of London's housing stock in the 1970s and 1980s, the siting of hazardous waste treatment facilities in Austria, and European work on decentralization and source-separation in wastewater management, is used to illustrate the theoretical basis of our opening argument. Two more in-depth case studies, in changing the metabolisms of the cities of Atlanta, USA, and London, UK, primarily through technological innovations, are then introduced. Foresight regarding future distributions of financial costs and benefits amongst multiple stakeholders, should the change of outlook be made, is generated from a Multi-sectoral Systems Analysis (MSA) model (covering the water, energy, food, waste-handling, and forestry sectors). Upon these specific results is built the closing argument of the paper regarding matters of fairness in constructing (or dismantling) social legitimacy around the policies of urban infrastructure re-engineering that would be needed to recover the resources currently treated as pollutants.

1 Introduction
Until public health had been secured for citizens living cheek by jowl in the confined spaces of cities, cities were arguably prevented from realizing their full potential as the engines of national (and now global) economies (Glaeser, 2011; McGreevey et al., 2009). One hundred and fifty years ago, when the introduction of the Water Closet (WC) was becoming widespread, the configuration of the water infrastructure into which most cities of the Global North were to become ever more comprehensively locked could not have been imagined. And not until just some two decades ago did we question whether the predominant style of environmental engineering of such infrastructure, especially that for managing wastewater (on the "downside" of the city), was self-evidently doing good by the environment (Niemczynowicz, 1993). If it was not, moreover, how might we re-engineer a way out of this technological lock-in, and seek to learn how to avoid it in the first place from socially and economically successful cities of the Global South (Beck, 2011; Crutzen et al., 2007)? We stand presently on the threshold of what some, therefore, are calling a decisive change of paradigm (Larsen et al., 2009; Larsen et al., 2012).
In the fullness of time, that small, seemingly humble, yet utterly vital innovation of the household WC has indeed brought about its own form of earth systems engineering. Consider this. The WC, together with subsequently evolved sewerage, cuts the short feedback loop between pathogenic excretions and drinking water and conveys our (human) metabolic residuals out of the confined spaces of the city and into the environment. The materials we need in food for sustaining ourselves (nitrogen (N), phosphorus (P), carbon (C), embodied energy, and so on) pass through our bodies and, given the WC, are then headed to some form of aquatic environment. Prior to comprehensive installation of the WC and sewerage, this was not naturally so. Public health in the city has been acquired at the expense of water pollution. Thus did (and does) the environmental engineering of the city's wastewater infrastructure progress through eras driven successively by the need to control pathogenic pollution, gross organic pollution, nutrient pollution, and toxic pollution, all in respect of water bodies. Had the Reverend Moule's Earth Closet (EC), or some kind of Vacuum Closet (VC), instead gained supremacy ahead of the WC popularized by Mr Crapper, might none of these eras of water pollution control ever have been entered into?
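To give a concrete sense of the nutrient flows being described, the short sketch below (Python) estimates the nitrogen and phosphorus that a city's population routes into its wastewater each year via the WC. The per-capita excretion rates and the example population are illustrative assumptions, broadly in line with figures commonly used in the source-separation literature, not data reported in this paper.

```python
# Rough estimate of the nutrient load a city's population routes into its
# wastewater via the WC. Per-capita figures are illustrative assumptions
# (broadly consistent with the urine source-separation literature), not
# data from this paper.

N_PER_PERSON_KG_YR = 4.5   # assumed kg nitrogen excreted per person per year
P_PER_PERSON_KG_YR = 0.5   # assumed kg phosphorus excreted per person per year


def city_nutrient_load(population):
    """Return (N, P) in tonnes per year carried into the sewer by excreta."""
    n_tonnes = population * N_PER_PERSON_KG_YR / 1000.0
    p_tonnes = population * P_PER_PERSON_KG_YR / 1000.0
    return n_tonnes, p_tonnes


if __name__ == "__main__":
    # Example: a city of one million people.
    n, p = city_nutrient_load(1_000_000)
    print(f"~{n:,.0f} t N/yr and ~{p:,.0f} t P/yr enter the wastewater stream")
```

On those assumptions, a city of one million people discharges on the order of 4,500 tonnes of nitrogen and 500 tonnes of phosphorus into its sewers annually: a treatment burden under the prevailing mindset, but a potential fertilizer feedstock under the change of valuation argued for in this paper.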
From another perspective, given the extraordinary success of the Haber-Bosch process for manufacturing fertilizers based on nitrogen (Erisman et al., 2008), the WC and sewerage, in the absence of their effective coupling with wastewater treatment, have participated in fueling coastal eutrophication on a global scale (Beck, 2011; Grote et al., 2005). Artificial fertilizer is applied to the land, to produce foodstuffs in North America, for example. These products are shipped around the globe, to become imports into, say, Asian countries and their cities. There, once consumed, and in the absence of wastewater treatment⁸, all the residuals of the nutritious nitrogen (N) and phosphorus (P) materials end up (untreated) in coastal seas and oceans, with distorting consequences for the structures of marine ecologies and their associated fisheries (Jackson et al., 2001). Moreover, given the current staggering successes of membrane technologies, hence the burgeoning of desalination facilities around the world (Frenkel and Lee, 2011), there is every prospect of yet more earth systems engineering being wrought, with complex, unfolding, unraveling social consequences. For desalination amplifies the capacity for supplying potable water to people in coastal cities. In principle, this greater access to water should sustain greater populations of citizens in such cities, all of whom may thereby be placing themselves increasingly at risk from the threats of sea-level rise (Beck, 2011).

⁸ Installation of which component of infrastructure tends to lag some 20 years behind the introduction of infrastructure for potable water supply on the "upside" of cities (McGreevey et al., 2009).
There are many reasons, therefore, to judge that we stand on the threshold of constructive, pivotal change: change, arguably, of proportions entirely consistent with the scale and scope of earth systems engineering. Consequently, we should address, first, the prospect of a change in mindset: from viewing the C, N, P, and other materials entrained into the water metabolism of the city (as a result of the WC) as pollutants to be rid of (at a cost), to their being viewed as resources to be recovered (with profit). Second, the commonplace of talking about a "global water crisis" tends to limit thinking about the city's water infrastructure to matters of water supply, water recovery, and water re-use (Beck and Villarroel Walker, 2011). It accords inadequate, if not scant, recognition to the role and place of wastewater in that infrastructure, or rather the "waste" in the water. Given then this altered apprehension of the city's metabolism, as not being solely that of water fluxes but that of the multiple C, N, P, energy, water, and other material fluxes, we are obliged to change our outlook further: from policy and engineering analysis of the water sector alone, to integrated analysis of the water and nutrient and energy sectors (Villarroel Walker et al., 2012). This is entirely in line with the emerging global agenda item of the Water-Food-Energy (WFE) Climate Security Nexus, deriving from nothing less than the World Economic Forum (WEF, 2011).
We examine the potential for technological innovations within city-watershed systems (specifically
Atlanta and London) to achieve substantial rates of resource-energy recovery, in a multi-sectoral
context. From this foresight exercise flow back a number of economic implications, not least an
emerging appreciation of fairness: of who might pay and who might gain, for example, should these
potential innovations be introduced. Monetary matters are crucial, of course. But they are not the
only considerations in any debate within a community with plural aspirations for the future of their
city and its cherished environment. Acceptance of the innovations, especially if they are disruptive of personal and household behavior (i.e., if they require a change in habits), will be just as important if the community is to witness the formation of policy decisions as socially legitimate.
Our analysis of the change we advocate begins with the subject of valuation, hence the manner in
which mind-sets may change. There is a theory for this and there is empirical evidence of its
workings, in matters of renewing the housing stock in London, UK, and managing hazardous waste
in Austria, which is briefly recounted. The subsequent task is to frame the debate surrounding the
issue of recovering (or not) the nutrients/energy in sewage according to this theoretical structure and
to assemble further empirical evidence of the nature of the debate, notably across Europe (Larsen et
al., 2012). This framing of the issues extends our earlier work on governance for re-engineering city
infrastructure (Beck et al., 2011). We ask then: could computationally generated foresight about such inter-generational matters (foresight generated, that is, in a manner consistent with the theoretical structure) have an impact on either the debate or on achieving progress towards change in practice?
2 Framing the Problem: Understanding What Might Spark the
Change
In Rubbish Theory: The Creation and Destruction of Value, Thompson (1979) writes of how the
way we value things changes over time. The object remains the same: city house, chair, nutrient in
sewage. It is just that our perspective rotates around the object and with that change of perspective the object may come to be valued quite differently. Thus, as we all know, the inner-city property once viewed as a rat-infested slum, to be rid of as quickly as possible through demolition, may in due course become a very highly valued piece of real estate (and endure for much longer than might have been expected). Thus might it also be for the C, N, and P materials in sewage, and their embodied energy: conventionally treated across the 20th century as pollutants to be utterly rid of; from hereon (arguably) to be viewed instead as valued resources to be recovered (see, for example, Beck (2011); Larsen et al. (2012)). The essential question is: what might spark such a transformation in outlook, through what kind of policy or technological innovation, and with what promise of greater social well-being?
If we understood something of the mechanics of similar transformations, we might then be able to
sense how the re-valuation from nutrients-as-pollutants to nutrients-as-resources might occur. And
it may well be that the transformation is not a function of better policy, or new and better
technologies, or even of financial incentives, but of the interplay, for example, amongst the plural
and contending notions of fairness that are abroad in society. We should be open to such being the
case.
2.1 Plurality of Perspectives in Society
Writing more recently, in a paper on Material Flows and Moral Positions, Thompson (2011)
provides a framework within which to contemplate how the change we seek could come about, as
follows:
A prevalent view, among those who are concerned about the material flows we are generating, is that
they are excessive and environmentally unsustainable. Greed, the triumph of competition over
cooperation, the inequalities between North and South, and anthropocentrism are then blamed for this
state of affairs. The solution is obvious: more altruism, a worldwide equalising of differences, a
reining-in of market forces, and a whole new relationship with nature ecocentrism. This clearly is a
moral position, and those who act from that position will certainly be having some effect on material
flows. But there are other moral positions, and other ways of framing the problem and its solution, and
it is the plurality of moral positions, and their modes of interaction, that are actually determining the
material flows. If we are to understand these flows, and to come up with ways of lessening them [and
making them more circular], then the first essential is a map of these moral positions.
While elaborated more fully in Thompson (2011) and Thompson and Rayner (1998), enough of this
map of these moral positions has already been aired elsewhere, in respect of assessing the kind of
institutional governance that might enable (as opposed to stifle) strategies for re-engineering cities
such that they may become forces for good in the environment (Beck et al., 2011). Two case histories, one of London's housing stock, the other of the siting of hazardous waste disposal facilities in Austria, reveal the key insights for our present discussion (Thompson, 2011).
In the case of London, government-associated planning experts were largely responsible for the
provision of housing during the 1960s and 1970s. For brevity, let us label these actors, i.e., the
upholders of one of the moral positions, as "hierarchist" (Beck et al., 2011; Thompson et al., 1990).
Then, beginning in the 1970s (Thompson, 2011),
[A] creative and motley assortment of owner-occupiers ... were able, through their myriad individual
and uncoordinated efforts, to derail the planners' singular and unrelenting vision of The New
Jerusalem.
These others are the upholders of a second moral position, which we shall dub "individualist". In effect, they privatized what had once been viewed (by the hierarchists) as the despised communal, public burden. Where they were successful, they reduced to a trickle what were elsewhere the massive flows of materials and embodied energy liberated by demolition and reconstruction. Materials and objects, in principle destined for the landfill, were instead recycled; and local, small-scale, higher-skilled, and labor-intensive trades flourished. In the eyes of those we would now recognize as upholders of the moral position of ecocentrism (call them "egalitarians", with their new relationship between Man and Nature), the competitive, greedy, and profligate individualists are the problem, not the driving force behind the solution (as here).
Nor was this solution of the individualists a matter of the preservation of the old ways of history for
the sake of preservation and, therefore, against innovation. New technologies were developed and
diffused, such as timber treatments, damp-proofing, forced ventilation, thermostatically controlled
heating systems, and so on. Today, the re-valued housing stock combines very high-speed and high-
volume technologies of information flows with impressively slowed-down and shrunken material
flows. As Thompson (2011) puts it:
Revaluing ... is altogether different from re-cycling. In re-cycling the building itself disappears and its
physical components are then re-used in the construction of a new building ... Re-valuing, however, is
something that happens in our heads, and the building itself stays in place. The only change, to begin
with, is in our attitude to the building. But of course, once we see it as "sadly-neglected glorious heritage", rather than as "awful rat-infested slum", our behaviour towards it changes, and that ... leads to
all sorts of changes in the material flows associated with our built environment.
In the second case history, one of selecting sites for hazardous waste treatment facilities in Austria,
the hierarchists (of the government) were at the outset again in control of the process, with the local
communities of the candidate sites nonetheless not ignored. Essentially, they were accepting of the
government-planning moral position (Thompson, 2011):
[I]n their deference to expert opinion, and their willingness to sacrifice themselves for the common
good, the citizens conformed to the hierarchical expectations of their government, and the government,
for its part, marshalled the required technical expertise and shouldered ultimate responsibility.
The hierarchists' sense of fairness, that the least burden (here) should be borne by the greatest number of citizens (Linnerooth-Bayer and Fitzgerald, 1996), was intended to prevail.
Despite the ubiquity of the orthodox duopoly of markets and hierarchies, the alternative approach of
the market (favored by the individualists) was largely absent. That approach, which might have
taken the economically efficient form of transporting the waste out of Austria for disposal in
another country, or in some other relatively poor, disadvantaged community (Summers, 1991),
would surely evoke the opposition of the egalitarians. For despite the claims of the individualists about the fair and successful movement of Adam Smith's invisible hand (by which benefits accruing to individuals come to benefit everyone else), their approach tramples over the egalitarians' contending sense of fairness. To wit, a survey conducted at the time revealed an unusually high commitment amongst Austrian citizens to solutions that might derive from this third (and often over-looked) egalitarian moral position. Some 84% of the citizens surveyed expressed their strong sense of being responsible for, hence doing something about, their own local wastes, as opposed to unfairly transferring the burden to some quite other, possibly distant, disadvantaged community or nation (Linnerooth-Bayer, 1999). Here, in contrast to the London case, i.e., in
contrast to an individualist-inspired solution, it could have been the egalitarian-inspired way,
whereby material flows of hazardous wastes would desirably extend over but small distances.
To summarize, we find from these case histories four insights of potentially general applicability,
that:
(I1) the value of material objects and things can change dramatically over time (the creation
and destruction of value);
(I2) there is a triad of moral positions on the Man-Nature relationship, i.e., those behind the hierarchist, individualist, and egalitarian solidarities in a society or community;
(I3) the vitality and significance of these positions for the extrusion of policy is
dependent upon their opposition one to another (including implacable opposition), even
though each would prefer to have things all its own way; and
(I4) the respectively attaching plural notions of fairness may be especially important in
determining the fate of wastes, in particular.
2.2 Change: Nutrients as Pollutants or Resources?
To re-iterate, the change we are looking for is this: the transformation from nutrients-as-pollutants
to nutrients-as-resources under the priority of maintaining public health in densely populated cities.
For a variety of reasons, our problem setting cannot be identical with those of material flows and
moral positions in London and Austria. Not the least of the differences is that place, context, and
local, regional, and national cultural attitudes matter to the outcomes of problem-solving. In
addition, it is obvious that there may be differences between re-valuing previously despised
property (real estate) and re-valuing human excrement, towards which we should always (arguably)
have an instinctive, self-preserving sense of revulsion (but see Mehta and Movik (2011); Sim
(2011)). Like the hazardous waste flows of the Austrian case study, such material is something of
which to be most wary, in particular in the context of material flows in confined, localized spaces.
Still, with the prospect of 9 billion fellow citizens, many of whom will live in cities, we may be
approaching a threshold when habits must perforce be modified in some way, in order to attain and
maintain personal and public health, now under such profoundly changed circumstances (since the
mid-19th century). Indeed, it should not escape notice that, given the historic solution to public
health of the (nutrient/resource-wasting) WC and urban sewerage, those of us fortunate enough
have become wholly accustomed to someone else taking care of our wastes. An intensely
local/personal challenge has been solved at the expense of more regional (pollution) problems. Our
personal problems have been exported away out of house, home, and the city to somewhere
else. We know only too well these aphorisms: out of sight, out of mind; flush and gone.
To the surprise of some, re-valuation is already taking place. It is happening through an interesting collaborative partnership amongst a public-sector utility (Clean Water Services (CWS), Durham, Oregon, USA), a private-sector start-up company (Ostara, Vancouver, British Columbia, Canada), and a non-governmental organization, the British Columbia Conservation Foundation (BCCF) [9]. Without going into the details, suffice it to say that "The Ultimate Recycling" was the accolade accorded to the partnership by Force (2011) in the magazine Treatment Plant Operator, followed by the qualification:

An Oregon plant uses a proprietary process to extract nutrients from wastewater and uses them for salmon restoration and a high-quality fertilizer.

[9] This partnership and its modus operandi are unusually (and surprisingly) significant in the context of our work on cities as forces for good in the environment, as elaborated in more detail elsewhere, in Beck (2011).
Irony, if not disbelief at the change in perspective, is in the air. Here again is Force (2011), quoting
Rob Baur, wastewater professional and senior operations analyst of CWS at the Durham, Oregon,
facility:
"For 35 years, I've been removing phosphorus and ammonia from wastewater. It's hard to believe that now I'm putting them back into a river."

The question is: will this early practical (and far-reaching) incursion into an alternative way of valuing what were previously thought pollutants ignite any subsequent "herd instinct", i.e., a mass buy-in to this alternative way of thinking?
2.3 Empirical Evidence of the Push for More Radical Change
Transforming though it may seem, the (profitable) innovation of CWS-Ostara-BCCF works
nevertheless within essentially the paradigm of the conventional, centralized urban (waste)water
infrastructure, into which most cities are comprehensively locked (certainly in the Global North).
The designation "centralized" (connoting the conventional sewer network, with all its dendritic branches leading from individual WCs in individual households, eventually coming together in the trunk that connects the sewer to the single wastewater treatment facility) has become the paradigm of the past and present, against which we are bidden constructively and creatively to react (Larsen et al., 2012; Niemczynowicz, 1993). The end-point of this now radically different logic of de-centralization is that of the on-site, resource-recovering, water-nutrient-energy re-plumbing of the quasi-autarchic household.
This logic of de-centralization is the subject of many of the chapters in Larsen et al. (2012),
together with the companion logic of "source separation", wherein the sources of material flows in households, offices, public buildings, and industries are to be kept strictly separate and not discharged to, hence mixed in, the conventional, centralized catch-all of the urban sewer network. Nothing
epitomizes the notion of source separation so magnificently as the urine-separating toilet. It simply
matches the evolved design of the human body (Larsen et al., 2009; Lienert and Larsen, 2007). N
and P are concentrated in urine, C and pathogens in faeces, whereas the water of the WC is the
added other material flow. And from this elementary mix has followed decades of earth systems
engineering.
Its appeal notwithstanding, the theoretical structure of the three moral positions introduced above
cannot be brought into alignment with the empirical evidence in Larsen et al. (2012) of the multiple
agencies and actors pushing, pulling, or resisting the logic of de-centralization (and source
separation). For while the behavior of some of the actors, and their most obvious institutional descriptors (public, private, or civil-society sector, for instance), might be strongly redolent of the hierarchist, individualist, or egalitarian positions, the source material is not sufficiently rich to infer such alignments. Furthermore, none of these actors is (nor are we ourselves) resolutely an upholder of one or other of the positions in respect of all social, economic, and environmental issues, or even in
respect of the same, single issue for all of time (Price and Thompson, 1997; Thompson, 2002).
Time, place, culture, and country matter so very greatly.
To discern this, we must begin by aggregating the multiple actors, across the several countries from
which the evidence derives (Lienert, 2012; Londong, 2012; Swart and Palsma, 2012; Truffer et al., 2012; Vinnerås and Jönsson, 2012), into the following groups:
(A1) Households;
(A2) Communities, as realized in housing associations and schools;
(A3) Professionals, variously identified as engineers, civil engineers, wastewater professionals,
DWA (Deutsche Vereinigung für Wasserwirtschaft, Abwasser und Abfall e.V.; essentially, the German association for water professionals), and STOWA (Stichting Toegepast
Onderzoek Waterbeheer; the Dutch Foundation for Applied Water Research);
(A4) Private sector, e.g., sanitary hardware firms, small- and medium-size enterprises
manufacturing pipes, tanks and small bioreactors, and manufacturers of packaged on-site
industrial wastewater treatment systems;
(A5) Public sector, such as municipal governments, public authorities, and public utilities;
(A6) Law, as in regulations, legal liability, and codes of practice;
(A7) Farmers, as the recipients of the nutrient (fertilizer) resources to be recovered, in principle.
Table 1 summarizes the "instigating" and "resisting" categories of actors across Europe, in respect
of the issue of de-centralization and on-site technological innovation, hence, by strong association,
our issue of change in valuation (from nutrients-as-pollutants to nutrients-as-resources).
It would be dangerous to draw too many conclusions from Table 1, not least because it could be
argued that each of the authors of the narratives would consider themselves both professional actors
and members of the instigating camp, hence biased. In addition, both the foregoing US-Canadian
practical realization of the sought-after revaluation of nutrients in sewage, and our own
computational studies for the Atlanta-Chattahoochee system (Beck et al., 2010), demonstrate how
doing better by the environment does not oblige us to adopt de-centralization or even source-
separation (Beck, 2012). Furthermore, Table 1 deals with evidence of a change in the valuation of
nutrient-energy materials in sewage, but in the absence of hard supplementary evidence of this re-
valuation occurring under the abiding priority of maintaining public health in densely populated
cities, which is crucial to our present discussion.

Table 1 Classification of empirical evidence from the book of Larsen et al. (2012) in favor of, and opposing, a change of perspective on human-waste-into-the-water-cycle (Crutzen et al., 2007).

Country | Instigating Generic Actor | Resisting Generic Actor
Europe (Lienert, 2012) | Households; Private sector; (Communities) [a] | Professionals; (Farmers) [a]
Europe/Germany (Truffer et al., 2012) | (Private sector) | Law; Professionals
Sweden (Vinnerås and Jönsson, 2012) | Law |
Germany (Londong, 2012) | (Public sector) [a] | Professionals; Law
Netherlands (Swart and Palsma, 2012) | Professionals; Communities; Private sector | (Professionals) [a]

[a] Actors indicated in parentheses (...) signal mild support, according to our inferences, for their placement in the instigating or resisting category.
Nevertheless, the debate about the possibilities and options for radical change (of on-site,
household technologies) has not only been opened, but appears to be gathering pace. And we
confess to a commitment to engage in it.
2.4 Systemwide Change: Multisectoral Stance
If the C, N, P, embodied energy, and other materials in sewage were to be recovered, any further
analysis of policy and technological options on the basis of water, water infrastructure, or the water
economic/industrial sector alone would obviously be quite inadequate (Beck, 2011; Beck and
Villarroel Walker, 2011). The arguments in favor of the more appropriate multi-sectoral perspective,
to be elucidated and applied below, are already well rehearsed and, lest it be overlooked, both their
origins and implications are of the global proportions consistent with earth systems engineering
(Beck and Villarroel Walker, 2011; Villarroel Walker et al., 2012).
One further observation is due, however. The debate may have been opened, as noted above, but it has yet to recognize those who may have the most to contribute to it and, in return, benefit financially from it (Beck and Villarroel Walker, 2011). Given the sectoral origins of the food imported into the city and the sectoral destinations of its recovered products exported out of the city, candidate technologies for potential introduction into the traditional urban water sector, hence the possible instigators of change (as owners of these services and technologies), might not presently be associated with the water sector [10]. These (non-farmer) instigators may currently be acting and conducting their businesses in the agricultural, food, process chemicals, and energy sectors, for instance, and perhaps even the information technology (IT) industry.

[10] However, while Veolia Environnement might be most familiar to us as just a water utility, it is already a diversified, multi-utility enterprise, with the ambition to provide its services in an integrated, seamless, multi-sectoral manner (Veolia, 2008).
There is some irony, therefore, in the observations of both Olsson (2012) and Truffer et al. (2012) on the role of the IT sector. Indeed, these have a bearing on fairness, on the moral position of taking care of one's own wastes, and even on matters of civil liberty (not to mention issues of system-wide risk; Beck (2005b)). As responsibility for treating household sewage is progressively devolved down to individual, private households, overseeing the maintenance of public health increasingly motivates the market need and niche for remote professional service supervision (Olsson, 2012; Truffer et al., 2012), presumably because some individuals cannot be trusted to take care of their own wastes, day-in, day-out (see also Beck (2011)). Progressive de-centralization of the physical, civil-engineering infrastructure, it is suggested, should go hand-in-hand with the creation of a progressively centralized (virtual, IT) control-engineering infrastructure. Or should it? In the UK (one of the world's most closely monitored nations), a recent proposal by a previous government, to place IT devices in household rubbish bins, has been overturned by the current (2012) government. The purpose of the proposal was to reveal and punish those households who were not recycling a sufficient proportion of their wastes. Besides being a popular reversal of proposed legislation, there seems to be little, if any, evidence of "free-riding", i.e., taking advantage of a public service by, in effect, not acting privately and personally in a socially constructive manner.
Our essential point, however, is that none of the non-water-centric actors (as agents of change in the water sector) is cited in any of the categories (A1) through (A7) associated with Table 1 and the original source material of Larsen et al. (2012).
3 Approach: Foresight & Reachable Futures
In the modern (and over-worked) phrasing, a "tipping point" is presenting itself. Under scrutiny is change in an infrastructure with arguably the greatest and deepest of technological and social lock-ins, according to the criteria of Collingridge (1981). This lock-in has taken decades to mature, a century and more in places. The technical freedom to operate differently, given the legacy of its physical infrastructure, is akin to the (lack of) movement afforded by a straitjacket (Beck, 2005b). From a
societal perspective, if the asserted benefits of source separation and de-centralization are to be
realized (Larsen et al., 2012), the intensely personal and intimate matters of the dietary and toilet
habits of a life-time (for every one of us) are subject to serious challenge. The prospect of change,
and the uncertainty clouding its distant, inter-generational outcomes (which opponents may readily
exploit to stifle attempts at instigating any such change of outlook in the first place; Lienert (2012);
Truffer et al. (2012)), could hardly be more profound and radical.
We presume that some sort of computationally-generated foresight should, therefore, be fruitfully
employed to inform the gathering debate (witness Borsuk et al. (2008)). How this might be
achieved, as a matter of identifying the seeds of structural change in the behavior of a system (the first hint of the imminent tipping point) and then, crucially, of generating foresight in the expectation of such dislocations in behavior, has been set out in its general form in Beck (2005a, 2002).
More specifically, a procedure of Adaptive Community Learning (ACL) is to be followed. In
principle, the plural solidarities within the community where the change is being contemplated start
by expressing their aspirations for the distant future, including from their respective moral positions
(Beck, 2011; Beck et al., 2002b). The attainability, or reachability, of these plural futures (under
gross uncertainty) is then assessed according to the inverse computational analyses proposed and
demonstrated by Beck et al. (2002a) and Osidele and Beck (2003). What results, inter alia, are indications of those factors (technologies, policy components, elements of the uncertain science bases) upon which the reachability (or not) of the community-imagined futures critically hinges.
And that, their reachability or not, can be expected to be of visceral interest to those various
solidarities.
On this occasion, the foresight-generating procedure will be illustrated by two case studies of
London and the Atlanta-Chattahoochee system, using the Multi-sectoral Systems Analysis (MSA)
software of Villarroel Walker and Beck (2012) and Villarroel Walker et al. (2012). The MSA is
based on an analysis of material flows within the city-watershed system, i.e., the flows of water, N,
P, C, and energy, into, around, and out of five interacting economic sectors, for water, energy, food,
waste-handling, and forestry.
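Since MSA is, at bottom, an accounting of substance flows between sectors, a minimal sketch of that style of bookkeeping may help fix ideas. The sector names follow the list above; the substances, flow values, and function names are placeholders of our own and not the MSA software itself.

# Minimal sketch of multi-sectoral material-flow bookkeeping (illustrative only,
# not the MSA software): flows of each substance between sectors, plus a net balance.
from collections import defaultdict

SECTORS = ["water", "energy", "food", "waste-handling", "forestry"]  # per the text above

# (substance, from_sector, to_sector) -> annual flow; numbers are placeholders.
flows = {
    ("N", "food", "waste-handling"): 12_000.0,   # e.g. tonnes N/a in household sewage
    ("N", "waste-handling", "water"): 9_000.0,   # N passed on with treated effluent
    ("P", "food", "waste-handling"): 1_800.0,
    ("energy", "energy", "water"): 350.0,        # e.g. GWh/a used by the water sector
}

def net_balance(flows):
    """Net inflow minus outflow of each substance, for each sector."""
    balance = defaultdict(float)
    for (substance, src, dst), amount in flows.items():
        balance[(substance, dst)] += amount
        balance[(substance, src)] -= amount
    return dict(balance)

if __name__ == "__main__":
    for (substance, sector), value in sorted(net_balance(flows).items()):
        print(f"{sector:>15s}  {substance:>6s}  {value:+12.1f}")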
3.1 Moral Positions, Material Flows, and Neutrality in Our Analyses
To account for the interplay amongst the plural moral positions, which interplay determines the
eventual material flows, as Thompson (2011) has argued, we presume further that (Beck, 2011):
i. each of the actors on the scene (such as those listed under (A1) through (A7) above) has a
set of greatest hopes (and worst fears) for the distant future, under the re-valuation we are
contemplating;
ii. each is deeply interested in whether (if it does not get things all its own way) the immediate, first step in the resulting policy/technology intervention is not going to foreclose on the possibility of that solidarity's greatest hopes being the object of (revised) policy at some future date; and
iii. each is just as interested (to the core of their being) in who might pay, and who might
benefit, should the contemplated change occur.
We, the authors of this paper, as well as the contributors to Larsen et al. (2012), want to see the
change come about: from nutrients-as-pollutants to nutrients-as-resources. We are accordingly not
neutral onlookers.
In practice, the change should emerge (if ever it does) from the interplay amongst the plural
perspectives on the Man-Environment relationship (of which the contending views of fairness are a
part), irrespective of whether members of any of these solidarities within the given community are
necessarily aware of that advocated change. There could well be, and perhaps should be, instigators
of the change and resistors of it active within the present solidarities, even at the outset of the
process. Equally, there will surely be those who care neither one way nor the other about the change;
and their aspirations and views too are to be heard and respected (Beck, 2011; Beck et al., 2011).
But we do not base our analysis, for the moment, on any of these presumptions. What follows is but
a first preliminary, almost surrogate, analysis of the terms of a debate that has been forming for two
decades, with probably just as long to run into the future. To that extent, we are presuming that the
foresight generated by the analysis would be a timely projection of some elements of the change we
seek into the disputatious (and iterative) process of the debate (Beck, 2011).
In fact, the debate would not be entirely new. A few years ago, this was one of the challenges set
down in the Sanitation 21 document of the International Water Association (IWA, 2006):
Can people who have no previous experience of recycling human wastes be persuaded to adopt such
practices and who pays for the promotion of the approach?

The so-called "ecosan" dry toilet is just such a means of recycling. In particular, it is one already the subject of vigorous debate in professional circles, with accusations circulating of its epitomizing the expensive luxury of sustainability (for the multitudes of the poor and unserved, that is); "eco-insanity" was the jibe leveled at it (McCann, 2005). Kwame's field study of the social acceptability of these toilets, amidst the tough realities of life on the ground in peri-urban Accra, Ghana, could hardly have been more timely (Kwame, 2007).
Adoption there of the new technology promised not just sanitation but the benefit of nutrient
recovery (instead of environmental pollution) and the personal and community obligation to
confront the actuality and proximity of our very human biological residuals. Those in the
community with a strong individualist flair wanted to know whether a market for the sale of
personal, composted residues could be created, not least to compensate them for the waste of their
own personal time in achieving the composting. Hierarchical types, if they could not have the status
symbol of a WC, preferred legislation for punishing non-compliant members of the community
and trusted, certified experts, such as community health nurses and sanitary inspectors, as the
bases of their scheme for managing the introduction and operation of the new ecosan technology.
Egalitarian participants, meanwhile, understood the benefits (without further expert endorsement),
would allocate land to collective, community composting, even in favor of land for individual
shelter, and stood ready to overcome the single obstacle to adoption. Their agenda was to change
the perceptions of the individualists and hierarchists who had yet to be persuaded of the benefits of
recycling human wastes through introduction of the ecosan toilet (Kwame, 2007).
Now we can see how, by our simply having an agenda to persuade, our position is not neutral.
3.2 Rigging MSA Every Which Way
The challenge we face for cities of the Global North is indeed a problem embedded in the
complexly compounded behavior of coupled human-built-natural systems. One might expect
deployment, therefore, of some agent-based model; and that may well be necessary in due course
(Beck et al., 2011). MSA is no agent-based model, however. Our current preference is instead to
implement it along the lines of the work of van Asselt and Rotmans (1996) on uncertainty and the
formation of policy for combating climate change. We therefore provide an outline of the procedure.
It is expressed in greater detail in Beck (2011).
In the interests of precision and succinctness in setting out our procedure, MSA relates inputs (u) to the model (M) of city-watershed material flows, and M transcribes the consequences of u into the perceived output response behavior (y) of the system. In other words, we have the triplet [u, M, y], which can be arranged to specify three variations on the basic problem-solving paradigm of elementary mathematics: given two knowns, find the third unknown. Like any model, M is populated with many parameters (the vector α), which characterize the physical and economic (even quasi-social) behavior of all the unit processes participating in the formation of the material flows across the city-watershed couple (wastewater treatment, incineration, power generation, urban drainage, citizens' dietary patterns, and so on).
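To make the triplet concrete, the following schematic (with a deliberately trivial stand-in for M, and placeholder names throughout) sets out the three problem settings; it is a sketch of the framing only, not of the MSA model itself.

# The triplet [u, M, y]: a model M, carrying a parameter vector alpha, maps a
# policy/technology input u onto the system's output behaviour y.
import numpy as np

def M(u: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Toy stand-in for the city-watershed model: y = f(u, alpha)."""
    # e.g. y[0] ~ water-use reduction, y[1] ~ energy ratio gain,
    #      y[2] ~ N recovered, y[3] ~ P recovered (hypothetical units)
    return np.array([
        alpha[0] * u[0],
        alpha[1] * u[1],
        alpha[2] * u[0] * u[1],
        alpha[3] * u[1],
    ])

# Three variations on "given two knowns, find the third unknown":
#   simulation:   given u and alpha (hence M), compute y
#   calibration:  given u and observed y, infer the alpha that reconciles them
#   reachability: given a target y and M, search for the u able to deliver it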
Recognizing and labeling the three previously introduced perspectives of the hierarchists, individualists, and egalitarians as H, I, and E respectively, we fully expect each to come to the table of the debate more than ready to express their aspirations (greatest hopes; worst fears) for the distant, inter-generational future of their cherished city-watershed, i.e., respectively the target outcomes (behavior) y(H), y(I), and y(E). We further acknowledge that y(H), y(I), and y(E) will inherently be subject to gross uncertainty [11].

[11] In practice, it is not at all straightforward either to elicit such stakeholder aspirations (Fath and Beck, 2005) or to translate them into the numbers required by a computer model (Osidele and Beck, 2003).
In addition, we allow the possibility of H, I, and E holding to their own respective convictions about the physical and economic knowledge bases undergirding the relationships encoded within the triplet [u, M, y], as did van Asselt and Rotmans (1996) with climate science. In other words, there can be an α(H), α(I), and α(E) determining how the plural y(H), y(I), and y(E) might be attained, given a singular set of assumptions and choices for the inputs u. In general, this allows for exploration of the reachability of one solidarity's aspirations, say y(I), given the presumption of another's science and technology preferences and convictions, as in α(E), for instance (see van Asselt and Rotmans (1996)).
Conversely, the inverse analysis of reachable futures seeks that singular (policy) u capable of delivering the expressed plural futures {y(H), y(I), y(E)}, according to the MSA model M, where now there can be plural takes on the inner workings of the model, as a function of the different parameterizations {α(H), α(I), α(E)}. Overall, the analysis seeks to express something (such as a probability) of the plausibility, or attainability, or reachability, of the various hoped-for futures. Such insight may be welcome or unwelcome to the upholders of the various moral positions. In particular, the inverse analysis also seeks to identify which elements (of which processes, which economic factors, which quasi-societal habits, which material flows within the behavior of the city-watershed system, and so on) are key in determining whether a future aspiration is reachable or not. It searches for the α_key, as we may call them. And, by reflection, considerations of those factors and features that are redundant to such attainability may be screened out of consideration (for the time being).
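As a purely illustrative sketch of this inverse question (the toy model, parameter values, and target thresholds below are placeholders of our own, not outputs of MSA), one can sample candidate interventions u and record which of the plural futures each is able to reach:

# Reachable-futures sketch: Monte Carlo over candidate interventions u, with each
# solidarity's target future judged under that solidarity's own parameter convictions.
import numpy as np

rng = np.random.default_rng(1)

targets = {                               # hoped-for outputs y(H), y(I), y(E) (illustrative)
    "H": np.array([0.05, 0.5, 2.0, 0.5]),
    "I": np.array([0.10, 1.0, 4.0, 1.5]),
    "E": np.array([0.15, 1.5, 8.0, 3.0]),
}
alphas = {                                # alpha(H), alpha(I), alpha(E): plural convictions
    "H": np.array([1.0, 0.8, 1.0, 0.9]),
    "I": np.array([1.2, 1.0, 0.8, 1.0]),
    "E": np.array([0.9, 1.2, 1.2, 1.1]),
}

def M(u, alpha):
    """Toy stand-in for the MSA model (same form as the triplet sketch above)."""
    return alpha * np.array([u[0], u[1], u[0] * u[1], u[1]])

N_SAMPLES = 10_000
reachable = {name: 0 for name in targets}
for _ in range(N_SAMPLES):
    u = rng.uniform(0.0, 3.0, size=2)             # candidate (policy) intervention
    for name, y_star in targets.items():
        if np.all(M(u, alphas[name]) >= y_star):  # does u deliver that future?
            reachable[name] += 1

for name, hits in reachable.items():
    print(f"future y({name}) reached by {hits / N_SAMPLES:.1%} of sampled interventions")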
Bringing together these two products of the inverse analysis (the reachability assessments, with their plausibilities, and the α_key), it may be of very special and deep interest to the negotiating parties to apprehend whether there are any elements within α_key that appear to be key in not foreclosing on the reachability of any of their own plural futures {y(H), y(I), y(E)}. Thus, while one solidarity, let us say the egalitarians (E), might be obdurately opposed to a policy u pandering to the aspirations of the hierarchists (H), i.e., to embarking on a path to attaining the distant, inter-generational y(H), this snubbed (E) solidarity is not yet obliged to abandon what it cherishes for that future. For this would be what lies at the core of their convictions about the way the world is: the abiding and reasonable prospect of some day attaining y(E) instead.
4 Case Studies: Computational Results
In 2010, about 5.45M people were living in the Atlanta Metropolitan Area (AMA), which occupies roughly 22,000 km². The Greater London Area (GLA), in comparison, has a population of 7.8M and occupies just 1,570 km². The population of Atlanta has grown by 100% since 1985, London's by 15%. The proportion of land-use classified as urban in the GLA has fluctuated between 57% and 62% over the past 25 years, while that of the AMA was projected to increase from 20% in 1987 to 34% in 2010 (Hu, 2004). Food consumption by the two populations is estimated to be 0.6-0.8 tonnes per capita each year in the GLA and 0.8-1.4 tonnes per capita in the AMA. Densely populated London is served entirely by a conventional, centralized sewerage and wastewater
infrastructure, whereas almost 40% of Metro Atlanta's population occupies dwellings utilizing
septic tanks, ergo a decentralized arrangement. Both are locked into the prevailing mind-set of
nutrients-as-pollutants.
Our position, as currently bystanders to any debate in either city, is this: what might it take to create value in the nutrients as recoverable resources? There are no solidarities into which we can tap to provide us with authentic expression of their plural futures {y(H), y(I), y(E)}. Instead, we must substitute for them the following plural, detached target behaviors for resource savings and recovery: (i) a reduction in water use, y(1); (ii) an increase in the ratio of energy generated to energy consumed, y(2); (iii) the mass of N-bearing materials gainfully recovered, y(3); and (iv) the mass of P-bearing materials likewise gainfully recovered, y(4) [12].

[12] "Detached" would conventionally provoke us purportedly neutral analysts into asserting "objectivity" about the expression of these goals!
The influence of four promising innovations in the water sector is assessed, as candidates for attaining the target behaviors, starting from the status quo (and with the eventual prospect of their 100% penetration of their respective niches):
(T1) Urine-separating toilets (UST; Larsen et al. (2009); Lienert and Larsen (2007)), for the
production of struvite (a P- and N-based product) and ammonium sulfate (an N-based
product);

(T2) Consolidation and co-treatment of household organic (food) waste, through its conveyance
in the sewerage system (COW), which implies the use of food-waste grinders and the
mixing of kitchen organic waste with the usual contents of household sewage, i.e., laundry
and bathroom/toilet fluxes (Malmqvist et al., 2010);

(T3) Pyrolysis of separated sewage sludge (PSS), by which organic material is decomposed at
high temperatures and in the absence of oxygen to produce gas, bioliquids, and biochar
(Furness et al., 2000); and

(T4) Algae production in wastewater treatment facilities (AWW; Srinath and Pillai (1972);
Sturm and Lamer (2011)); for subsequent biofuel extraction, utilizing any remaining
nutrients in treatment plant effluent flows, for example, in the event AWW is implemented
jointly with UST.
Innovations (T1) through (T4) are incorporated into the model M via parameters that are elements of the overall parameter vector α.
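How those parameters might look is sketched below; the element names and ranges are hypothetical placeholders for illustration, not the parameterization actually used in the study.

# Each innovation contributes a handful of elements to alpha, e.g. a market
# penetration and one or two process characteristics, sampled over ranges in the
# foresight runs.  All names and ranges below are placeholders.
technology_parameters = {
    "UST": {"penetration": (0.0, 1.0), "urine_nutrient_capture":   (0.5, 0.9)},   # (T1)
    "COW": {"penetration": (0.0, 1.0), "food_waste_to_sewer":      (0.2, 0.8)},   # (T2)
    "PSS": {"penetration": (0.0, 1.0), "energy_yield_MJ_per_kg":   (8.0, 14.0)},  # (T3)
    "AWW": {"penetration": (0.0, 1.0), "effluent_nutrient_uptake": (0.3, 0.7)},   # (T4)
}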
The purpose of our inverse analysis can now be stated succinctly as:

What factors in α, in particular those associated with (T1) through (T4), are found to be key in discriminating between whether y(1), and/or y(2), and/or y(3), and/or y(4) are reachable or not, i.e., what is contained within the subset of parameters α_key?

Given these identified α_key, which of its elements, if any, are key to the reachability of all {y(1), y(2), y(3), y(4)}, i.e., which factors in the coupled human-built-natural system encapsulated in M might be key to the potential for none of the target futures to be foreclosed upon, in principle?
4.1 A Preeminently Nonforeclosing Innovation
The set of elements of α_key found to be key in some way for either the London or the Atlanta case study, or both, is identified and defined in Table 2. It is apparent that these elements cover not only technological features, but also other properties of the interactions between the infrastructure and the rest of the environment (sewer leakage and infiltration), as well as societal features having to do with diets.
Table 2 Key constituent technologies and features of the multi-sectoral metabolisms (of both Atlanta and London) for reducing water use, improving the energy ratio, and nutrient recovery.

ID  Description of system features
F1  Water supply leakage
F2  Inflow/infiltration to sewer network
F3  Urine-separating toilets (UST) [a]
F4  Diet and nutrient content in bodily waste
F5  Pyrolysis of sewage sludge (PSS) [a]
F6  Wastewater treatment (nutrient removal performance)
F7  Algae production in wastewater effluent (AWW) [a]
F8  Consolidation of organic waste (COW) [a]
F9  Water use by domestic/residential users
F10 Water use by commercial users
F11 Water use for coal-based power generation
F12 Water use for natural-gas-based power generation
F13 Direct energy use for water supply
F14 Industrial discharges to the sewer network

[a] Treated as an aggregate of two or more constituent features, such as degree of implementation, separation efficiency, and process conditions.
Tables 3 and 4 show how the various elements of α_key govern the reachability (or not) of the target futures {y(1), y(2), y(3), y(4)} for Atlanta and London respectively. In fact, these futures have each
futures {y(1), y(2), y(3), y(4)} for Atlanta and London respectively. In fact, these futures have each
been graded into progressively more ambitious target levels of resource savings and recovery, such
as, for example, exceeding a 5%, then a 10%, and finally a 15% reduction in water use, or
recovering at least 500, then 1500, 3000, and finally 5000 tonnes of P per annum.
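Written out as data (as we read the graded levels from the column headings of Tables 3 to 5, with the N and P levels converted to tonnes per annum), the target levels are:

# Graded target levels for the four futures y(1)-y(4); water and energy are relative
# (percentage) changes, N and P are absolute recovery rates.  The split of the header
# levels between the water and energy groups is our reading of the tables.
graded_targets = {
    "y1_water_use_reduction_pct":   [5, 10, 15],
    "y2_energy_ratio_increase_pct": [20, 50, 100, 150],
    "y3_N_recovery_t_per_a":        [2_000, 4_000, 8_000, 12_000],
    "y4_P_recovery_t_per_a":        [500, 1_500, 3_000, 5_000],
}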

Table 3 Summary of RSA results associated with Atlanta for achieving a set of suggested targets [a].

Water use reduction, %: 5, 10, 15 | Energy ratio increase, %: 20, 50, 100, 150 | Nutrient recovery, tonnes N/a x 10^3: 2, 4, 8, 12 | tonnes P/a x 10^3: 0.5, 1.5, 3.0, 5.0

F1 F1 F2 F2 F2 F2 F2 F2 F3 F3 F3
F3 F3 F3 F3 F3 F3 F3 F3 F4 F4 F4
F9 F9 F9 F5 F5 F5 F5 F4 F4 F4 F5 F5 F5 F5
F10 F10 F6 F6 F6 F6 F6
F11 F11 F11 F7 F7 F7 F7 F7
F12 F13 F13 F13 F13 F8
F14 F14

[a] See Table 2 for nomenclature.
As the salient interpretation of these results, we find that UST (feature F3 in Tables 3 and 4) is the
single innovation consistently of critical significance across all the savings/recovery targets, i.e., for
saving water, increasing the energy production/consumption ratio, and recovering N and P nutrients.
Put the other way around (and in a less detached context), UST extends the promise of not
foreclosing upon any of the (imagined) community-societal aspirations for the future. It is the one feature that all solidarities (upholders of each of the plural moral positions) might have a strong interest in adopting in order to change the material flows coursing around and through the metabolisms of both London and Atlanta; and it happens to be a technological innovation. We might call it a "privileged" candidate innovation.
Table 4 Summary of RSA results associated with London for achieving a set of suggested targets [a].

Water use reduction, %: 5, 10, 15 | Energy ratio increase, %: 20, 50, 100, 150 | Nutrient recovery, tonnes N/a x 10^3: 2, 4, 8, 12 | tonnes P/a x 10^3: 0.5, 1.5, 3.0, 5.0

F1 - F2 F2 F2 F2 F2 F2 F2 F2 F2
F3 F3 F3 F3 F3 F3 F3 F3 F3 F3
F9 F9 F5 F5 F5 F5 F4 F4 F4 F4
F12 F12 F6 F6 F6 F6 F5 F5 F5 F5 F5
F8 F8 F8 F8 F6 F6
F9 F7
F13 F13 F13 F13 F8

[a] See Table 2 for nomenclature.
Many other deductions from these results are possible, of which we cite just five (in passing). First,
reaching the targets for N and P recovery is also found to be sensitive to the dietary choices of the two populations (feature F4 in Table 2). Second, there is no scope for attaining the most aggressive rate of savings in water use (above 15%) in the case of London. Third, while the candidate innovation of algae biofuel production (AWW; feature F7 in Table 2) is identified as key in Atlanta's ambitions for increasing the energy independence of its water sector, this is clearly not
so for London. Interactions among the features are complex. In this instance, antagonisms are
present among the degree of centralization/decentralization of sewerage and sewage collection, the
amounts of nutrients available for recovery through the alternative UST technology and, therefore,
the amounts available for supporting algae generation (AWW) when UST-directed nutrient
recovery is also in place. Fourth, pyrolysis of sewage sludge (PSS; feature F5 in Table 2) is
promising in respect of both energy and P recovery, but not at all for N recovery. Last, the
uncertainties notwithstanding, recovery of some 12,000 tonnes P per annum is a reasonable
expectation by 2050 in the case of London, were PSS to be installed by then at 100% market
penetration (Villarroel Walker et al., 2012).
4.2 Fairness: Who Pays, Who Gains
Estimates of the potential financial returns attaching to the reachability of the performance aspirations in Tables 3 and 4 are summarized in Table 5. The benefits listed there under the energy ratio increase (y(2)), N recovery (y(3)), and P recovery (y(4)), for example, should be returned to the water utility, in principle, since it is the entity introducing the various technological changes leading to the generation of the benefits [13].

[13] Still, we cannot resist remarking upon this. While the utilities for Atlanta and London ought to be rewarded for "wasting their time" recovering the N and P resources (as the individualists of Kwame's study of ecosan toilets would have put it), it is each and every one of us who buys the food that results in the N and P to be recovered. We wonder, therefore, whether this implies some kind of ownership thereof.
Table 5 Potential annual economic benefits of each performance aspiration, in millions of US dollars. Figures in the second row are for London, when these differ from those of Atlanta (the water and energy targets are relative (percentage) changes, hence a function of differing initial (base-case) conditions for the two metropolitan areas).

Water use reduction, % [a]: 5, 10, 15 | Energy ratio increase, % [a,b]: 20, 50, 100, 150 | Nutrient recovery [a], tonnes N/a x 10^3: 2, 4, 8, 12 | tonnes P/a x 10^3: 0.5, 1.5, 3.0, 5.0
Atlanta: 50 101 151 | 0.4 1.1 2.1 3.2 | 2.5 5.0 10.1 15.1 | 1.7 5.2 10.3 17.2
London: 32 64 - | 1.2 2.9 5.8 8.7

[a] Values consider the following information: U.S. farm prices per ton for urea fertilizer (46% N) and super phosphate (46% PO4) are about $526 and $633 respectively (data from USDA); the electricity price for industrial users is 6.8 cents per kWh (data from EIA); and an average U.S. residential water cost of $1 per cubic meter is assumed (averaged data from www.circleofblue.org), with the industrial water rate taken to be 30 per cent less than the public supply water rate.
[b] Benefits estimated as average total savings in the electricity bill.
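For orientation only, the farm prices quoted in note [a] can be converted into an indicative value per tonne of recovered nutrient, as in the sketch below. This is not a reproduction of the benefit figures in Table 5, whose accounting may include further factors; the price and nutrient content are simply those quoted above.

# Indicative unit value of recovered N, from a quoted fertiliser product price.
UREA_PRICE_PER_TON = 526.0     # US$/ton of product (per note [a])
UREA_N_FRACTION = 0.46         # 46% N by mass (per note [a])

value_per_tonne_N = UREA_PRICE_PER_TON / UREA_N_FRACTION
print(f"indicative value of recovered N: about ${value_per_tonne_N:,.0f} per tonne N")
print(f"so 4,000 t N/a would be worth roughly ${4_000 * value_per_tonne_N / 1e6:.1f} M per year")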
It is, of course, one thing to generate foresight regarding estimates of benefits (and costs) on a
broadly undifferentiated and societally-detached system-wide basis (as in Table 5), covering
multiple sectors, utilities, and stakeholders, with their quite different and frequently strongly
opposed aspirations. It is quite another to reveal who might bear the future cost and who might reap
the future rewards of making the transformation from nutrients-as-pollutants to nutrients-as-
resources. What lies below the headline numbers of Table 5 must be examined in somewhat greater
detail.
Thus, to begin, the benefits of attaining the target savings in water use (y(1)) are those of the
consumers of the water, not the utility/enterprise supplying the water. More specifically, at a more
dis-aggregated level, Table 2 shows that features F9, F10, F11, and F12 allow us to distinguish
respectively amongst water use by domestic/residential consumers, commercial users, coal-based
power generation, and natural-gas-based generation. The financial benefits associated with the
reduction of water use shown in Table 5, therefore, are benefits accruing collectively to just the two
groups of domestic and industrial/commercial users. This is because the costs (and savings)
attaching to acquiring water for power-station cooling may be very different from those of the
domestic and industrial users, since such water is clearly not supplied by a water utility (but taken
directly by the generator from the environment). Accordingly, any financial benefits to the power
generators resulting from savings in water consumption have been omitted from Table 5.
In more detail, and in London specifically, there are just three centralized wastewater treatment plants where the three performance targets for energy, N, and P recovery can be beneficially improved. And it is the water/wastewater utility/operator to whom the benefits thereof accrue.
Yet there are literally millions of household water users, amongst whom the cost savings in water-
use reduction are to be distributed. If, therefore, a household of three individuals was able to reduce
its water consumption by 10%, it would save $24 annually. The same relative percentage saving in
Atlanta would be worth $57 each year (on an identical unit cost basis), because of its currently
larger per capita consumption of water. The change to using USTs, we note, could be just what
might be needed to achieve such savings of water and money (albeit modest) within individual
households. It is, of course, precisely this same innovation that could yield the beneficial gains in
energy, N, and P recovery, i.e., that nutrients would by then be viewed as resources, not pollutants.
Would introduction of the UST be perceived, therefore, as a win-win opportunity, at least for both
householders and the utility?
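A back-of-the-envelope check on these household figures, assuming only the $1 per cubic metre residential rate quoted under Table 5: the baseline per-capita use is inferred from the stated savings themselves rather than taken from the paper, so the printed figures are purely illustrative.

# Implied household water use behind the quoted annual savings (illustrative).
WATER_PRICE = 1.0        # US$/m3, assumed per Table 5, note [a]
HOUSEHOLD_SIZE = 3
REDUCTION = 0.10         # the 10% cut discussed in the text

for city, annual_saving_usd in [("London", 24.0), ("Atlanta", 57.0)]:
    baseline_m3 = annual_saving_usd / (WATER_PRICE * REDUCTION)       # household m3/a
    per_capita_lpd = baseline_m3 * 1000 / (HOUSEHOLD_SIZE * 365)      # litres/person/day
    print(f"{city}: implied household use {baseline_m3:.0f} m3/a "
          f"(about {per_capita_lpd:.0f} litres per person per day)")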
While acknowledging the preliminary nature of our numerical analyses and results, we can begin to discern the magnitudes of the incentives (and to whom they relate) for making any change towards improved city-wide resource-use performance. Our point, moreover, is this. To be able to have such foresight about the future distribution of costs and benefits amongst these several stakeholders (water utility, power generators, other industries/commerce, and householders) would surely have a bearing on how they would today negotiate with each other in building (or
dismantling) the social legitimacy of the policy and technology options necessary for making any
change towards realizing the various target-performance ambitions of Tables 3 through 5 (Beck et
al., 2011).
5 Where Do We Now Stand With Respect To The Change We
Seek?
At a computational level, we can draw the conclusion that even a non-agent-based model can be
formulated, parameterized, and implemented so as to mimic the interplay among, first, the plural aspirations that community agents and solidarities can have for their futures and, second, their
plural convictions about the technologies and science underpinning the future behavior of the
coupled human-built-natural system under scrutiny. It is this interplay among the plural contending
notions of fairness, for example, that Thompson (2011) argues is the determining factor in the
material flows associated with the metabolism of a city and its surrounds. In our computational
studies of London and Atlanta, while technically no account has been taken of the plural
convictions about how the world is understood to work, such accounting has been demonstrated
elsewhere (van Asselt and Rotmans, 1996). Unlike an agent-based model, however, it is
acknowledged that the human elements of adaptation and learning over time are absent from the
above computational foresight exercises. Such elements would inevitably determine the future co-
evolution of the natural and built environments with the human environment (see Janssen and
Carpenter (1999) in respect of coupled rural-farmer systems). Generating foresight, through our
analysis of reachable futures, within the over-arching and iterative procedure of Adaptive
Community Learning, would itself be repeated, as community learning and adaptation recursively
progress through time (Beck, 2011; Beck et al., 2002b).
Technically too, what has been demonstrated with the MSA model of material flows has been
uniquely enabled by the Regionalized Sensitivity Analysis (RSA) in which it is embedded
(Hornberger and Spear, 1981; Osidele and Beck, 2003; Villarroel Walker et al., 2012). First, RSA
originated historically in the late 1970s in the need to analyze the behavior of environmental
systems under gross uncertainty, i.e., largely in the absence of conventional, quantitative
observational data. Gross uncertainty prevails under our present circumstances of a society
contemplating its plural aspirations for the future. This uncertainty, moreover, is of two broadly different categories: that deriving from disagreement amongst solidarities (if not experts; Beck (2011); Patt (2007)), which we fully expect to envelop the change we are here contemplating; and the more familiar kinds of statistical and probabilistic uncertainties, wherever consensus obtains. This enabling feature of RSA, however, comes with the technical difficulties of transcribing the spoken words of lay individuals into the numerical language of the computer model (to which difficulties we have already alluded; Osidele and Beck (2003)). Second, the sensitivity analyses of RSA are conducted uniquely in the setting of a model (with many internal parameters α) whose responses are being calibrated against some form of required external patterns of manifest behavior (y).
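For readers less familiar with RSA, the bare mechanics in the Hornberger-Spear spirit are sketched below: Monte Carlo sampling of the parameters, a binary behaviour/non-behaviour classification of each run against the required pattern of behaviour, and a ranking of parameters by the separation between the two resulting parameter distributions. The toy model and the use of a Kolmogorov-Smirnov statistic are illustrative choices on our part, not a description of the MSA implementation.

# Regionalized Sensitivity Analysis, schematically.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
N_RUNS, N_PARAMS = 5_000, 6

alpha = rng.uniform(0.0, 1.0, size=(N_RUNS, N_PARAMS))   # sampled parameterisations

def model_output(a):
    """Toy stand-in for M: only parameters 0 and 3 really matter here."""
    return 2.0 * a[:, 0] + 0.2 * a[:, 1] + 1.5 * a[:, 3]

y = model_output(alpha)
behaviour = y >= np.quantile(y, 0.7)       # runs classified as reaching the target

ranking = []
for j in range(N_PARAMS):
    stat, _ = ks_2samp(alpha[behaviour, j], alpha[~behaviour, j])
    ranking.append((stat, j))

for stat, j in sorted(ranking, reverse=True):
    print(f"parameter {j}: behaviour/non-behaviour separation = {stat:.2f}")
# The parameters with the largest separation are the candidates for alpha_key.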
Last, on technical grounds, we note that our economic accounting of the change we seek (from nutrients-as-pollutants to nutrients-as-resources) is as yet rudimentary. Evidence of its potential
improvement is foreshadowed variously in Jiang et al. (2012), Truffer et al. (2012), and Maurer
(2012).
At a deeper, non-technical level, it has to be acknowledged that the kind of advocacy this paper and its supporting analyses have espoused (because we ourselves are convinced change is desirable) cannot be seen to stand clinically and neutrally detached from the fray of the debate. We are part of it. And no reminders of such apprehension are needed (Beck et al., 2011; Hare et al., 2006). It is possible that our seemingly detached (numerical) aspirations for the future {y(1), y(2), y(3), y(4)} are therefore no less authentic (albeit less legitimate) than those ({y(H), y(I), y(E)}) of any of the other solidarities who hold a stake in the future of their human-built-natural system, if it is not ours. Should we then also pose the question: is our (purported) knowledge of how the world works, symbolized by an uncontroversial parameter vector α, privileged relative to any of theirs (amongst {α(H), α(I), α(E)})?
As to progress in practice, we suggest the experience of the Nepal Water Conservation Foundation
(NWCF) in respect of the Kathmandu-Bagmati system in Nepal is exemplary (NWCF, 2009).
6 Conclusions
Our discussion has focused on the prospects for a strategic change of mind-set: in coming to view
as resources to be profitably recovered the nutrient materials and energy we currently view as the
wastes entrained into what is generally seen (and analyzed) as solely the water metabolism of
cities. In other words, we seek changes in the human and built components of the coupled
human-built-natural systems that are our cities and their environmental surrounds. This is change
moreover of a scope and scale consistent with the evolving definition of Earth Systems
(Re)Engineering. In particular, given the inter-generational nature of the change being contemplated,
we have asked: how might some form of computational foresight inform the debate, which after
more than two decades, appears to be coming to a head (witness Larsen et al. (2012))?
To structure our analysis, we have drawn upon two theories from Anthropology: Rubbish Theory (Thompson, 1979), which treats the creation and destruction of value in objects and materials; and Cultural Theory (Thompson et al., 1990), which deals with the contending notions of fairness, and the other contending notions entailed in inevitably plural world views (on the Man-Environment relationship), that shape how Man views what is to be done with what we currently consider as our wastes.
Our conclusion is that there is indeed a prima facie case for the value of computational foresight
exercises, as already established elsewhere in respect of policy for climate change (van Asselt and
Rotmans, 1996). Yet these exercises raise not only technical questions, but also social and
philosophical questions regarding the place, role, and oft-claimed neutrality of supposedly
objective research conducted by just as supposedly value-free researchers.
Acknowledgments
The work on which this paper is based has been conducted within the International Network on
Cities as Forces for Good (CFG) in the Environment (www.cfgnet.org). CFG is funded by the
Wheatley-Georgia Research Alliance endowed Chair in Water Quality and Environmental Systems
at the University of Georgia, in particular, in support of a Graduate Assistantship and Postdoctoral
Fellowship for RVW.
References
Beck, M. B. (2005a). Environmental Foresight and Structural Change. Environmental Modelling & Software, 20(6),
651-670.
Beck, M. B. (2005b). Vulnerability of water quality in intensively developing urban watersheds. Environmental
Modelling and Software, 20(4), 379-380.
Beck, M. B. (2011). Cities as Forces for Good in the Environment: Sustainability in the Water Sector. Warnell School
of Forestry & Natural Resources, University of Georgia, Athens, Georgia, 2011, (ISBN: 978-1-61584-248-4), xx +
165pp (online as http://cfgnet.org/archives/587).
Beck, M. B. (2012). Why Question the Prevailing Paradigm of Wastewater Management? In T. A. Larsen, K. Udert & J.
Lienert (Eds.), Wastewater Management: Source Separation and Decentralization. London (to appear): IWA
Publishing.
Beck, M. B. (Ed.). (2002). Environmental Foresight and Models: A Manifesto. Oxford, UK: Elsevier.
Beck, M. B., Chen, J., and Osidele, O. O. (2002a). Random Search and the Reachability of Target Futures. In M. B.
Beck (Ed.), Environmental Foresight and Models: A Manifesto (pp. 207-226). Oxford, UK: Elsevier.
Beck, M. B., Fath, B. D., Parker, A. K., Osidele, O. O., Cowie, G. M., Rasmussen, T. C., Patten, B. C., Norton, B. G.,
Steinemann, A., Borrett, S. R., Cox, D., Mayhew, M. C., Zeng, X. Q., and Zeng, W. (2002b). Developing a Concept of
Adaptive Community Learning: Case Study of a Rapidly Urbanizing Watershed. Integrated Assessment, 3(4), 299-307.
Beck, M. B., Jiang, F., Shi, F., Villarroel Walker, R., Osidele, O. O., Lin, Z., Demir, I., and Hall, J. W. (2010). Re-
engineering Cities as Forces for Good in the Environment. Proceedings of the Institution of Civil Engineers,
Engineering Sustainability, 163(1), 31-46.
Beck, M. B., Thompson, M., Ney, S., Gyawali, D., and Jeffrey, P. (2011). On Governance for Re-engineering City
Infrastructure. Proceedings of the Institution of Civil Engineers, Engineering Sustainability, 164(2), 129-142.
Beck, M. B., and Villarroel Walker, R. (2011). Global Water Crisis: A Joined-Up View from the City. Surveys And
Perspectives Integrating ENvironment & Society, [Online], 4(1), Online since 27 December 2011 at
http://sapiens.revues.org/1187.
Borsuk, M. E., Maurer, M., Lienert, J., and Larsen, T. A. (2008). Charting a Path for Innovative Toilet Technology Using Multicriteria Decision Analysis. Environmental Science and Technology, 42(6), 1855-1862. doi: 10.1021/es702184p
Collingridge, D. (1981). The Social Control of Technology. Open University Press, Milton Keynes.
Crutzen, P. J., Beck, M. B., and Thompson, M. (2007). Cities. Essay: Blue Ribbon Panel on Grand Challenges for
Engineering, online at www.engineeringchallenges.org. US National Academy of Engineering. See also Options
(Winter, 2007), International Institute for Applied Systems Analysis, Laxenburg, Austria, p 8.
Erisman, J. W., Sutton, M. A., Galloway, J., Klimont, Z., and Winiwarter, W. (2008). How a Century of Ammonia
Synthesis Changed the World. Nature Geoscience, 1, 636-639.
Fath, B. D., and Beck, M. B. (2005). Elucidating Public Perceptions of Environmental Behavior: A Case Study of Lake
Lanier. Environmental Modelling & Software, 20(4), 485-498.
Force, J. (2011). The Ultimate Recycling. Treatment Plant Operator, September, COLE Publishing, Inc., 12-17.
Frenkel, V., and Lee, C.-K. (2011). Membranes Head Towards a Low Energy, High Output Future. IWA Yearbook 2011
(pp. 52-54). London: IWA Publishing.
Furness, D. T., Hoggett, L. A., and Judd, S. J. (2000). Thermochemical Treatment of Sewage Sludge. Water and
Environment Journal, 14(1), 57-65. doi: 10.1111/j.1747-6593.2000.tb00227.x
Glaeser, E. (2011). Triumph of the City. How Our Greatest Invention Makes Us Richer, Smarter, Greener, Healthier,
and Happier. New York: Penguin.
Grote, U., Craswell, E., and Vlek, P. L. G. (2005). Nutrient Flows in International Trade: Ecology and Policy Issues.
Environmental Science and Policy, 8(5), 439-451.
Hare, M. P., Barreteau, O., Beck, M. B., Letcher, R. A., Mostert, E., Tàbara, J. D., Ridder, D., Cogan, V., and Pahl-Wostl, C. (2006). Methods for Stakeholder Participation in Water Management. In C. Giupponi, A. J. Jakeman, D. Karssenberg & M. P. Hare (Eds.), Sustainable Management of Water Resources: An Integrated Approach (pp. 177-231). Northampton, Massachusetts: Edward Elgar Publishing.
Hornberger, G. M., and Spear, R. C. (1981). An Approach to the Preliminary Analysis of Environmental Systems. J.
Environ. Management, 12(1), 7-18.
Hu, Z. (2004). Modeling Urban Growth in the Atlanta, Georgia Metropolitan Area Using Remote Sensing and GIS.
PhD, University of Georgia, Athens, Georgia.
IWA. (2006). Sanitation 21: Simple Approaches to Complex Sanitation. London: International Water Association.
Jackson, J. B. C., Kirby, M. X., Berger, W. H., Bjorndal, K. A., Botsford, L. W., Bourque, B. J., Bradbury, R. H.,
Cooke, R., Erlandson, J., Estes, J. A., Hughes, T. P., Kidwell, S., Lange, C. B., Lenihan, H. S., Pandolfi, J. M., Peterson,
C. H., Steneck, R. S., Tegner, M. J., and Warner, R. R. (2001). Historical Overfishing and the Recent Collapse of
Coastal Ecosystems. Science, 293(5530), 629-638.
Janssen, M. A., and Carpenter, S. R. (1999). Managing the Resilience of Lakes: A Multi-agent Modeling Approach.
Conservation Ecology, 3(2), 15 [online].
Jiang, F., Villarroel Walker, R., and Beck, M. B. (2012). The Economics of Recovering Nutrients from Urban
Wastewater: Transitions Towards Sustainability. In preparation.
Kwame, D. S. (2007). Domestication of Excreta: a Cultural Theory Analysis of Ecosan Dry Toilet Schemes in Peri-Urban Accra, Ghana. MSc, Norwegian University of Life Sciences, Ås, Norway.
Larsen, T. A., Alder, A. C., Eggen, R. I. L., Maurer, M., and Lienert, J. (2009). Source Separation: Will We See a
Paradigm Shift in Wastewater Handling? Environmental Science & Technology, 43(16), 6121-6125.
Larsen, T. A., Udert, K., and Lienert, J. (Eds.). (2012). Wastewater Management: Source Separation and
Decentralization. London (to appear): IWA Publishing.
Lienert, J. (2012). High Acceptance of Source Separating Technologies However, ... In T. A. Larsen, K. Udert & J.
Lienert (Eds.), Wastewater Management: Source Separation and Decentralization. London (to appear): IWA
Publishing.
Lienert, J., and Larsen, T. A. (2007). Soft Paths in Wastewater Management - The Pros and Cons of Urine Source Separation. GAIA, 14(4), 280-288.
Linnerooth-Bayer, J. (1999). Climate Change and Multiple Views on Fairness. In F. L. Toth (Ed.), Fair Weather?
Equity Concerns in Climate Change (pp. 44-64). London: Earthscan.
Linnerooth-Bayer, J., and Fitzgerald, K. B. (1996). Conflicting Views on Fair Siting Processes. Risk, Health, Safety and
Environment, 7(2), 119-134.
Londong, J. (2012). Practical Experience With Source Separation in Germany. In T. A. Larsen, K. Udert & J. Lienert
(Eds.), Wastewater Management: Source Separation and Decentralization. London (to appear): IWA Publishing.
Malmqvist, P.-A., Aarsrud, P., and Pettersson, F. (2010, September). Integrating Wastewater and Biowaste in the City
of the Future. Paper presented at the World Water Congress, International Water Association (IWA), Montreal, Canada,
Paper 2704 (on cd).
Maurer, M. (2012). Full Costs, (Dis)-Economies of Scale and the Price of Uncertainty. In T. A. Larsen, K. Udert & J.
Lienert (Eds.), Wastewater Management: Source Separation and Decentralization. London (to appear): IWA
Publishing.
McCann, W. (2005). The Sanity of Ecosan. Water21, IWA Publishing, 28-30.
McGreevey, W., Acharya, A., Hammer, J., and MacKellar, L. (2009). Propinquity Matters: Health, Cities, and Modern
Economic Growth. Georgetown Journal of Poverty Law, XV(3), 605-633.
Mehta, L., and Movik, S. (2011). Shit Matters: The Potential of Community-led Total Sanitation. Rugby, UK: Practical
Action Publishing, pp 270.
Niemczynowicz, J. (1993). New Aspects of Sewerage and Water Technology. Ambio, 22(7), 449-455.
NWCF. (2009). The Bagmati: Issues, Challenges and Prospects. Technical Report prepared for King Mahendra Trust
for Nature Conservation, Kathmandu, Nepal: Nepal Water Conservation Foundation (NWCF).
Olsson, G. (2012). The Potential of Control and Monitoring. In T. A. Larsen, K. Udert & J. Lienert (Eds.), Wastewater
Management: Source Separation and Decentralization. London (to appear): IWA Publishing.
Osidele, O. O., and Beck, M. B. (2003). An Inverse Approach to the Analysis of Uncertainty in Models of
Environmental Systems. Integrated Assessment, 4(4), 265-282. doi: 10.1080/1389517049051541
Patt, A. (2007). Assessing Model-based and Conflict-based Uncertainty. Global Environmental Change, 17, 37-46.
Price, M. F., and Thompson, M. (1997). The Complex Life: Human Land Uses in Mountain Ecosystems. Global
Ecology and Biogeographical Letters, 6, 77-90.
Sim, J. (2011). SaniShop: Transforming the Sanitation Crisis Into a Massive Business Opportunity for All. In D.
Waughray (Ed.), Water Security. The Water-Food-Energy-Climate Nexus (pp. 143-147). Washington DC: Island Press.
Srinath, E. G., and Pillai, S. C. (1972). Phosphorus in Wastewater Effluents and Algal Growth. Journal Water Pollution
Control Federation, 44(2), 303-308.
Sturm, B. S. M., and Lamer, S. L. (2011). An Energy Evaluation of Coupling Nutrient Removal from Wastewater with
Algal Biomass Production. Applied Energy, 88(10), 3499-3506. doi: 10.1016/j.apenergy.2010.12.056
Summers, L. (1991, 2 February). Why the Rich Should Pollute the Poor, The Guardian.
Swart, D. B., and Palsma, A. J. B. (2012). The Netherlands. In T. A. Larsen, K. Udert & J. Lienert (Eds.), Wastewater
Management: Source Separation and Decentralization. London (to appear): IWA Publishing.
Thompson, M. (1979). Rubbish Theory: The Creation and Destruction of Value. Oxford: Oxford University Press.
Thompson, M. (2002). Man and Nature as a Single but Complex System. In P. Timmerman (Ed.), Encyclopedia of
Global Environmental Change (Vol. 5, pp. 384-393). Chichester, UK: Wiley.
Thompson, M. (2011). Material Flows and Moral Positions. Insight: Cities as Forces for Good (CFG) Network, online
as http://cfgnet.org/archives/531.
Thompson, M., Ellis, R., and Wildavsky, A. (1990). Cultural Theory. Boulder, Colorado: West View.
Thompson, M., and Rayner, S. (1998). Cultural Discourses. In S. Rayner & E. L. Malone (Eds.), Human Choice and
Climate Change (Vol. 1, pp. 265-343). Columbus, Ohio: Battelle.
Truffer, B., Binz, C., Gebauer, H., and Stürmer, E. (2012). Market Success of On-site Treatment: A Systemic
Innovation Problem. In T. A. Larsen, K. Udert & J. Lienert (Eds.), Wastewater Management: Source Separation and
Decentralization. London (to appear): IWA Publishing.
van Asselt, M., and Rotmans, J. (1996). Uncertainty in Perspective. Global Environmental Change, 6(2), 121-157. doi:
10.1016/0959-3780(96)00015-5
Villarroel Walker, R., and Beck, M. B. (2012). Understanding the Metabolism of Urban-Rural Ecosystems: A Multi-
sectoral Systems Analysis. Urban Ecosystems, (in press).
Villarroel Walker, R., Beck, M. B., and Hall, J. W. (2012). Water and Nutrient and Energy Systems in Urbanizing
Watersheds. Frontiers of Environmental Science and Engineering, (in press).
Vinnerås, B., and Jönsson, H. (2012). The Swedish Experience. In T. A. Larsen, K. Udert & J. Lienert (Eds.), Wastewater Management: Source Separation and Decentralization. London (to appear): IWA Publishing.
WEF. (2011). Water Security: The Water-Energy-Food-Climate Nexus. D. Waughray (Ed.). Washington DC: World Economic Forum (WEF). Island Press, pp 248.
Complex Adaptive Systems Engineering: improving our understanding of complex systems and reducing their risk

Theresa J. Brown, Stephen H. Conrad, Walter E. Beyeler

Sandia National Laboratories, Albuquerque, New Mexico, USA
E-mail: tjbrown@sandia.gov; shconra@sandia.gov; webeyel@sandia.gov
Abstract
Complex adaptive systems are central to many persistent problems, locally and globally. Taking a longer and broader view of these systems and their dynamic interactions improves our ability to reduce the risks they face and create. This is particularly true of the risks arising from climate change, economic crises, and energy and food supply disruptions. Climate change, and the challenge of addressing its global risks, provides a common set of problems on which to build a global community of practice for engineering solutions to complex adaptive systems-of-systems problems. This paper presents general concepts and a few examples of successful applications of an engineering process for complex systems of systems.
1 What are Complex Adaptive Systems (CAS) and why do we
want to reduce their risks?
Many definitions of CAS exist; none is universal. Definitions sometimes emphasize system structure (e.g., composed of many interacting and self-organizing parts) or characteristics of system behaviour (e.g., emergent). From a scientific and engineering perspective it is important to have a definition that focuses on the process that creates these characteristic functional structures and enables emergence and other system behaviours. We define a CAS as one in which the structure modifies to enable success in its environment (Johnson et al., 2011). In this definition, a CAS's structure and behaviour are products of all the perturbations and modifications that it has experienced or implemented. Adaptive systems tend to exhibit certain structural characteristics, such as hierarchical and modular components, and they tend to have simple rules for interaction among the elements. These features allow us to design and modify CAS, and provide a guide for creating models to represent their behaviour. Many persistent, large-scale engineering challenges involve multiple interacting CAS, or Complex Adaptive Systems of Systems (CASoS).
The class of problems to which we are applying CASoS Engineering approaches (Glass et al., 2011; Brown et al., 2011) includes evaluating what happens to CAS such as ecosystems, societies, infrastructures or economies when their environment changes, and identifying strategies for reducing risks to CAS, or increasing their security, through modifications that are robust to uncertainty (Figure 1). Climate change, and the impacts of climate change on the environment, population and engineered systems, is one of the problems that require a CAS approach for analysis and design of effective risk-reduction actions.
Figure 1 CAS Problem Domains Represented by the CASoS, Perturbations and Engineering Aspirations
2 What is needed for risk-informed decisions?
Modeling of coupled human, natural and technological systems provides a means for quantifying
and testing theories about their dependencies, dynamics and response to different stresses. Analysis,
using CAS models, provides information structured to support decision making and risk
management within CAS. Such analyses provide a longer-term view of the potential consequences
and benefits of actions than assessments based on a static system or network approach. Decisions are often made using a trial-and-error approach, without identification of potential system-level consequences; this happens in medicine, civil engineering, regulatory policy and many other aspects of our daily lives. Most of those decisions are not harmful and may fix a problem. In cases where
the effects play out over longer time frames and propagate through interdependencies with other
systems, the broader view and understanding gained from CAS analysis allow us to recognize the
causal relationships and solve system-level issues.
Thanks to Malcolm Gladwell, the 'tipping point' is a generally understood concept. Uncertainties, however, render tipping points difficult to predict and avoid. In CASoS such as infrastructures, system-spanning events like large-scale power outages are neither frequent nor rare (Figure 2). Network topology, control systems and innovations in processes and equipment influence the
frequency of such events (Jensen, 1998; LaViolette et al., 2006; Beyeler et al., 2007); but, given the magnitude of the consequences, the risks are only slightly reduced by those measures. Adaptation can reduce the magnitude of events (Miller and Page, 2007). For critical services, redundant systems (e.g., back-up generators at hospitals, battery back-up for emergency communication systems, alternative fuels for power generators) are often used to reduce the impacts of disruption while the primary services are restored. Effective solutions require foresight. Back-up systems must be entirely independent, affected neither by the perturbation that caused the original system to fail nor by the main system failure, and they need to function until services can be restored.
Figure 2 Truncated power-law versus normally distributed event sizes in CAS
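The distinction drawn in Figure 2 can be illustrated numerically. The short sketch below (with invented parameters) samples event sizes from a bounded power law and from a normal distribution matched to the same mean, and compares the probability of a very large, system-spanning event under the two assumptions.

```python
# Heavy-tailed (bounded power-law) versus normal event sizes; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Bounded (truncated) power-law sample via inverse-CDF sampling.
alpha, x_min, x_max = 1.8, 1.0, 1_000.0
a, b = x_min ** (1 - alpha), x_max ** (1 - alpha)
u = rng.uniform(size=n)
power_law = (a - u * (a - b)) ** (1.0 / (1 - alpha))

# A normal sample matched to the same mean (its standard deviation is an assumption).
normal = rng.normal(loc=power_law.mean(), scale=power_law.std() / 4, size=n)

threshold = 20 * power_law.mean()   # a "system-spanning" event, e.g. a very large outage
print("mean event size:                ", round(power_law.mean(), 1))
print("P(event > 20 x mean), power law:", (power_law > threshold).mean())
print("P(event > 20 x mean), normal:   ", (normal > threshold).mean())
```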
With climate change we are concerned about global cascades in supply chains that are critical to
human life and prosperity. If we take a traditional economic view, individuals, companies and
nations with sufficient resources will pay more or export fewer goods to offset their shortages; but
this does not reduce risk at a global scale (Brown et al., 2010), and may increase risks over a longer
time frame. There are tipping points for the behavior of individuals and groups. We cannot predict with any certainty what the final factor will be; we can only recognize the stresses that drive a system toward a tipping point and identify, with some confidence, what it will take to keep conditions below the threshold. Drought is the primary concern from an infrastructure and population
perspective. Energy, manufacturing and agriculture are dependent on large quantities of fresh water.
Countries that are economically dependent on agriculture (Figure 3), where long-term droughts are
expected due to climate change, have less ability to adapt and are likely to be the first areas
impacted. Providing food-aid would reduce some of the consequences but would not make the
region more resilient or less vulnerable to climate impacts.
Figure 3 Fraction of Gross Domestic Product (GDP) from agriculture (U.S. CIA 2009), an indicator of risk due to climate change (agricultural vulnerability to drought and reduced economic capacity for adaptation).
One strategy for managing climate risks, particularly for systems that provide services, is to engineer for resilience. Vugrin et al. (2010) provide a definition and mathematical framework for quantifying system resilience that allows comparison of options and supports decision making because costs are explicitly included. They define system resilience as follows: given the occurrence of a particular disruptive event (or set of events), the resilience of a system to that event (or events) is that system's ability to reduce efficiently both the magnitude and duration of the deviation from targeted system performance levels. Using resilience analysis to design and implement effective adaptation strategies for climate change adds the cost and reliability perspectives to the process needed for evaluating risk-mitigation strategy options.
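As an illustration only, the following sketch implements one simple reading of such a definition: the systemic impact of a disruption is taken as the accumulated shortfall of delivered performance relative to the target, and two (invented) restoration options are compared alongside their assumed recovery costs.

```python
# Minimal resilience-style comparison; all time series and costs are invented.
import numpy as np

def systemic_impact(target, actual):
    """Summed shortfall of delivered performance relative to the target (unit time steps)."""
    return float(np.clip(target - actual, 0.0, None).sum())

days = 31
target = np.full(days, 100.0)                  # targeted service level

option_a = target.copy()
option_a[5:20] = np.linspace(40, 95, 15)       # disruption at day 5, slow restoration
option_b = target.copy()
option_b[5:12] = np.linspace(40, 95, 7)        # faster restoration, higher recovery spend

recovery_cost = {"A": 1.0e6, "B": 2.5e6}       # assumed total recovery effort
for name, series in (("A", option_a), ("B", option_b)):
    print(f"Option {name}: systemic impact {systemic_impact(target, series):.0f} "
          f"service-days, recovery cost {recovery_cost[name]:.1e}")
```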
3 What are some successful examples of CASoS Engineering?
The goal of CASoS Engineering is finding realistic, risk-management solutions that are robust to
uncertainty. This approach has been successfully applied to national planning for pandemics and
other natural disasters (e.g., Davey, 2008; Perlroth, 2010; Finley, 2011), identification of strategies for reducing the effects of counteracting monetary policies (Beyeler et al., 2007) and reducing uncertainty in forward and backward tracking of food supply-chain contamination (Conrad et al., 2011).
The engineering goal for pandemic planning was to find an intervention that would contain the spread of a novel strain of influenza, protecting the population until a strain-specific vaccine could be developed. The uncertainties include characteristics of the virus, effectiveness of existing vaccines, antiviral stockpile size and effectiveness, effectiveness of social-distancing measures, timing of interventions, and compliance with each aspect of the intervention. A model of a representative population of 10,000 people, its social networks and disease spread was developed and used to evaluate the uncertainties, compare possible interventions and design a robust strategy. The uncertainty quantification results for a single pandemic strain, with characteristics similar to the 1918 pandemic, show how the number of aspects included in the intervention (expressed as a
number of interventions) changes the distribution of possible outcomes (Figure 4). The best-performing composite intervention strategies include school closure, effectively reducing the spread of disease by changing the structure of the social interactions until the strain dies out or a vaccine is developed. Quarantine and antiviral treatment appear to be effective in strategies reliant on few interventions, but require knowledge of who is infected and of their close contacts. Prophylactic interventions (contact-tracing-based antiviral prophylaxis) require additional measures (such as school closure and social distancing, e.g., wearing masks) to reduce the mean and standard deviation of outcomes.
Figure 4 Uncertainty Quantification for comparison of intervention effectiveness for a 1918-like pandemic
influenza
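The following sketch is not the Sandia model, but conveys the kind of Monte Carlo comparison described above: a simple SIR process on a small contact network, with each added intervention layer either removing contacts or damping transmission, and with transmissibility treated as uncertain. All network sizes, effect sizes and parameter ranges are assumptions made for illustration.

```python
# Schematic uncertainty quantification for layered pandemic interventions (illustrative only).
import random
import networkx as nx

def simulate_outbreak(G, beta, recovery_days=5, seed_cases=5, rng=None):
    """Discrete-time SIR spread on a contact network; returns the final attack rate."""
    rng = rng or random.Random()
    infected = set(rng.sample(list(G.nodes()), seed_cases))
    days = {n: 0 for n in infected}
    recovered = set()
    while infected:
        new_cases = set()
        for n in infected:
            for nbr in G.neighbors(n):
                if nbr not in infected and nbr not in recovered and rng.random() < beta:
                    new_cases.add(nbr)
            days[n] += 1
        done = {n for n in infected if days[n] >= recovery_days}
        infected = (infected | new_cases) - done
        days.update({n: 0 for n in new_cases})
        recovered |= done
    return len(recovered) / G.number_of_nodes()

def apply_interventions(G, beta, n_layers, rng):
    """Each added layer removes contacts or damps transmission (assumed effect sizes)."""
    H = G.copy()
    if n_layers >= 1:   # e.g. school closure: drop a share of contacts
        H.remove_edges_from(rng.sample(list(H.edges()), int(0.3 * H.number_of_edges())))
    if n_layers >= 2:   # e.g. social distancing / masks: lower transmissibility
        beta *= 0.7
    if n_layers >= 3:   # e.g. antiviral prophylaxis of traced contacts
        beta *= 0.7
    return H, beta

rng = random.Random(1)
G = nx.watts_strogatz_graph(2_000, k=10, p=0.1)   # small stand-in contact network
for n_layers in (0, 1, 2, 3):
    results = []
    for _ in range(20):                            # Monte Carlo over uncertain transmissibility
        beta = rng.uniform(0.02, 0.06)
        H, b = apply_interventions(G, beta, n_layers, rng)
        results.append(simulate_outbreak(H, b, rng=rng))
    mean = sum(results) / len(results)
    print(f"{n_layers} intervention layers: mean attack rate {mean:.2f}, "
          f"range {max(results) - min(results):.2f}")
```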
The consequences of economic perturbations are well understood given the on-going issues around the world. As with pandemics, the system interactions are global, with the interventions applied at several levels (local, regional, national, multi-national). A highly abstracted model of two payment systems linked through a foreign exchange (FX) market provides a means to test and compare the effects of different monetary policies and how they are implemented (Renault et al., 2007). Monetary policies implemented to reduce risk exposure at a national level push the risk to the other participant in the exchange market, prompting a change in policy in the second system and a dynamic cycle of perturbations that take many years to dampen. This highly simplified, abstract model indicates that prioritizing FX trades over normal payments can reduce exposures significantly and that differences in liquidity in the two systems can increase exposures. A low level of liquidity in one system can negatively affect the other system even though the latter is operating at a higher level of liquidity.
Food supply chains are another type of global system that, if contaminated, threatens population health. Recent events have highlighted the difficulties in identifying the source of contamination and eliminating it from the food supply. In the U.S. there are abundant data on the businesses involved in agriculture, food processing and retail sales, but information on the connections between entities in the food supply chain is not easily accessible, and tracing possible contaminant routes through these supply chains is a labour-intensive process. Accounting for the processes (growing regions and seasons, distributors, processors and products, retailers), for general network characteristics
such as 'big tends to sell to big; small sells to small' (Figure 5), and for the uncertainties in network connections provides a means for identifying the more likely paths and prioritizing data needs (Conrad et al., 2011).
Figure 5 General network topology for food supply chains.
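A schematic sketch of the stochastic path-tracing idea is given below. The layered structure, entity sizes and mixing rule ('big sells to big') are invented for illustration; the point is that repeatedly sampling plausible routes concentrates attention, and data-collection effort, on the most probable contamination paths.

```python
# Schematic stochastic path tracing through a layered supply chain (all values invented).
import random

LAYERS = ["farm", "consolidator", "processor", "distributor", "retailer"]
rng = random.Random(3)

# Give every entity a "size" in (0, 1]; larger means a bigger business.
nodes = {layer: [(f"{layer}_{i}", rng.random()) for i in range(20)] for layer in LAYERS}

def weights(size, candidates):
    """Favour downstream partners of similar size (big sells to big, small to small)."""
    return [1.0 / (0.05 + abs(size - s)) for _, s in candidates]

def sample_path(source):
    """Sample one plausible contaminant route from a farm through to a retailer."""
    name, size = source
    path = [name]
    for layer in LAYERS[1:]:
        candidates = nodes[layer]
        name, size = rng.choices(candidates, weights=weights(size, candidates), k=1)[0]
        path.append(name)
    return tuple(path)

counts = {}
for _ in range(5_000):
    p = sample_path(nodes["farm"][0])
    counts[p] = counts.get(p, 0) + 1

# The most frequently sampled routes are the ones to investigate (or instrument) first.
for path, c in sorted(counts.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{c / 5_000:.2%}  " + " -> ".join(path))
```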
These applications produced a core set of general modelling and analysis modules that can be
replicated, connected and populated with parameter values, then used to represent and evaluate a
wide variety of CAS and perturbations to those systems. The core modelling components include:
network and community builders for representing single or multiple interacting networks (social,
supply chain and/or pipeline); and infectious disease, exchange, opinion dynamic and population
structure models. Infections and opinions propagate through multiple, interacting social networks;
food contamination propagates through supply chains; behaviors spread as a function of opinion
and information through social and communication networks; and functional disruptions spread
through logical and physical system dependencies.
4 Conclusions
We have many challenges moving forward. Climate risks are global and will require an
international community committed to reducing the risks. We need to develop a strong international
community of practice of CASoS Engineering to find solutions that will benefit us all. This
requires tremendous commitment and willingness to find ways to work together on common
problems.
We need to build confidence in CAS modelling and analyses, and we need a different approach to validation for CAS modelling. CAS are inherently unpredictable; thus traditional validation methods, based on the predictability of physical models, are not applicable. CAS analysis outcomes must demonstrate understanding of the potential dynamics, explicitly represent uncertainty in the analysis, and validate the actions to be taken by designing solutions that are robust to uncertainty.
Acknowledgements
The authors thank Louise Maffitt from New Mexico Technical University for her editorial review and contributions to this work, and Robert Glass, Sandia National Laboratories, who led the development of the CASoS Engineering process and continues to build a community of practice.
References
Beyeler, W.E., Glass, R.J., Bech, M.L., & Soramäki, K. (2007). Congestion and cascades in payment systems. Physica
A, Volume 384, Issue 2, pp. 693-718.
Brown, T.J., Glass, R.J., Beyeler, W.E., Ames, A.L., Linebarger, J.M. & Maffitt, S.L. (2011). Complex Adaptive
System of Systems (CASoS) Engineering Applications Version 1.0, Sandia National Laboratories, Albuquerque, 2011,
SAND 2011-8032.
Brown, T.J., Parks M.J., Hernandez, J., Jennings, B.J., Kaplan, P.G. & Conrad, S.H. (2011). Uncertainty Quantification
and Validation of Combined Hydrological and Macroeconomic Analyses. Sandia National Laboratories, Albuquerque,
2011, SAND 2010-6266.
Carlson, J.M. & Doyle, J. (2002). Complexity and robustness. Proceedings of the National Academy of Sciences, USA
99, 2538-2545.
Conrad, S.H., Beyeler, W.E., Brown, T.J. (2012). The value of using stochastic mapping for understanding risks and
tracing contaminant pathways. Proceedings of the 4th Annual Conference on Infrastructure Systems, Norfolk, VA, 2012
(in press) (see also Sandia National Laboratories, SAND 2011-4203C).
Davey, V.J., Glass, R.J., Min, H.J., Beyeler, W.E., & Glass, L.M. (2008). Robust Design of Community Mitigation for
Pandemic Influenza: A Systematic Examination of Proposed U.S. Guidance. PLoS ONE 3(7): e2606. doi: 10.1371/journal.pone.0002606.
Finley, P.D., Glass R.J., Moore T.W., Ames, A.L., Evans, L.B., Cannon, D.C., & Davey, V.J. (2011). Integrating
Uncertainty Analysis into Complex-System Modeling to Design Effective Public Policies. Proceedings of the 8th
International Conference on Complex Systems, Quincy, MA.
Glass, R.J., Beyeler, W.E., Conrad, S.H., Brodsky, N.S., Kaplan, P.G., & Brown, T.J. (2003). Defining Research and
Development Directions for Modeling and Simulation of Complex, Interdependent Adaptive Infrastructures. Sandia
National Laboratories, Albuquerque, SAND 2003-1778.
Glass, R.J., Brown, T.J., Ames, A.L., Linebarger, J.M., Beyeler, W.E. & Maffitt, S.L. (2011). Phoenix: Complex
Adaptive System of Systems (CASoS) Engineering Version 1.0. Sandia National Laboratories, Albuquerque, SAND
2011-3446.
Jensen, H.J. (1998). Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems,
Cambridge University Press, Cambridge.
Johnson, C., Backus, G., Brown, T., Colbaugh, R., Jones K., Tsao J. (2011). A Case for Sandia Investment in Complex
Adaptive Systems Science and Technology, Sandia Report, SAND 2011-9347.
LaViolette, R., Beyeler, W.E., Glass, R.J., Stamber, K.L., Link, H. (2006). Sensitivity of the resilience of congested
random networks to rolloff and offset in truncated power-law degree distributions. Physica A: Statistical Mechanics and
its Applications, 368, Issue 1, pp. 287-293.
Miller, J.H. and Page, S.E. (2007). Complex Adaptive Systems: An introduction to computational models of social life,
Princeton University Press, Princeton, p. 165-177.
Perlroth, D.J., Glass, R.J., Davey V.J., Cannon D.C., Garber, A.M., & Owens, D.K. (2010). Health Outcomes and Costs
of Community Mitigation Strategies for an Influenza Pandemic in the United States, Clinical Infectious Diseases, CID
2010:50 (15 January).
Simon, H.A. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, Philadelphia,
106, 467-482.
Simon, H.A. (1996). The Sciences of the Artificial, 3rd edn. MIT Press, Cambridge.
U.S. CIA (2009). World Factbook, https://www.cia.gov/library/publications/the-world, Accessed 9/7/2010.
Vugrin, E.D. & Camphouse, R.C. (2011). Infrastructure resilience assessment through control design. International Journal of Critical Infrastructures, 7, No. 3, pp. 243-260.
Analysis of Infrastructure Networks

Sarah Dunn, Sean Wilkinson, Gaihua Fu, Richard Dawson

School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK.
E-mail: sarah.dunn@ncl.ac.uk; sean.wilkinson@ncl.ac.uk; Gaihua.fu@ncl.ac.uk; richard.dawson@ncl.ac.uk
Abstract
In recent years, the study of complex networks has been applied to many areas of research,
including mathematics, social sciences, biological systems and computer science. It is often cited that Euler's celebrated solution of the Königsberg bridge problem, in 1735, is the first true proof in the theory of networks (Newman, 2003), and since this date several notable advances in this area
have been made. This paper presents some of the more important advances, made in this field, that
are applicable to the understanding of infrastructure networks. The European air traffic network is
then used as an example to demonstrate that graph theory can inform us about the change in
performance of our infrastructure networks when they are subjected to different types of disasters.
1 Introduction
It could be argued that the first notable advance in network graph theory, in relation to its application to real-world problems, was the development of different network models. The first network model developed was the random graph model (Erdos and Renyi, 1960); it has since been followed by the small-world network (Watts and Strogatz, 1998), the scale-free network (Barabasi and Albert, 1999) and, most recently, the exponential network (Liu and Tang, 2005).
network models has different evolutionary rules for attaching links between pairs of nodes, resulting
in networks with different architectures (i.e. different arrangements of the links between nodes in
the network). The development of these different network models has been driven by the desire to
model real world networks (e.g. the Internet, social networks) with increasing accuracy. Today,
many real world networks can be classified into one of the four main network architectures (classes)
of network model.
Another notable advance is the identification of the hazard tolerance of each network class. For
example, it has been shown that the scale-free network is resilient to random hazard but vulnerable
to targeted attack when compared to the random network (Albert et al., 2000) and this difference is
due to their different network architectures.
This paper expands upon these important advances and considers other, more recent developments, including the extension of the theory to spatial and interdependent networks, and presents a number of examples that demonstrate the utility of complex graph theory in the analysis of these networks.
2 Types of Networks and Network Modelling
Probably the major contribution of network theory is its ability to describe generic properties of a
network and in so doing give an indication of the behaviour of seemingly different systems.
Different types of networks with different arrangements of links (connecting the nodes) have been
discovered and some of their generic properties described. The first developed network model was
the Erdos and Renyi random graph model (Erdos and Renyi, 1960). This is arguably the simplest
graph possible (Albert and Barabasi, 2002) and has been shown to be a poor representation of real
world network architectures (Newman, 2003); however, random graphs are useful and are normally
used as a baseline for comparison with more structured networks (Lewis, 2009). An example of
this can be found in tests for network robustness presented in Batagelj and Brandes (2005).
Figure 1 (a) A sample random network and (b) the shape of its degree distribution (Barabasi and Oltvai, 2004).
Figure 1 shows a sample random network and its associated degree distribution. The degree distribution of a network is defined as the cumulative probability distribution of the number of connections that each node has to other nodes (see Figure 2 for a further explanation). From the degree distribution (Figure 1b) it can be seen that the nodes in a random graph model tend to have the same value of degree (this can also be identified from a visual inspection of the network in Figure 1a).
Figure 2 The calculation of degree distribution is made by obtaining the degree of each node. The degree of a node, k, is the number of links attached to this node from other nodes; for example, if a node has 3 links attached to it, then it has a degree of 3. (a) Shows a small sample from a scale-free network, created using Network Workbench, and shows the degree of each node (the dashed lines indicate links to other nodes in the network that have been removed from this figure for clarity). The degree distribution of the network, P(k), gives the cumulative probability that a selected node has k or greater links. P(k) is calculated, for each k = 1, 2, ..., by summing the number of nodes with k or more links and dividing by the total number of nodes. It is this distribution which allows for the distinction between different classes of network. The degree distribution for the scale-free network (partly shown in (a)) is shown in (b) (Wilkinson et al., 2012).
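The cumulative degree distribution described in the caption can be computed in a few lines; the sketch below does so for an illustrative scale-free network generated with the networkx library (the generator and network size are arbitrary choices, not taken from the studies cited here).

```python
# Cumulative degree distribution P(k): fraction of nodes with degree k or greater.
import networkx as nx

G = nx.barabasi_albert_graph(n=1_000, m=3, seed=1)
degrees = [d for _, d in G.degree()]

def cumulative_degree_distribution(degrees):
    """P(k) for every degree value observed in the network."""
    return {k: sum(1 for d in degrees if d >= k) / len(degrees)
            for k in sorted(set(degrees))}

P = cumulative_degree_distribution(degrees)
for k in list(P)[:5]:
    print(f"P({k}) = {P[k]:.3f}")
```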
To more accurately model real-world systems, Watts and Strogatz modified the random graph model by using the concept of six degrees of separation (Milgram, 1967), forming small-world networks (Watts and Strogatz, 1998). The main characteristic of small-world networks is that the majority of nodal pairs are not directly connected, but can be reached via very few edges. The degree distribution is very similar to that of a random network (Figure 1b) (Barthelemy, 2011). Both the random graph model and the small-world network are characterised by a Poisson degree distribution (Network Workbench, 2009). However, Barabasi and Albert discovered that real-world networks (including the Internet (Albert et al., 2000) and the World-Wide-Web (Barabasi and Albert, 1999; Barabasi et al., 2000)) tend to form a power-law degree distribution. Networks that follow this power law are more commonly known as scale-free networks.
Figure 3 (a) A sample scale-free network and (b) its degree distribution (Barabasi and Oltvai, 2004)
These scale-free networks include a small number of highly connected nodes (nodes with a high
degree) and a large number of poorly connected nodes (nodes with a small degree). This can be
seen visually in the sample network shown in Figure 3a and by the associated degree distribution in
Figure 3b.
Other real-world networks, such as power grids, have been found to have an exponential degree distribution and so can be classed as exponential networks (Liu and Tang, 2005). The origins of exponential networks are unclear and no one individual (or group) appears to be credited with their discovery; however, they have been used in many studies of real-world networks, including those by Albert et al. (2004), Amaral et al. (2000) and Bompard et al. (2011).
Figure 4 Degree distribution for the North American Power Grid, a real world example of an exponential
network (Deng et al., 2011)
The degree distribution for exponential networks is shown in Figure 4; in these networks the value
of degree for the high degree nodes is lower than that of scale-free networks, but higher than those
in a random network (for a network with the same number of nodes and links) (Albert et al., 2004).
When the previous studies described the various classes of network, they did so assuming that they
were isolated systems - meaning that they were independent of each other and therefore could
function and grow without relying on resources provided by other systems. While this assumption
holds true for the network generation algorithms, to accurately model real world systems it could be
argued that these systems should be modelled as networks of networks (i.e. modelling the
dependence of one system on another) (Gao et al., 2011, Pederson et al., 2006). For example, the
successful operation on an electrical distribution system relies on a supply of water for cooling and
ICT systems for control and management; i.e. the system relies on other networks to function and
therefore, when considering its hazard tolerance, should be modelled as a network of networks.
Figure 5 shows an example of an interdependent network, where network A (show in orange) is
connected to network B (shown in blue). The nodes in the system which are reliant on each other are indicated by the red dashed lines.
Figure 5 Model of an interdependent network, where the A nodes belong to one network and the B nodes to another network. The single-system links in these networks are shown by the solid lines and the interdependency links are represented by the dashed lines (Fu et al., 2012).
3 Previous Research using Network Theories
Previous research has used network theory to examine the properties of many real-world networks, including social networks (Amaral et al., 2000; Newman et al., 2002; Arenas et al., 2003), neural networks (Sporns, 2002; Stam and Reijneveld, 2007), biological networks (Rual et al., 2005) and computer networks (Valverde and Solé, 2003), to name but a few.
Recently network theory has also been applied to infrastructure networks, aiming to classify them
into one of the four main classes of network model. This research has primarily focused on the
analysis of transportation systems, communication systems and electrical distribution systems
(power grids).
Transportation Systems - Subway networks have been analysed and shown to belong to the small-
world class of network (Latora and Marchiori, 2002). However, within this area it appears that
airline networks receive the most attention, being analysed at a country (Li and Cai, 2004, Bagler,
2008, Han et al., 2008), continental (Wilkinson et al., 2012) and whole world (Guimera and Amaral,
2004) scale. These networks have been analysed as both un-weighted and weighted network
models (in the case of the weighted networks, the links are given an increased importance
depending on the number of flights on a particular day (Chi et al., 2003)). Both directed networks
(where the direction of flights between airports is considered (Han et al., 2008)) and undirected
networks (where only the presence of a flight route is considered (Wilkinson et al., 2012)) have also
been analysed. Airline networks cannot easily be placed into a single network class because they
include elements of both the scale-free and exponential network architectures. This architecture has
been classed as a truncated scale-free distribution (or a scale-free distribution with an exponential
tail). Figure 6 shows the degree distributions for the airline networks of China and the US.
Figure 6 Degree distribution for (a) the China airline network and (b) the US airline network (Li et al., 2006)
Communication Systems - The Internet and the World-Wide-Web are the two most analysed networks within communication systems. They have been shown to belong to the scale-free network class (Albert et al., 2000; Cohen et al., 2000; Albert et al., 1999); the degree distribution of the World-Wide-Web is shown in Figure 7.
Figure 7 Degree distribution of the World-Wide-Web (Strogatz, 2001)
Electrical Distribution Systems (Power Grids) - These systems are perhaps among the most complex human-constructed networks (Costa et al., 2007), comprising transmission lines which connect power sources (e.g. nuclear power stations) to power consumers (e.g. industry and residences). Studies have focused on the analysis of the North American (Kinney et al., 2005), European (Sole et al., 2008) and Italian power grids (Crucitti et al., 2004), classifying them as exponential networks (Rosas-Casals et al., 2006). Figure 8 shows the degree distribution for the Italian power grid.
Figure 8 Degree distribution for the Italian power grid (Crucitti et al., 2004)
4 Network Generation Algorithms
Each of the four main network classes has its own set of rules which govern the formation of links
between pairs of nodes in the network model (i.e. they define how the network grows with time).
Random Networks - The network generation algorithm for random networks is possibly the simplest of all the network models. The network starts with the total number of nodes; each pair of nodes is then considered in turn and a connection (link) is made between them based upon the value of the linking probability (the higher this value, the more likely it is that a link will be generated) (Erdos and Renyi, 1960). If the linking probability is equal to 1, then the network will be fully saturated (i.e. it will have the maximum possible number of links), and if this value equals 0 there will be no links in the network. It is possible to have isolated nodes (nodes without any connecting links) in a network generated using this algorithm, usually occurring when the value of the linking probability is very small.
Small-World Networks - Similarly to the random network model, the algorithm starts with the total number of nodes in the network; however, these nodes are initially connected (via links) to a number of initial neighbours. It is the number of initial neighbours which determines the total number of links in the network (as no new links are added). For example, for a network with 20 nodes and the number of initial neighbours set to 2, there will be 40 links in the network (i.e. each node starts with two links). These initial links are then rewired using a rewiring probability; the higher the value of this probability, the higher the number of links that are rewired. Figure 9 shows the effects of the rewiring probability, p. For p = 0 no links are rewired and the resulting network is regular in structure; for p = 1 all links are rewired, resulting in a random network.
Figure 9 Showing the effects of the rewiring probability (p) in the small-world generation algorithm (Watts
and Strogatz, 1998)
Scale-Free Networks - The Barabasi and Albert (1999) scale-free network is based upon the ideas of growth and preferential attachment (Boccaletti et al., 2006). These networks are formed by starting with an initial number of isolated nodes, m0 (usually a small percentage of the total number of nodes in the network). New nodes are then added to the network at each timestep (i.e. growing the network) until the total number of nodes in the network is reached. These added nodes have between 1 and m0 links attached to them and attach to the existing nodes in the network based upon the idea of preferential attachment. The probability of attaching to each existing node is calculated based upon its degree, with the nodes with a high degree being more likely to attract a link from the new node (i.e. the rich get richer). It is this preferential attachment rule which results in a few high-degree nodes and many small-degree nodes in the network.
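The three growth rules described so far are available as standard generators in the networkx library; the short sketch below builds an example of each (with arbitrary sizes and parameters) and reports the resulting maximum and mean degrees.

```python
# Standard generators for the random, small-world and scale-free classes (illustrative sizes).
import networkx as nx

N = 1_000
nets = {
    "random":      nx.erdos_renyi_graph(N, p=0.006, seed=1),        # linking probability p
    "small-world": nx.watts_strogatz_graph(N, k=6, p=0.1, seed=1),  # 6 initial neighbours, rewiring prob. 0.1
    "scale-free":  nx.barabasi_albert_graph(N, m=3, seed=1),        # growth with preferential attachment
}
for name, G in nets.items():
    degrees = [d for _, d in G.degree()]
    print(f"{name:12s} max degree {max(degrees):4d}  mean degree {sum(degrees) / N:.2f}")
```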
Exponential Networks - This network class is not as well documented as the other three classes and few network generation algorithms exist for creating exponential networks. However, Liu and Tang (2005) propose a model based upon the Barabasi-Albert scale-free network (including the ideas of growth and preferential attachment). In their model, the network starts with a few fully connected nodes (m0), unlike the Barabasi-Albert scale-free model in which these initial nodes are not connected. At each timestep a new node is introduced to the network with a number of links between 1 and m0 (this continues until all nodes have been added to the network). The idea of preferential attachment is still used to connect to existing nodes in the network; however, it is modified so that the probability of attachment is based not upon the degree of the existing node, but upon the degree of the nodes connected to it. This means that a node with a low degree can still attract links from new nodes if it is connected to existing high-degree nodes. This results in a network where the high-degree nodes have a degree higher than those in random networks, but lower than those in scale-free networks.
Until recently, networks have only been generated as topological network models and a spatial element has not been considered in their generation (i.e. only the physical connection between nodal pairs was considered, not the physical distance between nodal pairs). However, as the analysis of real-world networks turns from the Internet and the World-Wide-Web (both requiring only very little space to operate) to airline and electrical distribution systems (requiring large amounts of space), the spatial element of these networks is becoming increasingly important in their analysis. Network generation algorithms are therefore beginning to explore ways to include a spatial element, using the topological network generation algorithms as a starting point.
For example, Gastner and Newman (2006) propose a model for connecting links between pairs of nodes based upon their separation distance. They include a variable parameter in their algorithm which is used to simulate users' preference. At one extreme value of this parameter the resulting network resembles an airline network, in which users want to minimise the number of flights in their journey; at the other extreme the resulting network resembles a road network, in which users want to minimise the length of their journey (Figure 10). A similar model is constructed by Qian and Han (2009), in which a variable parameter can be altered and, at its two extreme values, the resulting network again resembles an airline or a road network.
Figure 10 Generating networks with different spatial layouts, depending on the user-preference parameter, with values of (a) 0, (b) 1/3, (c) 2/3 and (d) 1 (Gastner and Newman, 2006).
In these spatial network algorithms, the locations of the nodes are generally pre-allocated and are usually based upon a real system (i.e. the main aim is to define the rules which govern link formation between pairs of nodes, rather than to understand the rules that govern nodal location). One of the few studies not to have used pre-allocated node locations is that of Wilkinson et al. (2012). In this work they showed that the location of nodes within the European Air Traffic Network exhibited a bilinear form, meaning that both airports and their degrees were uniformly distributed with distance from the geographical centre of the air traffic network up to a radius of approximately 1,500 km, beyond which their distribution becomes sparser but remains relatively uniform. The reason for this change in gradient was that the considered area extended into the Atlantic Ocean in the west and to the border of the European Union in the east. They went on to demonstrate that an accurate degree distribution could be obtained by randomly selecting nodal locations so that they fitted this distribution. This study also demonstrated that space does play a role in the degree distribution of a network, as poorly connected nodes can capitalise on their close proximity to a highly connected hub by attracting links that were bound for the high-degree hub. This modification also leads to the network having an exponential degree distribution.
5 Hazard Tolerance of Network Architectures and Failure
Modes
Studies have shown that each class of network has its own hazard tolerance when subjected to
different types of hazard. The two most researched and best documented network classes are the
random and scale-free networks.
The random network model is normally used as a baseline for tests of network robustness (Batagelj and Brandes, 2005) and responds with the same level of resilience to different types of hazard. This is because each node in the network has approximately the same number of links (and therefore the same effect on the network when removed) (Albert et al., 2000). The scale-free network, in contrast, has been shown to have different levels of robustness to different hazard types. This class of network is robust to random hazards (which are more likely to remove one of the numerous low-degree nodes, rather than one of the few high-degree nodes) and vulnerable to targeted attack (which is likely to remove one of the few high-degree nodes in the network) (Albert et al., 2000).
The robustness of small-world networks is not well documented; however, considering their degree distribution (which is similar to that of a random network), it could be argued that they respond in a similar way to random networks. Similarly, the hazard tolerance of exponential networks is not well documented and could be considered to lie in between that of the random and scale-free
networks (as exponential networks have high-degree nodes with values of degree that are higher than those in random networks, but not as high as those in scale-free networks).
Studies have also shown that real world networks respond to hazards in the same way as their
network class. For example, Cohen et al. have considered the resilience of the Internet to random
breakdown (Cohen et al., 2000) and to targeted attack (Cohen et al., 2001); finding that the Internet
(a scale-free network (Albert et al., 2000)) is resilient to random hazard, but vulnerable to targeted
attack, corresponding with the hazard tolerance of its network class (Albert et al., 2000).
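These experiments are straightforward to reproduce in outline. The sketch below (with arbitrary network sizes and removal fractions) removes a fixed fraction of nodes, either at random or in decreasing order of degree, from a random and a scale-free network, and reports the relative size of the largest connected component that remains.

```python
# Random failure versus targeted (degree-ordered) attack on two network classes.
import random
import networkx as nx

def largest_component_fraction(G):
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def attack(G, fraction, targeted, seed=0):
    H = G.copy()
    n_remove = int(fraction * H.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(H.degree(), key=lambda x: -x[1])[:n_remove]]
    else:
        victims = random.Random(seed).sample(list(H.nodes()), n_remove)
    H.remove_nodes_from(victims)
    return largest_component_fraction(H)

N = 5_000
networks = {"random (ER)":     nx.erdos_renyi_graph(N, 6 / N, seed=1),
            "scale-free (BA)": nx.barabasi_albert_graph(N, 3, seed=1)}
for name, G in networks.items():
    print(f"{name:16s} random removal: {attack(G, 0.05, False):.2f}  "
          f"targeted removal: {attack(G, 0.05, True):.2f}")
```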
With the development of spatial network models, the spatial hazard tolerance of these networks is
starting to be considered (i.e. subjecting spatial network models to hazards that have a spatial
component). This hazard tolerance does not necessarily correspond to the topological hazard
tolerance of the network. For example, the European airline network is a truncated scale-free
network and should be resilient to random hazards (Wilkinson et al., 2012). However, when the spatial component was considered in both the layout of the network (the nodes and links) and the random hazard (which was spatially coherent), the results suggested that this class of network is vulnerable to spatial hazard. This is due to the combination of geographical distribution
and network architectures jeopardising the inherent hazard tolerance of the network (Wilkinson et
al., 2012).
The hazard tolerance of interdependent networks has also been considered in previous studies, and
these networks have been shown to be more vulnerable to hazard (when compared to analysing
these systems in isolation). For example, building on the work of Buldyrev et al. (2010), Fu et al. (2012) coupled two random networks (using a model similar to that shown in Figure 5) and showed
that interdependent networked systems can be more vulnerable than an individual (uncoupled)
network. In this study nodes were removed randomly from the network and the network
performance was assessed using the relative size, P, of the largest connected component in the
remaining network (Figure 11, Figure 12).
Figure 11 Performance comparison of an interdependent network against that of a single network, where q
is the fraction of the nodes removed in the network (using random node removal) and P is the relative size of
the largest connected component in the remaining network. Each curve represents the mean performance of
100 simulations of interdependent networks that couple two 10,000 node random networks (Fu et al., 2012).
Figure 12 Aggregate performance of interdependent networks A and B when K (the average
interdependent degree or number of supporting nodes that a dependent node is directly connected to)
and F (the portion of dependent nodes that a network has) are varied. Each point represents the mean
performance of 100 simulations of interdependent networks that couple two 10,000-node Erdos-Renyi networks (Fu et al., 2012).
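The coupled-network experiment can be sketched as follows. This is not a reproduction of the Fu et al. (2012) model, but a schematic in its spirit: two random networks are tied by one-to-one dependency links, an initial fraction of nodes is removed from one network, and failures cascade because nodes must both remain in their own network's largest connected component and retain a functioning supporting node in the other network. Network size and the dependency rule are assumptions of the sketch.

```python
# Schematic cascade of failures in two interdependent random networks.
import random
import networkx as nx

def giant(G):
    """Node set of the largest connected component (empty set for an empty graph)."""
    return max(nx.connected_components(G), key=len) if G.number_of_nodes() else set()

def surviving_fraction(q, n=2_000, k=6, seed=0):
    rng = random.Random(seed)
    A = nx.erdos_renyi_graph(n, k / n, seed=seed)
    B = nx.erdos_renyi_graph(n, k / n, seed=seed + 1)
    # One-to-one dependency: node i in A is supported by node i in B, and vice versa.
    alive_A = set(A.nodes()) - set(rng.sample(list(A.nodes()), int(q * n)))
    alive_B = set(B.nodes())
    while True:
        before = (len(alive_A), len(alive_B))
        # A node survives only if it sits in its own network's giant component
        # and its supporting node in the other network is still alive.
        alive_A = alive_A & giant(A.subgraph(alive_A)) & alive_B
        alive_B = alive_B & giant(B.subgraph(alive_B)) & alive_A
        if (len(alive_A), len(alive_B)) == before:
            break
    return len(alive_A) / n

for q in (0.1, 0.3, 0.5):
    print(f"remove {q:.0%} of network A: surviving fraction {surviving_fraction(q):.2f}")
```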
6 Network Measures
There are two different categories of network measure, one category considers the performance of
the network and the other category considers the importance of individual nodes in the network.
In the performance category, there are numerous measures that can be used to show different
aspects of network performance. The most commonly used are:
Shortest Average Path Length (APL): captures the concept of efficiency in a network (Boccaletti et al., 2006). It is defined as the average number of steps along the shortest paths for all pairs of nodes in the network. The higher the value of the shortest average path length, the more inefficient the network (as on average there are more steps between each pair of nodes).
Diameter (D): the maximum shortest path length in the network (Boccaletti et al., 2006). If the network is fragmented (i.e. contains groups of nodes that are not connected via links) then this value refers to the maximum shortest path length in the largest cluster (Nojima, 2006).
Number of Clusters (NC): if the network is fragmented, this measure represents the number of clusters that contain two or more nodes (i.e. isolated nodes are not counted) (Nojima, 2006). For a fully connected network (i.e. one that is not fragmented) this value is equal to 1.
Maximum Cluster Size (MCS): the total number of nodes in the largest cluster of the network (Nojima, 2006). For a network that is not fragmented this value is equal to the total number of nodes in the network.
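As an illustration of how these four measures can be obtained in practice, the short sketch below computes them for an arbitrary graph using the networkx library; the function name and the convention of evaluating APL and D on the largest cluster of a fragmented network are our own assumptions, not code from the studies cited.

import networkx as nx

def network_measures(G):
    # Clusters are connected components with two or more nodes (isolated nodes are ignored).
    clusters = [c for c in nx.connected_components(G) if len(c) >= 2]
    NC = len(clusters)                                # Number of Clusters
    MCS = max((len(c) for c in clusters), default=0)  # Maximum Cluster Size

    # If the network is fragmented, APL and D are evaluated on the largest cluster.
    if MCS >= 2:
        H = G.subgraph(max(clusters, key=len))
        APL = nx.average_shortest_path_length(H)      # Shortest Average Path Length
        D = nx.diameter(H)                            # Diameter
    else:
        APL, D = 0.0, 0
    return APL, D, NC, MCS

# Example: measures for a 100-node random graph.
print(network_measures(nx.gnp_random_graph(100, 0.05, seed=1)))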
Studies have used these measures to show how a network degrades when different attack
strategies are used to assess hazard tolerance. For example, Nojima (2006) used these measures to
show how the Japanese airline network responds to random node removal and preferential node
removal (based upon node degree, i.e. nodes are removed in order of highest to lowest degree). This study found that removing nodes preferentially degraded the network much more quickly than using
random node removal (the maximum cluster size of the network decreased sharply and the diameter
and average path length increased noticeably with the removal of only a small percentage of the
total nodes in the network). Another study by Albert et al. (2000) also subjected networks to two
different attack strategies to assess their impact on network performance. Again, a random node
removal strategy and an attack strategy (based upon node degree) were used.
Other researchers have tried to develop more sophisticated measures for establishing the importance of nodes, rather than just using node degree. The most widely used are known as centrality measures and have been used to show that high-degree nodes are not necessarily the most important in the network (for example, Guimera et al. (2005)).
Betweenness Centrality is the proportion of all shortest paths between pairs of other nodes that pass through the node in question (Freeman, 1979, de Nooy et al., 2005) and is based on the concept that central nodes lie on the shortest paths between pairs of other nodes (de Nooy et al., 2005).
Closeness Centrality is based on the mean shortest path length between a node and all other nodes reachable from it (nodes that tend to have small shortest path lengths to other nodes in the network have a higher value of closeness) (de Nooy et al., 2005, Freeman, 1979) and captures the idea of the speed of communication between pairs of nodes in a network (de Nooy et al., 2005, Cadini et al., 2009).
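Both centrality measures are implemented in standard network analysis libraries. The short sketch below, using networkx on an arbitrary example graph, ranks nodes by betweenness, closeness and degree to illustrate that the rankings need not agree; the example graph and all names are illustrative assumptions.

import networkx as nx

G = nx.gnp_random_graph(50, 0.1, seed=2)  # arbitrary example graph

# Betweenness: fraction of shortest paths between other node pairs that pass through a node.
betweenness = nx.betweenness_centrality(G)

# Closeness: based on the mean shortest-path distance from a node to all reachable nodes
# (networkx reports the reciprocal, so larger values indicate a more central node).
closeness = nx.closeness_centrality(G)

degree = dict(G.degree)
print("highest betweenness:", max(betweenness, key=betweenness.get))
print("highest closeness:", max(closeness, key=closeness.get))
print("highest degree:", max(degree, key=degree.get))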
Centrality measures have been previously applied to social networks (Everett and Borgatti, 1999)
with the aim of identifying the central person / figure or group / class in a social network. Recently,
these measures have also been applied to infrastructure networks (Choi et al., 2006, Crucitti et al.,
2006). However, these studies do not consider how the services that the network provides flow around the network, nor do they stress the network (by removing nodes and/or links) to gauge the effect on performance. It therefore remains unproven whether the node with the highest value of centrality would, when removed, have more of an effect on the network than the node with the highest value of degree.
7 Example Vulnerability Assessment of the European Air
Traffic Network
In this paper, we demonstrate how graph theory can be applied to an infrastructure network to
quantify the change in network performance when subjected to different hazards. We use the
European air traffic network (Figure 13) and stress the network using four different attack
strategies and quantify the change in performance using four different measures.

Figure 13 Showing (a) the European air traffic network (the black circles are the airports and the red circle
is the geographical centre of the network, weighted by airport degree, the air routes have been omitted for
clarity) and (b) its degree distribution.
The European airline network consists of 525 airports and 3886 air routes and has previously been
analysed by Wilkinson et al. (2012) and shown to follow a truncated scale-free distribution (Figure
13b); as such it should be resilient to random hazard but vulnerable to targeted attack. Nodes are
removed from the network in four different orders to enable the range of hazards to be simulated (a minimal code sketch of these removal orders follows the list):
• Random Node Failure: nodes are removed randomly from the network.
• Degree: nodes are removed from the network in the order of highest to lowest degree. Previous studies have used this attack strategy to simulate a targeted attack, i.e. the worst case scenario.
• Betweenness Centrality: similar to the degree attack, nodes are removed from the network based upon their value of betweenness centrality (highest to lowest). Previous studies have shown that the node with the highest value of degree is not necessarily the most central or important node in the network and therefore may not have the largest effect when removed (i.e. basing node removal on degree may not simulate the worst case scenario).
• Spatial Hazard: this hazard is based entirely upon the spatial layout of the network (unlike the other three attack strategies, which are based upon topological measures). The hazard starts in the geographical centre of the network (calculated using the position of the airports, weighted by their degree, Figure 13a) and then grows outwards, removing nodes from the network in order of their distance from the geographical centre.
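The sketch below indicates how the four removal orders could be generated for a networkx graph. It is illustrative only: the rankings are computed once on the intact network (some studies recompute them after each removal), node positions are treated as planar coordinates, and the names are our own assumptions rather than the code used in this study.

import math
import random
import networkx as nx

def removal_orders(G, coords, seed=0):
    # Return the node-removal order for each attack strategy.
    # `coords` maps each node to an (x, y) position, e.g. airport coordinates.
    nodes = list(G.nodes)
    orders = {}

    # Random node failure.
    rng = random.Random(seed)
    orders["random"] = rng.sample(nodes, len(nodes))

    # Degree attack: highest to lowest degree.
    orders["degree"] = sorted(nodes, key=lambda v: G.degree[v], reverse=True)

    # Betweenness centrality attack: highest to lowest betweenness.
    bc = nx.betweenness_centrality(G)
    orders["betweenness"] = sorted(nodes, key=bc.get, reverse=True)

    # Spatial hazard: grow outwards from the degree-weighted geographical centre.
    total_k = sum(G.degree[v] for v in nodes)
    cx = sum(coords[v][0] * G.degree[v] for v in nodes) / total_k
    cy = sum(coords[v][1] * G.degree[v] for v in nodes) / total_k
    orders["spatial"] = sorted(nodes, key=lambda v: math.hypot(coords[v][0] - cx, coords[v][1] - cy))

    return orders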
To assess how the network changes (in terms of performance and connectivity) when the attack
strategies are applied, we use four measures: two describing the connectivity of the network (NC, MCS) and two describing the change in network performance (APL, D) (Nojima, 2006).


Figure 14 Correlating the percentage of airports (nodes) removed from the European air traffic network,
when subjected to different attack strategies, and network performance measures: (a) shortest average path
length, (b) diameter, (c) maximum cluster size, and (d) number of clusters.
Figure 14 shows the results of correlating the percentage of airports removed, with the performance
and connectivity measures. For all measures it can be seen that removing nodes based upon their degree or betweenness centrality has similar effects (i.e. the red and green lines follow similar
trends) and the random node failure and spatial hazard attack strategies also follow similar trends
(blue and purple).
Considering the network performance measures (Figure 14a, b), removing nodes based upon their degree has the worst effect on the network. Both the APL and D increase significantly when around 20% of the nodes are removed, meaning that the network is now inefficient at transporting air passengers. Then, when around 30% of the nodes in the network have been removed, the values of APL and D dramatically reduce. This is because the network has broken into many small clusters, each having a small APL and D (i.e. the MCS has collapsed, reducing to 15 when 30% of
the nodes have been removed). Both the decrease in MCS and the increase in NC (Figure 14d)
suggest that these two attack strategies quickly fragment the network, rendering it impossible to
travel to all parts of the network.
Both the random node failure and spatial hazard remove nodes that do not significantly affect the
APL and D (Figure 14a, b); however, the spatial hazard is slightly worse. Both of these attack
strategies affect the connectivity of the network in much the same way, i.e. they both cause the
MCS (Figure 14c) to decrease almost linearly with the percentage of nodes removed and do not
cause the network to break into a significant number of clusters (Figure 14d).
From these results it can be argued that the network is vulnerable to targeted attack (based upon
both the degree and betweenness centrality) when compared to a random hazard. It can also be
argued that the network is resilient to spatial hazards; however, these results are misleading as we are not plotting the degradation in performance in terms of the size of the hazard. A previous study by
Wilkinson et al. (2012) showed that the European airline network is in fact vulnerable to spatial hazards, as shown in Figure 15; this is a strong argument for further research into determining the hazard tolerance of geographically distributed networks. Other future research should assess the effects of edge weighting (i.e. the number of flights on each route) as well as the knock-on effects due to interdependence between networks. For example, how are other types of infrastructure system (e.g. the train network) affected when parts of the airline network are removed? Can they cope with an influx of extra passengers due to the cancellation of flights, or, alternatively, can they offer sufficient redundancy by providing other modes of travel?

Figure 15 Plotting the maximum cluster size of the network against (a) the percentage of closed airports and (b) the percentage of closed airspace, when subjecting the network to two types of spatial hazard. The results shown in (a) follow a similar trend to those shown in Figure 14c for the spatial hazard, and seem to indicate that the network is resilient to spatial hazards. However, when the size of the hazard is considered (b), the network is shown to be vulnerable (Wilkinson et al., 2012).
8 Conclusions
In this paper, we have presented some of the important advances in the field of graph theory and its
applications to analysing real-world networks (including social, biological and infrastructure networks). We have discussed current advances and research in the field, which aim to increase the accuracy with which we can model real-world systems.
We have used the European airline network to show how graph theory can be used to analyse the
effects of four different disaster scenarios. The simulations presented quantified the change in
network performance and connectivity and demonstrated that the resilience of this network is
different for all four hazards. We have also demonstrated that, when considering real-world networks, it is important to account for the spatial distribution of the network: not only does space influence the architecture of the network, but simple metrics that consider only network connectivity may not give the full picture of hazard tolerance. We suggest that more research is
required to better understand the hazard tolerance of spatially distributed networks and the influence
that weighted edges may have on this tolerance. We also suggest that research is required on how
other networks may be affected by their dependency on a failed network or conversely, the
possibility of other networks providing redundancy by carrying the services of the failed network in
different modes.
Acknowledgements
Sarah Dunn is funded by an EPSRC DTA studentship. The interdependent network analysis was
funded through the Resilient Futures project, EPSRC grant (EP/I005943/1).
References
Albert, R., Albert, I. & Nakarado, G. L. (2004). Structural vulnerability of the North American power grid. Physical
Review E, 69.
Albert, R. & Barabasi, A. L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics 74, 47-97.
Albert, R., Jeong, H. & Barabasi, A. L. (1999). Internet - Diameter of the World-Wide Web. Nature 401, 130-131.
Albert, R., Jeong, H. & Barabasi, A. L. (2000). Error and Attack Tolerance of Complex Networks. Nature 406, 378-382.
Amaral, L. A. N., Scala, A., Barthelemy, M. & Stanley, H. E. (2000). Classes of small-world networks. Proceedings of
the National Academy of Sciences of the United States of America, 97, 11149-11152.
Arenas, A., Danon, L., Diaz-Guilera, A., Gleiser, P. M. & Guimera, R. (2003). Community Analysis in Social Networks.
European Physical Journal B 38, 373-380.
Bagler, G. (2008). Analysis of the airport network of India as a complex weighted network. Physica a-Statistical
Mechanics and Its Applications 387, 2972-2980.
Barabasi, A.-L. & Oltvai, Z. N. (2004). Network biology: understanding the cell's functional organization. Nat Rev
Genet 5, 101-113.
Barabasi, A. L. & Albert, R. (1999). Emergence of scaling in random networks. Science 286, 509-512.
Barabasi, A. L., Albert, R. & Jeong, H. (2000). Scale-free characteristics of random networks: the topology of the
World-Wide Web. Physica A 281, 69-77.
Barthelemy, M. (2011). Spatial networks. Physics Reports-Review Section of Physics Letters 499, 1-101.
Batagelj, V. & Brandes, U. (2005). Efficient generation of large random networks. Physical Review E 71.
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D. U. (2006). Complex networks: Structure and dynamics.
Physics Reports-Review Section of Physics Letters 424, 175-308.
Bompard, E., Wu, D. & Xue, F. (2011). Structural vulnerability of power systems: A topological approach. Electric
Power Systems Research 81, 1334-1340.
Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. (2010). Catastrophic cascade of failures in interdependent networks. Nature 464, 1025-1028.
Cadini, F., Zio, E. & Petrescu, C.-A. (2009). Using Centrality Measures to Rank the Importance of the Components of a
Complex Network Infrastructure. In: Setola, R. & Geretshuber, S. (eds.) Critical Information Infrastructure Security.
Springer Berlin / Heidelberg.
Chi, L. P., Wang, R., Su, H., Xu, X. P., Zhao, J. S., Li, W. & Cai, X. (2003). Structural properties of US flight network.
Chinese Physics Letters 20, 1393-1396.
Choi, J. H., Barnett, G. A. & Chon, B. S. (2006). Comparing world city networks: a network analysis of Internet
backbone and air transport intercity linkages. Global Networks-a Journal of Transnational Affairs 6, 81-99.
Cohen, R., Erez, K., Ben-Avraham, D. & Havlin, S. (2000). Resilience of the Internet to random breakdowns. Physical
Review Letters 85, 4626-4628.
Cohen, R., Erez, K., Ben-Avraham, D. & Havlin, S. (2001). Breakdown of the internet under intentional attack.
Physical Review Letters 86, 3682-3685.
Costa, L. F., Oliveira, O. N., Travieso, G., Rodrigues, F. A., Boas, P. V., Antiqueira, L., Viana, M. & Rocha, L. E. C. D.
(2007). Analyzing and Modeling Real-World Phenomena with Complex Networks: A Survey of Applications. Physics,
103.
Crucitti, P., Latora, V. & Marchiori, M. (2004). A topological analysis of the Italian electric power grid. Physica a-
Statistical Mechanics and Its Applications 338, 92-97.
Crucitti, P., Latora, V. & Porta, S. (2006). Centrality in networks of urban streets. Chaos, 16.
De Nooy, W., Mrvar, A. & Batagelj, V. (2005). Exploratory Social Network Analysis with Pajek, Cambridge,
Cambridge University Press.
Deng, W., Li, W., Cai, X. & Wang, Q. A. (2011). The exponential degree distribution in complex networks: Non-
equilibrium network theory, numerical simulation and empirical data. Physica A: Statistical Mechanics and its
Applications 390, 1481-1485.
Erdos, P. & Renyi, A. (1960). On the Evolution of Random Graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences 5, 17-61.
Everett, M. G. & Borgatti, S. P. (1999). The centrality of groups and classes. Journal of Mathematical Sociology 23,
181-201.
Freeman, L. C. (1979). Centrality in Social Networks: Conceptual Clarification. Social Networks 1, 215-239.
Fu G, Khoury M, Dawson R, Bullock S (2012) Vulnerability Analysis of Interdependent Infrastructure Systems, in
Proc. 2012 European Conference on Complex Systems.
Gao, J., Buldyrev, S. V., Havlin, S. & Stanley, H. E. (2011). Robustness of a Network of Networks. Physical Review
Letters 107, 195701.
Gastner, M. T. & Newman, M. E. J. (2006). The spatial structure of networks. European Physical Journal B 49, 247-
252.
Guimera, R. & Amaral, L. A. N. (2004). Modeling the world-wide airport network. European Physical Journal B 38,
381-385.
Guimera, R., Mossa, S., Turtschi, A. & Amaral, L. A. N. (2005). The worldwide air transportation network: Anomalous
centrality, community structure, and cities' global roles. Proceedings of the National Academy of Sciences of the United
States of America 102, 7794-7799.
Han, D. D., Qian, J. H. & Liu, J. G. (2008). Network Topology of the Austrian Airline Flights.
Kinney, R., Crucitti, P., Albert, R. & Latora, V. (2005). Modeling cascading failures in the North American power grid.
The European Physical Journal B - Condensed Matter and Complex Systems 46, 101-107.
Latora, V. & Marchiori, M. (2002). Is the Boston subway a small-world network? Physica a-Statistical Mechanics and
Its Applications 314, 109-113.
Lewis, T. G. (2009). Network science: theory and practice, John Wiley & Sons.
Li, W. & Cai, X. (2004). Statistical analysis of airport network of China. Phys Rev E Stat Nonlin Soft Matter Phys 69, 046106.
Li, W., Wang, Q. A., Nivanen, L. & Le Mehaute, A. (2006). How to fit the degree distribution of the air network? Physica a-Statistical Mechanics and Its Applications 368, 262-272.
Liu, J. Z. & Tang, Y. F. (2005). An exponential distribution network. Chinese Physics 14, 643-645.
Milgram, S. (1967). The Small-World Problem. Psychology Today 1, 61-67.
Network Workbench (2009). Network Workbench Tool: User Manual 1.0.0.
Newman, M. E. J. (2003). The structure and function of complex networks. Siam Review 45, 167-256.
Newman, M. E. J., Watts, D. J. & Strogatz, S. H. (2002). Random graph models of social networks. Proceedings of the
National Academy of Sciences of the United States of America 99, 2566-2572.
Nojima, N. (2006). Evaluation of Functional Performance of Complex Networks for Critical Infrastructure Protection.
First European Conference on Earthquake Engineering and Seismology. Geneva, Switzerland.
Pederson, P., Dudenhoeffer, D., Hartley, S. & Permann, M. (2006). Critical Infrastructure Interdependency Modeling: A
Survey of U.S. and International Research. Idaho: Idaho National Laboratory.
Qian, J. H. & Han, D. D. (2009). A spatial weighted network model based on optimal expected traffic. Physica a-
Statistical Mechanics and Its Applications, 388, 4248-4258.
Rosas-Casals, M., Valverde, S. & Sole, R. V. (2006). Topological Vulnerability of the European Power Grid under
Errors and Attacks. International Journal of Bifurcation and Chaos 17, 2465-2475.
Rual, J.-F., Venkatesan, K., Hao, T., Hirozane-Kishikawa, T., Dricot, A., Li, N., Berriz, G. F., Gibbons, F. D., Dreze,
M., Ayivi-Guedehoussou, N., Klitgord, N., Simon, C., Boxem, M., Milstein, S., Rosenberg, J., Goldberg, D. S., Zhang,
L. V., Wong, S. L., Franklin, G., Li, S., Albala, J. S., Lim, J., Fraughton, C., Llamosas, E., Cevik, S., Bex, C., Lamesch,
P., Sikorski, R. S., Vandenhaute, J., Zoghbi, H. Y., Smolyar, A., Bosak, S., Sequerra, R., Doucette-Stamm, L., Cusick,
M. E., Hill, D. E., Roth, F. P. & Vidal, M. (2005). Towards a proteome-scale map of the human protein-protein
interaction network. Nature 437, 1173-1178.
Sole, R. V., Rosas-Casals, M., Corominas-Murtra, B. & Valverde, S. (2008). Robustness of the European power grids
under intentional attack. Physical Review E, 77.
Sporns, O. (2002). Network analysis, complexity, and brain function. Complexity 8, 56-60.
Stam, C. J. & Reijneveld, J. C. (2007). Graph Theoretical Analysis of Complex Networks in the Brain. Nonlinear
Biomedical Physics 1, 1-19.
Strogatz, S. H. (2001). Exploring complex networks. Nature 410, 268-276.
Valverde, S. & Solé, R. V. (2003). Hierarchical small worlds in software architecture. Arxiv preprint cond-mat/0307278.
Watts, D. J. & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature 393, 440-442.
Wilkinson, S., Dunn, S. & Ma, S. (2012). The vulnerability of the European air traffic network to spatial hazards.
Natural Hazards 60, 1027-1036.





Cities as geoengineering building blocks
Jonathan H. Fink 1,2
1 Office of Research and Strategic Partnerships, 2 Department of Geology,
Portland State University, Portland, OR 97201-0751
E-mail: jon.fink@pdx.edu
Abstract
Geoengineering refers to extreme, planetary-scale technological responses to the deleterious effects
of human-induced climate change. Here we explore the idea that urbanization could incorporate a
form of distributed, bottom-up geoengineering. Through the redesign of existing urban areas and
the creation of more efficient new ones, per capita energy consumption and CO2 generation could
be radically reduced. However, implementing these changes requires unusually strong political
action, coordination across sectors and geography, and a combination of new technology and
altered behaviour. In contrast to other geoengineering schemes, urban redesign would provide
society with many ancillary benefits besides slowing the increase in atmospheric carbon. Given the
likelihood that climate change in coming decades will cause catastrophic social and environmental
disruptions, planners, engineers, politicians, architects, and social scientists should aggressively
consider how quickly and extensively cities could shift to become positive forces for climate repair.


1 Introduction: Urbanization as a model for geoengineering
Geoengineering is a subset of Earth Systems Engineering and Management that aims to manipulate
the Earth's environment to counteract the negative effects of anthropogenic greenhouse gas
emissions (Keith, 2000). Coupled with mitigation and adaptation strategies, geoengineering can in
theory contribute to the amelioration of other profound challenges facing society, including
resource depletion, poverty, and biodiversity loss. But most such proposals involve largely
unproven technologies, and raise serious concerns about excessive costs and unintended
consequences (McKibben, 2010). They also tend to be monolithic in design, massive in scale, and
ignore the potential benefits derived from modifying human behaviour. Furthermore, they minimize
the extreme governance challenges associated with policies and decisions that can potentially affect
all of Earth's human and non-human residents (Brovkin et al., 2009).
The redesign of cities is not commonly classified as geoengineering, although in many ways
urbanization can be seen as the largest ongoing experiment in planetary-scale manipulation. Cities,
where more than half the world's population lives, the majority of resources are consumed, and the
most pollution is generated, have physical and biological influence that extends far beyond their
borders. Through economies of scale and system integration, cities hold the potential of greatly
reducing per capita consumption and pollution generation, helping to accomplish the climate-
stabilizing goals of geoengineering (Brand, 2009).
To date, this promise has come closest to realization in relatively wealthy and socially conscious cities like those in Scandinavia and the Cascadia corridor in North America, where social
attitudes, public policy, and administrative practice are working in concert with cool climates and
enabling infrastructure to reduce environmental impact. But, with a few exceptions, these efforts
have been limited to isolated buildings, neighbourhoods, and districts. Futuristic urban experiments
such as Masdar in the United Arab Emirates and Dongtan in China are attempting to make carbon
reduction effortless for individual citizens, but the ability to scale up these mini-Utopias is far
from proven, and full life-cycle assessments of their true carbon costs are ambiguous (Sahely et al,
2005).
Carbon footprint calculations show that the centres of the densest cities, like New York and Hong
Kong, yield some of the least pollution per capita because car ownership is difficult while mass
transit is affordable and extensive (Owen, 2010). In wealthier countries, this benefit is somewhat
offset by high consumption in the surrounding suburbs. Today's fastest-growing cities, in
developing nations, still have relatively small carbon emissions because of low average incomes.
However, their aspirations are to acquire the material benefits of western cities for their rapidly
expanding populations, which would result in massive increases in CO2 generation. This reality
adds urgency to the need to more intentionally view cities as central to climate solutions.
In this paper we explore the contention that if properly conceived, executed, and coordinated across
the world, urban redesign could be among the most effective and realistic ways to relieve the global
atmospheric crisis (e.g., Calthorpe, 2010). In contrast to more extreme technological proposals like
injecting sunlight-reflecting aerosols into the stratosphere or deploying vast arrays of CO2-removing
scrubbers (Lackner, 2002), making cities more efficient would have many ancillary benefits beyond
climate, like providing economic activity, increasing resilience to natural and man-made disasters,
and improving the health and quality of life of residents. Using a combination of architecture,
engineering, public policy, administrative practice, market mechanisms and social media as an
intentional way to achieve climate goals is a relatively new and unproven exercise. Here we focus
specifically on the need for unprecedented coordination across scales, tools, sectors, and regions in
order for urban design to become an effective means of climate repair. This example may hold
lessons for other geoengineering schemes.
2 Balancing behaviour modification and infrastructure in
climate mitigation
One of the most fundamental questions of sustainability is the extent to which individual behaviour
matters. While the climate crisis we face was caused by the decisions and actions of millions of
people over more than a century, the way their economies were organized greatly constrained their
choices. Now, with limited financial resources available to address potentially catastrophic
circumstances, governments have to choose which scale to focus on: changing individual and
collective behaviour through education and regulation, or constructing enabling or transformative
infrastructure.
Consider three relatively simple examples related to the built environment of cities. The auto-
centric design of suburbs arguably contributed more to the build-up of atmospheric carbon in the
second half of the 20th Century than the relative inefficiency of car engines or the driving habits of
their owners. However, now that a large and growing global fleet of vehicles exists, we can reduce
their atmospheric impact through a combination of behavioural and physical steps. We can devise
and promote cars with more efficient power trains, teach drivers tricks for using less gas, provide
mass transit alternatives, and arrange cities so that people need to travel shorter distances to get to
work, school, recreation and shopping.
A second example involves energy use in buildings, which is one of the largest contributors to cities'
carbon footprints. On the infrastructure end of the spectrum, governments have launched extensive
programs to lower energy consumption in the built environment through improved insulation, more
effective HVAC and lighting systems, onsite generation of renewable energy, and lower-impact
construction methods and materials (Rashkin, 2010). In the U.S., an entire suite of industries has
grown up around the design and construction of LEED-certified buildings, along with the
manufacture and installation of solar panels, small-scale wind turbines, geothermal heat pumps, and
other renewable energy sources. But while the impacts of the buildings themselves and their major
appliances have been going down due to regulatory requirements like Energy Star in the U.S.,
consumption by individuals living and working within the buildings has been steadily increasing,
driven in large part by the proliferation of computers, cell phone chargers, and other electronic
devices that never completely shut off, something difficult to regulate. This problem could be
addressed behaviourally through public education coupled with more targeted local metering and
billing. In addition, devices could be equipped with circuits that automatically shut off. But these
measures conflict with the economic imperative to sell the newest gadgets. As with transportation,
deciding whether to focus on technological, regulatory, or behavioural changes to reduce the
consumption associated with buildings is a critical part of any urban-based climate change
mitigation strategy.
A third example relates to how the buildings and transportation components are organized into
larger urban systems, and how different scales can best contribute to climate mitigation. Here the
question is whether it is most efficient to upgrade individual buildings, blocks, neighbourhoods,
cities, or entire urbanizing regions. Each has its advantages. Regional solutions can affect the
largest populations and areas and involve economies of scale that can lead to the most rapid change.
But people tend to be more responsive if they can see the consequences of their actions, something
more likely to happen at a district or household scale. Feedback mechanisms that show how
aggregate demand has declined, or the establishment of group purchasing programs for renewable
energy systems can lead to substantial reductions in consumption while promoting neighbourhood
cohesion.
Many cities and their major institutions, like universities, corporate campuses, and sports stadiums,
save energy by using district heating or cooling systems that circulate chilled or heated water from
large reservoirs through networks of insulated pipes and tunnels. Combined heat and power systems,
where cogenerated heat from power plants or industrial processes circulates to warm a group of
buildings, is one of the most effective ways to cut carbon emissions. District-scale urban solutions
are also being used for water and sewage collection and treatment. These include the installation of
bioswales, permeable pavement, and green roofs, which collectively promote aquifer recharge while
reducing excessive storm-water runoff. Once put in place, neighbourhood-scale strategies make it
easier for individuals to reduce their carbon output than household-based solutions. City or regional
governments then need to compile and report these results to demonstrate the large positive impact
of local actions. Municipal agencies also must communicate their best practices to their counterparts
in other cities to assure that local benefits get shared and amplified globally.
3 What scale of urban decision-making is most appropriate?
Theoretically, there is an optimal combination of behavioural and infrastructural actions that would
lead to the greatest emission reductions, but there are currently no common decision-making
mechanisms or models that are used or trusted by all of the relevant parties. Life cycle assessments
of the true carbon costs of the different options remain far from perfect (Ramaswami et al, 2008).
While a local focus promotes greater participation in carbon-reduction activities, it can also lead to
ambiguity in attributing where the carbon is generated. For instance, the city of Portland, Oregon, is
well known for its transit-oriented development, urban growth boundary, and success in lowering
carbon emissions. But neighbouring suburbs contain the same big-box stores and malls found in
less environmentally conscious parts of the U.S. To the extent that Portland residents shop in the
nearby suburbs, the positive impact of their transit system on carbon emissions is diluted.
Metropolitan Portland has the unique advantage of an elected regional government charged with
overseeing land-use and transportation issues across many jurisdictional boundaries; this body can
help encourage all of the region's cities to adopt common, progressive, climate-friendly policies.
In the absence of agreed-upon, objective criteria for carbon-related decisions, bold political
leadership is required. But most politicians are reluctant to enact strong regulations about such
issues as driving habits, fuel efficiency standards, or building codes due to fear of a backlash from
voters or industry. In addition, no one regulatory agency has the jurisdictional authority to oversee
all of the intertwined environmental, health, technological, and economic costs and benefits
associated with urban systems (Fink, 2011a). Feedback mechanisms, like the fuel efficiency gauge
on a hybrid vehicle, can lead to reductions in consumption for a subset of environmentally
conscious drivers, but would probably lead to less total impact than regulatory approaches.
Scenario modelling software that lets consumers and policy-makers evaluate the tradeoffs among
alternative investments can help determine an optimum balance among these different scales of
action. Arizona State University's Decision Theater has developed several tools for envisioning and
connecting future consumption of water and electricity to policies, infrastructure, and external
factors like climate and economics (White et al, 2010). Similar techniques could be applied to fuel
consumption and emissions so the public and its leaders could better understand the complex
implications of future actions and the challenge of balancing competing interests. Setting up a
global network of similar facilities would increase cities' ability to learn from each other.
4 Cross sector collaborations and dual use benefits
By trying different approaches in cities throughout the world, government officials, NGOs,
companies, and universities are, in effect, conducting scientific experiments under a variety of
boundary conditions (Fink, 2011b). The benefits of this collective experience are amplified when
the cities share their best practices, something that groups like the C40, Urban Sustainability
Directors Network, and ICLEI all promote. Consulting companies like IBM, Arup, Siemens, and
CH2M HILL, which design sustainable systems for individual cities, accumulate knowledge that
they can then package and resell in other parts of the world. Similarly, foundations and
development banks that finance urban sustainability solutions can make their results available to
accelerate progress in those cities that are less financially able to launch their own new programs.
The World Bank, Inter-American Development Bank, Asia Development Bank, and United Nations
are all supporting urban projects, studies, partnerships, and conferences, which can provide a global
perspective on how improving the efficiency of cities can slow the growth of emissions.
In many ways, urban design is a stealth strategy for addressing climate change. The ancillary
benefits of reducing CO2 emissions through transit-oriented development, energy efficient
construction, and deployment of renewable energy can be substantial enough to make them
politically palatable, even where mitigation of climate change is an ineffective motivation. If the
financial incentives can become strong enough, the world's cities will adopt these practices as a matter of course. This is one of the main goals of earth systems engineering and management: find solutions with dual-use benefits that reward society for doing the right thing (Allenby & Fink,
2005).
5 The Goldilocks Asteroid Scenario and the shift from climate
mitigation to adaptation in cities
The radical geoengineering solutions to climate change proposed to date all assume that behaviour
is largely irrelevant: they are intended to allow society to proceed with business as usual, with little
or no shift of the attitudes, policies, or economics that created the emerging crisis. They also
downplay or ignore the tremendous governance challenges that would need to be overcome. For
instance, if CO2 could be extracted from the atmosphere, slowing or reversing global warming,
what would the new target concentration and temperature be? Some countries, like Canada, are
likely to benefit from several degrees of global warming. Who would get to decide, and by what
mechanism? The failure of modern geopolitics to deal with most complex environmental problems
whose consequences will occur a decade or more in the future offers little confidence that the major
conflicts that would certainly arise over geoengineering could be successfully negotiated.
In order to motivate countries to act in the long-term global or human, rather than national, interest,
they would need to believe that not taking action would have worse local consequences in the near
term. Consider a hypothetical illustration we might refer to as the Goldilocks Asteroid Scenario.
If space scientists could say with 100% certainty that an asteroid was hurtling toward Earth, and
that annihilation in a decade would result unless a massive effort could be made to design and
deploy technology to destroy or deflect the bolide, society could likely be persuaded to spend a
large fraction of its GDP. However, if instead the scientists said the odds were only 1 in 2 rather
than 100%, or that the event would occur in 50 years rather than 10 years, or if those that would
have to pay most for the technology mounted an opposing political campaign, it is easy to envision
human optimism and short-term self-interest putting off the hard choices until it was too late. What
is needed for motivation is a disaster that is imminent and serious enough to stimulate action, but
not so extreme as to be completely hopeless. Not too hard, not too soft; just right.
The climate crisis we face today lacks this urgency, at least relative to more immediate concerns
like economic recovery. And skepticism about science is on the rise. Thus, if urbanization alone
fails to slow the build-up of atmospheric carbon, we can expect that much of society will be forced
to confront and adapt to extreme climate change over the coming decades. Even so, the resulting
shift from mitigation (including geoengineering) to adaptation will benefit from the growing
migration of population from the countryside to cities. Experimentation by individual cities, sharing
of best practices, cooperation among different sectors, combining behaviour change and technology,
and acting on multiple spatial scales can all improve society's ability to adapt.
References
Allenby, B. & Fink, J. H. (2005). Toward inherently secure and resilient societies. Science 309 No. 5737, 1034-1036.
Brand, S. (2009). Whole Earth Discipline: An Ecopragmatist Manifesto. Viking, 336 pp.
Brovkin, V., Petoukhov, V., Claussen, M., Bauer, E., Archer, D., & Jaeger, C. (2009). Geoengineering climate by
stratospheric sulfur injections: Earth system vulnerability to technological failure. Climatic Change 92 Nos. 3-4, 243-
259.
Calthorpe, P. (2010). Urbanism in the age of climate change. Island Press, 176 pp.
Fink, J. (2011a). The case for an urban genome project: a shortcut to global sustainability? National Academy of
Engineering Bridge (Spring Issue on Urban Sustainability) 41 No. 1, 5-12.
Fink, J. H. (2011b). Cross-sector integration of urban information to enhance sustainable decision making. IBM Journal
of Research and Development 55 No. 1-2, 12:1-12:8.
Keith, D. W. (2000). Geoengineering the climate: History and prospect. Annual Review of Energy and the Environment
25, 245 -284.
Lackner, K. S. (2002). Carbonate chemistry for sequestering fossil carbon. Annual Rev. Energy Environ. 27, 193-232.
McKibben, B. (2010). Eaarth: Making a Life on a Tough New Planet. Times Books, 272 pp.
Owen, D. (2010). Green Metropolis: Why living smaller, living closer, and driving less are the keys to sustainability.
Riverhead Hardcover, 368 pp.
Ramaswami, A., Hillman, T., Janson, B., Reiner, M., & Thomas, G. (2008). Demand Centered, Hybrid Life-Cycle
Methodology for City-Scale Greenhouse Gas Inventories. Environmental Science & Technology 42, 6455-6461.
Rashkin, S. (2010). Retooling the U.S. housing industry: How it got here, why it's broken, and how to fix it. Delmar
Cengage Learning, 256 pp.
Sahely, H. R., Kennedy, C. A., & Adams, B. J. (2005). Developing sustainability criteria for urban infrastructure
systems. Canadian Journal of Civil Engineering 32 No. 1, 72-85.
White, D., Wutich, A., Larson, K.L., Gober, P., Lant, T., Senneville, C. (2010). Credibility, salience, and legitimacy of
boundary objects: water managers' assessment of a simulation model in an immersive decision theater. Science and
Public Policy 37 No. 3, 219-232.



Tunnelling through the complexity of national infrastructure
planning
Jim Hall 1, Justin Henriques 1, Adrian Hickford 2, Robert Nicholls 2
1 Environmental Change Institute, University of Oxford, Oxford, OX1 3QY, U.K.
E-mail: jim.hall@eci.ox.ac.uk; justin.henriques@ouce.ox.ac.uk
2 Engineering and the Environment, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
E-mail: A.J.Hickford@soton.ac.uk; R.J.Nicholls@soton.ac.uk
Abstract
Infrastructure, including energy, transportation, water, waste and digital communications, is
essential for human well-being and economic productivity. Current methods and models for
infrastructure planning and design are not well suited to incorporating cross-sectoral
interdependencies or to coping with the major uncertainties that lie ahead. If the process of
transforming infrastructure is to take place efficiently, while minimizing the associated risks, it will
need to be underpinned by a long-term, cross-sectoral approach to planning for infrastructure under
a range of possible futures. We present a new method for strategic analysis of infrastructure
systems to consider the long-term prospects for national infrastructure in the UK. We consider how
this approach can help to plan the billions in investment that will be needed to sustain society's lifelines.

1 Introduction
Infrastructure, including energy, transportation, water, waste and digital communications, is
essential for human well-being and economic productivity (OECD, 2006). Indeed in the current
economic climate governments worldwide are increasingly looking to investments in infrastructure
for short-term stimulus and to enhance longer-term economic competitiveness. In many ways,
infrastructure defines the boundaries of national economic productivity, and is an often-cited key
ingredient for a nation's economic competitiveness (ULI, 2011). For example, the World Economic
Forum lists infrastructure as the second pillar in its Global Competitiveness Index (WEF, 2011).
Infrastructure networks are also one of mankind's most visible impacts on the environment. As infrastructure is largely made up of long-lived assets with high up-front costs, the wrong decisions during planning and design can lock in unsustainable patterns of development. To steer towards
more sustainable infrastructure systems requires a transformation in both thinking and methodology.
Over the last century, infrastructure has evolved from a series of unconnected structures to
interconnected networks that place demands upon one another. For example, increases in demand
for water have the effect of increasing demand for energy due to the energy intensity of water
treatment and distribution (e.g. pumping). The key forces that influence demand for infrastructure
services are deeply uncertain in the long term. For example, changes in population and economic
growth both serve to modify demand for infrastructure services. Climate change is undermining the
conventional assumptions of infrastructure designers about the environmental forces to which
infrastructure will be subjected (Milly et al., 2008). Still more uncertain is the role that
technological change will have on patterns of behaviour and demand for infrastructure services. Yet
while a predict and provide approach to infrastructure planning may be out-dated, infrastructure
owners still have to look far ahead into the future and plan for a range of eventualities. Current
methods and models for infrastructure planning and design are not well suited to incorporating
cross-sectoral interdependencies or to coping with the major uncertainties that lie ahead. If the
process of transforming infrastructure is to take place efficiently, while minimizing the associated
risks, it will need to be underpinned by a long-term, cross-sectoral approach to planning for
infrastructure under a range of possible futures.
Thinking has already shifted from infrastructure projects to infrastructure systems. Now the scope is
broadening from the infrastructure systems in a particular sector to embrace a system of systems
approach. Systems of systems are large-scale, integrated, complex systems that can operate
independently but are networked together for a common goal (Jamshidi, 2008). Infrastructure fits
well in this framing, as individual sectors are complex networks that rely on other sectors to provide
infrastructure services.
2 At a crossroads
The infrastructure assets of developed countries in the west and the east are ageing and deteriorating
(Davis et al., 2010), while under the pressure of ever increasing demand. Consider for example the
water infrastructure in London: nearly half of the water mains are over 100 years old, yet the system
is having to cope with increasing demand, due to population growth. In the case of the energy sector,
the UK will need to replace 25% of its electricity capacity in the next decade as it will come to the
end of its life or be phased out in order to meet EU regulations for large combustion plants (SSE,
2011). Further, the need for the UK to increase the proportion of final energy consumption from
renewable sources to 20% to meet binding EU targets (House of Lords 2008; POST, 2008) implies
a transformation of the electricity transmission grids.
Thus, highly developed countries are now at a critical crossroads where the pathways chosen for
new and replacement capacity will both dictate future infrastructure supply security, and have
critical implications for climate change. Yet it is in rapidly industrialising countries that the most
significant infrastructure commitments are now being made, which will be locking in future patterns
of development and carbon emissions. China, for example, spent approximately 6.8% of GDP (Ahya and Gupta, 2010) on transportation and water infrastructure during the 2009/2010 fiscal year, over two and a half times the share spent by the U.S. (The Economist, 2011; Congress of the United States, 2010) in
2007. Now more than ever, it is essential that governments and utility providers have access to new
methods that enable the evaluation of the performance and impact of long-term plans and policy for
infrastructure service provision in a way that accounts for the complexity and uncertainty involved.
3 A global mobilisation
Although neglected over recent decades, infrastructure is now high on the agenda and several
research programs have formed to begin to develop new approaches to addressing the
interdisciplinary challenges of infrastructure provision. For example, the Next Generation
Infrastructures programme, led by the Technical University of Delft in the Netherlands has made
advances using agent-based modelling and serious gaming models to help communicate the
complexities of infrastructure systems. The SMART Infrastructure Facility at the University of
Wollongong in Australia recently began as a major research initiative and is rapidly developing a
suite of simulation models and a Multi-Utility Dashboard for reporting the real-time status of
infrastructure performance.
In the UK, the Infrastructure Transitions Research Consortium (ITRC) formed as a
multidisciplinary collaboration of scientists, engineers, economists and policy-makers, funded by
the UK's Engineering and Physical Sciences Research Council to analyse the long-term dynamics
of interdependent infrastructure systems. Composed of seven universities (Oxford, Cambridge,
Newcastle, Leeds, Cardiff, Southampton, and Sussex), the consortium is creating a new generation
of models and tools that assist policymakers in the evaluation of strategies for infrastructure
provision.
4 Developing a new roadmap
A blueprint for a new conceptual 'system of systems' methodology is beginning to emerge from
the work of ITRC. The methodology enables the evaluation of a wide range of long-term cross-
sectoral strategies for infrastructure provision. In order to account for deep uncertainty in the long-
term, the methodology incorporates a scenario analysis framework to evaluate strategies under
multiple possible futures.
Analysis of strategies for infrastructure provision can be thought of in five main stages (Figure 1).
The first step is to (1) identify the drivers of change, which are the primary exogenous forces that
affect demand for and performance of infrastructure services over relevant timescales: at least two decades into the future, half a century for most civil infrastructures, and even longer to assess the lock-in effects on land use and development. (2) To provide the context for analysis of
infrastructure performance it is then necessary to create long-term future scenarios (i.e. internally
consistent possible futures) from the key drivers. Next, (3) develop strategic models of the
demand for and performance of each infrastructure sector. Strategic models exist for infrastructure
systems, but have to be adapted to operate in a consistent way at a national scale and over long
assessment time-frames. Next, (4) develop transition strategies, which are cross-sectoral long-term
strategies for infrastructure service provision. They are composed of a portfolio of supply-side (i.e.
capacity options) and demand management policies for each infrastructure sector oriented towards a
specific aim. Recognising the inertia in the legacy of existing infrastructure, each strategy starts
with today's infrastructure system, but the strategies transition into the future in contrasting policy
directions. Finally, (5) identify the key common performance metrics, and evaluate each transition
strategy in the context of the scenarios with respect to these metrics. This enables the identification
of robust strategies (i.e. strategies that perform well in multiple possible futures (Lempert, 2002)).
In summary, the methodology creates long-term demand and capacity projections of infrastructure
services over multiple possible futures, and evaluates the performance of long-term strategies for
infrastructure provision.
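Purely as an illustration of this logic (the scenario names, strategy names, metrics and the evaluate function below are placeholders we have assumed, not the ITRC model suite), stage (5) can be thought of as evaluating every transition strategy against every scenario and collecting a common set of metrics from which robust strategies can be identified.

from itertools import product

scenarios = ["low growth", "central projection", "high growth"]              # assumed driver settings
strategies = ["capacity-intensive", "capacity-constrained", "decentralised"]

def evaluate(strategy, scenario, period):
    # Placeholder for the coupled sector models: a real assessment would simulate
    # demand and capacity for each sector over the given period and return the
    # common performance metrics.
    return {"cost": 0.0, "emissions": 0.0, "security_of_supply": 0.0}

results = {}
for strategy, scenario in product(strategies, scenarios):
    results[(strategy, scenario)] = {
        "near term (2010-2030)": evaluate(strategy, scenario, (2010, 2030)),
        "long term (2030-2050)": evaluate(strategy, scenario, (2030, 2050)),
    }

# A robust strategy is one whose metrics remain acceptable across all scenarios.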

Figure 1. An application of the methodology by the ITRC showing the four main stages of identifying the
drivers of change, creating future scenarios through the variations of the key drivers, and developing,
evaluating, and visualizing the performance of long-term cross-sectoral strategies for infrastructure service
provision.
5 Road testing in the UK
In the first year of intensive research, the ITRC applied the conceptual methodology outlined above
to the long-term prospects for national infrastructure in the UK. The primary drivers of change
across all sectors were identified as population growth, economic growth, and energy cost.
Additionally, sector-specific secondary drivers were identified, and included climate change, carbon
emissions targets, and environmental directives and standards. In some cases, secondary drivers
were as influential as the cross-sectoral drivers.
The ITRC methodology has been piloted with only three future scenarios, each representing
different settings of the primary drivers of change to 2050 (Figure 1). Work that is now underway
involves much more extensive sampling of the range of possibilities. The ITRC adapted models for
each infrastructure sector and cross-sectoral demands.
The pilot of the ITRC methodology tested three representative strategies to explore questions of
interest to decision makers. The three transition strategies were labelled:
• capacity-intensive, providing high investment in new capacity to keep up with demand and maintain good security of supply in all sectors;
• capacity-constrained, a low-investment strategy with no increases in the current level of infrastructure investment and an emphasis placed upon demand management measures;
• decentralised, a reorientation of infrastructure provision from centralised grid-based networks to more distributed systems, involving a combination of supply- and demand-side measures.
The ITRC evaluated the performance of the strategies according to the common metrics of cost,
emissions, and security of supply, along with other sector-specific metrics. Infrastructure
performance in the three scenarios was evaluated over the near (2010-2030) and longer (2030-2050)
term.
The results have revealed the limitations of demand management, and the benefits of diversity of
supply that decentralised infrastructure can yield. The analysis has provided the evidence to future-
proof investments by testing their performance in the longer term. Such methods will enable a new
planning paradigm that shifts to the long-term, helping to steer towards sustainable outcomes and
navigate away from undesirable side-effects. The first year of ITRC research also demonstrated that
multidisciplinary collaborations between scientists, engineers, economists and policy-makers must
become commonplace in infrastructure analysis and planning.
6 The way ahead
It would be hubristic, and a misunderstanding of the nature of the complex adaptive systems involved, to
suppose that it is possible, decades in advance, to design a system as complex as the civil
infrastructure for a technologically advanced society, and specify a strategy for phased
implementation. There are too many unexpected contingencies and opportunities that may
materialize in the intervening years. Predicting future technologies in a period when they are rapidly
evolving is not possible. Yet on the other hand it is clear that sophisticated systems that service
society do not arise spontaneously. They require sustained and strategic intent. This is a long-term programme: moments of maximum leverage, when opportunities in multiple sub-systems coincide, do not occur all that often; unless we plan to make them happen, they may not occur at all. The
pathways for reaching sustainable endpoints from the current system state therefore need to be set
out now. This in turn will require further development of tools for design, appraisal and
communication of alternative strategies and new modes of collaborative research.
Acknowledgements
This research was funded by the UK Engineering & Physical Sciences Research Council UK
Infrastructure Transitions Research Consortium (ITRC): Long term dynamics of interdependent
infrastructure systems through grant EP/I01344X/1.
References
OECD (2006) Infrastructure to 2030: Telecom, Land Transport, Water and Electricity, Organisation for Economic
Cooperation and Development, Paris.
ULI and Ernst & Young (2011) Infrastructure 2010 - investment imperative, Urban Land Institute, Washington DC,
USA.
WEF (2011) The global competitiveness report 2011-2012, World Economic Forum, Geneva.
Milly P. C. D. et al. (2008) Stationarity Is Dead: Whither Water Management? Science, 319(573).
Jamshidi M. (2008) System of systems engineering: innovations for the 21st century, John Wiley & Sons.
Davis S. J., Caldeira K., Matthews H. D. (2010) Future CO2 Emissions and Climate Change from Existing Energy
Infrastructure. Science, 329(1330): September 10, 2010.
SSE (2011) Memorandum submitted by SSE, in 8th Report - The UK's Energy Supply: Security or Independence? -
Volume II, Commons Select Committee Energy and Climate Change, London.
House of Lords (2008) The EU's Target for Renewable Energy: 20% by 2020, European Union Committee, (October
2008), Volume I: Report.
POST (2008) Renewable Energy in a Changing Climate, Parliamentary Office of Science and Technology Report
Number 315, London.
Ahya C. and Gupta T. (2010) India and China: New Tigers of Asia, Part III: India to Outpace China's Growth by 2013-
15, Morgan Stanley, August 2010.
The Economist (2011), Life in the Slow Lane, Apr 28th 2011, The Economist Newspaper Limited, London.
Congress of the United States (2010) Public Spending on Transportation and Water Infrastructure, Congressional
Budget Office Pub. No. 4088, Ed. (November, 2010).
Lempert R. J. (2002) A new decision sciences for complex systems. Proceedings of the National Academy of Sciences
of the United States of America 99(7309):May 14, 2002.



Advancement of Natural Ventilation Technologies
for Sustainable Development
Ben R. Hughes 1, John Kaiser S. Calautit 1 and Hassam N. Chaudhry 1
1 School of Civil Engineering, University of Leeds, Leeds, LS2 9JT, UK.
E-mail: B.R.Hughes@leeds.ac.uk

Abstract

Heating, Ventilation and Air Conditioning (HVAC) systems account for up to 60% of domestic buildings' energy consumption [U.S. Dept. of Energy, (2009)]. Natural ventilation offers the opportunity to eliminate the mechanical requirements of HVAC systems by using the natural driving forces of external wind and the buoyancy effect from internal heat dissipation. The wind tower, used in traditional architecture originating from the Middle East, captured air at a higher velocity and delivered it through cool sinks to the building's occupants. Commercial wind towers have been available in the United Kingdom (UK) for the last forty years; recent rising energy costs have seen their implementation into new and existing buildings increase. This research details the technological developments of the wind tower system in the UK and Qatar and discusses the barriers to implementation and the on-going research in this field.

1 Introduction
The way in which energy is produced and consumed has a direct impact on global warming and pollutant emissions. The Kyoto Summit secured a commitment from the majority of countries to establish a global program for CO2 emissions reduction. The major sectors producing CO2 in most countries are the power generation sector, the transportation sector and the building sector. According to the World Business Council for Sustainable Development (WBCSD), buildings account for up to 40% of the world's energy use [WBCSD, (2009)]. Hence, the building sector accounts for a large proportion of primary energy consumption in most countries.
The rapid development of Middle Eastern countries such as Dubai and Qatar has placed them at the top of the global carbon footprint rankings, with Qatar producing 55.4 tons of carbon per person, the highest per-capita footprint in the world. In Qatar and other hot climate countries, air conditioning is a major contributor to CO2 emissions. Heating, Ventilation and Air Conditioning (HVAC) systems account for up to 60% of domestic buildings' energy consumption [U.S. Dept. of Energy, (2009)].
Clearly any technology which reduces HVAC consumption will have a dramatic effect on the overall energy performance of the building. Natural ventilation offers the opportunity to eliminate the mechanical requirements of HVAC systems by using the natural driving forces of external wind and the buoyancy effect from internal heat dissipation. This can be achieved through careful placement of windows, doors and solar gains at the design stage [Lomas, (2007)]. Another device which incorporates both wind driven and buoyancy driven forces is the wind tower. The wind tower was used in traditional architecture originating from the Middle East and captured air at a higher velocity and delivered it through cool sinks to the building's occupants. Commercial wind towers have been available in the United Kingdom (UK) for the last forty years; recent rising energy costs have seen their implementation into new and existing buildings increase.
This research details the technological developments of the wind tower system in the UK and discusses the barriers to implementation and the on-going research in this field. In 2010 the authors formed an international research group to reduce energy consumption in the domestic sector by integrating novel low energy cooling devices in Qatari residences [Ghani and Hughes, (2010)]. Presented here is a detailed insight into the future advancement of the wind tower system, highlighting the vast scope for HVAC savings across both the commercial and residential sectors.

2 Previous related work
In contemporary wind towers, the principles of passive stack and wind effect are researched in the
design of the stack. Hughes and Ghani (2008) carried out work on determining the overall
feasibility of sustainable development by decreasing the running expenses of buildings. A passive
ventilation device known as windvent was used in the computational fluid dynamics based
numerical analysis of wind velocities ranging between 1-5 m/s. The investigation confirmed that
even at low wind velocities, the windvent was able to provide the desired rate of fresh air supply
into the building, hence concluding that the device is suitable for sustainable ventilation systems.
Further study by Hughes and Ghani (2008) included the performance analysis of the Windvent
device by shifting the external angle of the louvre between 0 and 45 degrees in order to determine
the point of highest efficiency in terms of pressure and velocity. The work incorporated a CFD
based numerical code with the inlet wind velocity of 4.5 m/s. The results confirmed that an angle of
35 degrees was required for optimized Windvent louvre performance for the input parameters. The
study also revealed that there is an increase in velocity of a given space for a reduction in trailing
edge stall.
Hughes and Ghani (2009) studied the consequences for the indoor environment of using windvent dampers, which work on the principle of a difference in pressure gradient. The investigation aimed to highlight the optimum operating conditions for this passive ventilation device. Computational fluid dynamics was used for numerical analysis of damper angles ranging between 0 and 90 degrees. The results showed that the best operation occurs in the range of 45-55 degrees for mean U.K. wind velocities.
Hughes and Ghani (2011) further calculated the capability of a passive windcatcher device to
achieve the required delivery rates of fresh air intake. The numerical model was based on CFD code
and included a standard passive stack device along with a simulated low-voltage axial fan. The results established that a low-power fan is required to meet the British Standard requirement of 20Pa for the minimum ventilation rates. The CFD analysis found the top position to be the most effective location for the fan. The computational work was compared with experimental testing to confirm the investigation.
Hughes et al. (2011) carried out a comprehensive review of sustainable heating, ventilation and air-
conditioning technologies in modern day dwellings. The study focused on comparing various
cooling techniques used to condition the air in buildings ranging from natural ventilation to the
advanced cyclic loops involving desiccant cooling processes based on their energy consumption and
relative expenditure requirements (Figure 1). The work revealed that modern wind towers require minimal external power consumption for their operation.

Figure 1 Comparison of power consumption and financial cost on the log scale (Hughes et al., 2011)
System                               Commercial price (£)    Power consumption (W)
Wind tower                           3,000                   200
Packaged terminal air conditioner    158                     1,100
Air handling unit                    414                     20,000
Desiccant cooling                    155,767                 55,000
Absorption cooling                   13,380                  11,000
Hughes et al. (2012) further highlighted the different cooling techniques integrated with wind tower
systems to improve its thermal performance. Key parameters including the ventilation rates and
temperature were evaluated in order to determine the viability of implementing the devices for their
respective use. The results showed that the highest temperature reductions were achieved from
incorporating evaporative cooling techniques into the wind tower such as wetted column (clay
conduits) and wetted surface (cooling pads). The temperature reductions were found to be in the range of 12-15 K. The study also highlighted that the addition of cooling devices inside the wind tower reduces the air flow rates and hence the overall efficiency of the device.
Calautit et al. (2012) re-designed a traditional row house to be adapted to the hot and arid climate of
the Middle East. The vernacular design features include a number of cooling devices such as an
open courtyard, wind towers and heat-storing building materials to reduce overheating during the
summer months. The study investigated the performance of a wind tower incorporated into the row
house to replace the traditional ventilation devices using computational fluid dynamics (CFD)
modelling. The study highlighted the ways in which the resulting natural air flows in the house
operated using the ANSYS Fluent CFD tool to develop a numerical model of an optimized wind
tower system. Achieved ventilation rates and temperature distribution inside the structure were
investigated. The results demonstrated that the proposed wind tower configuration was able to
increase the average indoor air velocity by 63%. An improved airflow distribution is observed
inside the modified row housing model.
3 Technological development of wind towers
Wind towers have existed in various forms for centuries as a non-mechanical means of providing indoor ventilation; energy prices and climate change agendas have refocused engineers and researchers on the low carbon credentials of modern equivalents. Conventional and modern wind tower architecture can be integrated into the designs of new buildings to provide thermal comfort without the use of electrical energy. Figure 2 illustrates ventilation through a four-sided wind tower device. The wind tower is divided by partitions to create different shafts. One of the shafts functions as an inlet to supply air, while the other shafts work as outlets to extract the warm and stale air out of the living space. The temperature difference between the micro and macro climate creates pressure differences that result in air currents (Cheuk-Ming and Hughes, 2011).

Figure 2 A flow diagram representing ventilation through a multi-direction wind tower device
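The driving pressures described above can be approximated with standard envelope-flow relations. The short sketch below is illustrative only and is not part of the CFD methodology used in the studies reviewed here: it combines an assumed wind pressure coefficient difference with a stack (buoyancy) term to estimate the airflow through a wind tower opening, and all numerical values are assumptions chosen for illustration.

```python
import math

def wind_tower_flow(u_wind, dT, height, area, rho=1.2, cp_diff=0.7, cd=0.6, t_out=293.0):
    """Rough estimate of airflow (m^3/s) through a wind tower opening.

    Combines wind-driven pressure (0.5*rho*dCp*U^2) with a stack term
    (rho*g*h*dT/T) and converts the total to a volume flow with the
    orifice equation Q = Cd*A*sqrt(2*dp/rho). Illustrative values only.
    """
    dp_wind = 0.5 * rho * cp_diff * u_wind ** 2   # wind-driven pressure, Pa
    dp_stack = rho * 9.81 * height * dT / t_out   # buoyancy (stack) pressure, Pa
    dp_total = dp_wind + dp_stack
    return cd * area * math.sqrt(2.0 * dp_total / rho)

# Example: 4 m/s wind, 5 K indoor-outdoor difference, 3 m tall tower, 0.25 m^2 opening
q = wind_tower_flow(u_wind=4.0, dT=5.0, height=3.0, area=0.25)
print(f"Estimated supply rate: {q:.2f} m^3/s ({q*1000:.0f} L/s)")
```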
The design of wind tower systems has traditionally been based on topography, climatic conditions, the personal experience of architects and the social position of the occupants, with variation in height, air-channel cross-section, number, size and positioning of openings, form, construction materials, and placement of the tower with respect to the building. The efficiency of wind towers relies upon creating the maximum pressure difference between the air inlet openings and the exhaust of the passive device (Elizalde and Mumovic, 2008). The air movement around the structure will determine the size, location and form of the wind tower and its openings, so as to maximise the pressure differential. Figure 3 illustrates different configurations of traditional wind tower systems in Yazd, Iran.


Figure 3 Traditional wind towers with different number of openings (a) one-sided, (b) two-sided, (c) four-
sided, (d) octahedral (Hughes et al., 2012)
Naturally ventilated buildings do not require additional energy to move the airflow within a structure. However, the cooling capabilities of conventional wind towers, which depend on the structure design itself, are limited. Therefore it is essential to cool the air in order to improve the thermal comfort of the occupants (Hughes et al., 2012). Figure 4 shows a concept design of a wind tower system integrated with cooling devices. Evaporative cooling pads sit at the top of a wind tower with a pump re-circulating water over them. Hot air is passed through these pads and cooled by the evaporation of the water. The cool moist air is denser than the ambient air and sinks down the tower into the enclosed space. In order for the cool air to flow in, hot air must be released. A solar chimney is located directly opposite the wind tower to establish effective cross-flow ventilation inside the structure and exhaust the stored hot air using buoyancy-driven forces.
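The temperature drop across such evaporative pads is commonly characterised by a saturation effectiveness relating the dry-bulb and wet-bulb temperatures. The sketch below is a minimal illustration of that relation; the effectiveness value and the temperatures are assumed and are not taken from the studies discussed here.

```python
def evaporative_outlet_temp(t_dry, t_wet, effectiveness=0.8):
    """Direct evaporative cooling: T_out = T_dry - eff * (T_dry - T_wet).

    effectiveness (0-1) describes how closely the pad approaches the
    wet-bulb temperature; 0.7-0.9 is typical for wetted pads (assumed here).
    """
    return t_dry - effectiveness * (t_dry - t_wet)

# Example: hot, dry external air at 42 C with a 22 C wet-bulb temperature
t_supply = evaporative_outlet_temp(t_dry=42.0, t_wet=22.0)
print(f"Supply air temperature after the pads: {t_supply:.1f} C")  # 26.0 C
```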


Figure 4 Concept design of a passive wind tower integrated with different cooling devices (Hughes et al., 2012)
Modern architects and engineers have integrated the principles of the traditional wind tower with modern technology to increase the quality and efficiency of the supplied air. Modern wind towers provide natural ventilation and light to any space in a building. The commercial wind tower system, a top-down, roof-mounted, multi-directional device used for naturally ventilating buildings, further enhances the portability of the device in comparison to its traditional counterpart (Hughes and Ghani, 2008). Modern wind towers are usually compact and smaller in size compared to traditional wind towers, as shown in Figure 5.
The device extends out from the top of a structure to catch the wind at roof level and channels fresh
air through a series of louvers into the enclosed space under the action of air pressure, and
simultaneously the negative pressure extracts stale air out of the room. Unlike the traditional wind
tower systems, air is supplied to the enclosed space through the diffusers located at ceiling level.
Hence, more free space is available for ventilation on the ceiling than on the corresponding floor of
equal area.


Figure 5 Modern roof mounted circular and square wind tower systems (Hughes et al., 2012)
In order to augment the airflow achieved by current wind tower practice, extensive research has recommended that a passive-assisted natural ventilation system may be employed to provide a continuous supply of fresh air without affecting the energy requirements of the wind tower (Hughes and Ghani, 2011). The hybrid system incorporates a low-powered fan installed inside the wind tower, with the fan functioning when required to assist the flow of air between the building exterior and interior via the ventilator. The solar driven fan also functions as an exhaust device for extracting stale air out of the building. It provides a constant supply of ventilation air, even when there is no wind. A zero energy solution is guaranteed, allowing the building's natural ventilation design to be optimized and ensuring that a low energy natural ventilation strategy is maintained. Figure 6 shows a windvent device incorporating solar powered internal fans. The solar fan can be used to overcome excessive heat gains and boost the air movement through the wind tower when extra ventilation is required.


Figure 6 Schematic of a wind tower system integrated with a solar powered fan (Hughes et al., 2012)

Furthermore, by integrating heat transfer devices within the wind tower system itself, the overall energy consumption levels can be reduced. The cyclic process operates by inducing warm air through cooling tubes, which cools the air stream. Heat transfer devices are installed inside the passive terminal of the wind tower unit, offering the potential to achieve minimal restriction of the external air flow stream while ensuring maximum contact time, thus optimising the cooling duty of the device (Figure 7).

Figure 7 Schematic of the circular wind tower model with the proposed horizontal and vertical heat pipe
configurations.
4 Results and Discussion
The preliminary studies comprised a heat exchanger system integrated within the control volume of a modern 4x4 wind tower device in order to assess the thermal performance of the system. A horizontal profile plane was constructed immediately downstream of the horizontal heat pipes (using pure water as the working fluid) within the wind tower control volume in order to analyse the air temperature differential. This confirmed the heat transfer performance: an air temperature reduction of 14 K from the source temperature of 315 K was obtained (Figure 8).
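To put the reported 14 K temperature drop in context, the corresponding sensible cooling duty can be estimated from a simple heat balance. The sketch below is illustrative only; the volume flow rate and air properties are assumed values, not results from the study.

```python
def sensible_cooling_duty(volume_flow, delta_t, rho=1.2, cp=1005.0):
    """Sensible cooling duty Q = rho * V_dot * cp * dT (watts)."""
    return rho * volume_flow * cp * delta_t

# Example: an assumed 0.1 m^3/s through the heat pipes, with the 14 K drop reported above
duty = sensible_cooling_duty(volume_flow=0.1, delta_t=14.0)
print(f"Approximate cooling duty: {duty/1000:.1f} kW")  # ~1.7 kW
```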


Figure 8 Cross-section of the wind tower source channel temperature differential
Figure 9 illustrates the velocity streamlines on a middle plane in the test room model integrated with a wind tower device. From the diagram it is seen that the air flow enters from the right side of the enclosure (velocity inlet). The airflow splits at the windward side of the structure, with some air entering the wind tower openings and the remaining flow shearing across the structure and exiting at the right side of the enclosure (pressure outlet). Short-circuiting of the air is observed: some of the flow exits through the leeward quadrant without entering the microclimate.

Figure 9 Velocity streamlines of a cross sectional plane in the test room with an inlet velocity of 4 m/s
(Modern wind tower)
Figure 10 compares the calculated air supply rates achieved by the different wind tower configurations. It is observed that the benchmark and vertical arrangement models barely supplied the minimum rate at an external wind speed of 1 m/s. It was found that the one-sided wind tower incorporating the horizontal heat pipe arrangement delivered the greatest air supply into the room at zero air incidence angle. However, the ventilation performance of the
device is significantly reduced as the incident wind direction moves away from its design range.

Figure 10 Comparison of the air supply rates of the different configurations at different wind speeds.
Figure 11 shows a comparison between the thermal performance of traditional evaporative cooling and heat transfer devices inside a test channel (controlled test). From the illustration (Figure 11a) it is seen that there is a considerable decrease in air temperature, due to the absorption of heat by the evaporative cooling. The water is sprayed at the top section of the channel (h = 9 m); the temperature decreases very rapidly and then increases slightly due to the walls of the wind tower. From Figure 11b, the temperature reduction is greater on the left side of the channel, where the air velocity is significantly lower, which increases the contact time between the heat transfer devices and the air stream.

Figure 11 Temperature contour lines of a cross sectional plane in the test channel (controlled test): (a)
evaporative cooling (b) heat transfer devices
Figure 12 compares the thermal performance of a wind tower system incorporated with evaporative
cooling and heat transfer devices.

Figure 12 Comparison of the variation of temperature of airflow from entering to exiting.
Figure 13 illustrates the temperature distribution inside a test room integrated with a modern wind tower device. It is observed that the air temperature is reduced as it approaches the cross-dividers with the vertical HTD arrangement. The average temperatures inside the microclimate are 296.2 K and 295.8 K, with the macroclimate temperature set at 310 K.


Figure 13 Temperature distributions inside the test room with a modern wind tower incorporating the
vertical HTD arrangement
Figure 14 illustrates the temperature distribution inside the test room integrated with a one-sided wind tower device with the horizontal HTD arrangement. An air temperature reduction is observed inside the microclimate; an average temperature of 295.8 K is obtained inside the model with the outdoor temperature set at 310 K.

Figure 14 Temperature distributions inside the test room with a modern wind tower incorporating the
horizontal HTD arrangement
5 Future work
For temperate climatic conditions, an effective alternative for reducing building mechanical loads is to introduce heat recovery systems such as a heat pipe heat recovery unit or a heat pipe heat exchanger, with scope for reverse cycles to be developed for regions with hotter climates. Conceptual wind tower systems with integrated heat recovery channels provide a means of recovering heat from the warm exhaust air stream, allowing the incoming cold air stream to enter at relatively higher temperatures for superior thermal comfort in comparison to a conventional wind tower system (Figure 15).

Figure 15 Schematic of a wind tower system with integrated heat recovery channel
6 Conclusions
The integration of wind towers as a low energy alternative to HVAC systems has vast potential, and the technological advancements to date have been presented here. The introduction of low energy cooling towers into Qatar will provide a milestone in the development of advanced natural ventilation technologies. The UK market requires an additional heat recovery system to make the wind tower a viable alternative, and early results indicate this is an imminent development.
References
Calautit, J.K. Hughes, B.R. & Ghani, S.A. (2012). A Numerical Investigation into the Feasibility of Integrating Green
Building Technologies into Row Houses in the Middle East, Architectural Science Review 55, 1-18.
Cheuk-Ming, M. & Hughes, B.R. (2011). A Study of Wind and Buoyancy Driven Flows Through Commercial Wind
Towers, Building and Environment 43, 1784-1791.
Elizalde, T. & Mumovic, D. (2008). Simulated Performance of Windcatchers in an Urban Environment, Conference on
Passive and Low Energy Architecture, Dublin
Ghani, S.A.A and Hughes B.R (2009). Integration of passive ventilation and novel cooling systems for reducing air
conditioning loads in buildings NPRP 09 - 138 - 2 059, Qatar National Research Fund 3rd Cycle.
Hughes, B.R. & Ghani, S.A. (2009). A numerical investigation into the effect of windvent dampers on operating
conditions, Building and Environment 44, 237-248.
Hughes, B.R. & Ghani, S.A. (2008). Investigation of a windvent passive ventilation device against current fresh air
supply recommendations, Energy and Buildings 40, 1651-1659.
Hughes, B.R. & Ghani, S.A. (2011). A numerical investigation into the feasibility of a passive-assisted natural
ventilation stack device, International Journal of Sustainable Energy 30, 193-211.
Hughes, B.R. & Ghani, S.A. (2008). A numerical investigation into the effect of Windvent louvre external angle on
passive stack ventilation performance, Building and Environment 45, 1025-1036, 2010.
Hughes, B.R. Calautit J.K. & Ghani, S.A. (2012). The Development of Commercial Wind Towers for Natural
Ventilation: a review. Applied Energy 92, 606-627.
Hughes, B.R. Chaudhry H.N. & Ghani, S.A. (2011). A review of sustainable cooling technologies in buildings,
Renewable and Sustainable Energy Reviews 15, 3112-3120.
Lomas, K.J (2007). Architectural design of an advanced naturally ventilated building form. Energy and Buildings 39,
166-181.
Transforming the market: Energy efficiency in buildings. World Business Council for Sustainable Development. April
2009.
U.S Department of Energy. Energy Efficiency and Renewable Energy.
http://www1.eere.energy.gov/buildings/commercial/hvac.html. Last accessed online (26/08/2009).





Using an Urban Futures tool to analyse complex long-term
interactions between technological, human and natural systems
Dexter V.L. Hunt 1, Ian Jefferson 1 and Chris D.F. Rogers 2
1 School of Civil Engineering, University of Birmingham, Edgbaston, Birmingham, B15 2TT
E-mail: huntd@bham.ac.uk; i.jefferson@bham.ac.uk; c.d.f.rogers@bham.ac.uk
Abstract
Determining an urban environment's built form, whilst engineering the underlying infrastructure upon which it depends, has all too often been predicated on historical trends, legacies and hindsight. In the 21st Century, increased interconnectivity and interdependence requires management of complex systems (i.e. coupled human, natural and technological) where decision-making and planning for the future is far from straightforward. Moreover, a greater awareness of sustainability and resiliency issues (e.g. engineering within resource constraints and climate change) now requires consideration of a range of possible city changes and assessment of their short- and long-term impacts. Decision-making in the face of such uncertainty requires interdisciplinary foresight tools that facilitate understanding and communication between diverse ranges of stakeholders.
This paper introduces the Urban Futures (UF) Excel-based tool derived for conducting long-term future-scenarios-based analyses. As a freely available open source tool it allows an array of end-users (from academics to practitioners) to interrogate and compare up to five different scenarios at a time for both water and energy. The user is able to easily change weather patterns, building sizes, user demands (through user behaviour and technological efficiency) and supply sources (including recycling and re-use options), and to assess all inflows (e.g. water, gas and electricity supplies) and outflows (wastewater, stormwater and emissions) both daily and annually, from the scale of one household to a whole development. As such it allows better definition of the behaviour of nodes in order that behaviour in a system (a combination of links and nodes) can be modelled. When used in conjunction with the UF methodology it has previously allowed users to test the resilience of sustainability solutions in four plausible, relevant, yet highly diverse scenarios using five key drivers of change (i.e. Social, Technological, Economic, Environmental and Political). This paper will show how its use can be extended to analyse long-term complex interactions between technological, human and natural systems and thereby facilitate decision making surrounding the provision of city infrastructure.


1 Introduction
Earth Systems Engineering, ESE (Allenby, 1999, 2005; Schneider, 2001; Hall & O'Connell, 2007), a discipline in which the key concept is to examine environmental problems with systems analysis, is vital in the quest to examine the potential for sustainable adaptation to global change. Adopting a systems analysis approach to interrogate complex city centre landscapes and explore the potential for sustainable provision of infrastructure assets would appear to be wholly appropriate. According to Gibson (1991) this should consist of six steps:
1. Determine goals of system
2. Establish criteria for ranking alternative candidates
3. Develop alternative solutions
4. Rank alternative candidates
5. Iterate
6. Action
Allied to the above is the development of the following:
(a) A Descriptive scenario (a baseline that tells us where we are, how things got this way and what is good and bad herein, i.e. what should be kept and what might be changed);
(b) Normative scenarios (future scenarios that tell us where we want to be or where a system should operate under ideal conditions); and
(c) Transitive scenarios (an implementation process/strategy, i.e. how to get from the Descriptive scenario to the Normative scenario).
The Normative scenario (of which there may be many) requires complex analysis, using numerous
well-defined variables and datasets, and therefore if system changes are to be implemented, their
impacts must be fully understood and clearly communicated to a range of stakeholders. As such any
tool that can facilitate this process by allowing for a system to be pushed and pulled in real time,
whilst providing instantaneous graphical interpretations of impacts, will result in more innovative
and sustainable solutions being implemented whilst allowing for trade-offs to be managed to best
effect. [Lombardi et al (2011) illustrated that elucidating tensions and trade-offs early on (i.e. at the
visioning stage) can lead to more sustainable decision-making.] These are key elements to the
advancement of ESE. This paper outlines the distinct links between ESE and the broader subject
area of futures analysis per se and provides a new methodology that facilitates moving from
qualitative explorative scenario narrative development to quantitative ESE scenario analysis
(Section 2). An Urban Futures tool, developed to facilitate quantification (with respect to urban
flows), is described in detail (Section3) and implications for this new methodological approach are
drawn in the concluding discussion (Section 4).
2 Moving from Qualitative to Quantitative analysis
Explorative scenarios form an underlying feature of futures analysis; whilst not adopted directly within an ESE approach, this section will show how they are inextricably linked. These scenarios allow users to step inside worlds that are possible, plausible, relevant, and in many cases distinctly different from the world we live in today. There are many ways to produce explorative scenarios, the most common of which is the axes-of-uncertainty approach (Hunt et al, 2012). Axes are selected by considering key drivers of change (e.g. STEEP: Social, Technological, Economic, Environmental and Political; or PESTER, where Regulation is also included; Stout, 2002) and by selecting aspects of greatest importance and uncertainty. In Figure 1 future demand is selected as it is the key influencing factor on both supply and disposal streams (flows) for food, water, energy, waste and transport within any development and at any scale. By considering the two most influential drivers of change therein, i.e. S (changes in behaviour) and T (changes in technological efficiency), four quadrants can be formed, each of which would traditionally be referred to as an explorative scenario. In qualitative terms Quadrant 1 represents a world where user behaviour (e.g. duration of
shower) and technology (e.g. flow rate of shower) have changed for the worse (e.g. an 8 minute power shower). Quadrant 2 represents a world where technological efficiency is worse but user behaviour has improved significantly (e.g. a 4 minute power shower). Quadrant 3 represents a world where user behaviour is worse but steps to improve technological efficiency have been employed (e.g. a low flow shower). Finally Quadrant 4 represents a world where the best technological efficiencies are accompanied by step-changes in user behaviour (e.g. a 4 minute low flow shower). Figure 1 defines a very useful immersive futures space that facilitates top-down interdisciplinary communication.
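The shower example above can be made concrete with a few lines of arithmetic. The sketch below is offered purely for illustration: it assigns assumed flow rates to the power shower and low flow shower cases (the durations follow the text; the flow-rate values are not from the paper) and computes the daily per-person water use implied by each quadrant.

```python
# Assumed flow rates (L/min); durations (min) follow the quadrant descriptions above.
POWER_SHOWER = 13.0   # assumed flow rate for a power shower
LOW_FLOW = 6.0        # assumed flow rate for a low flow shower

quadrants = {
    "Q1: worse behaviour, worse technology":   (8, POWER_SHOWER),
    "Q2: better behaviour, worse technology":  (4, POWER_SHOWER),
    "Q3: worse behaviour, better technology":  (8, LOW_FLOW),
    "Q4: better behaviour, better technology": (4, LOW_FLOW),
}

for name, (duration_min, flow_l_per_min) in quadrants.items():
    litres_per_day = duration_min * flow_l_per_min  # one shower per person per day assumed
    print(f"{name}: {litres_per_day:.0f} L/person/day")
```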

Figure 1 Qualitative analysis of demands using an axis-of-uncertainty approach
Sustainability, a key thread of ESE analysis, might traditionally be associated with Quadrant 4 alone; here technological efficiency improves and is accompanied by similar (willingly adopted) step-changes in user behaviour, i.e. both work in parallel. In contrast Resilience, not formally part of
ESE analysis, requires much broader consideration, i.e. a deeper understanding of the likely impacts that could occur if society moves into any of the four quadrants. This provides a clear visual representation of the distinction that exists between sustainability and resilience, something that is not clearly defined within the literature. Moreover, it emphasises a critical point: what is sustainable may not be resilient if the world unexpectedly moves in a different direction to the one we expect or have planned for. This is why the Urban Futures (UF) long-term explorative scenarios for 2050
have proved so useful in testing the resilience of engineering solutions adopted today in the name of
sustainability (Rogers et al., 2012). Adapted from GSG scenarios and tailored for UK analysis
(Lombardi et al., 2012, Hunt et al, 2012a) they define a clear set of archetypal features which are
evident within >280 scenarios (Hunt et al., 2012a). They are particularly useful because they
provide clarity on how the key drivers of change can push or pull us toward, or away from, each
future. For example, behaviour change in NSP is exacted through free will, technological changes
in PR are enforced through tightening of policy measures, and free market economics in MF allows
unchecked user behaviour and technological efficiency to get worse. All of this information
enriches a futures analysis, whether explorative in nature, or ESE specific. Hunt et al. (2012a)
suggested these scenarios should be located (albeit qualitatively) within a quadrant diagram
allowing for greater specification and therefore development of numerical type analysis (Figure 2).
The globe refers to a descriptive scenario, as in ESE, and shows where we are today.

Figure 2 Possibility space and explorative scenarios used for UF water demand analyses
Engineers inherently seek to quantify what they see and, faced with two (nominally scaled) axes, they are likely to question the exact location of any world within each of these quadrants, rather than leaving their placement down to qualitative judgement, as exemplified in Figure 2. Therefore within this paper a method is proposed for defining a possibility space more clearly, in this case specific to current and future demands. The first step is to adopt an exact scale of change for both axes. For this paper a scale running from a 100% increase in demand to a 100% decrease in demand is
adopted (Figure 3). This is not inappropriate given that it allows a factor-of-four increase and decrease respectively to be considered. The second step is to define a boundary (denoted by a thick black line in Figure 3) between demand increases (shaded region above the line) and demand decreases (non-shaded region below the line); this is where no change in user demand occurs. The descriptive scenario shows where we are now, our baseline as it were. Scenario 1 represents a world where user behaviour has improved by 40% (perhaps due to people's free will or through policy measures requiring metering) but technological efficiency has decreased by 60% (perhaps driven by ageing technologies and system leakage in the case of water), meaning that no overall change in demand has occurred. In Scenario 2 these % changes are reversed on each axis, with the same overall effect, i.e. demand remains unchanged. This is an important line to define as it shows that any efforts made to reduce demands in one direction can be directly counteracted if they are not kept in check in the other direction.

Figure 3 Moving from qualitative to quantitative analysis for demands
The third step is to define an equation that can be used for plotting generic contours with any pairing of axes (Equation 1).

A = (1 + X/100) × (1 + Y/100) × 100 (1)
where A is the % change contour value, and X and Y respectively represent the % changes on the horizontal and vertical axes. Figure 4 shows the resulting contours when using values of A ranging from 400% to 0%. When A = 100 (i.e. the 100% contour) no change in household demand occurs; when A = 400 (i.e. the 400% contour) household demands quadruple; when A = 0 (i.e. the 0% contour) demand has been removed completely, whether through technology or user behaviour changes.
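A few lines of code make the contour relation easy to explore. The sketch below, assuming the multiplicative compounding form of Equation 1 given above, evaluates the combined % change in demand for some example axis pairings and finds, for a chosen user-behaviour change, the technological-efficiency change needed to stay on a target contour (e.g. the 20% contour corresponding to an 80% reduction); the example values are illustrative only.

```python
def demand_contour(x_pct, y_pct):
    """Combined demand (% of baseline) from % changes on the two axes (Equation 1)."""
    return (1 + x_pct / 100.0) * (1 + y_pct / 100.0) * 100.0

def y_for_target(x_pct, target_pct):
    """Technological-efficiency change (%) needed to reach a target contour
    for a given user-behaviour change (%). Undefined when x_pct = -100."""
    return target_pct / (1 + x_pct / 100.0) - 100.0

# Example axis pairings (user behaviour %, technological efficiency %)
for x, y in [(0, 0), (100, 100), (-40, 60), (-100, 50)]:
    print(f"X={x:+4d}%, Y={y:+4d}%  ->  demand = {demand_contour(x, y):.0f}% of baseline")

# Staying on the 20% contour (an 80% reduction) if behaviour alone improves by 50%
print(f"Required technology change: {y_for_target(-50, 20):+.0f}%")  # -60%
```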

Figure 4 A fully quantified generic possibility space (contours at 20% spacing)
This generic plot can then be used for analysing Explorative, Normative and Transitive type
scenarios. A long-term Normative scenario might be the achievement of 80% reduction in resource
flows (or emissions) within cities (Figure 5). This proves extremely useful because it shows
explicitly that this level of performance can be achieved in numerous ways along the 80% reduction
contour.


Figure 5 Examples of 80% demand reduction scenarios (A, B, C, D, E)

What is most important is that this also highlights that city wellbeing is not just about achieving a specified level of performance (e.g. 30 l/person/day) in the future. The way in which that performance is achieved matters, and requires a deeper understanding of how levels of performance are being (or could be) achieved. For example, it might be argued that C is sustainable due to the dual role of technology and behaviour, but could this be justified for A, B, D and E?
Figure 6 shows the transitive steps (long-term pathway) that might be adopted in order to reach Scenario D in Figure 5. This very much mirrors current UK policy and some of the UK benchmarking systems such as the Code for Sustainable Homes, and yet simply ignores the influence (and role) user behaviour could play (see Hunt et al, 2012b). In other words Di to Div could just as easily be transposed onto the horizontal axis. By creating a possibility space and making it quantitative in nature it is possible to perform meaningful and detailed quantitative scenario analysis. Such an application is recommended for ESE, a scientific approach which requires quantifiable analysis. The advantage of this multi-layered possibility space approach is that it can be presented to a range of audiences (i.e. engineers, social scientists, planners, developers and the general public) in a range of forms that are internally consistent and translatable. It makes what engineers might consider intangible (Figure 1) coherent, allowing for detailed numerical investigation, and it allows what might be considered complex (Figure 4) to be simplified, engaging audiences that are happy with simple messages.

Figure 6 Descriptive, Transitive and Normative scenarios within demand possibility space

Table 1 Plausible demand quantification for Descriptive, Transitive and Normative scenarios

Scenario name (Type) | Nominal long-term timeline | Specified reduction (%) | Internal water use (l/person/day) | Energy use for heating (kWh/m2/yr) | Waste production (kg/person)
D-i (Descriptive) | 2012 | 0 | 150 | 229 | 350
D-ii (Transitive) | 2015 | 20 | 120 | 183.2 | 280
D-iii (Transitive) | 2020 | 40 | 90 | 137.4 | 210
D-iv (Transitive) | 2030 | 60 | 60 | 91.6 | 140
D-v (Normative) | 2050 | 80 | 30 | 45.8 | 70

Having developed this methodological framework, it was recognised that the complexities of numerical analysis therein would require an integrated platform that allowed academics, engineers, planners, developers or even end-users to manipulate and test (in isolation and combination) the impacts of user behaviour and technological efficiency. Section 3 will now introduce the UF tool that is being developed for such a purpose.
3 Quantitative Analysis through an Urban Futures tool
Having defined a framework from which to qualitatively define, then quantitatively derive a future
possibility space for scenario-based analysis (Figures 1 to 6), this section will describe the Urban
Futures (UF) Tool; an excel-based (soon-to-be freely available) open-source tool capable of
providing detailed quantitative analysis therein. The current version of the UF tool (v1.18) includes
a range of advanced excel features such as macros and Visual Basic (VBA). The tool was originally
derived as part of a broader research theme which looked to estimate future utility infrastructure
requirements within the broad sphere of sustainable use of underground space. Its main deliverable
was the provision of a quantitative evidence-base for testing the resilience of a range of engineering
solutions through explorative future scenario analysis. The Excel platform was chosen due to its widespread adoption and ease of use, allowing for modification and upgrading as new technologies
emerge and as datasets age. In deriving the tool it was firstly deemed necessary to map (at a
fundamental level) the interdependencies which exist between what is built above ground and what
utility infrastructure is required below ground. The flow model shown in Figure 7 provides the basis
upon which the UF tool was derived. It illustrates effectively to engineers, planners, academics and
the general public a range of complex interactions, from where the UF tool can test capacity and
functionality requirements (e.g. performance) of engineered solutions above and below ground. In
order to facilitate this process further the system was necessarily broken down into micro-nodes (i.e.
an individual building) and links (i.e. infrastructure connections for flows between micro-nodes), as
illustrated in Figure 7. For this example the micro-node is a domestic dwelling. The UF tool allows users to quantify (and better understand) the diversity (and impact) of operating conditions (i.e. technological, human and natural) that may exist at a micro-node in current and future world situations.
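The micro-node and link abstraction described above can be expressed very compactly. The sketch below is an illustrative data structure only (the UF tool itself is implemented in Excel/VBA); the flow names and figures are assumptions chosen to echo the water and wastewater flows of Figure 7.

```python
from dataclasses import dataclass, field

@dataclass
class MicroNode:
    """A single building with named inflows and outflows (units arbitrary, per day)."""
    name: str
    inflows: dict = field(default_factory=dict)    # e.g. mains water, gas, electricity
    outflows: dict = field(default_factory=dict)   # e.g. wastewater, stormwater, emissions

@dataclass
class Link:
    """An infrastructure connection carrying one named flow to a set of micro-nodes."""
    source: str
    target: str
    flow: str
    capacity: float

def required_capacity(nodes, flow):
    """Aggregate a named flow over all micro-nodes, e.g. to size the link serving them."""
    return sum(n.inflows.get(flow, 0.0) + n.outflows.get(flow, 0.0) for n in nodes)

# Two illustrative dwellings served by one mains water link
homes = [
    MicroNode("dwelling-1", inflows={"mains_water": 300.0}, outflows={"wastewater": 280.0}),
    MicroNode("dwelling-2", inflows={"mains_water": 450.0}, outflows={"wastewater": 420.0}),
]
mains = Link("water_network", "development", "mains_water",
             capacity=required_capacity(homes, "mains_water"))
print(f"Mains water link capacity needed: {mains.capacity:.0f} L/day")
```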

Figure 7 A Micro-node with inflows, demands and outflows (simplified in top left).
Its adoption as an ESE tool appears to be wholly appropriate, partly due to the simple operating procedures that have been refined over a period of two years, but also due to its ability to interrogate (in isolation and combination) the changing effect of a significant number of system variables that affect the long-term performance and capacity requirements of nodes and (ultimately) links. Currently the UF tool allows the user to interrogate:
i. Changing climates (i.e. different locations)
ii. Climate changes (Low, Medium and High variants)
iii. Scale (individual to development)
iv. Building type (i.e. houses and offices)
v. Floor area, roof area and type (i.e. sloped/flat)
vi. Occupancy rates (i.e. single and multiple occupancy)
vii. User technologies (e.g. cookers, fridges, dishwashers, showers, taps, WCs)
viii. User behaviour (e.g. WC flushes per day, TV watching per day)
ix. Energy demands (thermal and non-thermal)
x. Fuel supply options (i.e. electricity vs gas)
xi. Energy supply technologies (mains, solar thermal, PV, wind, GSHP, CHP)
xii. Water demands (potable vs non-potable)
xiii. Water supply and reuse/recycling options (mains, RWH and GW)
xiv. RWH tanks (sizing and storage volumes)
xv. Garden watering requirements (including effect of crop type).
xvi. Benchmarking (e.g. CSH)
The added value of the UF tool, in terms of advancing ESE, is its ability to easily assess an infinite
arrangement of inflows, user demand profiles and outflows (e.g. gas inflow and storm water and
waste water outflow). This is a pre-requisite for this type of analysis because system changes may
occur in isolation or combination due to the high level of interconnectivity that can be found within
the many networks we now employ. This is a necessary pre-cursor to any full system analysis that
considers infrastructure capacity requirements for micro-nodes and links. Previous research has
shown that users find it difficult to deal with more than four explorative scenarios (excluding the
baseline) at a time (Stout, 2002). Therefore the UF tool has been set up to compare simultaneously a
maximum of five scenarios for supply (mains and/or local supplies), demand (internal and external,
regulated and unregulated) and disposal (including renewable, recycling and re-use options) at a
micro-node scale (Figure 8a). The five scenarios could consist of one descriptive, one normative and three transitive scenarios (as exemplified in Figure 6), and the tool is therefore directly applicable to ESE analysis. However this is not the only application of the UF tool; development scale (i.e. macro-node) analysis can also be considered, as shown in Figure 8b. In such cases any number of domestic dwellings (5 user-defined types at a time) and any number of offices (5 user-defined types at a time) can be considered in one single analysis, although the user can choose to run several analyses, each with different input parameters. In this mode the associated harmful emissions at macro-nodes (e.g. CO2, NOx, PM10, PM2.5) under a range of natural environment conditions can be assessed. There are currently six interactive worksheets within the UF tool covering various input / output information for nodes (Figure 9). To avoid information overload for the user numerous worksheets are hidden (but accessible) and include interconnected calculations and data sets relevant to i to xvi above. Each of the interactive worksheets consists of a User Interface Panel (UIP, Section 3.1) for numerical type inputs located on the right-hand side and a Visual Display Panel (VDP, Section 3.2) for viewing graphical outputs located on the left-hand side (Figure 9).

Figure 8 (a) A micro-node scenario comparator tool (Mode A) (b) A macro-node analysis tool (Mode B)













Figure 9 Basic structure of UF tool: Interactive (left) and hidden (right) worksheets
3.1 User Interface Panel (UIP)
The UIP allows the user to select relevant input data; a brief description of three of the worksheets is given below. A to F refer to Site Details (Figure 10), G to I refer to Water: Domestic (Figure 11) and J to K refer to Energy: Domestic (Figure 12).
A. This button allows the user to choose whether the UF tool is used as a Scenarios
Comparator Tool (Mode A) or a Site Design Tool (Mode B).
B. If Mode B is selected an extra set of cells (highlighted in red dashed box) will appear that
allow the user to specify site details. This includes the number of domestic properties and/or
offices of a certain type (see E and F). In addition it will allow the user to specify the site
area. Land class and postcode are currently linked to air quality outputs for the site.
C. In this box the user can choose a UF case study location (thus far this includes Lancaster, Worcester, Exeter (all UK-based), Dublin (Republic of Ireland), Malmo (Sweden), Milan (Italy) and Barcelona (Spain)). The user can also choose the month of the year (January to December) they wish to investigate.
D. Having chosen a case study location in C, the associated weather datasets (e.g. rainfall, wind speeds, irradiance, and air and ground temperature) for this location will be used in all subsequent calculations. The average monthly values are shown in this table (the user can override these if required); in addition the impact of climate change scenarios (Low, Medium and High variants) can be included through a drop down menu.
E. This table allows the user to input scenario names (which will automatically be updated in
all figures), occupancy rates, floor areas and roof type. Building dimensions (height, width,
length, floor height, roof pitch) can be input for all domestic design cases and the refresh
button can be used to clear all the data. It can be seen that five identical domestic buildings
have been defined.
F. This table is identical to E, except that the data is specific to office design cases. No office buildings have been defined for this example.
G. When considering the worksheet for changing domestic water demands the first option table
can be used to alter two highly influential parameters (i.e. technological efficiency and user
behaviour, as discussed in Section 2) for each design case; a minimal illustrative calculation combining these two parameters is sketched after this list. To simplify the layout and enhance the user experience, technological efficiency (e.g. bath size, shower flow rates, dishwasher capacity) can be changed through dark blue drop down menus (Figure 11) and user behaviour, which combines frequency of use (e.g. one shower per day) with duration of use (e.g. a 4.37 minute shower), can be changed through manual data inputs within white cells. Supplementary information which supports user choices (i.e. appropriate units of measure, a choice of numerical values and information detailing where data is from, such as the Code for Sustainable Homes benchmarks) can be found within Excel's comment boxes (Figure 11). Total demand outputs (l/person/day), according to each technology, are shown on the right-hand side. Total demands are split according to UK regulated demands (i.e. internal) and unregulated demands (i.e. external). The user behaviour option for garden watering is based upon a selection of crop type (through a drop down menu) and the size of garden watered (e.g. 50 m2), as opposed to the size of garden owned.
H. In this option table the user can specify whether alternative non-potable water supply sources (i.e. GW or RWH systems) are adopted in Design Case 1. For GW it is automatically assumed that a tank is sized to meet one day's demand, whereas for RWH the user can size tanks according to BS8515 (the larger of 5% annual rainfall or household demand) or choose a tank size from the options available in the drop down menu. The % of roof used for RWH also needs to be specified. The amount of mains water required is subsequently displayed in the far right-hand column.
I. Design Case 1 is shown in the figure and within Excel the user must scroll to the right to find an identical set of option tables for Design Cases 2 to 5 respectively.
J. Figure 12 shows the worksheet for changing domestic energy demands within design cases. The layout and required user input are broadly similar to those for domestic water. This allows for increased usability through the adoption of a generalised approach. Once again the demands are split between those that are regulated and those which are not. To ensure internal consistency any water-related choices (i.e. shower, dishwasher and washing machine) made in the Domestic Water worksheet are carried through to this worksheet. The user can once again make a range of choices related to technological efficiency and user behaviour. The user specifies the annual heating demand rather than specifying thermal properties. The demands are then re-distributed throughout the year according to a normalised re-distribution profile (based on location-specific case history data).
K. In this option table the user can specify what type of localised energy supply sources are
adopted. In most cases capacity output will be influenced directly by weather patterns and
therefore geographical location (selected in Worksheet 1).
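As flagged in item G, the per-appliance demand calculation can be illustrated in a few lines of code. The sketch below is not the UF tool's implementation (which sits in hidden Excel worksheets); the appliance figures are assumed, illustrative values that combine a technological-efficiency term (capacity or flow rate) with user-behaviour terms (frequency and duration) to give a daily per-person total.

```python
# Each appliance: (litres per use OR litres per minute, uses per day, minutes per use or None)
def litres_per_use(rate, minutes):
    return rate if minutes is None else rate * minutes

def daily_demand(appliances):
    """Total internal water demand in l/person/day for one design case."""
    return sum(litres_per_use(rate, minutes) * uses_per_day
               for rate, uses_per_day, minutes in appliances.values())

baseline = {
    "shower (power, 13 L/min)": (13.0, 1.0, 8.0),
    "WC (9 L full flush)":      (9.0, 4.5, None),
    "basin taps (6 L/min)":     (6.0, 3.0, 1.0),
    "washing machine (60 L)":   (60.0, 0.3, None),
}
efficient = {
    "shower (low flow, 6 L/min)": (6.0, 1.0, 4.0),
    "WC (4 L dual flush)":        (4.0, 4.5, None),
    "basin taps (4 L/min)":       (4.0, 3.0, 1.0),
    "washing machine (45 L)":     (45.0, 0.3, None),
}

for name, case in [("baseline", baseline), ("efficient", efficient)]:
    print(f"{name}: {daily_demand(case):.0f} l/person/day")
```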
3.2 Visual Display Panel (VDP)
The Visual Display Panel (VDP) allows the user to choose from a range of carefully selected
graphical outputs which allow interrogation and comparison of supply, demand and disposal
requirements (daily, monthly and annually) within all 5 scenarios. The same VDP output is visible (Figures 10-12) within each interactive worksheet and this novel feature is vitally important to ESE analysis: firstly, because it allows the user, for example, to change the bath size and shower flow rate in one UIP (Water: Domestic) and immediately view the graphical output for household mains water requirement within the VDP, significantly reducing analysis and post-processing time-scales; secondly, because it then allows the user to switch worksheets (Site Details) and change, for example, occupancy rates within the respective UIP whilst viewing the same graphical output. This facilitates
understanding of likely impacts by allowing analyses to be undertaken instantaneously and
consecutively. The VDP consists of a viewing window for graphical outputs (1 in Figure 10), an Urban Futures question which provides context for what is shown (2 in Figure 10) and two selection lists from which the user can choose appropriate graphical outputs (3 in Figure 10). Each new selection (4 and 5 in Figures 11 and 12 respectively) brings a new output and question. Selection of Mode B allows the supply, demand and disposal requirements at macro- and micro-node scales to be viewed, whereas in Mode A only the output for micro-nodes can be viewed.

Figure 10 Site details worksheet: Visual Display Panel (VDP) and User Input Panel (UIP), left and right
respectively

Figure 11 Domestic water worksheet: drop down menu example (top right) comment box example (bottom
right)

Figure 12 Domestic energy worksheet: comment box example (top right)
4 Concluding Discussion
This paper has described a new, robust methodological futures framework that allows users to move successfully from qualitative to quantitative scenarios-based approaches. By quantitatively contouring a generic possibility space, created by a pairing of axes of uncertainty (using any STEEP drivers of change), an improved definition and understanding of explorative, descriptive, normative and transitive scenarios can be achieved. Such a framework is vital for ESE analysis because the process of describing a range of possible futures (rather than predicting a single possible eventuality) in an informed way can identify critical issues of concern for current and future decision-makers. When combined with a decision-making process that allows all disciplines to be engaged, at the concept stage, with an equal voice, pathways to truly resilient and sustainable futures can be sought.
This paper paired together a Technological driver (efficiency) with a Social driver (user behaviour)
in order that future demands within the home could be explored and modifications therein pressure
tested. Other STEEP drivers, which are highly influential of demands within homes and offices,
can be considered as overlying layers (i.e. lenses) of information, for example, through the
Environmental lens different climatic conditions can be applied in order to investigate how they
influence behaviour (e.g. more frequent showers in summer months) and technological performance
(e.g. more water collected by a RWH system when rainfall is higher in winter months). In addition
it is about how user behaviour and technologies impact upon the environment (e.g. related carbon
emissions, embodied and operational, to atmosphere). In the same way the Economic lens can be
overlain in order to investigate investment requirements for different water strategy options or help
establish different water pricing strategies. The Political lens can be overlain to understand better
the role of current or future policies in incentivising, influencing, or even forcing changes in
demand through changes in technological efficiency and/or user behaviour. The robust framework
presented here allows this layering of information to take place in a systematic way. It might be
argued that a third axis could be included (perhaps to integrate the time element), but this may add a
further complexity that confuses rather than enhances the key messages presented here. The framework
allows users such as planners, developers, engineers and academics to easily differentiate between
how we define something that is sustainable, how we test for resilience and how we can achieve the
same levels of performance in a variety of ways (some of which may be neither sustainable nor
resilient), and in this way is an extremely powerful form of communication.
Due to the interdependencies and complexities of what flows into and out of a
building/site, an urban futures tool was developed in parallel to the framework. This tool allows
users to change demands within homes and offices (through user behaviour and technological
efficiency - in isolation and combination) and undertake more detailed quantitative real-time
analysis / pressure testing. Users can very easily push and pull a descriptive (i.e. baseline) scenario to
produce explorative, transitive and normative type scenarios. This is vital if we are to understand
the relative long-term impacts and interdependencies that exist when considering real-life
interdisciplinary perspectives and issues related to human, natural and technological systems, in
short Earth Systems Engineering. The chosen platform (Excel) is deemed wholly appropriate
because it allows for widespread adoption; moreover, it allows for transparency within calculations
and ease of modification / updating.
The tool and framework in combination should be considered an important part of any ESE analysis
and an essential undertaking prior to implementation of mitigation and / or adaptation strategies
(now and in the future). Such insight is of importance to decision-makers, planners and
scientists, and particularly so for infrastructure engineers who have the responsibility for ensuring
flows (inward and outward) can be met within our urban environments. Collectively the framework
and UF tool allow for an abundance of explorative 'what if' questions, related to water and energy,
to be answered through meaningful quantitative scenario analysis. For example a home or office
owner (or potential purchaser) may want to know the answer to the following questions:
Q1. What if I invest in a certain size of RWH tank today and in the future my water demands
change due to technology (for the better or worse)? How will this affect the ability of my system to
meet demands? How will this impact on my stormwater outflow (year round) and potential for
pluvial flood risk prevention? (See Hunt et al., 2012b). What if the climate or location changes?
What if I oversize (or undersize) the RWH tank (to save money or to seek a more resilient long-
term solution)? How will this influence the results? (Hunt et al., 2012c). (A simplified sketch of the
daily tank balance behind such questions is given below.)
Q3. What if I have invested in a Code for Sustainable Homes Level 6 house today, but decide
(some nominal time in the future) to grow vegetables at home? Whilst my internal demands
and therefore benchmarks may remain broadly static year round (80 l/person/day), how will my
actual mains water consumption be affected at different times in the year? What if I invest in a
RWH or GW system; how might this further influence these demands, and are current
benchmarking methods therefore appropriate? What if my behaviour also changes (for better or worse),
what impact will this have? (Hunt et al., forthcoming)
Q4. What if I invest in CHP infrastructure (gas or biomass) some nominal time in the future, how
will my energy demands be affected and how will this influence localised carbon and other air
emissions (e.g. NOx, PM10)? How might other mixes of renewable energy influence the supply /
demand mix at different times in the year? What influence might changes to user behaviour and
technology have in this respect?
This list is not exhaustive and is intended only to give a flavour of the capabilities of the UF tool.
For example the framework and UF tool could be used to plan developments (both now and in the
future) by specifying the right mix of dwellings to achieve a specified level of performance (i.e.
benchmark) or to provide a mix of dwellings that will meet with existing and future supply
capabilities with minimum infrastructure investment. In this sense some might suggest that
sustainable infrastructure is far more readily achievable, because a smaller footprint can be
planned for long before detailed design or construction takes place.
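As an illustration of the kind of quantitative analysis that sits behind Q1, the following sketch runs a simplified daily mass balance for a RWH tank (roof inflow, spill to stormwater, then supply to demand). All values, and the operating rule itself, are illustrative assumptions rather than the UF tool's algorithm.

def rwh_balance(rainfall_mm, roof_area_m2, tank_volume_l, daily_demand_l, runoff_coeff=0.85):
    """Daily tank balance: inflow, spill to stormwater, then supply to demand."""
    storage, supplied, overflow = 0.0, 0.0, 0.0
    for rain in rainfall_mm:
        storage += rain * roof_area_m2 * runoff_coeff   # 1 mm on 1 m2 = 1 litre
        if storage > tank_volume_l:                     # excess spills to the stormwater system
            overflow += storage - tank_volume_l
            storage = tank_volume_l
        use = min(daily_demand_l, storage)              # meet non-potable demand from the tank
        supplied += use
        storage -= use
    return supplied, overflow

rain_series = [0, 12, 3, 0, 0, 8, 20, 0, 1, 0]          # mm/day, dummy data
met, spilled = rwh_balance(rain_series, roof_area_m2=60, tank_volume_l=2000, daily_demand_l=150)
print(f"demand met: {met:.0f} l, stormwater overflow: {spilled:.0f} l")

Re-running such a balance under different tank sizes, demands or rainfall series is the essence of the pressure testing described above.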
Future development of the tool will seek to include a wider selection of development types (e.g.
hotels, commercial, schools), locations (UK, European and Worldwide) and allow for geospatial
location of buildings in order that the impact of nodal changes on link performance can be better
assessed and understood.
Acknowledgements
The authors wish to thank the Engineering and Physical Sciences Research Council for their support
under the current Programme Grant 'Liveable Cities' (EP/J017698) and the previous Sustainable Urban
Environments 'Urban Futures' grant (EP/F007426/1).
References
Allenby, B.R. (1999). Earth Systems Engineering: The Role of Industrial Ecology in an Engineered World. Journal of
Industrial Ecology, 2, 73–93.
Allenby, B. R. (2005). Reconstructing Earth: Technology and environment in the age of humans. Washington, DC:
Island Press.
Boyko, C.T., Gaterell, M.R., Barber, A.R.G., Brown, J., Bryson, J.R., Butler, D., Caputo, S., Caserio, M., Coles, R.,
Cooper, R., Davies, G., Farmani, R., Hale, J., Hales, A.C., Hewitt, N., Hunt, D.V.L., Jankovic, L., Jefferson, I., Leach,
J.M., Lombardi, D.R., MacKenzie, R.A., Memon, F.A., Pugh, T.A.M., Sadler, J.P., Weingaertner, C., Whyatt, J.D. &
Rogers, C.D.F. (2012). Benchmarking Sustainability in Cities: The role of indicators and future scenarios. Global
Environmental Change, 22 (1), 245–254.
Farmani, R., Butler, D., Hunt, D.V.L., Memon, F.A., Abdelmeguid, H., Ward, S. and Rogers, C.D.F. (2012). Scenario
based sustainable water management for urban regeneration. Institution of Civil Engineers: Engineering Sustainability,
165 (1), 89–98.
Gibson, J. E. (1991). How to do A systems analysis and systems analyst decalog. In W. T. Scherer (Ed.), (Fall 2003 ed.)
29-238. Department of Systems and Information Engineering: U of Virginia.
Hall, J.W. and O'Connell, P.E. (2007). Earth Systems Engineering: turning vision into action. Institution of Civil
Engineers: Civil Engineering, 160 (3), 114–122.
Hunt, D.V.L.; Lombardi, D.R.; Atkinson, S.; Barber, A.; Barnes, M.; Boyko, C.T.; Brown, J.; Bryson, J.; Butler, D.;
Caputo, S.; Caserio, M.; Coles, R.; Farmani, R.; Gaterell, M.; Hale, J.; Hayes, C.; Hewitt, C.N.; Jankovic, L.; Jefferson,
I.; Mackenzie, A.R.; Memon, F.A.; Whyatt, D.; Weingaertner, C. and Rogers, C.D.F. (2012a). Scenario Archetypes:
Converging rather than Diverging Themes. Sustainability 4 (4), 740-772.
Hunt, D.V.L., Lombardi, D.R., Farmani, R., Jefferson, I., Memon, F.A., Butler, D. and Rogers, C.D.F. (2012b). Urban
Futures and the code for sustainable homes. Institution of Civil Engineers: Engineering Sustainability, 165 (1), 37–58.
Hunt, D.V.L., Jefferson, I. and Rogers, C.D.F. (2012c). Testing the Resilience of Underground Infrastructure Solutions
through an Urban Futures Methodology. Proc. of REAL CORP 2012, 14th–16th May, Vienna, 825–834.
http://programm.corp.at/cdrom2012/papers2012/CORP2012_97.pdf.
Hunt, D.V.L., Jefferson, I. and Rogers, C.D.F. (forthcoming) Domestic water benchmarking: influences of supply and
demand options.
Lombardi, D.R., Caserio, M., Donovan, R., Hale, J., Hunt, D.V.L., Weingaertner, C., Barber, A., Bryson, J.R., Coles, R.,
Gaterell, M., Jankovic, L., Jefferson, I., Sadler, J. and Rogers, C.D.F. (2011). Elucidating Sustainability Sequencing,
Tensions and Tradeoffs in Development Decision-making. Environment and Planning B, 38 (6), 1105–1121.
Lombardi DR, Leach JM, Rogers CDF, Barber A, Boyko CT, Brown J, Bryson J, Butler D, Caputo S, Caserio M, Coles
R, Cooper R, Farmani R, Gaterell M, Hale J, Hales C, Hewitt CN, Hunt D.V.L, Jancovic L, Jefferson I, Mackenzie AR,
Memon FA, Phenix-Walker R, Pugh TAM, Sadler JP, Weingaertner C, Whyatt JD. (2012). Designing Resilient Cities:
A Guide to Good Practice: (EP103), IHS BRE Press.
Ratcliffe, J. (2001). Imagineering global real estate: a property foresight exercise. Foresight, 3 (5), 453–475.
Rogers, C.D.F., Lombardi, D.R., Leach, J.M. and Cooper, R.F.D. (2012). The urban futures methodology applied to
urban regeneration. Institution of Civil Engineers: Engineering Sustainability, 165 (1), 5–20.
Schneider, S. H. (2001). Earth Systems Engineering and Management. Nature, 409, 417–421.
Stout, D. (2002). The Use of Scenarios in Foresight 1994–1999. An Information Document Prepared for the OST.
Office of Science and Technology. Available online: www.foresight.gov.uk (accessed on 18 May 2012).




Approach towards Sustainability of Growing Cities
An Indian Case Study
Mukesh Khare¹, Priyajit Pandit¹
¹Department of Civil Engineering (Environmental Engineering), Indian Institute of Technology,
Hauz Khas, New Delhi 110016, India.
Email: kharemukesh@yahoo.co.in, priyajit.pandit@gmail.com
Abstract
Sustainable development is a major concern for the developed and the developing world due to
increasing urbanisation. There exists a very complex relationship between human, technological
and environmental components that affects the sustainable growth of cities. This paper seeks to
evaluate/analyse the complex relationships among the economic, social and environmental subsystems
through their susceptibility to land use change. Further, it aims to outline an approach to model the
changes in land use in order to achieve a sustainable urban development.


1 Introduction
There have been ample writings on the debate over sustainable development in India, but so far the
debate has largely lacked a quantitative approach, and very few quantitative studies have been
carried out for the Indian context. A previous study examined nine different practices followed
worldwide, showing how they use different indicators according to their particular needs, and
proposed a comparative basis, namely the International Urban Sustainability Indicators List (IUSIL)
(Shen et al., 2011). Also, a strong need has been felt for the comprehensive assessment of changes in
economic, environmental, and social conditions, but this has proved difficult because of competing
characteristics of sustainability and a lack of hard evidence, particularly for growing cities. It
therefore becomes essential to quantify sustainability in order to check whether new policies,
decisions or technical innovations are heading towards sustainable development (Fitzgerald et al.,
2012). However, any evaluation of sustainability-related measures is not effective unless the
behaviour of the complex relationships among the economic, social, environmental and
institutional/political subsystems, which drive the systemic interactions within the environmental
subsystems, is properly understood and evaluated. This is particularly significant for a growing
city where urbanisation induces detrimental effects on the eco-environment involving economic,
social and infrastructure developments. Hence, the evaluation of coupling relationships between
urbanisation and eco-environment seems to be necessary to precisely understand the approach for
modelling sustainable development in growing cities.
2 Site characteristics
2.1 Study area
The growing city of Amritsar, located in the fertile plains of the agriculturally rich state of Punjab in
north-west India, between 31°29' and 32°03' N latitude and between 74°29' and 75°23' E
longitude, has been taken as the case study site. The dense urban agglomeration resulting from rapid
urbanisation, supported by socio-economic stability within the city of Amritsar,
seems to have been detrimentally affecting its long-term environmental sustainability.

Figure 1 Aerial View of Amritsar City (Source: City Development Plan, Amritsar 2025)
The city has a focus on the provision of basic needs to the urban population especially the urban
poor through housing and an integrated infrastructure development plan. It is a part of the Indian
Government's initiatives on sustainable development under the Jawaharlal Nehru National Urban
Renewal Mission (JNNURM). It focuses on the social and economic upliftment of society through
the provision of basic housing and services to the urban poor, infrastructure development and reforms in
urban governance (www.jnnurmmis.in).
However, the JNNURM does not take into account the environmental degradation through land
use/land cover change (LULC) which may occur as a result of its implementation. Since the
JNNURM works at a policy level, its translation at an operational level in the absence of specific
guidelines needs to be assessed.
The urban environment of Amritsar city can be divided into the following components according to
land use: residential (43.99%), commercial (3.03%), industrial (6.6%), transportation
(11.50%), public and semi-public spaces (6.73%), recreational (0.74%) and government-owned land
(27.41%) (www.amritsar.corp.com/CDP_Amritsar(3).pdf). The residential component is of particular
importance as it is the sector projected to grow within the next 25 years, corresponding to a
projected population increase of 25 lakhs (2.5 million) by 2025.
2.2 Physical growth of the city
The presence of the holy shrine of the Golden Temple seems to be the main driving force for
commercial, residential and agricultural development/growth, affecting the sustainability of the
complex relationship between urbanisation and the eco-environment. The historical evolution of the
city shows that the urban boundary of the city has consistently been shifting to encompass the
growth patterns (Figure 2).


Figure 2 Land use plan of Amritsar City (Draft Development Plan of Amritsar City, 2010-2031)
At present, it suffers from urban sprawl in its peripheral region. The city also lacks an efficient road
network and infrastructural systems. The new developments have mostly taken place to the north
of the G.T. Road in the form of unplanned and irregular ribbon development. This has caused
detrimental changes to the LULC affecting adversely the fragile eco-environment of the city. Hence,
there is a need to understand the systemic interactions among the environmental sub systems which
are most likely to impact the long term sustainable development of the city.
3 Methodology
As urbanization involves a large-scale transition and the centralization of resources,
industries and population in a given area, it tends to break down the traditional agriculture-oriented
structure. Urbanization may enable the rational and intensive utilization of resources, but it may
conversely cause environmental deterioration and ecological degradation as well. The urbanization
subsystem can therefore be viewed as embedded within a greater eco-environmental system.
There are inherent tensions and constraints between urbanization and the
eco-environment. On the one hand, urbanization against a background of resource shortage may
be constrained by scarce resources and may damage the surrounding environment during its
development process. On the other hand, a fragile eco-environment will restrict the development
of cities, decelerate the urbanization process and may block the spatial and economic growth of the
city.
The coupling relationship between urbanization and eco-environment can thus be addressed in a
pressure-state-response (PSR) framework, and the system composed of the urbanization
subsystem and the eco-environment subsystem can be defined as a coupling system (Liu et al., 2012;
Tanguay et al., 2010). The coupling between urbanization and the environment comprises
the sum of all non-linear relationships between the elements of the two systems. The
analysis of the coupling relationship cannot be completed without the selection and categorisation
of indicators. To accomplish this, an aggregated index system has been developed to evaluate the
urbanisation process and its eco-environmental effects (Li et al., 2012).
3.1 Data sources
The statistical and socio-economic data have been obtained from the City Development Plan, Draft
Development Plan 2010-2031 for Amritsar city and the Census of India, 2001.
3.2 Index system
The indicators have been selected based on the following criteria:
1. Most cited indicators representing the city development plan to facilitate data collection,
understanding and dissemination.
2. Indicators describing the components of subsystems, i.e. social, economic and environmental
sustainability (Table 1). Based on the provisions of the Amritsar City Development Plan (from
www.amritsar.corp.com/CDP_Amritsar (3).pdf), the following indicators have been selected for
urban settlements (Table 1).
Table 1 Indicator Selection

Social subsystem (populated urbanisation):
  a. Total size: total number of urban population; total number of non-agricultural population; total number of agricultural population
  b. Population structure: urban / urban fringe / rural (villages)
  c. Population growth: growth rate of urban population

Economic subsystem (economic urbanisation):
  a. Economic growth: annual growth rate of per capita GDP
  b. Urban quantity (urban land use / tourism / urban fringe / rural): total number of types of settlements; percentage of built-up areas in the total land area; urban density

Environment subsystem:
  Spatial urbanisation
    Urban scale: percentage of built-up areas versus open areas; total land area; types of land use; percentage of land uses including agricultural use; settlement patterns
    Infrastructure: transportation network density; per capita area of roads
  Eco-environment
    a. Resources and energy: water (ground water, surface water); hydro and thermal energy; energy for electricity; electricity supply, etc.
    b. Ecological condition: percentage of water and land loss due to environmental pollution
  Eco-environmental pressures
    a. Environmental disposal: solid waste disposal; sanitation; sewerage; storm water drainage
    b. Environmental pollution: land, air, water
  Eco-environmental management
    a. Environmental investment
    b. Environmental management plan

Out of the given indicators, only those have been selected for which indexes are available as part of
the Environmental Sustainability Index Report 2005 (www.yale.edu/esi). This has been primarily
done to facilitate data collection and provide an understanding of the current state of the environment in
India. Thereafter, the indexes have been determined for the urbanization-environment sub-system
for the study site in India.
Since the coupling relationship involves the interaction between urbanisation and the environment, as a
next step the indicators are categorised and listed as follows:
1. Urbanisation subsystem
a. Demographic aspects
b. Economic aspects
c. Social aspects
d. Spatial aspects
2. Environment subsystem
a. Environmental pressure
b. Environmental control
c. Environmental management.
3.3 Evaluation of coupling relationship between urbanisation and the
environment
The next step is the determination of the indexes for the urbanization-environment sub-system. For
this purpose the data have been normalised. The levels of urbanization and environmental quality have
been estimated using the entropy method, within which the weight of each indicator has been
calculated (Li et al., 2012). Figures 4 and 5 show the trends in the comprehensive levels of the
urbanisation and environment subsystems, respectively.
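The entropy weighting step referred to above can be sketched as follows under its usual textbook formulation; the data matrix is dummy data, and the exact normalisation used by the authors may differ.

import numpy as np

def entropy_weights(X):
    """Entropy weights for an (observations x indicators) matrix of positive values."""
    P = X / X.sum(axis=0)                   # share of each observation within each indicator
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])   # entropy per indicator, in [0, 1]
    d = 1.0 - e                             # degree of diversification
    return d / d.sum()                      # weights summing to one

# Dummy data: three indicators observed over four years.
X = np.array([[0.41, 0.20, 0.55],
              [0.48, 0.25, 0.52],
              [0.55, 0.31, 0.49],
              [0.63, 0.38, 0.45]])
w = entropy_weights(X)
levels = X @ w                              # comprehensive level per year (cf. Figures 4 and 5)
print(w, levels)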

Figure 4 Trends of comprehensive levels in the urbanisation subsystem


Figure 5 Trends of comprehensive levels in the environment subsystem
4 Results and Discussions
Figures 4 and 5 indicate the coupling trend between the urbanisation and the environment
subsystems. In the urbanisation subsystem, the increase in demographic levels increases the
economic, social and spatial indicators. Simultaneously, a decline in the environmental pressure,
stress and control indicators is clearly observed. This estimation has been carried out for a 10-year
period (2005–2015). Further, the estimation of the coupling degree indicates the magnitude of
the interactions between the urbanisation and the environment subsystems.
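The coupling degree itself is not defined in this paper; one form commonly used in the urbanisation-eco-environment coupling literature, stated here only as an illustrative assumption of what such an index looks like, is

C = \left\{ \frac{U \, E}{\left[ (U + E)/2 \right]^{2}} \right\}^{k},

where U and E are the comprehensive levels of the urbanisation and environment subsystems and k is an adjustment coefficient (often k = 1/2, giving 0 \le C \le 1, with C approaching 1 when the two subsystems develop in step).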
The social urbanisation has been the main driving force affecting changes in LULC for growing
cities in India and so it carries the maximum weight. It is followed by demographic, economic and
spatial growth. Infrastructure development and environment management are the other critical
indicators which have been considered for evaluating the total impact.
Environmental pressure, management and control are significant indicators influencing the stability
of the environmental subsystem. It may be inferred that cultivated and green areas are the most
vulnerable land use patterns, which deserve effective and decisive protection. This is especially
crucial for growing cities where environmental oversight requires an examination of policy
formulation and implementation.
Social aspects have also shown a similar trend in the urbanization curve, and they have a significant effect
on the urbanization sub-system. However, the levels of the environment subsystem have suffered a
significant decline over the 10-year period for which the coupling relationships have been
developed. This is the converse of the trend in the social and economic urbanisation subsystems. Further,
an impetus towards infrastructure development and environmental management has shown a
corresponding increase over the evaluated period.
5 Conclusion
The preliminary analysis indicates that the interaction between society and environment is one of
the significant relationships that may lead to settlement sustainability. This interaction has a direct
impact on the eco-environment, and institutional intervention is a direct product of this
interaction. Therefore, the preliminary investigations into the coupling relationships have clearly

revealed that the current state of urbanisation being followed for the city of Amritsar is
unsustainable.
Thus, the evaluation of the coupling relationships is a useful tool to determine the direct interactions
within the environment. These interactions can further be estimated through sensitivity analysis,
which will help distinguish sustainable from unsustainable urbanisation processes.
6 Further Research
Sustainable urbanization management should consider a wide variety of factors, such as population
increase, economic development, social improvement and demand for natural resources, as well as
environmental changes and ecological endowments which are dynamic in nature. To model the
sustainable urbanization, the methodology must also take into account the potential changes in these
factors. System dynamics (SD) models and neural networks can be used to simulate the potential
environmental changes and thus compare the levels of sustainability (Tayebbi et al., 2011; Huang
et al., 2009). Further, urban growth boundaries of the city can be demarcated with the application
of tools such as remote sensing/GIS (Prato, 2005; Liu et al., 2011). Evaluation tools, in the form of a
sustainability coefficient measuring/monitoring the extent of sustainability for growing cities,
can also be developed. Since the list of indicators is broadly common to most cities globally, it may be
possible to increase the adaptability of the SD models to any growing city across the world, or to cities
where the social and economic conditions are similar. This may assist policy makers and
planners in balancing the distribution of the developmental requirements of growing cities and in
maintaining the ecological footprints of sustainable growth.
Acknowledgements
We gratefully acknowledge Amritsar Development Authority for providing us with the relevant
data, City Development Plan, Draft Master Plan, Amritsar 2010-2031 and proposed land use plans.
We would also like to sincerely thank Dr. A. K. Nema for his valuable comments and suggestions.
References
Fitzgerald, B.G., O'Doherty, T., Moles, R. & O'Regan, B. (2012). A quantitative method for the evaluation of policies to
enhance urban sustainability. Ecological Indicators, 18, 37137.
Huang, S., Yeh, C., Budd, W. & Chen, L. (2009). A Sensitivity Model (SM) approach to analyse urban development in
Taiwan based on sustainability indicators. Environmental Impact Assessment Review, 29, 116–125.
Jaeger, J., Bertillier, R., Schwick, C. & Kienast, F. (2010). Suitability criteria for measures of urban sprawl. Ecological
Indicators, 10, 397–406.
Indicators of sustainable development, guidelines and methodologies. Online at
www.un.org/esa/sustdev/natlinfo/indicatorsguidelines.pdf.
Liu, Y., Yao, C.S., Wang, G. & Bao, S. (2011). An integrated sustainable development approach to modelling the eco-
environmental effects from urbanization. Ecological Indicators, doi:10.1016/j.ecolind.2011.04.004.
Moffat, I., Hanley, N., Faichney, R. & Wilson, M. (1999). Measuring Sustainability: A time series alternative measures for
Scotland. Ecological Economics, 28, 55–73.
Moffat, I. & Hanley, N. (2001). Modelling sustainable development: A systems dynamic and input-output approaches.
Environmental Modelling and Software, 16, 545–557.
O'Regan, B., Morrissey, J., Foley, W. & Moles, R. (2009). The relationship between settlement population size and
sustainable development measured by two sustainability metrics. Environmental Impact Assessment Review, 29, 169–178.
Prato, T. (2005). A fuzzy logic approach for evaluating ecosystem sustainability. Ecological Modelling, 187, 361–368.
Shen, L.-Y., Ochoa, J.J., Shah, M.N. & Zhang, X. (2011). The application of urban sustainability indicators: a comparison
between various practices. Habitat International, 35, 17–29.
Tayebbi, A., Pijanowski, B.C. & Tayebbi, A.H. (2011). An urban growth boundary model using neural networks, GIS
and radial parametrization: An application to Tehran, Iran. Landscape and Urban Planning, 100, 35–44.




Mapping the limits of knowledge in flood risk assessment
Bruno Merz¹
¹GFZ German Research Center for Geosciences, Telegrafenberg, 14473 Potsdam, Germany.
E-mail: bmerz@gfz-potsdam.de
Abstract
Traditionally, flood design and flood risk management are based on the notion that flood risk is
quantifiable and predictable. This notion is being challenged by the rarity of extremes and by an
increase in complexity. Flood risk is the result of physical and socio-economic processes in a
changing environment, and the increasing interdependence of sub-systems may cause higher-order
consequences of floods and unexpected side effects. To discuss the limits of knowledge in
analysing flood risks, it has to be recalled, firstly, that flood risk assessments cannot be validated in
the traditional sense by comparing observations against estimations. Secondly, it has been shown
that people (experts and laypeople) tend to overrate their knowledge, and this widespread
phenomenon of overconfidence may lead to higher confidence in assessments than is actually
warranted. And thirdly, the increasing interdependence of natural, human and technological systems
may have the potential to lead to extreme impacts, but this interdependence is rarely taken into
account in today's flood risk assessments. To better understand the limits of knowledge and the
consequences of these limits, the uncertainty associated with flood risk assessments is discussed.
We conclude that there is a need for drawing (1) a map which shows what we know and what we do
not know (terra incognita), and (2) a map which shows which knowledge limits can lead to
disastrous consequences (terra maligna).

1 Introduction
Traditionally, flood design and management have been based on the notion that flood risk is
quantifiable and predictable, and flood assessment and management have concentrated on
estimating and alleviating the physical characteristics of floods. Today, flood risk is seen as the
interaction of flood hazard (the probability and physical characteristics of flooding) and the
vulnerability of the elements-at-risk (humans, built environment, natural environment), and flood
risk management considers a much broader spectrum of management options (Merz et al., 2010).
Flood risk is the result of the interaction of physical and socio-economic processes in a changing
environment. The inter-dependence between sub-systems is increasing which may cause higher-
order consequences of floods and unexpected side effects. Hence, the traditional notion is being
challenged by an increase in complexity.
Inspired by Nassim Taleb's book on Black Swans (Taleb, 2007) and by several of our own studies on the
uncertainty of flood risk assessments (e.g. Merz & Thieken, 2009), this contribution discusses two
questions: What are the limits of knowledge in flood risk assessments (Terra incognita)? Which
limits of knowledge do really matter (Terra maligna)?
2 How good are risk assessments?
2.1 Validation of risk assessments
The problem of estimating extremes, such as the 10.000-year flood which needs to be given for
safety considerations of dams in Germany, has been discussed by others (e.g. Hall & Anderson,
2002). This discussion is not repeated here. It may suffice to summarize that risk assessments aim at
quantifying characteristics (occurrence probability, space-time development, etc.) of extreme events
and failure scenarios which have hardly (or not at all) been observed before. Therefore,
observations of 'risk' are seldom available, and risk estimates cannot be validated in the traditional
sense by comparing them against observations, which impedes constraining and validating risk
analyses. Data scarcity implies that uncertainty statements are associated with considerable
subjectivity and that they are themselves highly uncertain.
Subjectivity may not be a severe problem, if short feedback loops allow learning from errors. For
example, weather forecasters receive rapid feedback about their forecast success which helps them
to understand deficits and to improve their forecasting skill (Murphy and Winkler, 1977). This is
not the case for extremes, since their rarity precludes short feedback loops: How do you know how
wrong your 10.000-year flood estimate is?
Due to the lack of observations of risk there are not many studies which analyse the reliability of
risk assessments. Merz et al. (2004) assess the direct flood damage to buildings at the community
level for a catchment in Southwest Germany, based on the best-available data for Germany. They
derive not only a best estimate, but also confidence intervals. Figure 1 compares mean estimates
and 95% and 99% confidence intervals for the 100-year flood with the reported damage for the
December 1993 flood. This flood had been classified as a 100-year flood. The reported flood
damages of four municipalities are situated within the limits of the 95% confidence interval, five
reported flood damages lie within the range of the 99% confidence interval, whereas in two
municipalities the reported damages lie even outside the 99% confidence interval. This mismatch
between estimated and reported damage has to be mainly attributed to errors in the damage
modelling, although an unknown error component exists due to possibly erroneous damage
reporting and the assumption that the synthetically derived 100-year inundation areas are
representative for the 1993 flood.


Figure 1 Comparison of reported and estimated direct flood damage for seven communities (Merz et al.,
2004).
Another illustrative example on the problems of estimating risks is the work of Thompson et al.
(1999) on the error of studies that estimated the lifesaving benefits of passenger car airbags. Figure
2 shows that lifesaving estimates have been revised through time. US government estimates during
19771987 concluded that 6.0009.000 lives would be saved per year, if all passenger cars were
equipped with full-front airbags. These estimates were primarily based on experimental testing and
engineering judgement. After a decade of real-world crash experience, the official estimates were
reduced to 3.000 lives saved/year. This mismatch between risk estimates based on experimental
data, engineering judgement and models and those based on observation data can be explained by a
number of reasons. For example, the heterogeneity of the vehicle fleet and occupants was
inadequately addressed, and the number of adults who would wear safety belts was underestimated
(Thompson et al., 1999).

Figure 2 Estimates of airbags' lifesaving effectiveness, showing engineering estimates and epidemiological
estimates (redrawn from Thompson et al., 1999, Figure 2c).
It is interesting to point to the difference between a situation like the airbag example and extreme
flood situations. The airbag example is an extrapolation from a small sample derived from
experimental tests with dummies and occupants to the country-wide situation. This task seems
simpler than, for example, estimating the characteristics and consequences of the 10.000-year flood.
This situation requires quantifying processes, such as dike breaches or evacuation, which do not
occur during unexceptional events.
From these two examples it is concluded that risk estimates may be significantly wrong. Due to the
bolder extrapolation that is required for extreme floods, the error of flood risk assessments may be
much higher than in the airbag example or in the flood damage example.
2.2 Overconfidence and the illusion of certainty
In section 2.1 the point is made that risk assessments may be significantly wrong, and that there is
usually no data to quantitatively assess the error of our risk estimates. Starting from this situation,
an essential question is: Do we have a decent understanding of the reliability and deficits of our risk
assessments?
Attention has to be given to the widespread phenomena of over-confidence and illusion of certainty.
It has been shown that people (experts and laypeople) tend to overrate their knowledge (Kahneman
et al., 1982, Hammitt and Shlyakhter, 1999). Typically, the distribution representing the true error is
much larger than the subjective error: 55-75% of true values are typically outside the subjective
interquartile range, and 20-45% outside the subjective 98% confidence region (Hammitt &
Shlyakhter, 1999). The conclusion from such findings is that we think we know more than we actually
know.
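A minimal sketch of how such calibration statements can be checked once outcomes are known is given below; the quantile judgements and observed values are dummy numbers for illustration only.

# Calibration check for subjective uncertainty intervals: count how often the truth
# falls outside the stated ranges (cf. the Hammitt & Shlyakhter figures above).
judgements = [
    # (q25, q75, q01, q99, observed) -- dummy data
    (10, 14, 7, 18, 19),
    (3.0, 3.6, 2.5, 4.2, 2.4),
    (120, 150, 100, 175, 140),
    (0.8, 1.1, 0.6, 1.4, 1.6),
]

outside_iqr = sum(not (q25 <= obs <= q75) for q25, q75, _, _, obs in judgements)
outside_98 = sum(not (q01 <= obs <= q99) for _, _, q01, q99, obs in judgements)

print(f"outside interquartile range: {outside_iqr / len(judgements):.0%} "
      f"(50% expected if well calibrated; 55-75% reported)")
print(f"outside 98% interval: {outside_98 / len(judgements):.0%} "
      f"(2% expected; 20-45% reported)")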
Human judgement and decision making are plagued by well-documented heuristics and biases.
People tend to see patterns and trends where none exist, and fall prey to the mechanism of
confirmation bias, i.e. the tendency to favour information which confirms existing preconceptions
or hypotheses (Nickerson, 1998). Cognitive biases are particularly likely in situations where close
feedback loops are missing or are hard to interpret, and where many factors are involved and the
environment is inherently complex and uncertain (Bunn & Salo, 1993). These conditions should
apply to many flood risk assessments.
Other cognitive biases may also contribute to significantly erroneous risk estimates. People often
show a positive recency bias: They expect more of the same by extrapolating existing tendencies to
the future. This bias is related to the availability bias: The probability that is assigned to a certain
event depends on its availability; events that are easily conceivable or that can be easily recalled
are connected with a higher occurrence probability (Kahneman et al., 1982). Further, people
perceive desirable events to be more likely for themselves than for others, and feel uncomfortable
about planning for situations that would be damaging to them (Bunn & Salo, 1993). These biases
may be sloppily summarized as 'if you have not seen it, you consider it less likely' and 'it will not
be that bad', and in many cases these biases may lead to overly optimistic risk estimates,
neglecting really dramatic events or consequences.
Taleb (2007) points to the confusion between 'no evidence of disease' (NED) and 'evidence of no
disease' (END). 'No evidence of disease' is a medical term that is used when examinations and tests
can find no cancer in a patient who has been treated for cancer. NED does not mean that the patient
is cured. It may be that the examination does not detect the disease. However, 'no evidence of
disease' is frequently interpreted as 'evidence of no disease'. Similarly, we should be cautious not to
confuse 'no evidence of disaster' with 'evidence of no disaster' when we assess the risk of extreme
events. If observations do not reveal an extreme event in the past, it does not mean that it cannot
occur.
3 Mapping terra incognita and terra maligna in flood risk
assessment
3.1 Flood risk assessment
Flood design decisions and flood risk management require a reliable flood risk assessment, i.e. the
quantification of flood hazard, vulnerability and risk. Figure 3 shows that floods and their impact
touch different compartments, from the atmosphere to indirectly affected areas. Assessing flood risk
needs, therefore, to take into account the dominant processes in each compartment and the
interactions between these processes and compartments. The following list contains a selection of
questions that might have to be addressed in a flood risk assessment. Later, we discuss these
questions against our limits of knowledge and against their potential to lead to very severe
consequences:
a) 100-year flood discharge for river gauge with long observation time series as basis for flood
zonation,
b) 10.000-year flood discharge for river gauge for safety considerations of dams,
c) 100-year flood discharge at given river gauge for far future period as basis for flood design,
d) Probability and characteristics of dike failure under hydrological load for derivation of
inundation scenarios,
e) Direct damage to buildings estimated at the community level for estimating the cost-benefit
ratio of flood defence,
f) Far-area and/or long-term consequences of floods, such as business interruption.


Figure 3 The different compartments that influence flood risk and some examples of human influences on
flood risk (Merz et al., 2010).
3.2 Terra incognita: Areas beyond the limits of knowledge
Drawing a limits-of-knowledge map for flood risk assessments requires criteria to locate the
different topics on the map. We propose four criteria summarized in Table 1.

Table 1 Criteria for mapping the limits-of-knowledge.

Interdependence
Effect: The higher the interdependence of a process or system, the more difficult is the prediction and assessment of the process or of the evolution of the system.
Explanation: High interdependence in space, in time, between elements of the system studied, or combinations thereof, leads to high complexity. Temporal interdependence means that a process is correlated in time, e.g. today's state has an effect on the future state. Spatial interdependence occurs when a process at one location affects the process at other locations. Interdependence between elements means that a certain process or element is correlated to another element or process.

Non-linearity
Effect: High non-linearity increases uncertainty and impedes reliable predictions.
Explanation: Non-linearity is given if a small change in one variable leads to an unproportionally strong change in the dependent variable, or if threshold effects occur. For example, a small increase in one variable leads to a sudden transition from near accident to severe accident.

Time frame
Effect: Statements about the (far) future are much more difficult than statements about the present or immediate future.
Explanation: Future states are usually influenced by developments that occur in the future. Hence, statements on the future can only be accurate if such future developments can be forecast with certainty.

Experience
Effect: Higher availability of experience, data or process knowledge shrinks the limits of knowledge.
Explanation: Risk estimates can be obtained by different approaches. If long observations of the process studied are available, a statistical approach such as extreme value statistics may be suited. If a data base is lacking, a model-based decomposition approach including strong assumptions and uncertain modelling may be necessary.

Figure 4 attempts to locate the selected questions on the limits-of-knowledge map extending over
the four criteria of Table 1. Topics that are associated with small uncertainty are located at the
border of the map. High-uncertainty topics touch the centre of the map. Figure 4 illustrates that the
selected questions are located very differently on the limits-of-knowledge map. Although the
location is subjective and qualitative only, this representation emphasizes that the uncertainty
associated with different questions may be very different. Taleb (2007) uses the terms mild and wild
uncertainty to underline these differences. Processes or variables associated with mild uncertainty
are predictable, although the estimate is uncertain. Those associated with wild uncertainty are not
predictable. High interdependence, high non-linearity and future time frame favour wild uncertainty,
whereas low experience, for example due to a lack of data, hinders the validation of risk estimates
and may favour over-confidence.


Figure 4 Terra incognita map with selected flood risk assessment topics. a) 100-year flood discharge for
river gauge with long observation time series, b) 10.000-year flood discharge for river gauge, c) 100-year
flood discharge at given river gauge for far future period, d) probability and characteristics of dike failure
under hydrological load, e) direct damage to buildings at the community level, f) far-area and/or long-term
consequences of floods.
Estimating the 100-year flood discharge for a river gauge with long observation time series (Figure 4a)
is a task in the area of mild uncertainty. Rich observations at the study site allow using simpler
estimation methods, and allow, to some extent, validation of the estimate. Although there may be
many non-linear processes during the 100-year flood event in the catchment, the integration of these
processes in the flood hydrograph usually results in a rather tame behaviour. More demanding is the
task of estimating the 10.000-year flood discharge (Figure 4b). The processes leading to a 10.000-
year flood may be qualitatively different from those leading to a 10-year or 100-year flood. Runoff
generation processes may differ. Retention basins and dikes upstream of the gauge may fail, leading
to increased non-linearity in the runoff. Experience of such situations is usually not available.
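As an aside, the 'simpler estimation methods' available in case (a) can be sketched as follows: fit an extreme value distribution to a long annual-maximum series and read off the 100-year return level. The series here is synthetic, and a real study would also compare candidate distributions and attach uncertainty bounds (e.g. by bootstrapping).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic 80-year annual maximum discharge series (m3/s); stands in for gauge data.
annual_max = stats.gumbel_r.rvs(loc=400, scale=120, size=80, random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max)      # fit Gumbel (EV type I) by maximum likelihood
T = 100                                          # return period in years
q100 = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
print(f"estimated 100-year discharge: {q100:.0f} m3/s")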
Even more demanding may be the task of estimating the 100-year flood discharge at a given river
gauge for a time period some decades in the future (Figure 4c). The answer to this task would
require representing the dominant drivers of change that would affect the 100-year discharge in the
future, including human-induced climate change, natural climate variability, land-use changes in the
catchment, changes in technical flood defence, changes in the river morphology, etc. Although
climate change impact studies routinely estimate changes in hydrological extremes, such studies
must be seen (at least today) as hypotheses which are based on very strong and possibly wrong
assumptions.
The assessment of the probability and characteristics of dike failure under hydrological load (Figure
4d) is a field which has not received much attention. The general failure modes are known; however,
random effects, such as local dike weaknesses or preferential flowpaths in the dike, and threshold
processes impede the quantification of failure characteristics, such as time, location and width of the
breach.
The estimation of direct damage to buildings estimated at the community level (Figure 4e) seems
rather in the area of mild uncertainty. Although Figure 1 shows that the assessment of building
damage may be very uncertain, this uncertainty is expected to decrease with increasing number of
flooded objects within a community (Merz et al., 2004). The estimation of the damage to single
buildings or to a small sample of buildings is plagued by high object-to-object randomness. Hence,
there is hope that the random error dominates over the systematic error, and that the central limit
theorem applies.
In contrast, far-area consequences, such as business interruption at distant locations, and long-term
consequences of floods, such as loss of trust in the government, changes in legislation, or
changes in the safety demands of the public (Figure 4f), are very difficult or impossible to predict. Such
higher-order consequences are characterized by high interdependence in space, in time, and
between elements, and by high non-linearity. The earthquake in March 2011 in Japan may serve as
a well-known example of a natural extreme event having far-reaching and unexpected implications.
It triggered a whole chain of events (tsunami, nuclear power accident, etc.), leading, for example, to
a complete change in the energy policy of Germany within a few months after the event.
3.3 Terra maligna: Areas where we get hurt by what we don't know
Taleb (2007) stresses that the consequences of some errors are benign, whereas the consequences of
others are devastating. He warns against depending on predictions that may have large-scale
harmful consequences if they are wrong: evaluate predictions and models not according to their
plausibility but by the harm they may cause. Following this proposal requires drawing another map,
namely a map which shows the areas where consequences are disastrous if our prediction is wrong.
The selected flood risk assessment questions are plotted in Figure 5 in a 2-dimensional graph, one
axis representing the uncertainty associated with their quantification, and the other axis showing
their potential for disastrous consequences if the risk assessment is wrong. The coordinate on the
uncertainty axis can be regarded as a qualitative projection of the respective limits-of-knowledge
map on a 1-dimensional axis: Questions located closer to the centre of the limits-of-knowledge map
are closer to the area of wild uncertainty.


Figure 5 Terra maligna map with selected flood risk assessment questions.
The question of estimating a) the 100-year flood discharge as basis for flood zonation, c) the 100-
year flood discharge for far future period as basis for flood design, and e) the direct building
damage at the community level for planning flood defence are located at a relatively low position
close to tame consequences. The rationale behind this evaluation is that the potential for surprises
and dramatic consequences in case of a wrong estimate is rather small. For instance, flood design in
Germany is usually oriented at the 100-year flood level. In case of a strong over- or underestimation
of the 100-year flood, the implications are mainly of an economic type. More or less money is spent
on flood defence than would be the case if the estimate were correct. The dominance of the
100-year flood as a safety target has developed historically and is somewhat arbitrary; there have
been no explicit deliberations, e.g. showing that the 100-year flood safety level is the most
appropriate level, that resulted in this dominance. Similarly, errors in estimating the direct
building damage should lead to over- or underestimation of the economic consequences of
inundation scenarios, but not to dramatic consequences.
Errors in the 10.000-year flood, estimated for the purpose of appraising the safety of a large dam
(case b), may have a higher potential for disastrous consequences. Such a flood is seen as some kind
of worst case event, and consequently, failure scenarios and emergency measures may be developed
on the base of this estimate. A severe underestimation may lead to the neglect of failure possibilities
with potentially disastrous consequences downstream. Likewise, large uncertainties in the
estimation of dike failure (case d) may lead to significant adverse surprise, if estimated failure
scenarios are used, for instance, for triggering evacuation. Higher-order consequences (case f) are
located highest on the vertical axis, since floods have the potential to lead to disastrous, completely
unexpected consequences in regions not directly affected by the flood and long after the occurrence
of the flood.
4 Implications for flood science and flood risk management
If we accept that our knowledge is limited, then a sensible approach is to understand the
consequences of these limitations before our limited knowledge leads to a large-scale disaster. An
attempt is made to apply this idea to the field of flood risk assessment. Two maps are proposed with
the aim of illustrating the limits of knowledge and the areas where our limitations hurt. This
discussion suggests that there is a whole spectrum of uncertainty, from mild uncertainty where we
can predict with decent skill to wild uncertainty where prediction may be futile. More importantly,
the idea is brought forward that in some instances even large uncertainty may not be a dramatic
problem, because a large error in our risk estimate would not take us by surprise. Somewhat
disturbing is the suggestion that there are questions which are associated with very large uncertainty
and that, at the same time, have the potential to lead to dramatic negative impacts. These are the Black
Swans of Taleb (2007) in flood risk assessment.
It is interesting to set the thematic distribution of scientific work in the flood risk field against these maps. There
is a vast amount of studies on some aspects; examples are flood frequencies, impact of climate
change on flood peaks, hydraulic inundation modelling. Other aspects are rarely investigated, such
as higher-order consequences of flooding or complex failure situations (e.g. an earthquake
destabilizing a dike, or woody debris clogging a dam). Much more work has been invested
in areas which tend to be characterized by mild uncertainty and/or tame consequences if erroneous
than in areas of wild uncertainty and/or of potentially dramatic consequences. These thoughts may
contribute to a discussion what the important issues are. Should we spend more time on topics in
terra maligna and less in terra benigna? This would mean to change course in flood risk research,
for example, from flood frequency to second-order consequences, from small impact to large
impact events; from single processes to coupled processes.
The discussion of terra incognita and terra maligna for flood risk assessment is a shift in perspective.
The traditional approach is to attempt to quantify uncertainty and to reduce uncertainty. The
position of Taleb (2007) that some events (e.g. the probability and impact of a war) cannot be
quantified and that there are no typical events in highly interdependent and non-linear environments
with feedback loops is to some extent also valid for extreme floods and their consequences. This
can be taken as a starting point for flood mitigation: Accept that some important aspects cannot be
quantified and invest in preparedness, risk management and risk aversion. Consider, for instance,
high impact/low probability events. Usually, they do not play an important role in risk assessments
because the traditional risk definition (expectation of damage) neglects such events (Merz et al.,
2009). However, such events may be the Black Swans that have the potential to be really disastrous.
References
Bunn, D. W. & Salo, A. A. (1993). Forecasting with scenarios. European Journal of Operational Research, 68, 291–303.
Hall, J. & Anderson, M. (2002). Handling uncertainty in extreme or unrepeatable hydrological processes – the need for
an alternative paradigm. Hydrological Processes, 16, 1867–1870.
Hammitt, J.K. & Shlyakhter, A.I. (1999). The expected value of information and the probability of surprise. Risk
Analysis, 19 (3), 135–152.
Kahneman, D., Slovic, P. & Tversky, A. (eds.) (1982). Judgement under uncertainty: Heuristics and biases, Cambridge
University Press, Cambridge.
Merz, B., Kreibich, H., Thieken, A., Schmidtke, R. (2004). Estimation uncertainty of direct monetary flood damage to
buildings. Natural Hazards and Earth System Sciences, 4, 153–163.
Merz, B., Elmer, F., Thieken, A. H. (2009). Significance of high probability/low damage versus low probability/high
damage flood events. Natural Hazards and Earth System Sciences, 9, 1033–1046.
Merz, B. & Thieken, A. (2009). Flood risk curves and uncertainty bounds. Natural Hazards, 51, 437–458.
Merz, B., Hall, J., Disse, M., Schumann, A. (2010). Fluvial flood risk management in a changing world. Natural
Hazards and Earth System Sciences, 10, 509–527, www.nat-hazards-earth-syst-sci.net/10/509/2010/.
Murphy, A. H. & Winkler, R. L. (1977). Reliability of subjective forecasts of precipitation and temperature. Applied
Statistics, 26, 41–47.
Nickerson, R. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2, 175–220.
Taleb, N. N. (2008). The Black Swan: The impact of the highly improbable, Penguin Books, London.
Thompson, K.M., Segui-Gomez, M. & Graham, J.D. (1999). Validating analytical judgments: the case of the airbag's
lifesaving effectiveness. Reliab. Eng. Syst. Safety, 66, 57–68.







Critical materials for low-carbon infrastructure: the analysis of
local vs. global properties
Phil Purnell¹, David Dawson¹, Katy E Roelich¹,², Julia K Steinberger²,³ and Jonathan Busch²
¹Institute for Resilient Infrastructure, School of Civil Engineering, University of Leeds, Leeds, LS2 9JT, UK.
E-mail: P.Purnell@leeds.ac.uk, D.A.Dawson@leeds.ac.uk, K.E.Roelich@leeds.ac.uk
²Sustainability Research Institute, School of Earth & Environment, University of Leeds.
Email: J.K.Steinberger@leeds.ac.uk, J.Busch@leeds.ac.uk
³Institute of Social Ecology, Vienna, Alpen-Adria University, Austria

Abstract
Introducing new technologies into infrastructure (wind turbines, electric vehicles, information
systems, low-carbon materials, etc.) will change its materials mix. Many of the new materials
required by infrastructure are 'critical': their supply is highly likely to be subject to disruption
owing to combinations of limited geological reserve, geopolitical instability, environmental issues,
increasing demand and limited substitutability and recyclability. Other materials not currently
considered critical may become so if introduced into infrastructure, owing to the giga-tonne scale of
its annual growth. This potentially poses significant risk to the development of a resilient, low-
carbon infrastructure. Analysis of this risk is the subject of increasing study. One previously
overlooked aspect of this study is the relationship between the properties that determine the
selection or commissioning of a material, component or technology (the local properties) and the
overall vulnerability of the system (the global property). Many materials or technologies have
properties that vary considerably and treating them as elements having fixed properties overlooks
the possibility that there may be optima within the local-global variable space that could be
exploited to minimise vulnerability whilst maximising performance. In this study, we present a
framework for such analysis and a review of studies that have informed our approach. We define a measure of relative materials criticality (RMC) in terms of the fraction of critical material, national-scale consumption of the material, a pre-defined criticality index and an output parameter. The analysis is
applied to the case study of a wind turbine generator, both at a materials level and a component
level. Preliminary analysis suggests that even where the introduction of critical materials (in this
case, rare earth metals) enhances technical performance by up to an order of magnitude, the
associated increase in criticality may be two or three orders of magnitude. Analyses at the materials and component levels produce rather different results, suggesting that design decisions should be based on analysis at several levels. The relative materials criticality values derived here should be treated as preliminary, as the relationships between its component parameters and the probability of supply disruption are not known with confidence. Nonetheless, this analysis serves to highlight the
importance of analysing the introduction of critical materials into infrastructure and introduces a
methodology for further development.
1 Introduction
In response to sustained criticism regarding the crumbling condition of the UK infrastructure (e.g. CST, 2009; ICE, 2009), the Government initiated a series of National Infrastructure Plans (HM Treasury, 2010; HM Treasury, 2011), proposing long-term upgrades amounting to over £250 billion.
It is also recognised that this enhanced infrastructure must be designed to drive and enable our
transition towards a low-carbon economy in a changing climate. Infrastructure must undergo a
technological transformation: radically increasing the proportion of electricity generation from low-
carbon electricity sources, such as wind and nuclear power; introducing electric vehicles and their
associated recharging infrastructure; preparing physical infrastructure for more intense loading from
weather conditions; enhancing both the capacity and controllability of the electricity network (the
smart grid); changing the balance of rail, road and water-borne freight; and exploiting low-carbon
bulk materials (steel, concrete, etc.) for large civil engineering artefacts (HM Government, 2009;
HM Government 2010a).
The scale and pace of the proposed upgrades to infrastructure will certainly place pressure on
traditional bulk material resources such as metals, aggregates and cement. The changes to the
nature of infrastructure, via the widespread introduction of a novel mix of generation, motive, control
and information technologies will introduce materials and components to the infrastructure that
were previously not required. Some of these are described as 'critical', at risk of supply chain disruption and difficult to substitute (European Commission, 2010). These include various rare
earth metals used in permanent magnets for electric vehicles and wind turbines, or chromium used
in stainless steel components for nuclear power stations or high-resilience reinforced concrete.
Others, although not currently considered critical, could become so if introduced into infrastructure
owing to the sheer scale of the total output (gigatonnes per year). These may include lithium for
electrical storage cells used in distribution grids and vehicles, or magnesium compounds used in
low-carbon cements. As well as our growing internal requirement for these materials, external
demand from rapidly growing economies is expected to increase by 500–1000% (Graedel and Cao,
2010).
Without consideration of this criticality, the roll-out, operation and maintenance of low-carbon
infrastructure will become vulnerable to disruption of the supply of these materials and components.
While the constraints to technological progress imposed by e.g. critical metals have received
extensive academic attention (e.g. Kleijn et al., 2011; Moss et al., 2011), few if any credible
scenarios for implementation of widespread low-carbon technology explicitly consider criticality.
Engineers faced with designing our new infrastructures are at worst ignorant of criticality, and at
best bereft of the modelling tools required to model it. Thus, choices are likely to be made that,
while reducing carbon emissions per unit of performance (e.g. kWh supplied, miles travelled or
water purified), could lock us into technologies that become prohibitively expensive or simply
impossible to commission, operate or maintain. This, overall, will reduce the sustainability,
adaptability and resilience of our infrastructure.
Clearly, we ought to be able to keep track of important materials and components in infrastructure.
At the least, we need to know the stock already contained therein, how much flows in (as
components for new infrastructure or maintenance, retrofit and repair of old infrastructure) and how
much flows out (as waste, recoverable or otherwise). The process can be operated 'bottom up', where information on stocks is analysed and flows inferred from changes in the (usually annual)
data, or 'top down', where stocks are calculated from differential analysis of inflows and outflows. The traditional accounting tool used for this type of analysis is called 'stocks & flows' (S&F) modelling. This is very useful for analysing the quantities of a single substance moving through a system, particularly national economies (Spatari et al., 2002; Binder et al., 2006). However, we also
need to know where the material or component is and when it enters and leaves the system, in order
that resources can be targeted and possibly recovered. Accordingly, some studies are now adding
information on long-term changes in materials flow (e.g. Brattebø et al., 2009), basing analysis on
stock dynamics (e.g. Sonigo, 2011) and/or adding 4D spatio-temporal data layers (e.g. Tanikawa
and Hashimoto, 2009). These studies are focussed on national or city scale analyses, rather than
analysing infrastructure systems. Müller (2006) studied housing in the Netherlands using a stock
dynamics based approach focussed on the services enabled by the materials stocks, rather than
stocks themselves; this allows technology-level interventions, such as substitutions, to be analysed.
A study in progress by the SuRe-Infrastructure group (e.g. Busch et al., 2012; Roelich, 2012a) is
radically extending S&F modelling, using the work of Müller as a starting point. The study is
redesigning S&F to specifically analyse infrastructure transition in response to resilience or low-
carbon agendas, adding layers of dynamic information on historic and future stocks and flows, the
vulnerability of the future material inflows to materials criticality, and the properties or 'quality' of
the materials and components to inform reuse and recycling. The new model disaggregates the
system stock into infrastructure, technology and materials levels; the technology stocks are further
sub-divided into multiple (as necessary) structures and components. At its present stage of
development, the model takes two top-level inputs (technology roll-out scenarios, and the materials mix required for each technology) and produces plots of material (or component) requirements vs.
time for each material (or component) of interest. These are then compared with current material
supply data (e.g. import or consumption figures) to provide insight into likely material supply
bottlenecks. Figure 1 shows the projected lithium required (used in Li-ion battery technology) to
fulfil the roll-out of UK passenger vehicles up to 2050, based on politically accepted scenarios. It
can be seen that within a short period, the UK requirement of lithium for this single technology will
represent a multiple of the current UK consumption of lithium. This raises questions regarding the
sustainability of current scenarios and the future criticality of lithium, which is not currently
considered as a critical material.
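To make the calculation behind Figure 1 concrete, the sketch below (Python) derives an annual material requirement curve from a technology roll-out scenario and a per-unit material intensity, and compares it with current national consumption. All figures in the sketch (vehicle numbers, lithium content per vehicle, current UK consumption) are illustrative placeholders, not the values used by Busch et al. (2012) or reported by Bide et al. (2011).

```python
# Minimal sketch of the material-requirement calculation that underlies Figure 1.
# All numbers are illustrative placeholders, not data from the study.

def material_requirement(rollout, kg_per_unit):
    """Annual material demand (tonnes/yr) from new units deployed each year."""
    return {year: units * kg_per_unit / 1000.0 for year, units in rollout.items()}

# Hypothetical EV roll-out scenario: new electric vehicles registered per year.
ev_rollout_low = {2020: 50_000, 2030: 400_000, 2040: 900_000, 2050: 1_200_000}

LI_KG_PER_VEHICLE = 8.0        # assumed lithium content of one EV battery pack (kg)
UK_LI_CONSUMPTION_T = 1_000.0  # assumed current total UK lithium consumption (t/yr)

demand = material_requirement(ev_rollout_low, LI_KG_PER_VEHICLE)
for year, tonnes in demand.items():
    print(f"{year}: {tonnes:,.0f} t/yr "
          f"({tonnes / UK_LI_CONSUMPTION_T:.1f}x current consumption)")
```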

Figure 1 Lithium required for new electric vehicles in the UK (tons per year, 2015–2050), calculated according to the model of Busch et al. (2012) using UK government Department of Energy and Climate Change (DECC) MARKAL baseline technology roll-out scenarios, based on economic cost optimisation (HM Government, 2010b, 2011). 'Low' and 'High' roll-out scenarios are shown against total UK lithium consumption (imports) in 2010 according to Bide et al. (2011).
1.1 Criticality
Analysis of criticality is more sophisticated than simply comparing current and projected supplies.
Roelich (2012b) identifies six issues that will affect the probability of supply disruption for a given
material and/or component:
1. Geological reserves – the balance between consumption, reserves, reserve base and
recoverability is complex and driven by economic factors, often with very long lag times
between market stimuli and response since mining operations are expensive and time-
consuming to commission. Many critical materials are produced as co-products of other
mining operations and not mined in their own right, adding further complexity.
2. Geopolitics – the geographic concentration of materials in particular countries can seriously impede supply and disrupt normal market operations. The supply of many critical materials is concentrated in politically unstable states (e.g. cobalt in the Democratic Republic of Congo); other jurisdictions may restrict supplies of critical materials for political or economic reasons (e.g. China's autumn 2010 restrictions on exports of rare earth metals to Japan).
3. Increasing demand – as described above, introducing new materials into a system with the
unparalleled scale of infrastructure can place severe pressure on critical materials and push
previously abundant materials into scarcity. In addition, the pace of development in Asia
and Africa, and the global move towards a low-carbon infrastructure, increases competition
for resources from overseas. Many critical materials are used in competing high volume,
high specific value sectors (e.g. IT) that can afford to pay a higher premium than
infrastructure projects.
4. Environmental impact – production of many materials and components, particularly those requiring metallic elements, can result in significant discharge of pollutants to air, water and waste repositories; it also consumes energy (mostly from fossil fuels, contributing to resource depletion and CO2 emissions) and water. As ore grades decline, the impacts induced and energy expended during extraction of the metals increase rapidly. Increasingly stringent environmental legislation adds further costs and barriers to production.
5. Substitutability – economic theory dictates that a material which has become scarce (and thus costly) will be substituted by another. This is not always possible, especially for the highly refined materials used in modern infrastructure (e.g. rare earth magnets); where it is possible, the substitute candidates are likely to be as critical as the original material.
6. Recyclability – a material or component that is recyclable can be re-used or recovered from the infrastructure at the end of the life of the component of which it forms a part. The more recyclable the material (taking into account variables such as technical feasibility, availability of facilities, knowledge of material location and exit time, etc.), the less vulnerable the system is to disruption of its primary supply.
Combining these myriad and complex factors into time-dependent measures of criticality, as a function of materials mix (for a material or component) and vulnerability (of an infrastructure system), to augment the simple material requirement plots is a major challenge for the project. A key
part of this analysis will be to understand the relationship between:
• the design properties of the materials and components that determine their technical performance and hence control decisions regarding their proposed introduction into infrastructure products;
• the criticality of components or technologies proposed to be introduced, engendered by their particular materials mix; and
• the change in the vulnerability of the infrastructure system to disruption of critical material supply induced by the proposed technology change.
This requires a framework for relating local and global properties, and the aim of this paper is to propose the basis for such an analysis.
1.2 Relationship between local, translation and global properties.
Interventions in infrastructure technology, at elemental, material or component scale, are made on
the basis that improvements in some combination of specific design criteria (e.g. tensile strength, magnetic energy product, or mass; defined here as the 'local primary properties') will lead to improvements in a specific property of the whole system (e.g. capital expenditure required, running costs or system capacity; defined here as the 'global primary property').
However, the detailed relationships between local and global properties are generally poorly understood. Specifically, the consequences of changing local properties on global properties other than those directly considered in the design (e.g. embodied carbon, or vulnerability to material criticality; defined here as the 'global secondary properties') are unknown. Global properties will also change according to local properties that are not necessarily central to design, defined here as 'local secondary properties', which may be strong or weak functions of the local primary properties. To understand these relationships, the 'translational properties' (the subset of local properties, primary and/or secondary, that link local and global properties) must be identified and evaluated.
Consider the hypothetical example of steel bodywork for a vehicle. In response to the requirement
to optimise its fuel consumption, a designer decides to reduce the mass of the bodywork by
specifying steel with higher tensile strength, thus requiring less steel for the same performance. This
may require a change from mild steel to alloy steel. The designer will be aware of the cost of this
more expensive high-performance steel and the budget available, which will further constrain his
design. The alloy steel will have a higher processing energy requirement, and thus higher embodied
energy and carbon. It will also contain larger quantities of elements with much higher criticality
than iron and carbon (e.g. chromium or manganese).
In this case, the local primary properties are tensile strength and the cost of steel. The global
primary properties are fuel consumption and the overall cost of the vehicle. The translational
properties are the density of the steel (a very weak function of the tensile strength) and the cost of
the steel (a stronger function of the tensile strength, and also a local primary property). There is of
course a multiplicity of secondary properties, but only those useful for evaluating global properties
(i.e. those that can act as translational properties) will be of interest. If the lifetime and embodied
carbon of the vehicle are the global secondary properties of interest, then the translational properties
required would include the corrosion resistance and embodied carbon of the steel, ideally as
continuous functions of the tensile strength. (Note that for many systems, issues related to
environmental impact such as embodied CO2 (eCO2) are analysed as secondary global properties).
Thus the attributes of 'primary', 'secondary' and/or 'translational' are not intrinsic to a property, but rather a function of the study of the system at hand.
Properties are not restricted to the materials level. At the component or technology level, the
relationship between the local primary properties dominating the design and the global properties of
the system is still of interest. For example, the engine (local) for the vehicle (global) will be chosen on the basis of the relationship between the global primary properties of speed, acceleration and economy, and local primary variables such as power output and torque curves; translational variables would include mass and rotational inertia. Fuel consumption per unit power output would be both a primary and translational property. At the component or technology level, however, property relationships will not generally be continuous functions as at the materials level, but discrete values for each artefact.
It follows that property relationships have to be tracked through the materials, component, structure
and technology levels of the system in order that the effect on global variables of interest can be
properly determined. Interventions at any level will cascade in both directions and modelling the
local-global property relationships could avoid unintended consequences.
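The steel-bodywork example above can be written down as a small chain of functions, which is one way a local–translational–global search could be set up in practice. The sketch below is purely illustrative: the functional forms and every coefficient are assumptions invented for this example, not material data.

```python
# Sketch of a local -> translational -> global property chain for the hypothetical
# steel bodywork example. All relationships and numbers are illustrative only.

def bodywork_mass(tensile_strength_mpa, baseline_mass_kg=300.0, baseline_strength=250.0):
    # Translational relationship: higher strength allows thinner sections, so mass
    # is assumed to fall roughly in inverse proportion to strength.
    return baseline_mass_kg * baseline_strength / tensile_strength_mpa

def embodied_carbon(tensile_strength_mpa, kgco2_per_kg=2.0):
    # Global secondary property: alloy steels assumed to carry a modest
    # embodied-carbon penalty per kg that grows with strength.
    penalty = 1.0 + 0.001 * (tensile_strength_mpa - 250.0)
    return bodywork_mass(tensile_strength_mpa) * kgco2_per_kg * penalty

def fuel_consumption(tensile_strength_mpa, base_l_per_100km=6.0):
    # Global primary property: fuel use assumed to scale weakly with bodywork mass.
    return base_l_per_100km * (0.9 + 0.1 * bodywork_mass(tensile_strength_mpa) / 300.0)

for strength in (250, 400, 600):  # mild steel -> alloy steels (illustrative grades)
    print(strength, round(bodywork_mass(strength), 1),
          round(embodied_carbon(strength), 1), round(fuel_consumption(strength), 2))
```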
1.3 Comparing properties
Most investigators, when considering the national stocks and flows of strategically important materials with widely varying compositions and properties (such as concrete, steel or plastics), treat these materials as 'effectively elemental'; i.e. it is implicitly or explicitly assumed that each has a single set of local properties. Müller (2006) used a dynamic stocks and flows model to examine the impact of past and projected concrete usage in the Netherlands on consumption of mineral resources and production of waste. In this work, it was for example assumed that the cement content of concrete has remained (and will remain) constant (at 11%) between 1900 and 2100; the effect of the wide variation in the primary local property of concrete (i.e. the compressive strength) was not examined.
Pauliuk et al. (2012) applied a similar approach to the Chinese steel cycle, explicitly stating that the analysis 'did not differentiate between steel, cast iron and all other iron alloys'. These simplifications
are of course necessary in order that initial analysis of complex systems can be made. Adding a
layer of information to the analysis on the relationship between local and global properties would
allow opportunities to reduce the impact and improve the performance of the system by searching
the local-translational-global property space for optima. Investigators often allude to this in their
narrative, even if not including it in the formal analysis. Pauliuk et al. (2012) discuss how the change in the 'quality' of recycled steel (i.e. its content of 'tramp' elements) could affect the degree to which a closed steel cycle could be achieved, a point also recognised by those investigating anthropogenic chromium cycles (Johnson et al., 2006). Müller (2006) notes that changing the density of the concrete used in the analysis can affect the balance between the output of demolition waste and requirements for new construction. Where the variation in local properties is explicitly acknowledged, it is normally presented either as a 'data sheet' with ranges of properties (e.g. for plastics), rather than as an analysis of the local–translational property relationship, or particular metallic alloys are treated in elemental fashion (Giudice et al., 2005).
Traditional engineering analysis of the relationships between properties used for materials selection
by designers is carried out using the 'Ashby plot' (Ashby and Johnson, 2002). This relates two
desirable local indices, either single properties or combinations that allow multiple properties to be
analysed on the same plot (e.g. strength-to-weight ratio). It is generally applied to compare families
of materials.

Figure 2 Example of an Ashby diagram. Taken from http://www.tangram.co.uk/TI-Polymer-High_temperature_plastics.html, 6th June 2012.
This approach is extremely useful for narrowing down a wide choice from hundreds of materials to a few likely candidates for a given design, which can then be analysed in more detail by other methods. However, a few investigators have used it to evaluate single materials or technology choices in the context of understanding the effect of local changes (material enhancement and substitutions, or technology choices) and thus the local–translational–global property relationship. The global property of interest is usually an eco-indicator of some kind (either a single factor such as global warming potential, or composite indices derived from formal LCA studies), and thus translational properties include eCO2, recycled content, or primary energy use.
Rydh and Sun (2005) assigned materials to one of 17 groups or families according to their typology
(ferrous, non-ferrous, composites, wood etc.) and a suite of local primary properties (density, elastic
modulus and yield strength) and presented a series of 'Ashby' charts of a composite ECO'99 index vs. primary properties. However, only ranges of primary properties and the corresponding eco-indicator were presented, rather than formal functional relationships between the two. Kobayashi (2006) combined formal product design methodology with LCA-derived eco-indicators to produce 'Factor-X' charts of environmental factors vs. product value factors, with the objective of
optimising product performance whilst minimising environmental impact. The relationship between
descriptors of the product function and technical parameters of the design (i.e. translational vs. local variables) is also discussed.
Purnell (2012) analysed the variation in embodied CO2 (eCO2) per unit of structural performance for steel, reinforced concrete and timber beams and columns (a translational variable, implying that GWP was the global variable of interest) as a function of size and loading (local primary variables for structural engineering design). The complexity of the relationships uncovered (Figure 3a) demonstrated clearly that any materials comparison based on simple consideration of eCO2 per unit volume or mass is likely to be deeply flawed. Purnell and Black (2012) reported that concrete, which can vary over a wide range of compressive strength (its local primary property) from ~20–100 MPa, has a distinct optimum in the strength vs. eCO2 curve at around 50–60 MPa. They also demonstrated that over a range of 256 standard mix designs for concrete, eCO2 varied by almost an order of magnitude (Figure 3b). Thus there are considerable opportunities for minimising CO2 emissions afforded by a knowledge of the local–translational–global property relationship for structural materials.

Figure 3 eCO2 per unit of structural performance.
(a.) Long structural columns (EC_column / kgCO2 kN⁻¹ m⁻¹ vs. load capacity / kN). SC = steel UC section; CC (open) = high-strength CEM1 reinforced concrete, (closed) = 50 MPa CEM1-PFA reinforced concrete; GC = glulam timber beams. For more details see Purnell (2012).
(b.) Unreinforced concrete (eCO2 per MPa vs. target mean cube strength / MPa). Solid lines = CEM1 mixes, dashed lines = CEM1-PFA mixes. Points = data from the ICE database (Hammond & Jones, 2008). For more details see Purnell & Black (2012).
Studies concerned with scarcity, criticality or vulnerability do not typically address the properties issues at all, although they may distinguish between substitutions of elemental choices; for example, the criticality index of Graedel et al. (2012) includes both elemental substitution potential and the supply risk of the substitute element. No previous work has analysed how criticality might vary at different levels in the system (e.g. materials, component, technology or infrastructure). Thus, in this paper, we present a preliminary analysis of criticality (the translational property pertaining to vulnerability to critical supply, the global property) vs. properties at the material and component level, uncovering complex, non-monotonic relationships.
2 Methodological approach
The concept is explored using two case studies of the same technology (wind turbine generators) at two different system levels. The first, at the material level, examines various permanent magnet technologies to determine the variation of material criticality (the translational property) as a
function of a local primary property (the magnetic energy product BH_max). The second, at the component level, examines the variation in materials criticality with design options for drive train technologies. Material criticality, or vulnerability to supply chain disruption, is an important consideration in determining system vulnerability (i.e. the global property).
2.1 Relative materials criticality
In this study, we derive a relative material criticality (RMC) based on the elemental materials mix for the magnets (since the materials used in the gearboxes and ancillaries, such as steel and polymers, are generally of negligible criticality). Criticality assessments for elements have been published by various sources, but in this study we shall use as a basis the 'supply risk' advanced by the European Commission Raw Materials Supply Group (European Commission, 2010). This varies from 0–5 and combines assessment of the political-economic stability of the producing countries, the level of concentration of production, the potential to substitute and the recycling rate. It has been rebased in this study to vary from C_EC,n = 0–1 and assumed to approximate the probability of a disruption to supply for a given element over a standard time frame (note that since we are presenting relative criticality, the actual time frame is not important). Since each magnet technology employs a mix of
elements, RMC also takes into account the proportion by mass of each element p_n in a given magnet (either in terms of relative concentrations at the materials level, or tons per generator at the component level). However, to reflect the fact that the availability of various elements differs enormously (obtaining an extra ton of Nd is significantly more difficult than obtaining a similar increment of Fe, for example), these proportions are divided by a number reflecting the relative availability of each element; in this case, we have used UK import data (Bide et al., 2011) for each element, I_n. The partial contribution from each element is then added to derive an overall figure. Finally, for the component case, the difference in the outputs of the various technologies Q (in MWh per year) is taken into account in order that the functional unit remains correct.
Mathematically, this is represented (for number of different elements n) by:

RMC = \frac{1}{Q} \sum_{n} \frac{C_{EC,n}\, p_{n}}{I_{n}}     (1)
For a material level analysis, Q = 1 and the units of RMC are tons⁻¹. For a component level analysis, the summation is dimensionless and the units of RMC are determined by the nature of Q.
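A minimal numerical sketch of Equation (1) is given below (Python). The elemental proportions, criticality scores and import figures are placeholder values chosen only to show the mechanics of the calculation; they are not the data behind Figure 4.

```python
# Sketch of the relative materials criticality (RMC) calculation of Equation (1).
# All input values are illustrative placeholders, not the study's data.

def rmc(mix, criticality, imports, output_q=1.0):
    """RMC = (1/Q) * sum_n( C_EC,n * p_n / I_n ).

    mix         : {element: proportion p_n} (mass fraction, or tons per generator)
    criticality : {element: rebased EC supply-risk score C_EC,n in 0-1}
    imports     : {element: national import/consumption I_n in tons}
    output_q    : output Q (1 for a material-level analysis, MWh/yr for a component)
    """
    return sum(criticality[e] * p / imports[e] for e, p in mix.items()) / output_q

# Hypothetical Nd-Fe-B magnet, material level (Q = 1, p_n as mass fractions):
mix = {"Nd": 0.27, "Fe": 0.72, "B": 0.01}
c_ec = {"Nd": 0.8, "Fe": 0.1, "B": 0.2}                      # assumed rebased criticality scores
imports_t = {"Nd": 100.0, "Fe": 5_000_000.0, "B": 10_000.0}  # assumed national imports (t)

print(f"RMC = {rmc(mix, c_ec, imports_t):.2e} per ton")
```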
2.2 Local properties
For the materials level approach, the local primary property is BH_max, since this is the single property that most closely determines the utility of a given permanent magnet (PM) composition (Coey, 2012). Values of BH_max and the corresponding permanent magnet compositions were taken from studies by Coey (2012) and Gutfleisch et al. (2011). The magnet technologies considered are detailed in the caption to Figure 4a. Note that only permanent magnet technologies are considered, as BH_max is undefined for electro-magnets (EM). Further work is underway to develop a local primary variable common to both PM and EM technology.
For the component level analysis, the choice of local primary variable is more difficult. Wind
turbine generators are to some degree optimised for minimum weight, to reduce the static and
dynamic loads on the towers and foundations. The main contributions to overall generator weight
come from the mass of the active material (i.e. that contributing to the generation of magnetic field;
iron and copper for the EM technologies, and iron, copper and Nd-Fe-B magnet for the PM
technologies) and the mass of the gearbox. Changing drive train technology (from 3-stage gearbox,
to single stage gearbox, to direct drive; see caption, Figure 4b) may decrease gearbox weight at the
expense of active material weight and vice versa: thus the local primary property used here is active
material mass plus gearbox mass. The necessary technical information for gearbox design and
associated materials mixes were obtained from various sources (Li et al., 2009; Orbital2, 2012;
Polinder et al., 2006). Note that this component level analysis includes both PM and EM
technologies. The PM composition is assumed in this analysis to be Nd2Fe14B.
3 Results
3.1 PM technology: material level analysis
Figure 4a shows the material level analysis of PM technologies, illustrating the technological improvement in energy product (BH_max) of permanent magnets over the 20th century as we move from A to E, enabled by a combination of new processing techniques and the discovery of new materials. In assessing the relative material criticality of this evolution, the preliminary results show that supply risk varies over orders of magnitude (not fractions) and, coupled with improvements in energy product, this offers an interesting narrative.
The strontio-ferrite magnet (A) has the lowest magnetic energy product (45 kJ m⁻³) in this analysis, owing to much of the volume being occupied by large O²⁻ anions which carry no magnetic moment (Coey, 2012). The next generation of Alnico magnets (B, introduced ~1940) gave broadly similar levels of magnetic performance but reduced criticality by a factor of ~100, as a result of replacing a reliance on strontium with the less critical cobalt. However, the introduction of first-generation samarium-cobalt magnets (C1, ~1980) reverses this; introducing a reliance on the rare earth metal samarium increases energy product by a factor of four (to 160 kJ m⁻³) but intensifies the relative criticality by a factor of >1000.
Next-generation Sm-Co magnets (C2) show a more favourable design transition; a reduction in criticality (owing to reduced Sm content) is observed with an increase in energy product (to 250 kJ m⁻³, as a result of improved processing). The most recent technology advancement is the introduction of neodymium-iron-boron permanent magnets (D, ~1990), which have dominated the market over the last 10 years (Gutfleisch et al., 2011). Performance is again enhanced (by ~40%) and, despite the introduction into the materials mix of an alternative rare earth metal that attracts headlines (neodymium), criticality is reduced, since C_EC,Nd = C_EC,Sm (European Commission, 2010) and proportionally less Nd is required for D than Sm is required for C1 and C2.
The energy product of Nd-Fe-B comes within 10% of the theoretical limit for traditional PM technology and advancements are slowing (Gutfleisch et al., 2011; Coey, 2012). The final stage of the analysis presents an anticipated move from Nd-Fe-B to high-power superconducting (SC) magnets (e.g. YBa2Cu3Ox), such as those used in the Large Hadron Collider. The energy product of SC magnets is estimated to be ~850–900 kJ m⁻³ (American Magnetics, 2012). Although this can produce a ~130% improvement in energy product, the cost is a significant increase in relative criticality.

Figure 4 Relative materials criticality (on a logarithmic scale) vs. local properties.
(a.) Materials level analysis of permanent magnet technologies (relative materials criticality per kg vs. maximum magnetic energy product BH_max / kJ m⁻³): A = Strontio-ferrite; B = AlNiCo; C1 = SmCo5; C2 = Sm2Co17; D = Nd-Fe-B; E = Superconducting Y-Ba-Cu-O.
(b.) Technology level analysis of wind generator technologies (relative materials criticality per MWh/yr vs. mass of active material plus gearbox / tonnes): A = 3-stage geared electromagnet (DFIG); B = single stage geared electromagnet (GDFIG); B' = direct drive electromagnet (DDSM); C = direct drive permanent magnet (DDPM); D = single stage geared permanent magnet (GPM). For acronym details, see Polinder et al. (2006).
3.2 Generator technology: component level analysis
Figure 4b presents the preliminary result of the component-level analysis of five wind generator
technologies, namely those modelled by Polinder et al. (2006) comparing options for a 3 MW turbine. As before, the technology options, moving rightward from A through to D, are in chronological order of introduction. All five technologies produced very similar annual energy yields Q, from 7.69 GWh (A) to 7.89 GWh (C).
The transition from A to B involves a transition from a 3-stage (A) to a single-stage gearbox (B),
reducing gearbox mass from 37 to 16 tons but increasing total active material mass (to compensate
for the lower angular velocity of the rotor) from 5.2 to 11.4 tons. The small increase in criticality is
associated with the extra copper (and to a lesser extent, iron) required by the enlarged magnet. The
transition from B to C involves both a transition from a single-stage gearbox to direct drive i.e.
elimination of the gearbox and the introduction of a PM to more efficiently generate and maintain
the high magnetic fields necessary to compensate for the further reduced rotor speed. Note that the
relatively small improvement in the local primary property (the gearbox is eliminated but the total
mass of active material is 24 tons, including 1.7 tons of PM) increases criticality by a factor of
~1000 owing to the introduction of neodymium. Further development (D) involves re-introducing a
single-stage gearbox to increase the rotor speed, allowing a reduction in the quantity of active
material (to 6 tons, including 0.4 tons of PM) and concomitant reduction in criticality, at the cost of
a 16 ton gearbox.
An alternative development path from B (to B') involves a direct-drive EM generator; however, the increase in active material mass (from 11 to 45 tons) involves an increase in both criticality and total mass (as the gearbox mass saving is only 16 tons).
4 Discussion
At the material level (Figure 4a), the introduction of Nd into the system (D) appears to be justified, as it both increases performance and decreases criticality over the previous technology. Rolling back to a reduced-criticality PM technology, Alnico (B), would involve a performance penalty of a factor of ~10. However, a decision as to whether this would be balanced by the concomitant reduction in criticality (a factor of ~400) would be based on factors outside of this analysis, i.e. the relative value of criticality vs. performance. At the component level, introducing the Nd-based PM technologies has a much less dramatic effect on performance (in terms of generator weight) but a similar effect on criticality. Thus the value of the introduction of Nd is less clear-cut, although the analysis does not at this stage include other factors that contribute to the decision regarding generator technologies (e.g. that simplifying or removing the gearbox significantly reduces both maintenance costs and the frequency of incidents of generator failure).
technology in order to reduce criticality would be a far more likely design decision resulting from
the component-level analysis than it would from the material-level analysis, suggesting that a multi-
level analysis is required in order to produce a coherent design strategy concerning criticality
reduction.
The derivation of RMC as presented here is necessarily a first approximation and several of the component terms should attract further scrutiny. First, it is unlikely that the value of C_EC maps directly onto a probability of supply disruption, and alternative formulations of criticality indices need to be explored and tested in a similar framework. Secondly, the variation of RMC over several orders of magnitude is largely driven by the normalisation against I_n. This tends to swamp the effect of C_EC,n unless materials are used in similar quantities on a national scale. The implicit assumption is that if element X is imported at a rate one-tenth that of element Y, then the likelihood of being able to obtain a further fixed increment of X is one-tenth that for Y, i.e. X is ten times more difficult to obtain. This is also unlikely to be strictly true, and a more sophisticated relationship between I_n and the probability of supply disruption could possibly be derived from e.g. price elasticity data, either in a general or element-specific sense.
5 Conclusion
A framework has been presented for analysis of vulnerability to critical material supply as a
function of local properties and applied to a case-study of wind-turbine generation. Preliminary
analysis suggests that even where the introduction of critical materials (in this case, rare earth
metals) enhances technical performance by up to an order of magnitude, the associated increase in
criticality may be two or three orders of magnitude. Analyses at the materials and component levels produce rather different results, suggesting that design decisions should be based on analysis at several levels. The relative materials criticality values derived here should be treated as preliminary, as the relationships between its component parameters and the probability of supply disruption are
not known with confidence. Nonetheless, this analysis serves to highlight the importance of
analysing the introduction of critical materials into infrastructure and introduces a methodology for
further development.
Acknowledgements
The support of the Engineering and Physical Sciences Research Council (grant number
EP/J005576/1) is gratefully acknowledged.
References
Ashby, M.; Johnson, K. (2002). Materials and Design, the Art and Science of Materials Selection in Product Design;
Butterworth Heinemann, Oxford, UK. 352pp.
American Magnetics. (2012). Characteristics of superconducting magnets. http://www.americanmagnetics.com. Accessed 29th May 2012.
Bide, T., Idoine, N.E., Brown, T.J. and Smith, K. (2011) United Kingdom Minerals Yearbook 2010. British Geological
Survey: Open report OR/11/032 104pp
Binder, C.R., Graedel, T.E., and Reck, B. (2006) Explanatory Variables for per Capita Stocks and Flows of Copper and
Zinc: A Comparative Statistical Analysis, Environmental Studies, 10 (1)
Brattebø, H., Bergsdal, H., Sandberg, N.H., Hammervold, J., and Müller, D.B. (2009) Exploring built environment
stock metabolism and sustainability by systems analysis approaches, Building Research & Information, 37(5-6), 569-
582.
Busch, J., Dawson, D., Roelich, K., Steinberger, J.K., and Purnell. P. (2012). Enhancing Stocks and Flows modelling to
support sustainable resource management in low carbon infrastructure transitions. International Environmental
Modelling and Software Society (iEMSs) 2012 International Congress on Environmental Modelling and Software
Managing Resources of a Limited Planet, Sixth Biennial Meeting, Leipzig, Germany. R. Seppelt, A.A. Voinov, S.
Lange, D. Bankamp (Eds.) http://www.iemss.org/society/index.php/iemss-2012-proceedings.
Coey, J.M.D. (2012). Permanent magnets: Plugging the gap. Scripta Materialia. Available online 3 May 2012.
http://dx.doi.org/10.1016/j.scriptamat.2012.04.036.
Council for Science and Technology (CST). (2009). A national infrastructure for the 21st century. 60pp.
European Commission. (2010) Critical Materials for the EU, Report of the Ad-hoc Working Group on defining critical
raw materials, European Commission Enterprise and Industry. Version 30 July, 2010
Giudice, F., La Rosa, G. and Risitano, A. (2005). Materials selection in the Life-Cycle Design process: a method to
integrate mechanical and environmental performances in optimal choice. Material and Design, 26, 9-20.
Graedel, T.E. and Cao, J. (2010). Metal spectra as indicators of development. PNAS, 107 (49), 20905-20910.
Kleijn, R., van der Voet, E., Kramer, G.J., van Oers, L., and van der Giesen, C. (2011). Metal requirements of low-carbon power generation. Energy, 36, 5640-5648.
Graedel, T. E., Barr, R., Chandler, C., Chase, T., Choi, J., Christoffersen, L., Friedlander, E., et al. (2012). Methodology
of Metal Criticality Determination. Environmental Science & Technology, 46 (2), 1063-1070.
Gutfleisch, O., Willard, M.A., Bruck, E., Chen, C.H., Sankar, S.G. and Ping Liu, J. (2011). Material Views: Magnetic
materials and devices for the 21st Century: Stronger, lighter and more energy efficient, Advanced Materials, 32, 821-842.
Hammond, G.P.; Jones, C.I. (2008) Embodied energy and carbon in construction materials. Proceedings of the Institute
of Civil Engineers Energy 161 (2), 87-98.
HM Government, (2009) The UK Low Carbon Transition Plan: National strategy for climate change and energy. 15
July 2009, HMSO, 220pp.
HM Government, (2010a) Low-Carbon Construction Innovation and Growth Team (IGT) Final Report. Autumn 2010,
HMSO, 231pp.
HM Government, (2010b) 2050 Pathway Analysis, and calculator tool. Department of Energy and Climate Change
(DECC), HMSO, 2010.
HM Government, (2011) The Carbon Plan - Delivering a low carbon future, Department of Energy and Climate
Change (DECC), HMSO, 2011.
ICE, (2009) State of the Nation Report: Defending Critical Infrastructure, Institute of Civil Engineers (ICE), 16pp.
HM Treasury, (2010) National Infrastructure Plan 2010. HMSO.
HM Treasury, (2011) National Infrastructure Plan 2011. HMSO.
Johnson, J., Schewel, L. and Graedel, T.E. (2006). The contemporary anthropogenic chromium cycle. Environmental
Science & Technology, 40, 7060-7069.
Kara, H., Chapman, A., Crichton, T. Willis, P. and Morely, M. (2010). Lanthanides Resources and Alternatives: a
report for Department of Transport and Department for Business, Innovation and Skills. Prepared by Oakdene Hollins,
May 2010. 59pp
Kobayashi, H. (2006). A systematic approach to eco-innovative product design based on life cycle planning. Advanced
Engineering Information, 20, 113-125.
Li, H., Chen, Z. and Polinder, H. (2009). Optimization of multibrid permanent-magnet wind generator systems. IEEE
Transactions on Energy Conversion, 24 (1), 82-92, March 2009.
Moss, R., Tzimas, L.E., Kara, H., Willis, P. and Kooroshy, J. (2011). Critical Metals in Strategic Energy Technologies
Assessing Rare Metals as Supply-Chain Bottlenecks in Low-Carbon Energy Technologies, European Commission Joint
Research Centre Publication No. JRC65592. Publications Office of the European Union.
Müller, D.B. (2006). Stock dynamics for forecasting material flows - Case study for housing in The Netherlands,
Ecological Economics, 59(1), 142-156.
Orbital2 (2012). Projects. http://www.orbital2.com/index.php/projects. Accessed 31st May 2012.
Pauliuk, S., Wang, T. and Müller, D.B. (2012). Moving towards the circular economy: The role of stocks in the Chinese
Steel Cycle. Environmental Science and Technology, 46 (1), 148-154.
Purnell, P. (2012). Material nature versus structural nurture: The embodied carbon of fundamental structural elements.
Environmental Science & Technology, 46, 454-461.
Purnell, P. and Black, L. (2012). Embodied carbon dioxide in concrete: Variation with common mix in design
parameters. Cement and Concrete Research, 42 (6), 874-877.
Polinder, H., van der Pijl, F.F.A., de Vilder, G.J, and Tavner, P. (2006). Comparison of direct-drive and geared
generator concepts for wind turbines. IEEE Transactions on Energy Conversion, 21 (3), 543-550.
Roelich, K. (2012a), Undermining Infrastructure Briefing Note 1, Material Criticality. http://SuRe-
Infrastructure.leeds.ac.uk, 6pp.
Roelich, K. (2012b), Undermining Infrastructure Briefing Note 2, Stocks and Flows Modelling. http://SuRe-
Infrastructure.leeds.ac.uk, 6pp.
Rydh, C.J. and Sun, M. (2005). Life cycle inventory data for material grouped according to environmental and material
properties. Journal of Cleaner Production, 13 (13-14), 1258-1268
Sonigo, P., Turbe, A., Johansson, L., Lockwood, S., Mitsios, A., Steinberger, J.K., Wiedenhofer, D., Eisenmenger, N.,
and Maxwell, D. (2011) Large Scale Planning and Design of Resource Use, Final Report prepared for European
Commission (DG ENV), Paris: Bio Intelligence Service (BIOIS).
Spatari, S., Bertram, M., Fuse, K., Graedel, T.E., and Rechberger, H. (2002). The contemporary European copper cycle:
1 year stocks and flows, Ecological Economics, 42(1-2), 27-42
Tanikawa, H, and Hashimoto, S. (2009) Urban stock over time: spatial material stock analysis using 4d-GIS, Building
Research & Information, 37(5), 483-502.
USGS. (2012) United States Geological Survey, Mineral Resources, mineral commodities summaries: accessed from
http://minerals.usgs.gov/minerals/pubs/mcs/ on the 22/01/12.




Modelling of Evolving Cities and Urban Water Systems in
DAnCE4Water
C. Urich¹, P. M. Bach², R. Sitzenfrei¹, M. Kleidorfer¹, D. T. McCarthy², A. Deletic² and W. Rauch¹

¹ Unit of Environmental Engineering, University of Innsbruck, Technikerstr. 13, Innsbruck 6020, Austria
² Centre for Water Sensitive Cities, Civil Engineering Department, Monash University, Clayton VIC 3800, Australia
E-mail: christian.urich@uibk.ac.at
Abstract
Urban water systems are under increasing pressure due to the impacts of climate change, population growth and urbanisation. Conventional water infrastructure is frequently classified as highly unsuited to
address future challenges. In order to make our urban water systems more resilient to these
challenges, the development of new water management strategies is vital. During the last 20 years,
many new decentralised technologies have emerged. Their mix with existing centralised
technologies in particular creates complex interactions within the urban water system. To deepen
our understanding of these interactions at a city scale and to identify possible transition strategies towards a resilient city, the development of the DAnCE4Water model within the project PREPARED ('enabling change') is proposed as a potential strategic planning tool. Within the model, three major modules are linked, under consideration of their complex interactions, to simulate an entire urban water system. The modules are (I) the urban development module (UDM) to evolve the urban environment, (II) the biophysical module (BPM) to generate the urban water infrastructure and assess its performance (Bach et al., 2011) and (III) a societal transition module (STM) to assess the
societal system (de Haan et al., 2011) and its future development. This paper focuses on the design
of the dynamics and interaction within the urban development module and the biophysical module
of DAnCE4Water. The dynamics of the modules are illustrated using 'virtual' case studies based on
the dataset of Innsbruck (Austria). This includes the development of the urban environment and of
the drainage system. The paper presents the state of the development after the first project phase. In
the next project phase the focus is set on the integration of the societal transition module as well as
the integrated performance assessment. After initial testing and validation of the DAnCE4Water tool, it will be applied to, and validated on, Greater Melbourne.


1 Introduction
Conventional urban water systems have served society well in industrialized countries for more
than 150 years, but the fundamental concept is becoming compromised by climatic and urban
changes. Sustainability has become an important topic in urban water management and asset
management. Thereby, decision makers have to balance environmental and social impacts, risks and
investment costs. While the urge to transition towards sustainable solutions in urban water
management is still low in areas with an abundance of water, both water scarce areas as well as
cities under development are facing the necessity to question these fundamental principles. The main problem is that conventional water infrastructures (especially the transportation networks, with their long lifetimes) are too inflexible to address future challenges (Ashley et al. 2005). To make our urban water systems more sustainable and adaptable to future challenges, the development of new water management strategies is vital. In the last 20 years many new decentralised technologies have emerged, especially in stormwater management (Low Impact Development, LID, in the United States, or Water Sensitive Urban Design, WSUD, in Australia). Most importantly, these concepts
integrate stormwater management into urban design. But, as highlighted by Wong et al. (2009), not only is a technical overhaul of the conventional system required, it is also necessary to question the existing socio-political environment to enable sustainable and water-sensitive decision making and behaviours. Similar ideas have also emerged for managing water at a catchment scale, where Wagener et al. (2010) stress the necessity of considering human-induced changes in natural systems. Numerous small-scale projects have been successfully delivered, but how to achieve the transition from a conventional urban water system to an adaptable and sustainable one at a city scale is still unknown. In particular, the mixture of existing centralised and novel decentralised systems causes complex
interactions within the urban water system. To deepen our understanding of the interactions on a
city scale, and to identify possible transition strategies, new analysis tools are required. Work Area 6 of the PREPARED project, which focuses on enabling much-needed change towards more sustainable urban water systems, aims to deliver such a tool, one that will embody the synergies between social, urban and water systems modelling. This software, known as DAnCE4Water (which stands for Dynamic Adaptation for eNabling City Evolution for Water), is a decision-support tool for urban planners, government, watershed managers, water utilities and local councils.
This paper aims to present an overview of the DAnCE4Water model with a focus on the urban development and evolution of the urban water systems. The dynamics of the modules are illustrated using 'virtual' case studies based on the dataset of Innsbruck (Austria). This includes the
development of the urban environment and of the drainage system. The paper presents the current
state of the development. In the next project phase the focus is set on the integration of the societal
transition module as well as the integrated performance assessment. After initial testing and validation of the DAnCE4Water tool, it will be applied to, and validated on, Greater Melbourne.
2 Framework
DAnCE4Water is designed as a software tool that enables a wide variety of stakeholders to explore
possible future scenarios and the consequences of policies and action strategies on the development of urban water infrastructure for supply, drainage and sewage. 'What-if' scenarios for the urban water
system can be investigated in a dynamically evolving environment, which considers the interactions
between urban water infrastructure, urban environment and the societal system in space and time.
Users can identify sustainable and reliable adaptation strategies for the urban water system.
DAnCE4Water has three key modules to simulate the urban system and its future development. They can be run independently of one another (Figure 1). The modules are (I) the urban development module (UDM) to evolve the urban environment, (II) the biophysical module (BPM) to generate the urban water infrastructure and assess its performance (Bach et al. 2011) and (III) a societal transition module (STM) to explore the societal system (de Haan et al. 2011). The model also has a Conductor, which manages the flow of information between the input/output interface and the modules.
Figure 1 DAnCE4Water Model Framework
The Societal Transition Module (STM) simulates the influence of society on the evolution of the urban water system. Within this module, an urban water system is considered to be composed of several so-called 'constellations', each representing a water servicing solution (e.g. different technological solutions for water supply, drainage and sewage) in terms of infrastructure and institutions. Different water servicing solutions, and therefore different constellations, have a different relative influence on the urban water system and therefore different 'power'. The way the urban water system develops over time is considered to be an interplay between external influences, societal needs and the power dynamics in the system.
The Urban Development Module (UDM) spatially translates the population projections and the
master plan for the urban development (e.g. growth corridors) in time steps of one year. The well-
known and widely applied UrbanSim package (Waddell 2002) is integrated in the DAnCE4Water
model and enables a detailed projection of the urban environment including not only future land use and population, but also numbers of households, housing types, etc. UrbanSim is a software tool
that is designed to reflect the interdependencies in dynamic urban systems, focusing on the real
estate market and the transportation system. The model reflects the key decision makers –
households, businesses and developers – and choices impacting urban development (The UrbanSim Project 2011).
The Biophysical Model (BPM) has three primary purposes: (1) adequate spatial representation of
the information provided by other modules, (2) adapting existing and placing new water
infrastructure into the urban environment and (3) assessing the performance of the region. Output
from this module comprises a map of the region containing existing and new infrastructure
information as well as a set of performance indicators. These will feed back into the societal
transitions and urban development modules in the next development cycle.
Each module simulates how external drivers, such as climate change, urban development and societal changes, impact on the development of the urban water system (this includes water supply, sewage and drainage services). These external drivers (climate and demographic change, etc.), as well as the hypotheses that are tested (e.g. adaptation strategies), are defined in the 'Scenario'.
Scenarios for future system developments form the key inputs that drive the system dynamics.
Scenarios of development plans (e.g. urban growth patterns, demographic projections), contextual
trends (e.g. climatic, social, economic, political), societal needs (e.g. water security, flood
protection, ecosystem protection, urban amenity) and policy experiments (e.g. options for strategic
action) are quantified as functions of time and defined by the user as model inputs.
The scenarios are fed into the Conductor by the Scenario input module. The Conductor orchestrates the simulation and evolves the urban water system into the future. The simulation time step is determined by the underlying modules but is at least in annual increments. The results are fed back into the Conductor and presented to the user via the reporting and presentation module.
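As an illustration of how the Conductor could orchestrate the three modules over an annual simulation loop, a minimal sketch is given below (Python). The class structure and method names (drivers_for, step, etc.) are invented for this sketch; they are not the actual DAnCE4Water interfaces.

```python
# Minimal sketch of the Conductor's annual simulation loop. Module interfaces and
# method names are hypothetical; they are not the actual DAnCE4Water API.

class Conductor:
    def __init__(self, scenario, stm, udm, bpm):
        self.scenario, self.stm, self.udm, self.bpm = scenario, stm, udm, bpm
        self.results = {}

    def run(self, start_year, end_year):
        performance = None  # indicators from the previous annual cycle, if any
        for year in range(start_year, end_year + 1):
            drivers = self.scenario.drivers_for(year)             # climate, demography, policy
            strategy = self.stm.step(year, drivers, performance)  # societal response
            city = self.udm.step(year, drivers, performance)      # evolved urban environment
            performance = self.bpm.step(year, city, strategy)     # infrastructure + indicators
            self.results[year] = performance                      # feeds reporting/presentation
        return self.results

# Trivial stand-in modules so the loop can be demonstrated end to end.
class Scenario:
    def drivers_for(self, year):
        return {"year": year, "population_growth": 0.01}

class Module:  # stand-in for the STM, UDM and BPM
    def step(self, year, *context):
        return {"year": year}

conductor = Conductor(Scenario(), Module(), Module(), Module())
print(conductor.run(2015, 2017))
```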

3 Urban development module
To model the urban environment, the well-known and widely applied UrbanSim package (Waddell 2002) has been integrated in the urban development module. UrbanSim is a software tool that is designed to reflect the interdependencies in dynamic urban systems, focusing on the real estate market and the transportation system. The model reflects the key actors (households, businesses and developers) and the choices impacting urban development (The UrbanSim Project 2011). UrbanSim evolves the urban environment into the future in annual time steps, based on a master plan and on population and demographic projections. UrbanSim is an open source project developed at the Center for Urban Simulation and Policy Analysis at the University of Washington.
Key features of the UrbanSim Model System (The UrbanSim Project 2011):
• The model simulates the key decision makers and choices impacting urban development; in
particular, the mobility and location choices of households and businesses, and the
development choices of developers
• The model explicitly accounts for land, structures (houses and commercial buildings), and
occupants (households and businesses)
• The model simulates urban development as a dynamic process over time and space, as
opposed to a cross-sectional or equilibrium approach
• The model simulates the land market as the interaction of demand (locational preferences of
businesses and households) and supply (existing vacant space, new construction, and
redevelopment), with prices adjusting to clear the market
• The model incorporates governmental policy assumptions explicitly, and evaluates policy
impacts by modelling market responses
• The model is based on random utility theory and uses logit models for the implementation of
key demand components
• The model is designed for high levels of spatial and activity disaggregation, with a zonal
system identical to travel model zones
• The model presently addresses both new development and redevelopment, using parcel-level detail
As shown in Figure 2, the following components are used to evolve the urban environment into the future.
First, the policy assumption component introduces government policies such as land use plans and growth
boundaries. Next, the real estate development model develops new buildings and redevelops existing
ones. The value of the buildings is calculated by the real estate price model. Based on the
population and job projections, the total number of households and jobs is calculated for every year.
The total number of households is used by the control total component to create or remove
households from the simulation. The relocation choice model relocates jobs and households according to
their characteristics. Unplaced households and jobs pick a vacant real estate unit based on their
characteristics (location choice model). The model evolves the urban environment in an annual time
step.
Figure 2 UrbanSim Data Flow (The UrbanSim Project, 2011) p. 15
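As a purely illustrative sketch of this annual cycle, the following Python fragment shows the order in which the components act; the function names, the simple dictionary state and all numerical values are assumptions for illustration and are not the actual UrbanSim API.

# Illustrative sketch (not the actual UrbanSim API) of the annual simulation
# cycle described above: policy -> development -> prices -> control totals,
# repeated year by year; relocation and location choice act on individual
# households and are only indicated by a comment in this aggregate sketch.

def apply_policy_assumptions(state, year):
    state["dev_cap"] = 500                          # assumed growth-boundary cap per year

def develop_real_estate(state):
    state["units"] += min(state["dev_cap"], 300)    # build new / redevelop existing units

def update_real_estate_prices(state):
    vacancy = 1.0 - state["households"] / max(state["units"], 1)
    state["price"] *= 1.0 + 0.05 * (0.1 - vacancy)  # prices rise when vacancy is low

def control_totals(state, year):
    return 10000 + 200 * (year - 2010)              # assumed household projection

def adjust_household_counts(state, target):
    state["households"] = min(target, state["units"])

def run_annual_cycle(state, start_year, end_year):
    for year in range(start_year, end_year + 1):
        apply_policy_assumptions(state, year)
        develop_real_estate(state)
        update_real_estate_prices(state)
        adjust_household_counts(state, control_totals(state, year))
        # relocation choice and location choice models would act here
    return state

if __name__ == "__main__":
    state = {"units": 9500, "households": 9400, "price": 300000.0, "dev_cap": 0}
    print(run_annual_cycle(state, 2010, 2030))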
3.1 Integration of UrbanSim in DAnCE4Water
The UrbanSim project has been fully integrated in DAnCE4Water. The model has been connected
with the scenario, social transition and the bio-physical module. To link the bio-physical model with
the urban development model, the real estate price model is used. Recent studies from the
Netherlands (Visser et al. 2008) have shown that, besides physical and social attributes such as the
type of building and the social status of a neighbourhood, environmental attributes such as the
presence of water or open space are important amenities that influence the value of a property. For
example, the presence of an attractive landscape with water features increases the value of a
property by 7% (Luttik 2000). On the other hand, the presence of flooding within a property has a
significant negative impact (minus 20-40%) on house prices (White & Howe 2004). The
house price has an impact on the relocation and location choice models, and so flooded areas become
less attractive and areas with green open space more attractive for citizens. The societal
transition module can be linked with the relocation and location choice models. For example, if the
societal need for urban amenity is growing, areas with open space within walking distance become more
attractive.
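A minimal sketch of how such a link could be expressed, using the magnitudes quoted above but an assumed multiplicative form (this is not the actual DAnCE4Water implementation), is:

# Illustrative amenity adjustment of property values; functional form assumed.
def adjust_property_value(base_value, has_water_amenity=False, is_flood_prone=False,
                          flood_discount=0.30):
    """base_value:        value from physical and social attributes alone
       has_water_amenity: attractive landscape with water features (+7%, Luttik 2000)
       is_flood_prone:    flooding within the property (-20% to -40%, White & Howe 2004);
                          flood_discount picks a point in that range."""
    value = base_value
    if has_water_amenity:
        value *= 1.07
    if is_flood_prone:
        value *= 1.0 - flood_discount
    return value

print(adjust_property_value(400000, has_water_amenity=True))   # 428000.0
print(adjust_property_value(400000, is_flood_prone=True))      # 280000.0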
Currently, a grid-based UrbanSim Model has been integrated in DAnCE4Water. It is derived from
the Eugene-Springfield model delivered with UrbanSim. The model is based on a grid
representation (200m x 200m) of the urban environment. The results from the UrbanSim model are
mapped on the grid cells. This includes demographic information such as population, but also
information about the number of residential units or square metres of commercial area (see Figure 3).
Tables of households and jobs are also generated. The grid cells and tables are fed back into the
Conductor and can be used by the bio-physical module or the societal transition module.
Figure 3 Example Results Urban Development Module
3.2 Simulation Setup and Scenarios
Setting up an UrbanSim model is split into two phases: the calibration and validation phase and the
prognosis phase. For the first phase a detailed description of the urban environment at a specific
point in time is required. This includes demographic data about households (e.g. size, number of
children, income), buildings and jobs, as well as building footprints, parcels, etc. House price data
are also required to identify the parameters for the relocation and location choice models. If not all
data are available, data can be generated stochastically from statistical information, such as the
distribution of household size, by using existing DynaMind modules. Since urban development is strongly
driven by transport, information about travel times between different districts is needed. To validate
and calibrate the UrbanSim model, historical data are used. Alternatively, UrbanSim scenarios can be
generated with the VIBe approach. VIBe generates virtual urban environments based on parameter
ranges derived from real world case studies (Sitzenfrei et al. 2010). An introduction to setting
up an UrbanSim model is given by Patterson and Bierlaire (2010).
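A minimal sketch of such stochastic generation, assuming an illustrative household-size distribution rather than any real census data, is:

# Sketch of sampling household sizes from an assumed statistical distribution.
import random

HOUSEHOLD_SIZE_DIST = {1: 0.37, 2: 0.35, 3: 0.13, 4: 0.10, 5: 0.05}  # assumed shares

def sample_households(n, seed=42):
    rng = random.Random(seed)
    sizes = list(HOUSEHOLD_SIZE_DIST.keys())
    weights = list(HOUSEHOLD_SIZE_DIST.values())
    return [{"id": i, "size": rng.choices(sizes, weights=weights)[0]} for i in range(n)]

households = sample_households(1000)
print(sum(h["size"] for h in households) / len(households))  # mean household size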
For phase 2 (the prognosis phase) a master plan for the urban development is required. This
includes population and demographic projections as well as future land use maps and growth
boundaries. This data is usually available from the urban planning departments of local planning
authorities.
4 Adaptation & Placement of the Water Infrastructure
The adaptation of the infrastructure is computed based on (a) the necessities of water management
and (b) the (simulated) preferences of stakeholders towards decentralised solutions.
Centralised systems are first adapted to the changes in the local environment followed by the
implementation or decommissioning of decentralised technologies. Larger facilities such as
treatment plants are pre-defined before the simulation starts and enabled if certain conditions are
met.
Within the bio-physical module the urban environment is discretised into blocks (Bach et al. 2011).
Blocks aggregate information about the urban environment (land use, population, elevation, etc.).
Based on this information the bio-physical module builds a representative urban form. The urban
form provides a conceptual description of the urban environment with all parameters needed to
describe the urban water system. This includes: impervious area, split into contributions from
roofs and streets; the area available to place infiltration systems at lot or precinct scale; as well as water
demand, split into potable and non-potable. Based on the urban form and under consideration of
the existing water infrastructure and preferred technologies DAnCE4Water populates blocks with
urban water infrastructure and connects newly developed blocks with the existing water
infrastructure.
4.1 Adaptation of Existing Drainage Networks
To connect newly populated blocks (output of the UDM) to the existing infrastructure, the agent
based approach for generating combined sewer systems (Urich et al. 2011) was adopted and
enhanced. An agent is a mobile object that acts and interacts with its environment and with
other agents in order to achieve a required aim (Batty 2005). Here, an agent is a mobile object
on a cell grid.
The agent operates on multiple landscapes, e.g. a digital elevation map, a map of the land use, a map of
attraction fields of previously successful agents, and the existing networks. The decision into which cell the
agent moves in the next time step depends on its current status and its neighbourhood. The agent
evaluates its neighbourhood on all landscapes with defined rules (e.g. the agent prefers cells that
are lower than its current position or closer to the river) under consideration of its current state (e.g.
depth below surface). The results are combined and the agent moves into one of the preferred cells.
Every agent has the aim of reaching the existing sewer network. If an agent successfully reaches its
aim, it marks its path and creates an attraction field around it. Following agents are attracted
by these paths (see Figure 4). By analysing the paths of the agents, the layout of the new sewer
network is determined. The new pipes are designed using the time-area method.
Figure 4 Example of sewer system adaptation following new urban development (Source: Urich et al, 2010)
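A minimal sketch of this movement rule, with an assumed toy grid, weighting and neighbourhood (it is not the actual DAnCE4Water/DynaMind implementation), is:

# Sketch of an agent scoring neighbouring cells on several landscapes
# (elevation, attraction of earlier successful paths) and moving to the best
# cell until it reaches the existing sewer network. All values are assumed.
import numpy as np

def route_agent(start, elevation, attraction, is_sewer, max_steps=500):
    pos, path = start, [start]
    rows, cols = elevation.shape
    for _ in range(max_steps):
        if is_sewer[pos]:
            return path                       # aim reached: connection found
        r, c = pos
        candidates = []
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # prefer lower cells and cells near earlier successful paths
                score = (elevation[r, c] - elevation[nr, nc]) + 0.5 * attraction[nr, nc]
                candidates.append((score, (nr, nc)))
        pos = max(candidates)[1]
        path.append(pos)
    return None                               # no connection found within limit

# toy example: a sloping surface with an existing sewer along the bottom row
elev = np.add.outer(np.arange(10.0), np.zeros(10))[::-1]
attr = np.zeros_like(elev)
sewer = np.zeros_like(elev, dtype=bool)
sewer[-1, :] = True
print(route_agent((0, 3), elev, attr, sewer))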
4.2 Adaptation of Existing Water Supply Networks
Based on GIS-data for land use and population growth scenarios, the infrastructure development
model adapts (expands) an existing WDS (see Figure 5). The WDS expansion is based on WDS
network motifs, which are recurring in complex WDS (Sitzenfrei et al. 2012). From the GIS-data
for land use and population growth for different time steps, growth corridors for new developments
are identified. For these growth corridors, new WDS parts are automatically added to the existing
WDS. Subsequently, these parts are pipe-sized with different strategies under different loading
conditions.
Figure 5 Adaptation of centralised water distribution system (WDS) (Source: Sitzenfrei et al. 2012)
Figure 6 Example of water distribution adaptation following new urban development (Sitzenfrei et al. 2012)
Figure 6 shows the timeline for an example water distribution system with land use data and an
Epanet2 model. Three time steps are illustrated. While the base scenario has some separated supply
areas, as time proceeds the areas grow together and expand at the edges. For the pipe-sizing
scenario in the illustrated case, the pipe diameters determined for the base scenario are also
retained for the future time steps.
4.3 Adaptation and Implementation of Decentralised Technologies
A key feature of the bio-physical module is the ability to investigate the likely balance between
centralised and decentralised technologies. Following major infrastructure adaptation, existing
decentralised technologies also need to be adapted and new technologies (available from a large
toolbox) implemented. Narrowing down a large toolbox to a select few technologies that are suited
to a particular environment is carried out in several steps known as opportunity assessment (see
Figure 7). Once this step has been completed, the most suitable technology can then be chosen by
means of a multi-criteria decision-making framework (the base structure of which is shown in
Figure 8).
Figure 7 Opportunity Assessment Process for Decentralised Technologies
Opportunity is characterised in the BPM as satisfying Legislative (compliance with legal rules),
Judicial (compliance with stakeholder expectations) and Executive (physical compatibility with
environment) requirements. For each building block (or groups of building blocks in the case of
larger-scale systems), a shortlist of potential technologies is created from a toolbox of available
options. Design curves and targets (for quantity and quality set forth by legislation) are used with
site characteristics (obtained from the input data) to calculate the system's physical configuration.
Each design option (including combinations of systems) is checked for compatibility with the local
environment based on urban form and demographics (from the previous steps). Options in the final
Opportunities Shortlist are subsequently scored against multiple criteria.
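A minimal sketch of this filtering step, with assumed technology properties and block attributes for illustration (the criteria tests are simplified stand-ins for the legislative, judicial and executive checks), is:

# Sketch of narrowing a technology toolbox to an opportunities shortlist.
TOOLBOX = [
    {"name": "raingarden",       "min_area_m2": 10, "treats_roof": True,  "accepted": True},
    {"name": "infiltration_pit", "min_area_m2": 5,  "treats_roof": True,  "accepted": True},
    {"name": "greywater_reuse",  "min_area_m2": 2,  "treats_roof": False, "accepted": False},
]

def opportunity_shortlist(block, toolbox):
    shortlist = []
    for tech in toolbox:
        legislative = tech["treats_roof"] or not block["roof_runoff_target"]  # legal target met
        judicial    = tech["accepted"]                                         # stakeholder acceptance
        executive   = block["available_area_m2"] >= tech["min_area_m2"]        # physically fits
        if legislative and judicial and executive:
            shortlist.append(tech["name"])
    return shortlist

block = {"available_area_m2": 8, "roof_runoff_target": True}
print(opportunity_shortlist(block, TOOLBOX))   # ['infiltration_pit']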
The multi-criteria evaluation framework (see Figure 8) is constructed from four criteria. Indicators
from each criterion are grouped into one of three possible scopes. Scope refers to the extent of
rigour applied to the decision making, where the focus can be on the technology alone (general), its
surroundings (context) or long-term temporal aspects (dynamics). The user can freely customise
the detail involved in the evaluation depending on the aims of their investigation. Once the decision
has been made, information on each implemented technology is written to each corresponding
building block's list of characteristics and called upon to set up a performance simulation.
Figure 8 Overall multi-criteria structure of DAnCE4Water's Technology Decision-making framework
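A minimal sketch of such a scoring scheme, with assumed criteria weights and indicator scores rather than the DAnCE4Water defaults, is:

# Sketch of scoring shortlisted options across scopes (general, context, dynamics).
WEIGHTS = {"general": 0.4, "context": 0.4, "dynamics": 0.2}   # assumed weights

def total_score(option_scores):
    """option_scores maps scope -> list of indicator scores in [0, 1]."""
    return sum(
        WEIGHTS[scope] * (sum(scores) / len(scores))
        for scope, scores in option_scores.items()
        if scores                      # a scope can be left out to reduce detail
    )

options = {
    "raingarden":       {"general": [0.8, 0.7], "context": [0.6], "dynamics": [0.5]},
    "infiltration_pit": {"general": [0.6, 0.9], "context": [0.8], "dynamics": [0.4]},
}
best = max(options, key=lambda name: total_score(options[name]))
print(best, round(total_score(options[best]), 3))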
4.4 Performance Assessment
Modelling the performance of the entire region will require integration of a large number of
subsystems. The urban drainage stream (dark blue) will encompass hydrology, water quality,
decentralised WSUD and feature both combined and separated sewer systems. The water supply
stream (light blue) conducts a water balance across the region's major reservoirs and considers the
major distribution network. Both streams are linked at the End User component, which simulates
water demand. A climate component (green) uses information from the region's hydrology and
climate data to conduct an energy balance and determine likely thermal comfort impacts.
Figure 9 Scope of Performance Assessment in DAnCE4Water
The integration of all components is accomplished by interfacing several sub-models with one
larger integrated modelling framework. The CityDrain3 framework (Burger et al. 2010) offers a
modular development environment for integrated models. Many sub-algorithms (e.g. for stormwater
treatment) have been implemented. These are coupled into a larger model in a conceptual node-link
fashion. Alongside CityDrain3, there is an interface with two hydrodynamic models: EPA SWMM
(Rossman 2008) for modelling drainage networks and flood assessment, as well as EPANET
(Rossman 2000) for the water distribution network.
Following infrastructure adaptation and planning, the Bio-physical Module selects the suitable
combination of sub-models for each building block. Using the information about spatial
connectivity, the blocks are connected to the larger supply and drainage networks, which are in turn
connected to larger treatment facilities. During performance assessment, the drainage
stream is first simulated. Water demand information (including substitution of supply through
recycling) is logged at each time step (of 5 to 6-minute intervals) and fed as input into the supply
model, which is initiated thereafter. Finally, performance indicators are calculated following
completion of all assessments.
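A minimal sketch of this coupling order (drainage first, demand logged per time step, then supply), with assumed demand and reservoir values for illustration, is:

# Sketch of the drainage -> demand log -> supply coupling described above.
def run_drainage_stream(n_steps, dt_minutes=6):
    demand_log = []
    for step in range(n_steps):
        base_demand = 1.2                      # ML per time step (assumed)
        recycled = 0.2                         # substituted locally, e.g. stormwater reuse
        demand_log.append(base_demand - recycled)
    return demand_log

def run_supply_stream(demand_log, reservoir_volume=5000.0):
    for demand in demand_log:
        reservoir_volume -= demand             # withdraw what the drainage stream logged
    return reservoir_volume

demand = run_drainage_stream(n_steps=240)      # one day at 6-minute intervals
print(run_supply_stream(demand))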
5 Testing the Model (Urich et al. 2010)
5.1 Scenario
To show the dynamics of UDM and the BPM, VIBe cities are generated with the characteristics of
the alpine city of Innsbruck (Sitzenfrei et al. 2010) and subsequently evolved 20 years into the
future. For the urban development, the population and demographic projections for Innsbruck are
used (ÖROK-Prognosen 2010). As a climate change adaptation strategy, the impact of
implementing on-site stormwater infiltration systems is evaluated. Therefore, for newly built or
redeveloped buildings in areas with discontinuous urban fabric, an infiltration system for roof water
is installed on the lot. The aim is to determine which renewal rates are required to compensate for the
effects of climate change and urban development. In the evaluation of system performance, a
decrease of 5% in the described performance indicators is tolerated in the period 2010 to 2030.
As a climate change scenario, a shift in rainfall intensities is considered. Depending on duration,
return period and anticipated technical lifetime of sewer systems, Arnbjerg-Nielsen (2012) suggests
an intensity increase of 10-50%. In that range, four climate change scenarios are investigated.
A linear function of time is used to model the increase in rainfall intensities from 2010 to
2030. Four end points were chosen for this linear approximation: 100% (i.e. rainfall intensities do
not change with time), 110%, 130% and 150% (i.e. the rainfall intensity in 2030 is 50% higher
than that in 2010). Of course, the integration of more sophisticated climate change projections is
possible when such data are available. In our case a design storm of Euler Type II (described in
De Toffol (2006)) for an alpine region, with a duration of two hours and a return period of 5 years, is used.
Building stock renewal rates of 0%, 1%, 3% and 5% are investigated.
As each renewal rate is combined with each climate change scenario, a total of 16 scenarios were
investigated. For each scenario, 100 VIBe cities were created and evolved 20 years into the future,
resulting in 1600 simulations in total. The results are statistically evaluated.
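A minimal sketch of the scenario matrix and the linear intensity scaling, as read from the description above (the scaling function is an assumed interpretation of the text), is:

# Sketch of the 4 x 4 scenario matrix and linear rainfall-intensity scaling.
from itertools import product

RENEWAL_RATES = [0.00, 0.01, 0.03, 0.05]           # building stock renewal per year
INTENSITY_2030 = [1.00, 1.10, 1.30, 1.50]           # rainfall intensity factor in 2030

def intensity_factor(year, end_factor, start_year=2010, end_year=2030):
    """Linear increase from 1.0 in 2010 to end_factor in 2030 (assumed form)."""
    frac = (year - start_year) / (end_year - start_year)
    return 1.0 + frac * (end_factor - 1.0)

scenarios = list(product(RENEWAL_RATES, INTENSITY_2030))
print(len(scenarios))                                # 16 scenarios
print(round(intensity_factor(2020, 1.50), 2))        # 1.25, half way to +50%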
5.2 Results
In Figure 10 and Figure 11 the evolution of the urban population (pop) and the impervious area (imp),
as well as the impervious area connected to the sewer network (drained imp), is shown for one
realisation. On the right, the development of PI1 (combined sewer overflow efficiency) and PI4
(flooding efficiency) is shown. It can be seen that a renewal rate of 3% cannot sufficiently
compensate for the increase in rainfall intensity.
Figure 10 Storyline of the urban environment: PI1 combined sewer overflow efficiency; PI4 flooding
efficiency
Figure 11 Evolution of the urban environment
Conclusions
To enable the transition to a more sustainable and resilient urban water system, new strategic planning
tools that consider the complex interaction of the urban water infrastructure with the urban
environment and the societal system are required; this need has prompted the development of
DAnCE4Water. The DAnCE4Water model is an important strategic tool that enables a wide variety
of stakeholders to explore possible futures of the urban water system and the consequences of policies
and strategic actions, under consideration of the interactions between urban water infrastructure, the
urban environment and the societal system. Within the model, three major modules are linked under
consideration of complex interactions to simulate an entire urban water system. The modules are (I)
the urban development module (UDM) to evolve the urban environment, (II) the biophysical module
(BPM) to generate the urban water infrastructure and assess its performance (Bach et al., 2011) and
(III) a societal transition module (STM) to assess the societal system (De Haan et al., 2011) and its
future development. This paper focuses on the design of the dynamics and interactions within the
urban development module and the biophysical module of DAnCE4Water. The dynamics of the
modules are illustrated using virtual case studies based on the dataset of Innsbruck (Austria). This
includes the development of the urban environment and of the drainage system.
The paper presents the current state of the development. In the next project phase the focus is set on
the integration of the societal transition module as well as the integrated performance assessment.
After initial testing and validation of the DAnCE4Water tool, it will be applied to and validated for
greater Melbourne.
Acknowledgements
This research is part of a project that is funded by the EU Framework Programme 7 project PREPARED:
Enabling Change. This research is also partly funded by the Australian Government's Department
of Industry, Innovation, Science and Research.
References
Arnbjerg-Nielsen, K. (2012). Quantification of climate change effects on extreme precipitation used for high resolution
hydrologic design. Urban Water Journal (March): p.1-9.
Ashley, R.M., Balmforth, D.J., Saul, A.J., & Blanskby, J. (2005). Flooding in the future: predicting climate change,
risks and responses in urban areas. Water Science & Technology 52(5): p.265273.
Bach, P. et al. (2011). Characterising a city for integrated performance assessment of water infrastructure in the
DAnCE4Water model. In Proceedings of the 12th International Conference on Urban Drainage, Porto Alegre/Brazil,
10-15 September 2011, Porto Alegre/Brazil.
Batty, M. (2005). Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and
Fractals. The MIT Press.
Burger, G., Fach, S., Kinzel, H., & Rauch, W. (2010). Parallel computing in conceptual sewer simulations. Water
science and technology: a journal of the International Association on Water Pollution Research 61(2): p.283-91.
de Haan, J., Ferguson, B., Brown, R., & Deletic, A. (2011). A Workbench for Societal Transitions in Water Sensitive
Cities. In Proceedings of the 12th International Conference on Urban Drainage, Porto Alegre/Brazil, 10-15 September
2011, Porto Alegre/Brazil.
De Toffol, S. (2006). Sewer system performance assessment - an indicator based methodology. University of Innsbruck.
Luttik, J. (2000). The value of trees, water and open space as reflected by house prices in the Netherlands. Landscape and
Urban Planning 48(3-4): p.161-167.
ÖROK-Prognosen. (2010). Kleinräumige Bevölkerungsprognose für Österreich 2010-2030 mit Ausblick bis 2050
(ÖROK-Prognosen).
Patterson, Z., & Bierlaire, M. (2010). Development of prototype UrbanSim models. Environment and Planning B:
Planning and Design 37(2): p.344366.
Rossman, L.A. (2000). EPANET 2 User Manual. Water Supply and Water Resources Division National Risk
Management Research Laboratory Cincinnati, OH 45268.
Rossman, L.A. (2008). Storm water management model User's manual version 5.0. Water Supply and Water Resources
Division National Risk Management Research Laboratory Cincinnati.
Sitzenfrei, R., Fach, S., Kinzel, H., & Rauch, W. (2010). A multi-layer cellular automata approach for algorithmic
generation of virtual case studies: VIBe. Water Science & Technology 61(1): p.37-45.
Sitzenfrei, R., Fach, S., Kleidorfer, M., Urich, C., & Rauch, W. (2010). Dynamic virtual infrastructure benchmarking:
DynaVIBe. Water Science & Technology: water supply 10(4): p.600609.
Sitzenfrei, R., Möderl, M., Mair, M., & Rauch, W. (2012). Modeling Dynamic Expansion of Water Distribution
Systems for New Urban Developments. In World Environmental & Water Resources Congress, Albuquerque, New
Mexico, May 20-24, 2012, Albuquerque, New Mexico
The UrbanSim Project. (2011). The Open Platform for Urban Simulation and UrbanSim Version 4.3: User's Guide and
Reference Manual. University of California Berkeley, and University of Washington.
Urich, C. et al. (2011). Dynamics of cities and water infrastructure in the DAnCE4Water model. In Proceedings of the
12th International Conference on Urban Drainage, Porto Alegre/Brazil, 10-15 September 2011, Porto Alegre/Brazil
Visser, P., Van Dam, F., & Hooimeijer, P. (2008). Residential environment and spatial variation in house prices in the
Netherlands. Tijdschrift voor economische en sociale geografie 99(3): p.348360.
Waddell, P. (2002). UrbanSim: Modeling urban development for land use, transportation, and environmental planning.
Journal of the American Planning Association 68(3): p.297314.
Wagener, T. et al. (2010). The future of hydrology: An evolving science for a changing world. Water Resources Research
46(5): p.1-10.
White, I., & Howe, J. (2004). The mismanagement of surface water. Applied Geography 24(4): p.261-280.
Wong, T., & Brown, RR. (2009). The water sensitive city: principles for practice. Water Science & Technology 60(3):
p.673682.
The challenges of assessing the cost of geoengineering
Naomi E. Vaughan 1
1 Tyndall Centre for Climate Change Research, School of Environmental Sciences,
University of East Anglia, Norwich, NR4 7TJ, UK.
E-mail: n.vaughan@uea.ac.uk
Abstract
There is a growing interest in geoengineering, that is, proposals for large scale interventions in the
Earth's climate system, as a possible response option to climate change. The Integrated
Assessment of Geoengineering Proposals (IAGP) project is a research programme currently
underway that brings together a diverse interdisciplinary team, including climate modellers and
stakeholder engagement experts, to assess geoengineering proposals. IAGP, however, does not
contain any economists and does not explicitly consider the cost of geoengineering, despite cost
emerging as one of many questions raised by IAGP's public deliberative and stakeholder
engagement work. Here, I seek to stimulate a discussion about how to assess the cost of
geoengineering proposals and the difficulties faced in attempting this. Current estimates focus on a
narrow definition of cost or affordability, and in doing so generate particular perceived relative
preferences for geoengineering proposals. I argue that assessing the cost of geoengineering is
beyond the scope of current understanding of these proposals and their impacts. Also, there are
many additional costs that emerge when adopting a broader remit or set of considerations. These
additional costs can create a total cost of geoengineering proposals that can be quite different to the
narrower affordability cost currently used. I also suggest that this more holistic notion of the total
cost of geoengineering is more appropriate when considering geoengineering as a possible
response option to climate change.
1 Introduction
Geoengineering, also known as climate engineering, is a term used to describe a collection of
proposals that seek to intervene in the Earth's climate system to moderate global warming (The
Royal Society, 2009). Recently there has been a resurgence of interest within the research
community about geoengineering as a possible societal response option to climate change, usually
framed as a complement to reducing greenhouse gas emissions (Crutzen, 2006; Wigley, 2006).
There are two main ways in which geoengineering proposals seek to moderate global warming: (1)
by reducing incoming solar radiation (solar geoengineering) or (2) by removing carbon dioxide
(CO2) from the atmosphere and transferring it to long-lived reservoirs (carbon geoengineering)
(Lenton & Vaughan 2009). The main driver of global warming is the elevated atmospheric CO2
concentration due to anthropogenic emissions from industry and land use; carbon geoengineering
addresses this main driver directly, whilst solar geoengineering addresses an impact of it, i.e. increased
global mean temperatures. These two types of geoengineering have very different efficacies,
impacts and risks (Vaughan & Lenton, 2011). It is important to note that not all scholars consider
both these categories of geoengineering; many focus solely on solar geoengineering (e.g. Barrett,
2008).
In the UK there are currently two research projects focussed on geoengineering, funded by the
Engineering and Physical Sciences Research Council (EPSRC), both of which commenced in October
2010. The Stratospheric Particle Injection for Climate Engineering (SPICE) project is investigating
one particular geoengineering proposal, that of injecting particles into the stratosphere to reduce
incoming solar radiation (a form of solar geoengineering). The second project, the Integrated
Assessment of Geoengineering Proposals (IAGP), is an interdisciplinary project that aims to
evaluate the effectiveness and side effects of a range of geoengineering proposals, the
controllability of the climate using these proposals and to elicit and include stakeholder and public
values into the evaluation. A final output of the project will be to present a framework for assessing
geoengineering proposals. IAGP has no economists as part of its research team, and does not
attempt to address a cost assessment of geoengineering.
2 Assessing geoengineering
A number of assessments of geoengineering have been undertaken, mostly from an expert-technical
perspective, for example NAS (1992), Boyd (2008) and Lenton & Vaughan (2009). A select few
have sought to incorporate a wider range of perspectives, including the NERC public dialogue exercise,
Experiment Earth (NERC 2010), and The Royal Society's (2009) seminal report on geoengineering.
IAGP also aims to include a wider set of perspectives in its assessment framework and achieves this
through stakeholder and public deliberative engagement.
2.1 Stakeholder and public deliberations
An integral component of the IAGP project is to discover and engage with a wider set of stakeholder
values in the assessment of geoengineering. This engagement is being conducted in two streams:
through public deliberative workshops and informed stakeholder workshops. The overarching aim
of this work is to elicit a range of perspectives on criteria for the assessment of geoengineering.
The workshops have been conducted over the period February 2011 to May 2012 and are currently
being analysed. The first piece of analysis was published as a working paper last year which
includes a more detailed methodology regarding the public workshops (Parkhill & Pidgeon 2011).
The informed stakeholder group consisted of 29 participants from public bodies, the private sector and
civil society, all with an interest in climate change and UK science policy.
The issues of cost, affordability or cost effectiveness were raised by the lay public and by the
informed stakeholders. The public generated questions and concerns about how much the
geoengineering proposals would cost, including who would pay for them and the relative cost
between different proposals as well as other responses to climate change. In the stakeholder
workshops, cost was identified as a criterion for assessing geoengineering proposals by all six
groups during an activity to generate assessment criteria. These results are not surprising, but
what is of more interest to this paper are the additional questions and concerns raised by the public
participants, and the criteria and issues raised by the stakeholders, that contribute to a total cost of
geoengineering. Drawing on ideas generated from the literature and from issues and concerns
raised by stakeholders and the public, the following section presents some of the factors that may
constitute additional costs to geoengineering proposals.
3 Costs
3.1 Affordability
Assessments of geoengineering often include criteria such as efficacy/effectiveness,
affordability (cost), safety and timeliness/rapidity (The Royal Society, 2009; Boyd, 2008). It is
affordability that represents this narrower framing of the cost of any geoengineering proposal.
Given the nascent nature of many of the different geoengineering ideas, there is often very limited
information for detailed costings. However, the use of analogous technologies provides a basis for
scaling up and estimating the order-of-magnitude costs of development, implementation and
maintenance of different geoengineering ideas.
The summary figure from The Royal Society (2009) report on geoengineering is a good starting
point for engaging with the difference between the narrower concept of cost, or affordability, and a
broader, more holistic total cost. The summary figure from the report is presented as an axis of
affordability versus effectiveness with size of marker indicating timeliness (quick or slow) and
colour of marker indicating safety (a five point colour scale from high to low). It is important to note that
this diagram is attempting the very difficult task of reflecting a vast range of issues concisely and
clearly, with a scoring that is more qualitative than quantitative. For example, the
stratospheric aerosols geoengineering idea is given a high (good) score for affordability (4 out of 6,
the second best affordability score). However, it has a 2 out of 5 for safety. I would argue that the
cost of the low safety score would increase the cost of this idea, generating a higher total cost. This
geoengineering idea is also given a high effectiveness score (4 out of 5, one of four proposals to be
given that score), yet, as I detail in the following section of this paper, there are residual climate change
damages, and risks of novel damages, associated with this type of intervention. These additional damages
and risks may add notable additional costs to the total cost of this proposal. In the following section
I highlight a few examples of possible additional costs.
3.2 Additional costs
The aim of geoengineering is to moderate global warming by reducing global mean temperatures,
either by reflecting more sunlight back to space or by removing CO2 from the atmosphere. In cost
terms, reducing global warming lowers the climate change damages. However, there are additional
damages that arise from the elevated CO2 concentration in the atmosphere, such as ocean
acidification and its impact on marine ecosystems (Zeebe et al, 2008). Carbon geoengineering
would ameliorate ocean acidification, but solar geoengineering would not; therefore solar
geoengineering would incur residual climate damages. Modelling work shows that solar
geoengineering can reduce global mean temperatures, but residual regional climate changes would
occur, and modelling indicates that global mean precipitation would be reduced slightly, with
regional patterns of change that are less well known (Matthews & Caldeira 2007, Bala et al
2008). Solar geoengineering would also create a new risk, termed the termination effect: because
the concentration of CO2 in the atmosphere remains the same (or increases with increasing
anthropogenic emissions) whilst the intervention lowers the temperature, should the intervention
cease, the temperature will quickly return to its elevated level, at a rate that would be difficult for
ecosystems to adapt to (The Royal Society, 2009; Matthews & Caldeira, 2007). The damages
arising from a rapid warming could be greater than those from allowing a gradual warming of the same
magnitude. This feature, the termination effect, plays a significant role in the analyses and
conclusions of Goes et al.'s (2011) integrated assessment of applying solar geoengineering as a
response option to climate change.
There are also features of the Earth system response to changes that do not map neatly to a
reduction in climate damages. For example, sea level rise, which is a key contributor to climate
damages, has a longer response time to changes in global mean surface temperatures. Modelling
work shows that much greater levels of solar geoengineering intervention would be required to limit
sea level rise (Irvine et al 2011). Therefore an amount of solar geoengineering that would be
sufficient to, say, hold global mean surface temperatures at a given level, e.g. 1 °C or 2 °C above
pre-industrial, would not be sufficient to halt sea level rise and would therefore make less of a
reduction in climate damages. The inclusion of these residual, unresolved or novel additional
climate damages, and therefore costs, for different geoengineering options may alter the relative costs
of different geoengineering ideas.
The costs of monitoring, verification and reporting may significantly increase the total cost of
certain geoengineering proposals. Ocean fertilisation is a type of carbon geoengineering idea where
limiting nutrients are added to nutrient-limited regions of the surface ocean to stimulate a
phytoplankton bloom and thus increase carbon export to the deeper ocean (Vaughan & Lenton
2011). In experiments to understand the biogeochemistry of the oceans, where nutrients were added
to the surface open ocean, only half of the experiments triggered a bloom (Boyd et al, 2007). This
success rate is indicative of the complex nature of the marine ecosystem and biogeochemistry. Also,
the carbon export to depth is very difficult to measure and verify. Being sure that carbon had been
successfully removed from the atmosphere and transferred to depth by this intervention could entail
a notable additional cost of monitoring, verification and reporting. Other examples include the solar
geoengineering idea of stratospheric aerosols which may necessitate the provision of observation
platforms (e.g. satellite instruments) to monitor and verify the expected changes to the stratospheric
chemistry. An alternative component of these monitoring, verification and reporting costs may also
arise in the form of identification and attribution of undesired or negative impacts of a
geoengineering intervention.
A significant area of research interest around geoengineering is focussed on governance,
particularly international governance. Participants in the IAGP workshops also raised issues about
governance, both at the national and international level. Solar geoengineering by stratospheric
aerosols is seen by some as potentially capable of unilateral action and as such possibly able to
circumvent international agreement processes. However, many consider international agreement
and governance structures to be a necessary component of implementation (and of certain scales and
types of experimentation, e.g. see SRMGI, 2011), due to the risks of residual or regional climate
changes, such as changes in precipitation or regional weather systems, and the notable risk of the
termination effect. These additional costs, which may be required for solar geoengineering acting at
the global scale, are not required for all carbon geoengineering proposals. For example,
biomass energy with carbon capture and storage would occur within national boundaries, does not carry the
same residual climate change risks or the termination effect risk, and can more readily fit within
existing governance frameworks relating to carbon trading, permits and markets. Therefore only
certain geoengineering proposals may be subject to additional costs arising from the negotiation and
creation of international governance structures.
I have not attempted to generate an exhaustive set of potential additional costs of geoengineering or
attempted to define the broader remit or wider set of considerations that would be sufficient to
achieve a more holistic total cost. What I have sought to do is highlight a number of efficacy and
impact issues that arise from the Earth system response to climate change. I also detail a number of
socio-political issues that arise from intentionally intervening with the climate system at a global
scale. I suggest that these issues generate additional costs of geoengineering proposals that may
alter the relative preferences or ranking of geoengineering proposals relative to one another or to
other response options to climate change. Although cost is merely one metric by which any
geoengineering proposal can be assessed, it carries an authority in many spheres. Therefore it is
important to consider what is meant by cost and any assertion that certain response options are
cheaper than others.
4 Conclusions
In conclusion, I have sought to highlight that, although not addressed explicitly as part of the IAGP
project, cost is an important consideration in the assessment of geoengineering, as evidenced in our
stakeholder and public workshops. However, there are many layers to what constitutes the total cost
of geoengineering, some of which I have presented here. I argue that initial attempts to compare
geoengineering proposals based on cost have focussed on a very narrow definition of affordability.
As such the relative preferences for certain geoengineering proposals over others, or in comparison
to emissions reductions, may be altered by considering broader total costs. One of the key
difficulties in estimating total cost is the magnitude of unknowns about geoengineering efficacy and
impacts. Despite this, there might be ways of incorporating these additional cost components that
allow a more holistic total cost comparison of geoengineering proposals. In turn this would
facilitate a more robust assessment of the possible role of geoengineering in responding to climate
change impacts.
Acknowledgements
The ideas in this paper arise from on-going engagement and discussions with all partners involved
in the IAGP project (www.iagp.ac.uk), particularly Nick Pidgeon, Karen Parkhill, Adam Corner,
Piers Forster, Andrew Jarvis, Ed Pitt, Annabel Jenkins, Sarah Jones and Rob Bellamy.
References
Bala, G., Duffy, P. B. & Taylor, K. E. (2008) Impact of geoengineering schemes on the global hydrological cycle.
Proceedings of the National Academy of Sciences of the United States of America 105, No. 22, 7664-7669.
Barrett, S. (2008) The incredible economics of geoengineering. Environmental and Resource Economics 39, No. 1, 45-
54.
Boyd, P.W. (2008) Ranking geo-engineering schemes. Nature Geosciences 1, 722-724.
Boyd, P. W., Jickells, T., Law, C. S., Blain, S., Boyle, E. A., Buesseler, K. O., Coale, K. H., Cullen, J. J., de Baar, H. J.
W., Follows, M., Harvey, M., Lancelot, C., Levasseur, M., Owens, N. P. J., Pollard, R., Rivkin, R. B., Sarmiento, J.,
Schoemann, V., Smetacek, V., Takeda, S., Tsuda, A., Turner, S., Watson, A. J. (2007) Mesoscale iron enrichment
experiments 1993-2005: synthesis and future directions. Science 315, No. 5812, 612-617.
Crutzen, P. J. (2006) Albedo enhancement by stratospheric sulphur injections: a contribution to resolve a policy
dilemma? Climatic Change 77, No. 3-4, 211-219.
Goes, M., Tuana, N. & Keller, K. (2011) The economics (or lack thereof) of aerosol geoengineering. Climatic Change
109, 719-744.
Irvine, P. J., Sriver R. L. & Keller, K. (2011) Tensions between reducing sea-level rise and global warming through
solar radiation management. Nature Climate Change 2, 97-100.
Lenton, T. M. & Vaughan, N. E. (2009) The radiative forcing potential of different climate geoengineering options. Atmospheric
Chemistry and Physics 9, 5539-5561.
Matthews, H. D. & Caldeira, K. (2007) Transient climate-carbon simulations of planetary geoengineering. Proceedings
of the National Academy of Sciences of the United States of America 104, No. 24, 9949-9954.
National Academy of Sciences. Policy implications of greenhouse warming: mitigation, adaptation, and the science
base. National Academy Press, Washington, 1992.
Natural Environment Research Council. Experiment Earth: Report on a Public Dialogue on Geoengineering. Swindon:
Natural Environment Research Council, 2010. See www.nerc.ac.uk/about/consult/geoengineering-dialogue-final-
report.pdf Accessed 6 June 2012.
Parkhill, K. & Pidgeon, N. (2011) Public Engagement on Geoengineering Research: Preliminary Report on the SPICE
Deliberative Workshops. Understanding Risk Working Paper 11-01. Cardiff: School of Psychology.
The Royal Society Geoengineering: science, governance and uncertainty. The Royal Society, London, 2009.
Solar Radiation Management Governance Initiative Solar Radiation Management: the governance of research. 2011.
See http://www.srmgi.org/files/2012/01/DES2391_SRMGI-report_web_11112.pdf Accessed 6 June 2012.
Vaughan, N. E. & Lenton, T. M. (2011) A review of climate geoengineering proposals. Climatic Change 109, 745-790.
Wigley, T. M. L. (2006) A combined mitigation/geoengineering approach to climate stabilization. Science 314, 452-454.
Zeebe, R. E., Zachos, J. C., Caldeira, K. & Tyrrell, T. (2008) Carbon emissions and ocean acidification. Science 321, 51-52.
A spatiotemporal modelling framework for the integrated
assessment of cities
Claire L. Walsh 1, Alistair C. Ford 1, Stuart Barr 1, Richard J. Dawson 1
1 Centre for Earth Systems Engineering Research, School of Civil Engineering and Geosciences, Newcastle University,
Newcastle upon Tyne, NE1 7RU, UK.
E-mail: Claire.Walsh@ncl.ac.uk; Alistair.Ford@ncl.ac.uk; Stuart.Barr@ncl.ac.uk; Richard.Dawson@ncl.ac.uk
Abstract
Urban areas are faced with a number of challenges in the context of climate change and sustainable
development. Many activities in urban areas directly and indirectly release greenhouse gas
emissions that drive climate change. Given their high concentrations of population and
infrastructure, cities are vulnerable to climate change and therefore need to adapt to the possible
impacts as well as mitigate their greenhouse gas emissions. Synergies and conflicts of mitigation
and adaptation measures need to be taken into consideration in decision making, at an appropriate
scale. Strategic decisions need to be made against a background of socio-economic change,
alongside climate change scenarios. This paper describes the Urban Integrated Assessment Facility
(UIAF) that can deal with multiple aspects of long term change in urban areas. A suite of models
are presented, in particular the paper focusses on models that have been developed for modelling
land use and transport, and highlights how these interact with larger and smaller scale climate and
impact models. Finally, we consider how the UIAF will contribute to the vision for Earth Systems
Engineering and highlight future improvement, opportunities and advancements.
1 Introduction
Urban areas occupy less than 2% of the Earth's land surface (Balk et al., 2005), but house just over
50% of the world's population, a figure that was only 14% in 1900 (Douglas, 2004) and one which
is expected to increase to 60% by 2030 (UN, 2004). Urban activities release greenhouse gases
(GHGs) that drive global climate change directly (e.g. fossil fuel-based transport) and indirectly (e.g.
electricity use and consumption of industrial and agricultural products). As much as 80% of global
GHG emissions are estimated to be attributable to urban areas (O'Meara, 1999). Cities are also
potential hot spots of vulnerability to climate change impacts by virtue of their high concentration
of people and assets. However, despite being vulnerable and GHG emission contributors, cities
provide concentrated areas of adaptation opportunities to climate impacts and mitigation of GHG
emissions. It is increasingly being recognised that cities are the first responders in adapting to and
mitigating climate change (Rosenzweig et al. 2010).
Responding to climate change by mitigating carbon dioxide emissions and adapting to the impacts
of climate change is placing new and complex demands on urban decision makers and engineers.
Targets for mitigation of carbon dioxide emissions are now urgent and imply reconfiguration of
urban energy systems, transport and the built environment. Adaptation of cities requires integrated
thinking that encompasses a whole range of urban functions. Despite strong interactions between
mitigation and adaptation objectives, a number of conflicts and issues around integration exist.
Understanding the synergies, conflicts and trade-offs between mitigation and adaptation measures
would contribute to a more integrated climate policy and more effective climate-proofing of urban
environments (Dawson 2007). Intensification of the urban heat island effect coupled with predicted
hotter summers would lead to an increased use of air-conditioning or an increase in city-dwellers
using transport to leave the area; both would lead to an increase in emissions. Conversely,
incorporation of green and blue space in urban design would contribute to reducing the impacts of
urban heat, and also of the increased pluvial flooding expected from the predicted increases in intense rainfall and
wetter winters coupled with urbanisation, by providing cooling, storage and
infiltration, as well as providing opportunities for sequestering carbon (McEvoy et al. 2006).
Assessing measures of mitigation and adaptation needs to occur in an integrated manner. Within
cities, interactions occur through land use, infrastructure systems and the built environment.
Viewing cities as systems helps avoid conflicts between the different objectives, as well as between
economic growth and sustainable development (Dawson, 2007). Interactions between different
urban functions and objectives occur at a range of scales from individual buildings to whole cities
and even beyond. Integrated assessment provides a collaborative platform that is able to
address different drivers to aid shared designs and visions of the future. This paper presents
developments in the Centre for Earth Systems Engineering Research's (CESER) Urban Integrated
Assessment Facility (UIAF). This integrated assessment facility enables urban policy-makers,
planners, engineers and other stakeholders to compare alternative adaptation and mitigation
strategies. Central to advancing this work has been creating a suite of spatial simulation tools for the
study of urban areas which are described. Finally, we consider how the UIAF will contribute to the
vision for Earth Systems Engineering and highlight future opportunities and advancements.
2 Urban Integrated Assessment Facility
To facilitate spatially-focused yet wide-ranging analysis, CESER, as part of the UK Tyndall Centre
for Climate Change Research, developed a Cities research programme (Tyndall Cities) that aimed
to improve understanding in the area of climate change impacts on urban areas. As part of this
research programme an Urban Integrated Assessment Framework (UIAF) has been developed
which attempts to simulate at the city-scale the major processes of long-term change, such as
population growth and urban development, and the impact upon these of future events such as
climate change induced heat stress and flooding. The development of such an integrated assessment
framework that helps to inform decision making has, until very recently, proved difficult due to a
combination of technical and practical challenges of assimilating complex model-based evidence
into decision-making processes. The provision of such tools, however, allows a greater
understanding of the potential direct and indirect consequences of decisions, and the development
of portfolios of measures that aim to address in a synergistic manner different social, engineering
and natural challenges (Hall et al, 2010).
The UIAF is driven by global and national scenarios of climate and economic change and allows
the testing of policy options within a spatial analysis framework in order to understand the impacts
of decisions across the city. Individual simulation modules represent a number of aspects of urban
dynamics, such as land-use and socio-economic change, climate impacts such as droughts, heat
waves and fluvial floods, and carbon dioxide emissions from energy and from personal and freight transport.
The uncertainties surrounding future socio-economic, demographic and climate changes may be
large but exploring the range of possible futures allows the identification of options that are as far as
possible robust to uncertainties.
The UIAF is designed to allow the appraisal of policy and infrastructure decisions over a long
timescale (up to a century), allowing long-term processes of climate and urban change to unfold.
This is essential given the long life of infrastructure systems, the extended legacy of planning
decisions and the timeframe required for climate assessments. To understand changes on these
timescales it is necessary to examine them on a broad spatial as well as temporal scale. Thus,
analysis is carried out across the whole city in order to allow the representation of the relevant
interactions that exist throughout the city domain, and to facilitate an understanding of both the
local and global-scale impacts of a multitude of potentially contrasting policy options. The structure
of the UIAF, the sequence of the various components, and their interconnections are shown in
Figure 1.
Global economic and demographic scenarios (1) are produced by Cambridge Econometrics' E3MG
model (Barker et al, 2008) and used as a driver in the production of possible multi-sector UK
regional economic futures (3), generated by Cambridge Econometrics' MDM model (Junankar et al.,
2007). These are in turn downscaled to the city level using spatial attractors and weightings for
employment sites to give a city-wide mapping of the spatial distribution of employment. Similarly,
global climate scenarios from various models (2) are downscaled to give regional representations of
temperature, rainfall and storm surges using standard methods (Fowler and Wilby, 2007). A spatial
interaction module, the Land-use Transport Model (LUTM) (4), provides high resolution spatial
scenarios of population and employment change at the intra-city administrative zone level, such as
census wards, which in turn are downscaled further by the Tyndall Urban Development Model
(UDM) to a fine grid. The land-use and population scenarios produced by the land-use transport
model can be used to undertake emissions accounting (5), in association with future energy and
transport scenarios. The fine-scale representation of urban development can be used to assess the
impacts of climate change at a local scale based on global, national and regional policies and
decisions (6). The flow of the framework is from top to bottom: scenarios encapsulating policy
options and climate change are used as drivers, from the global and national scale to the local level.
the case of this London study, a range of GDP scenarios for the UK are used to classify economic
growth and UK Office of National Statistics (ONS) Subnational Population Projections are used to
give various possible future population figures. The UKCP09 Climate Projections are used to
provide an insight into possible future climatic conditions.

Figure 1 The Urban Integrated Assessment Framework (UIAF) employed in the UK Tyndall Cities research.
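A minimal sketch of such attractor-weighted downscaling of a regional total to city zones, with assumed zone names and weights for illustration, is:

# Sketch of distributing a regional employment total to zones in proportion
# to spatial attractor weights (e.g. floorspace, accessibility); values assumed.
def downscale_employment(regional_total, zone_weights):
    total_weight = sum(zone_weights.values())
    return {zone: regional_total * w / total_weight for zone, w in zone_weights.items()}

weights = {"ward_A": 2.5, "ward_B": 1.0, "ward_C": 0.5}
print(downscale_employment(120000, weights))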
3 Developing a suite of models
The modelling framework developed under the Tyndall Cities programme provides capabilities to
undertake the integrated assessment of climate adaptation and mitigation options. This is mainly
achieved using a downscaling process as described above. Through this process of linking models it
is possible to simulate the effects of global trends at a much finer spatial scale, by using consistent
scenarios and aggregated or disaggregated data. In subsequent work, this Tyndall UIAF framework
has been further advanced and supplemented with a number of other models, creating a suite of
spatial simulation tools for the study of urban areas.
The suite of models developed allows us to study phenomena at a range of scales which may be
presenting challenges for an urban area. For example, a large city is embedded within a system of
processes which happen at global (climate change, economic systems and trade), national (national
planning, economic decisions, transport infrastructure investment), regional (regional planning
frameworks, competing urban centres), city-level (planning decisions, urban development, urban
heat island) and local scale (flooding, heat impacts). The framework described in this paper is a
means of ensuring that these processes can be studied together and that modelling is undertaken at
an appropriate scale with respect to the processes of interest and the scale of available observations
and data.
Table 1 summarises the main components of the modelling suite of the UIAF described in this
paper, in terms of increasing spatial granularity. A description of the purpose of each model, the scale
at which each model operates, the scenarios they can test and their data requirements are listed.
The UIAF allows both relatively simple single-pass top-down linear modelling and also some
ability to represent feedbacks between various sub-models, in order to allow a better understanding
of how population growth, urban development and impacts upon these feed back to higher-level
processes. For example, in our modelling it is possible to link the impacts on an urban system back
to the economy to allow an understanding of indirect effects to be quantified, whilst also, for
example, allowing issues of how impacts may directly affect transport capacity and congestion to be
explored via coupling of the land use and transport modelling (LUTM) components of the UIAF with
feedbacks from the impact assessment modules.

In the UIAF, the characterisation of transport accessibility between locations of population and
employment is a key input to the LUTM. In earlier Tyndall Cities work, this characterisation was by
means of a generalised cost of travel, including monetary and time-related factors, between the
zonal units of analysis. This represented the ease with which the population could travel to a
potential place of work, and was thus a factor in their choice of residential location. In more recent work,
this accessibility cost characterisation has been extended to include a measure of capacity
in the transport networks, with commuting trips from residential locations to employment locations
assigned onto the network. This allows congestion to be included in the generalised
cost computation through closer feedbacks between the LUTM and the trip assignment transport
model.
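
To make the generalised cost idea concrete, the sketch below combines monetary and time-based components of a trip into a single cost. The value-of-time figure and cost elements are hypothetical, not those used in the UIAF.

```python
# Illustrative only: a generalised cost combining monetary and time-based
# components of a trip. All parameter values here are hypothetical.
def generalised_cost(distance_km, travel_time_h, fuel_cost_per_km=0.12,
                     fare=0.0, value_of_time_per_h=9.0):
    """Return a single cost (in currency units) for one origin-destination trip."""
    monetary = distance_km * fuel_cost_per_km + fare
    time_cost = travel_time_h * value_of_time_per_h
    return monetary + time_cost
```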

The capacities of the road network are modelled through the use of speed-flow curves (Ortuzar and
Willumsen, 2001), which modify the speed of traffic on a road depending on the flow of vehicles.
Standard national road network data are augmented with speed-flow characteristics (e.g. from
the DfT's COBA model; DfT, 2012) based on the classification of each road link. A shortest route
is then calculated by road from the residential origin zone to employment destination zone and the
observed flow (from census travel-to-work statistics) is assigned to each link in that shortest route.
After this process is completed for all origins and destinations the revised speed is computed for
each link and the shortest route recalculated. This assignment routine is repeated until equilibrium is
reached. An example of the road network with flows assigned is shown in Figure 2. For public
transport a similar process is followed but with capacities given per service and a number of
services per hour calculated from frequency information. Congestion in this case leads to an
increase in waiting time due to being unable to board a service.
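
A minimal sketch of the kind of iterative assignment loop described above is given below. The link attributes, the BPR-style speed-flow function and the fixed iteration count are illustrative assumptions; the UIAF itself uses DfT COBA-type curves on national road data and iterates towards equilibrium.

```python
# Hedged sketch of iterative trip assignment with a speed-flow curve.
# Assumes a networkx DiGraph whose edges carry "length" (km), "free_speed"
# (kph) and "capacity" (vehicles) attributes; od_trips maps (origin,
# destination) node pairs to observed commuting trips.
import networkx as nx

def congested_time(length_km, free_speed_kph, flow, capacity, alpha=0.15, beta=4):
    """BPR-style speed-flow relationship: travel time grows with flow/capacity."""
    free_time = length_km / free_speed_kph
    return free_time * (1 + alpha * (flow / capacity) ** beta)

def assign_trips(graph, od_trips, iterations=10):
    """Repeatedly route trips on current shortest paths and update link times.
    A simple heuristic stand-in for iterating the assignment to equilibrium."""
    for u, v in graph.edges:
        e = graph.edges[u, v]
        e["flow"] = 0.0
        e["time"] = e["length"] / e["free_speed"]   # free-flow travel time
    for _ in range(iterations):
        new_flow = {(u, v): 0.0 for u, v in graph.edges}
        for (origin, destination), trips in od_trips.items():
            path = nx.shortest_path(graph, origin, destination, weight="time")
            for u, v in zip(path[:-1], path[1:]):
                new_flow[(u, v)] += trips
        for (u, v), flow in new_flow.items():        # recompute congested times
            e = graph.edges[u, v]
            e["flow"] = flow
            e["time"] = congested_time(e["length"], e["free_speed"], flow, e["capacity"])
    return graph
```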
Table 1 Summary of the main components of the modelling suite.


MDM
  Purpose: To analyse and forecast changes in economic structure, energy demand and resulting environmental emissions.
  Scale: Global, with sub-national reporting units (e.g. Government Office Regions in the UK).
  Scenarios: Future projections of economic variables including employment totals, GDP, wages and trade. Carbon emissions from energy can also be modelled.
  Data requirements: National input-output tables of activities between economic sectors.

UKCP09 and Weather Generator
  Purpose: Projections of future changes to climate.
  Scale: 25 km scale for the projections, 5 km for the weather generator.
  Scenarios: Probabilistic scenarios using an 11-member RCM, with high, medium and low emissions scenarios for three 30-year time periods.
  Data requirements: N/A

LUTM
  Purpose: Zonal model of future population and employment.
  Scale: Initial zones employed are Census Area Statistic (CAS) wards, as this is the scale at which calibrating data (census populations and employment counts) are available.
  Scenarios: Scenarios of planning policy based on the location of available jobs, accessibility costs and land markets.
  Data requirements: Current population, current employment, future available floor space for population and employment, accessibility for travel between zones.

Transport
  Purpose: A network model of transport cost across various modes, utilising generalised cost to capture both the monetary and time-based costs of travel. Networks are built from publicly-available spatial data.
  Scale: Accessibility costs are calculated at the same zonal scale as the LUTM.
  Scenarios: Scenarios of investment in transport networks, giving changes in accessibility patterns across the area of interest, e.g. new railway lines.
  Data requirements: A spatial representation of the transport networks to be employed, plus attributes of these networks (speed, frequency, fuel costs, ticket prices etc.).

UDM
  Purpose: A raster-based model of urban growth following Cellular Automata principles.
  Scale: The scale is flexible, but the model has been employed down to a resolution of 100 m in order to map urban development for assessment of flooding impacts.
  Scenarios: Planning scenarios represented through a set of attractors (to promote development in an area) and constraints. These can be consistent with the LUTM in order to ensure realistic patterns of development based on population scenarios.
  Data requirements: Spatial representation of attractors and constraints, from a number of disparate national data sources (e.g. vector maps of building locations, undeveloped space, protected land, regeneration zones, or transport provision).

Impact Models
  Purpose: To model climate impacts, e.g. flooding, droughts, urban heat.
  Scale: Generally employed at a fine spatial scale, allowing an understanding of local effects of global processes. For example, the CityCat model of urban flooding operates at a 1 m resolution.
  Scenarios: Various.
  Data requirements: Various depending on impact (e.g. flood damage curves); fine-scale representations of infrastructure, buildings, and climate patterns.



Figure 2 A road network for a portion of greater London with trips assigned to each road link after one
iteration of assignment. The thicker the green line the greater the number of trips that have been assigned to
a particular road within the network.
By utilising a trip-assignment transport model module in the UIAF, it is possible to begin to
consider the feedbacks between population, employment and transport infrastructure where LUTM
scenarios of possible future spatial patterns of population are regulated by the capacity of the
transport network(s) to service them. Thus, more realistic estimations of future population and
employment are possible. Additionally, the ability to map commuting trips to the network
infrastructure which carries them allows further analysis to be undertaken in order to assess
potential indirect impacts of future climate hazards.

One pathway for climate impacts to impinge on the people of a city is through disruption to the
transport network by severe weather events such as floods or heat waves. In the ARCADIA project
(Adaptation and Resilience in Cities: Analysis and Decision making using Integrated
Assessment), work has been undertaken in conjunction with the Oxford Environmental Change
Institute to model these impacts and gain an understanding of their potential effects on the urban
economy. This is undertaken by statistically sampling the outputs from spatial weather generator
ensembles (Kilsby et al., 2007) to develop a range of representative extreme events, which can be
mapped spatially across the study region. These are then overlaid on the network models, and
damage functions are used to assess the probability of failure or damage to the network, and thus the
reduction in service capacity which may be experienced. These effects can be fed back to the
transport model in the form of reduced travel speeds or complete closures of network components
and the aggregate delay to commuting trips from this disruption can be calculated. An example of
the combination of climate event and networks is shown in Figure 3.
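
The sketch below illustrates the overlay step in the simplest possible terms: a gridded hazard (such as the heat-wave frequency surface in Figure 3) is sampled at each rail link, links exceeding a threshold receive a speed restriction, and the aggregate delay to the commuting trips carried by those links is summed. The grid lookup, threshold and restricted speed are hypothetical, not the ARCADIA values.

```python
# Hedged sketch: overlay a gridded hazard on rail links, derive speed
# restrictions, and aggregate the resulting commuter delay.
# Thresholds, restricted speeds and the link data structure are illustrative.

def hazard_at(point, grid, cell_size, origin=(0.0, 0.0)):
    """Look up the hazard value of the grid cell containing point (x, y)."""
    col = int((point[0] - origin[0]) // cell_size)
    row = int((point[1] - origin[1]) // cell_size)
    return grid[row][col]

def disrupted_delay(links, grid, cell_size, threshold=1.0, restricted_kph=30.0):
    """Total extra travel time (hours) over all links whose midpoint lies in a
    cell exceeding the hazard threshold, weighted by the trips on each link."""
    extra_hours = 0.0
    for link in links:  # each link: dict with midpoint, length_km, speed_kph, trips
        if hazard_at(link["midpoint"], grid, cell_size) >= threshold:
            normal = link["length_km"] / link["speed_kph"]
            restricted = link["length_km"] / restricted_kph
            extra_hours += (restricted - normal) * link["trips"]
    return extra_hours
```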

Figure 3 A grid of the annual frequency of heat wave events in 2050 (a heat wave being defined as maximum
temperatures of 32°C on day 1, 18°C overnight and 32°C on day 2). This is overlaid onto the overland rail
network to assess the effect on the rail network, with appropriate speed restrictions or closures implemented
in the network model.

In addition to the indirect impacts of extreme events such as those outlined above, direct damages
can also be measured in the form of damage to residential or non-residential buildings. The LUTM
outputs of projected future zonal population and employment can be mapped at a finer spatial scale
through the use of the UDM (Urban Development Model), a cellular automata driven spatial
simulation model of urban development. Future spatial patterns of urban development can then be
combined with spatial maps of climate hazard and standard damage functions to produce
estimates of future damages. Figure 4 presents land use in East London on a 100 x 100 m grid,
showing existing and future developments under the baseline land use paradigm and under
conditions where a policy to reduce exposure to flood risk has led to a ban on future
floodplain development. Furthermore, the risk of flooding from surge tides and high flows in the
river Thames was analysed using a combination of statistical analysis, hydraulic simulations and
reliability analysis of flood defence infrastructure. By coupling this analysis of the probability of
flooding with the results of the land use model, it is possible to calculate expected annual flood
damages now and in the future. Figure 5 compares expected annual damage from flooding under
four land use development scenarios (taken from the LUTM) and the baseline case (2005).
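
The expected annual damage calculation referred to above amounts to integrating damage against annual exceedance probability. The minimal sketch below shows one common way of doing this (trapezoidal integration); the return periods and damage values are purely illustrative, not results from the London study.

```python
# Minimal sketch of expected annual damage (EAD): integrate damage against
# annual exceedance probability. Return periods and damages are illustrative.

def expected_annual_damage(return_periods_years, damages):
    """Trapezoidal integration of damage over exceedance probability."""
    probs = [1.0 / t for t in return_periods_years]
    pairs = sorted(zip(probs, damages))          # ascending exceedance probability
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pairs[:-1], pairs[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)
    return ead

# e.g. hypothetical flood damages estimated for 1000-, 100- and 10-year events
print(expected_annual_damage([1000, 100, 10], [5e9, 1.2e9, 0.1e9]))
```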

Figure 4 Result of running the UDM on part of London for two different planning scenarios accommodating
the same future ward-level projections of population growth: (left) an unconstrained development pattern;
(right) floodplain-constrained development, where no development can occur on land which may
experience a 1-in-100-year flood event.

Figure 5 Expected annual damage from flooding under four land use development scenarios (2100) and the
baseline case (2005). The 2005 risk is plotted on the map at the ward scale. The expected risk in 2100 is
indicated by the bar charts at the borough level, with purple indicating the estimated risk in 2005, yellow
indicating eastern development, grey indicating internal development, green indicating suburban development
and brown indicating baseline development.
4 Future Vision of UIAF for Earth Systems Engineering
The above examples highlight how the UIAF can be employed to allow a regional to local-scale
understanding of the impacts and implications of global processes on cities through a series of
interconnected modelling components that explicitly represent the linkages and feedbacks between
them. The UIAF approach within CESER has been successfully employed in a number of impact
analysis studies for the city of London, and its scalability has been demonstrated with respect to the larger
spatial impact assessment being performed within the current ARCADIA project. The UIAF
developed within CESER is underpinned by many of the core themes of the Centre for Earth
Systems Engineering Research: multi-scale observation and monitoring, via the use of spatial
database management approaches for handling large volumes of geospatial data-sets; the use of
climate scenarios and statistical downscaling to provide future scenarios of heat stress and heat
waves for cities, as well as spatial patterns of extreme rainfall events for pluvial flooding; and
coupled system simulation, where historically disparate modelling paradigms are employed to
improve our holistic understanding of climate impacts on humans and how they may potentially
respond to such hazards.
Moreover, the UIAF framework that has been developed has the potential to investigate not only
climate change driven impacts on cities, but also a wider range of environmental impacts and events.
For example, owing to the explicit spatial framework of the UIAF, and in particular our ability to
model flows and movements across the urban fabric, such as commuting trips, we have the ability
to investigate issues such as urban air quality by coupling the UIAF's land use transport component
with meso-scale dispersion models of air quality. The resulting spatial patterns can then be further
integrated with the geospatial data held on urban form (the physical layout) and related function
(activities taking place) of cities to allow estimates of the spatial vulnerability and exposure to
areas of poor air quality, thus allowing spatial air quality risk assessments to be performed. Equally,
as the land use modelling component of the UIAF allows one to simulate future spatial patterns of
population, one can investigate the air quality risk that may be faced by future populations and
explore adaptation options with respect to different future configurations of urban form via the Urban
Development Model (UDM).
However, thus far the UIAF has been developed very much as a loosely coupled suite of disparate
models and analytical components, requiring significant effort in the reliable and consistent transfer
of the outputs of one component to form input to another. Moreover, maintaining the consistency
between the parameterisation of different UIAF components shown in Figure 1 for particular
scenarios or policy choices is a major issue. In this respect, the CESER team are currently working
on developing an Urban Spatial Modelling (USM) open source software framework which
integrates the diverse range of models in the UIAF into a common software environment for the
first time. Underpinning this activity is the use of geospatial database technology for the concurrent
storage of model input, intermediate and final output data-sets and parameters. Thus, the aim is to
develop a software framework which integrates urban spatial modelling and geospatial databases to
allow the seamless transfer and exchange of data between sub-models during potentially large
numbers of scenario runs of the UIAF. As such, we ultimately aim to provide an open source
resource for running large applied urban spatial modelling tasks and impact assessment studies.
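
The pattern underpinning the USM idea can be illustrated very simply: sub-model outputs are staged in a shared database, keyed by scenario and model, so that downstream components can read them back without bespoke file transfers. The sketch below uses SQLite purely to show that pattern; the real USM framework targets geospatial database technology, and the table design here is an assumption.

```python
# Illustrative pattern only: stage sub-model outputs in a shared database,
# keyed by scenario and model, so downstream components can read them back.
# The USM framework itself targets geospatial databases; this schema is assumed.
import sqlite3

conn = sqlite3.connect("uiaf_runs.db")
conn.execute("""CREATE TABLE IF NOT EXISTS model_outputs (
                  scenario TEXT, model TEXT, zone TEXT, variable TEXT, value REAL)""")

def write_output(scenario, model, records):
    """records: iterable of (zone, variable, value) tuples from one sub-model."""
    conn.executemany(
        "INSERT INTO model_outputs VALUES (?, ?, ?, ?, ?)",
        [(scenario, model, z, var, val) for z, var, val in records])
    conn.commit()

def read_input(scenario, model, variable):
    """Return {zone: value} produced by an upstream sub-model for this scenario."""
    rows = conn.execute(
        "SELECT zone, value FROM model_outputs WHERE scenario=? AND model=? AND variable=?",
        (scenario, model, variable))
    return dict(rows.fetchall())

# e.g. the LUTM writes zonal population, which the UDM later reads as a constraint
write_output("high-growth", "LUTM", [("ward_001", "population", 12500.0)])
print(read_input("high-growth", "LUTM", "population"))
```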
The USM will facilitate rapid assessment of scenarios in other cities; indeed, we have already
shared ideas and collaborated with both Durban and Paris. A secondment to Newcastle City Council
is investigating the transferability of the UIAF to another UK city, and has led to a number of other
opportunities as we seek to strengthen relationships within our own city and region. For example, in
collaboration with Newcastle City Council, the Environment Agency, Northumbrian Water, Newcastle
Primary Care Trust and local businesses, we have been investigating the use of decision theatre
techniques to look at the impacts of a storm event across the city and adaptation policy options for
pluvial flooding using modelling results. We are also working with schools to develop a network of
weather stations across the city that will not only provide useful data for research and for Newcastle
City Council but could also be used as part of school curricula. Work to establish a long-term
urban monitoring and research facility is also underway. Focused on Newcastle's Science
Central development, this will comprise monitoring infrastructure, data acquisition, storage and
processing infrastructure, and a decision theatre for civic engagement, ideally mapping onto a
number of CESER's research themes and cutting across many more domains, including weather
and climate risks, transport, water, energy, sanitation, ICT, waste, health, wellbeing and the economy.
Deployed over a much larger scale (i.e. to capture systems effects such as the urban heat island) and
at multiple scales (e.g. ground measurements complemented by satellite data), this will lead to
a systems-level understanding of cities, providing the evidence base and the intensity of
collaboration within and across multiple sectors needed to initiate a step change in our understanding of
urban systems and society's capacity to design and initiate the necessary transition to make our
cities sustainable.
Acknowledgements
The work upon which this paper is based was funded by the Tyndall Centre for Climate Change
Research, Phase 2 and through the EPSRC funded ARCADIA project (EP/G061254/2).
References
Balk, D., Pozzi, F., Yetman, G. Deichmann, U and Nelson, A. (2005). The distribution of people and the dimension of
place: methodologies to improve the global estimation of urban extents, in Proc. Urban Remote Sensing, International
Society for Photogrammetry and Remote Sensing, March 2005, Tempe, Arizona.
Barker, T., Foxon, T. and Scrieciu, S. (2008). Achieving the G8 50% target: modelling induced and accelerated
technological change using the macro-econometric model E3MG. Climate Policy, Special Issue on Modelling
long-term scenarios for low-carbon societies, 8: S30-S45.
Dawson, R. J. (2007). Re-engineering cities: A framework for adaptation to global change. Phil. Trans. R. Soc, Special
issue on Visions of the Future 365(1861), 3085-3098.
DfT (2012). Department for Transport Analysis Guidance Website (WebTag), Department for Transport, London
(http://www.dft.gov.uk/webtag/documents/expert/unit3.1.2.php), accessed May 2012.
Fowler, H.J. and Wilby, R.L. (2007). Beyond the downscaling comparison study. International Journal of Climatology
27 (12), 1543-1545.
Hall JW, Dawson RJ, Barr SL, Batty M, Bristow AL, Carney S, Dagoumas A, Ford A, Tight MR, Walsh CL, Watters H,
Zanni AM. (2010). City-scale integrated assessment of climate impacts, adaptation and mitigation. In: Bose, R.K, ed.
Energy Efficient Cities: Assessment Tools and Benchmarking Practices. Washington, DC, USA: World Bank, pp.43-64.
Kilsby CG, Jones PD, Burton A, Ford AC, Fowler HJ, Harpham C, James P, Smith A, Wilby RL. (2007). A daily
weather generator for use in climate change studies. Environmental Modelling and Software 22(12), 1705-1719.
Junankar, S., Lofsnaes, O., and Summerton, P. (2007). MDM-E3: A short technical description. Technical report,
Cambridge Econometrics.
McEvoy, D., S. Lindley and J. Handley (2006). Adaptation and mitigation in urban areas: Synergies and conflicts.
Proceedings of the Institution of Civil Engineers: Municipal Engineer 159(4), 185-191.
O'Meara, M. (1999). Reinventing Cities for People and the Planet. Worldwatch, Washington DC.
Ortuzar, J. D. and Willumsen, L. G. (2001). Modelling transport, Wiley, Chichester, UK, pp323-343.
Rosenzweig, C., W. Solecki, S. A. Hammer and S. Mehrotra (2010). Cities lead the way in climate-change action.
Nature 467(7318), 909-911.
UN (2004). State of the World's Cities 2004/2005: Globalisation and Urban Culture. New York, United Nations
Publications.



Eco-vulnerability assessment and urban eco-zoning for
global climate change: A case study of Shanghai, China

Xiangrong Wang 1; Yuan Wang 1; Zhengqiu Fan 1; Yi Yong 2

1 Center for Urban Eco-Planning & Design, Fudan University, Shanghai 200433, China
E-mail: xrxrwang@fudan.edu.cn; oneyuan1216@gmail.com; zhqfan@fudan.edu.cn
2 WWF Beijing Office - Shanghai Programme Office, B2-3002, No. 121 North Zhongshan No. 1
Road, Shanghai 200083, China, yyong@wwfchina.org
Abstract
Taking the city of Shanghai, China as an example, this paper develops a comprehensive evaluation
index system based on a "riskiness-sensitivity-responses" (RSR) model to assess eco-vulnerability to
climate change. An urban eco-zoning of climate vulnerability, and associated strategies, for Shanghai
is also provided. The aim is to explore a methodology for eco-vulnerability assessment and urban
eco-zoning under global climate change and to provide a reference for similar estuary cities around
the world.

1 Introduction
With global climate change in recent decades, environmental problems have increased and become one of the
most serious challenges to sustainable human development. Against this background of global climate
change, many scholars around the world have carried out evaluation and strategy research on the
influence of climate change on urban areas (Wardekker et al., 2009; Roberto, 2009), for example: in
the United States, San Francisco and the San Francisco Bay on the west coast (a typical structural
estuary) and Seattle on Puget Sound (a typical fjord-type estuary); the Boston and New
York metropolitan areas on the east coast of the US; Houston and New Orleans on the Gulf of Mexico;
Buenos Aires in South America; the Rotterdam and London metropolitan areas on the English Channel
and its delta rivers; the Melbourne and Sydney metropolitan areas in Australia; the Guangzhou, Hong Kong,
Macao, Shenzhen and Zhuhai metropolitan cluster in the Pearl River Delta; the Shanghai metropolitan area
in the Yangtze River Delta; Seoul in South Korea; and the Tokyo, Yokohama and Chiba metropolitan area
in Japan. The main research focuses on the impacts of sea-level rise, extreme weather and climate events,
and salt water intrusion on the natural ecology and on urban society and economy, as well as on
vulnerability assessment, response mechanisms, strategies and policy (Malone and Yohe, 1992; Smith
et al., 1999; Parmesan and Yohe, 2003; Downing and Patwardhan, 2004; VanMinnen et al., 2002;
Janssen et al., 2006; Füssel, 2007; Karl et al., 2009).
Estuary cities are located where large rivers enter the sea and exert an especially significant
influence on their metropolitan areas. They are also intensively used parts of the urban complex ecosystem,
with high population density and highly concentrated industry, capital and land use. They therefore stand at
the forefront of vulnerability to the impacts of climate change (Klein and Nicholls, 1999). The
Third Assessment Report of the IPCC (Intergovernmental Panel on Climate Change) (2001) defined
vulnerability as the degree to which a natural or social system is susceptible to, or unable to cope with,
the adverse effects of climate change, including climate variability and extreme events; it is a function of
the character, magnitude and rate of climate change to which a system is exposed, together with the
system's sensitivity and its capacity to adapt (Figure 1). Thus, research on the regional or urban eco-
vulnerability of climate change should include two important aspects: the impacts of climate change on
the region and urban area, and the adaptive capacity of the region and urban area to climate change.
Research on the climate change eco-vulnerability of regional and urban areas has indeed become a
present and future hotspot (Xiangrong Wang and Yuan Wang, 2011).










2 Study Area
Shanghai is located at 31°14′ N, 121°29′ E, at the Yangtze River estuary on the western shore of the
Pacific along the East Asian continent. It sits at the front of the Yangtze River delta, bordered to the
south by Hangzhou Bay. Shanghai occupies a superior geographical position and is a relatively typical
estuary city (Figure 2); at the same time, it is a modern international metropolis with a high
urbanization rate, which reached 88.86% in 2010 (Shanghai Statistics Bureau, 2010), the highest
urbanization level of any city in China.
The city covers an area of 6,340.5 km², accounting for about 0.06% of China's total area, and has a
population of more than 23 million. Shanghai lies on the world's largest landmass, the Eurasian
continent, at the estuary where Asia's longest river, the Yangtze, enters the world's largest ocean, the
Pacific; continental and oceanic influences therefore intersect strongly here. Under the background of
global climate change, the conflict between its fragile ecosystem and its fast social and economic
development is increasingly significant (for example, the impacts of sea-level rise on low-lying areas,
salt water intrusion, and extreme weather disaster events). Related research has shown that Shanghai
has entered a second stage of climate warming within the last hundred years, beginning at the end of the
1980s, and the trend is significant (Jialiang Xu, 2000).
3 Research Methodology
3.1 Index system building for climate change eco-vulnerability assessment of Shanghai
Based on the actual situation and data availability in Shanghai, the index system for climate change
eco-vulnerability assessment of Shanghai was constructed in this paper. It includes three 2-class
fields, seven 3-class themes, 18 major 4-class indicators, and 29 specific indexes (Table 1).

Figure 1 IPCC's definition of vulnerability to climate change
Figure 2 TM remote sensing image of Shanghai in March 2008
Table 1 Index system for Shanghai climate change eco-vulnerability assessment
Objectives: Comprehensive assessment index system

Field: Riskiness (16 indexes)
  Theme: Climate change (6 indexes)
    Sea-level rise and sea water hydrology: 1. Perennial sea-level rise (mm)
    Extreme climate disaster: 2. Days with Tmax >= 35 °C (days); 3. Days of torrential rain (days)
    Precipitation: 4. Annual precipitation variability (%)
    Temperature: 5. Average air temperature difference relative to previous years; 6. Outskirts temperature difference
  Theme: Artificial stress (10 indexes)
    Economic development: 7. Energy consumption (SC/MTpa); 8. Total water consumption (10⁹ m³)
    River basin development: 9. Annual watershed waste water discharge (10⁹ T); 10. Rate of change of annual watershed runoff volume (%); 11. Annual watershed sediment load reduction (10⁹ T)
    Population: 12. Population density (person/km²)
    Land use: 13. Cultivated land reduction (10⁴ hm²); 14. Land reclaimed from marshes (km²/a)
    Pollution discharge: 15. Total waste water discharge (10⁹ T); 16. Waste air discharge (10⁹ m³)

Field: Sensitiveness
  Theme: Society (1 index)
    Human health: 17. Proportion of elderly and children (%)
  Theme: Natural environment (4 indexes)
    Water resource: 18. Average annual rate of runoff quantity (%); 19. Length of river with water quality of Class IV or better (%)
    Vegetation: 20. Vegetation carbon sink volume (10⁴ T)
    Soil: 21. Soil carbon sink volume (10⁴ T)

Field: Responses
  Theme: Education & Publicity (1 index)
    Education and publicity: 22. Population non-illiteracy rate (%)
  Theme: Economic ability (3 indexes)
    Total and efficiency: 23. GDP per capita (RMB Yuan); 24. Engel coefficient
    Investment: 25. Green investment as a percentage of GDP (%)
  Theme: Eco-building & Pollution control (4 indexes)
    Eco-building: 26. Green coverage ratio (%); 27. Natural reserve rate (%); 28. Industrial water reuse rate (%)
    Pollution control: 29. Urban sewage treatment rate (%)
3.2 Calculation of the Shanghai climate change eco-vulnerability index (SHEVI)
Based on the TM remote sensing image of March 2008, the Shanghai land use map and basic
information on relevant ecological factors, the urban topographic type, NDVI index, coastal
beach, main streams, surface water environment, hydrological geology, and biological diversity
protection factors were selected to construct the evaluation model of the Shanghai climate change eco-
vulnerability index (SHEVI):
SHEVI = \sum_{k=1}^{3} A_k W_k    (1)

where SHEVI is the index of Shanghai climate change eco-vulnerability, A_k is a 2nd-class index,
and W_k is the weight of the 2nd-class index.
All evaluation indexes and monitoring results were normalized (uniformization processing) and
combined using the weighting factors of the above model (Table 2), and the riskiness index (RI),
sensitivity index (SI) and response index (AI) were calculated, together with the three 2nd-class field
indexes and seven 3rd-class theme indexes. On this basis, the weighted eco-vulnerability index
(SHEVI) was calculated for 1998-2006. The value of SHEVI lies between 0 and 1, corresponding to the
lowest and highest values of climate change eco-vulnerability in Shanghai between 1998 and 2006;
the higher the value, the higher the vulnerability, and vice versa.
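
The sketch below illustrates the uniformization and weighted aggregation implied by Equation (1) and Table 2. The indicator values are hypothetical, and the assumption that response indicators are reverse-coded during normalization (so that larger values always indicate higher vulnerability) is mine, not stated by the authors.

```python
# Sketch of the uniformization and weighted aggregation behind SHEVI (Eq. 1).
# Indicator values are hypothetical; the field weights follow Table 2.
# Response indicators are assumed to be reverse-coded so that higher = more vulnerable.

def min_max(series):
    """Rescale an indicator series (e.g. 1998-2006 values) to the range 0-1."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in series]

def weighted_index(components, weights):
    """Weighted sum of already-normalised component indices for one year."""
    return sum(c * w for c, w in zip(components, weights))

# Example for a single year: field indices (riskiness RI, sensitivity SI,
# response AI) combined with the 2nd-class weights from Table 2.
RI, SI, AI = 0.62, 0.48, 0.35                       # hypothetical values
SHEVI = weighted_index([RI, SI, AI], [0.2360, 0.3223, 0.4417])
print(round(SHEVI, 3))
```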
Table 2 Index Weighting of SHEVI of Shanghai under Climate Change
Field (weight)            Theme (weight)                              Index (weight)
Riskiness (0.2360)        Climate change (0.1618)                     Sea-level rise and sea water hydrology (0.2500)
                                                                      Extreme climate disaster (0.2500)
                                                                      Precipitation (0.2500)
                                                                      Temperature (0.2500)
                          Artificial stress (0.8382)                  Economic developing stress (0.2000)
                                                                      Watershed development (0.2000)
                                                                      Population stress (0.2000)
                                                                      Land use expanding (0.2000)
                                                                      Pollution discharge (0.2000)
Sensitivity (0.3223)      Society (0.9479)                            Human health (1.0000)
                          Natural environment (0.0521)                Water resource (0.3333)
                                                                      Vegetation (0.3333)
                                                                      Soil (0.3333)
Responses (0.4417)        Education & Publicity (0.3981)              Education level (1.0000)
                          Economic ability (0.2578)                   Economy and efficiency (0.5000)
                                                                      Investment quantity (0.5000)
                          Eco-building & Pollution control (0.3441)   Eco-building (0.5000)
                                                                      Pollution control (0.5000)
3.3 Grading the effects of climate change
The effects of climate change are varied and uncertain, and affected factors interact with one another
in chains. Sometimes the indirect effects caused by climate change may be greater than the direct
effects. The affected aspects can show different responses to climate change according to the
different regional environment characteristics. The effects of climate change are divided into 3
classes (Table 3) according to the effect objects of climate change in this paper.
Table 3 Levels of Climate Change Impact
Class | Impact scope | Change type | Examples | Impact type
Class I: Eco-environment, abiotic change (direct) - ocean acidification; increased (or reduced) surface runoff; land-water disequilibrium.
Class II: Eco-environment, biotic change (direct) - changes in biological diversity, such as changes in quantity, structure, distribution and behaviour.
Class III: Socio-economy, production and living change (indirect) - agriculture, fisheries, forestry, employment, transportation, housing, health, energy, water resources, etc.
Table 3 shows that the class I and class II impacts of climate change on the eco-environment can be
summarized as the direct impacts of climate change on the estuarine urban ecosystem. The class-I
impact acts on the abiotic factors of the natural subsystem, while the class-II impact acts on the
biological factors of the natural ecosystem, mainly reflected in the quantity, structure, distribution and
behaviour of biological resources. The class-III impact acts on socio-economic systems and can be
summarized as indirect effects: it affects production and living in the estuarine city, i.e. human beings
and related sectors such as industry, agriculture, fisheries, forestry, transportation, housing and health.
From class I to class III, the impact of climate change evolves from direct to indirect and from simple
to complex.
3.4 Zoning method for Shanghai eco-vulnerability to climate change
The zoning model for climate change was established from the twin perspectives of riskiness and
sensitivity, combined with the major factors of artificial stress and the key eco-environmental issues
in Shanghai. Based on a GIS spatial grid overlay module, an integrated vulnerability assessment was
carried out to identify the most vulnerable areas.
The processes of rapid urbanization, population growth and socio-economic development were
selected for the riskiness assessment. On the one hand these factors contribute to the region's climate
change; on the other hand they also place stress and pressure on regional natural resources. Relevant
factors, such as population pressure, industrial output, energy consumption per unit GDP and
urbanization, were selected to calculate a comprehensive ecological riskiness coefficient and generate
the spatial distribution pattern of climate change eco-vulnerability.
Other factors, such as topography, vegetation, water conservation, wetland, stream corridor
protection, environmental protection, environmental geology and land subsidence, were selected as
ecological factors for the sensitivity evaluation. The integrated climate change sensitivity evaluation
for Shanghai and its spatial distribution pattern were studied on a GIS platform.
The detailed indexes for the sensitivity evaluation were: (1) coastline and estuary (terrain and
landforms); (2) vegetation (NDVI index); (3) beach wetland (spatial distribution of coastal beach wetland); (4)
water environment (main water system protection and water environment function zoning); (5)
geology (environmental geology zoning); (6) groundwater (ground settlement); (7) biodiversity
(nature reserves). On this basis, an expert support system was used to classify the factors and assign
sensitivity classes, and the climate change eco-vulnerability assessment and comprehensive zoning of
Shanghai were obtained through GIS-based analysis.
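
The grid-overlay step can be illustrated with a very small sketch: normalised riskiness and sensitivity rasters are combined cell-by-cell and the result classified into subzones. The equal weighting and the class breaks below are illustrative assumptions, not the combination rule used in the paper.

```python
# Hedged sketch of a GIS grid-overlay zoning step: combine normalised riskiness
# and sensitivity rasters cell-by-cell, then classify into vulnerability subzones.
# The combination rule and class breaks are illustrative only.
import numpy as np

def zone_vulnerability(riskiness, sensitivity, breaks=(0.66, 0.5)):
    """Return an integer grid: 1 = 1st-class (most vulnerable), then 2 and 3."""
    combined = 0.5 * riskiness + 0.5 * sensitivity      # both layers scaled 0-1
    zones = np.full(combined.shape, 3, dtype=int)
    zones[combined >= breaks[1]] = 2
    zones[combined >= breaks[0]] = 1
    return zones

risk = np.random.rand(4, 4)          # stand-ins for the real GIS layers
sens = np.random.rand(4, 4)
print(zone_vulnerability(risk, sens))
```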
4 Results and discussion
4.1 Comprehensive results
Figure 3 shows that the maximum value of the climate change eco-vulnerability index in Shanghai
occurred in 1998 because the sensitivity of water resources increased significantly, reaching its highest
value of the nine-year period, while the response was still at a low level, so that the accumulated
vulnerability became more evident. As Figure 4 shows, vulnerability is a function of riskiness,
sensitivity and response: the higher the riskiness and sensitivity, the higher the vulnerability, while the
higher the response, the smaller the vulnerability. The minimum value of the climate change
eco-vulnerability index of Shanghai occurred in 2000 because riskiness and sensitivity were at a low
level in that year and the response had improved significantly over the previous two years. The index
shows an interim peak in 2002, mainly due to the high sensitivity of water resources, driven in particular
by increased surface runoff variability due to climate change and increased inland water pollution
caused by artificial stress. Changes in hydrology and water resources in Shanghai therefore clearly
affect its eco-vulnerability to climate change. At the same time, the riskiness from artificial stress
increased gradually from 1998 to 2006. Thus, controlling and guiding river basin development,
population growth, pollution discharge, energy consumption and other human activities should
currently be the most important mitigation measures for Shanghai.

Figure 3 The index of climate change eco-vulnerability in Shanghai (1998-2006)

Figure 4 Eco-vulnerability dynamics of climate change in Shanghai (1998-2006)
4.2 Zoning for eco-vulnerability dynamics of climate change in Shanghai



4.2.1 The 1st-class eco-vulnerability subzone
The 1st-class vulnerable subzone in Shanghai covers 447.66 km², mainly comprising Chongming
Dongtan, Sheshan National Forest Park, Dongping National Forest Park, the wetlands influenced by the
southern branch of the Yangtze River, the lakes and lakelet depressions of the Dianshan Lake area, the
water source protection areas of the upper Huangpu River, and the coastal buffer zone (Figure 6).
Main issues
1. Sea-level rise will cause coastal erosion and wetland loss here, and will affect the structure,
behaviour and spatial and temporal distribution of wetland vegetation. Chongming Dongtan is an
important wetland in Shanghai with rich biodiversity; it contains nature reserves for Chinese
sturgeon and birds, and is an important staging post on the East Asia-Australia migratory bird route.
Figure 5 Comprehensive zoning for climate change eco-vulnerability of Shanghai
Figure 6 The 1st-class eco-vulnerability subzone in Shanghai
In recent years, increasingly serious coastal erosion along the Chongming Dongtan bank, wetland
loss and environmental change have become likely to affect habitat and bird migration.
2. Dongping and Sheshan National Forest Parks are rich in species diversity. Climate change will
result in reduced biodiversity in these areas.
3. Climate change and sea-level rise will increase storm surge frequency and intensity, which
may increase the frequency and intensity of saline intrusion into the Yangtze River estuary and of
saline water reaching the upstream Huangpu River, thus affecting water quality in the water
protection areas.
4. Saline intrusion can cause biological changes in habitats and may result in reduced biodiversity.
5. Eutrophication has occurred over large areas of Dianshan Lake. Rising temperatures may also
encourage large algal populations in rivers and lakes, expand eutrophication and seriously affect
water quality and the aquatic environment.
Countermeasures and Strategies
1. Strengthen responses to adapt to sea-level rise. Combine slope protection and beach guarding
measures with both engineering and biological measures. Improve design standards and strengthen
protective countermeasures for sea-level rise in coastal areas. Alleviate seawater flooding and
salinity intrusion in upstream estuary reaches through transfers from upland rivers and reservoirs.
2. Enhance marine environmental monitoring and early-warning capacity. Additional observation
of coastal and island locations and construction of an observation system should be given high
priority, enhancing monitoring capacity and improving aerial remote sensing and telemetry of the
marine environment. A coastal surge disaster early-warning and response system should be
established to improve marine disaster early-warning capacity.
3. Reclamation of coastal swamp grass and Phragmites wetland should be strictly prohibited.
4. Improve awareness of water resource risk management, effectively prevent sudden pollution
accidents in the construction of water systems, and reduce the probability of such sudden
accidents.
4.2.2 The 2nd-class eco-vulnerability subzone
The 2nd-class eco-vulnerable subzone covers an area of 1,849.07 km², mainly located in the
northern part of Chongming Island, the main rivers (the Southern Cross Irrigation Channel) of
Chongming Island, Hengsha Island, Changxing Island, the Nanhui mouth area, the coastal wetlands of
Hangzhou Bay and the main water system of Shanghai (Figure 7).











Main issues
1. Northern Chongming, Changxing and Hengsha Islands are new sand islands in the estuary area.
Their altitude above sea level is relatively low and they face the open sea. Nanhui and its surrounding
area on Hangzhou Bay are rich in wetland resources. As sea level rises, coastal areas will suffer
flooding and wetland loss. The loss of coastal beaches will reduce the safety coefficient of bank
revetment engineering.
2. Shanghai's water resources depend largely on external sources. Changes in runoff and in the annual
runoff distribution of the lower Yangtze River will affect the reliability, recoverability and
vulnerability of the existing water supply system in Shanghai, thus affecting the sustainable use of
water resources.
3. Climate warming will raise sea temperatures, change ocean circulation and increase the
concentration of dissolved CO2 in seawater. This can lead to ocean acidification, change the
composition of the water body and affect water quality indirectly. Combined with relative sea-level
rise, seawater flooding and intrusion into surface runoff will seriously affect groundwater quality in
Shanghai.
4. Sea-level rise will reduce the drainage capacity of low-lying land and increase flooding.
Typhoons and rising water levels will increase storm frequency and intensity. Flooding and storm
surge disasters will affect the urban water cycle. Combined with the current water pollution in rivers
and coastal waters, the effects of sea-level rise on the water quality of Shanghai should not be
underestimated.
Countermeasures and Strategies
1. Establish a reasonable system of integrated coastal zone management, integrated decision-making
mechanisms and effective coordination mechanisms, deal promptly with coastal zone development
and protection issues arising in operation, and establish integrated management demonstration areas.
2. Vigorously create coastal protection forest, establishing a multi-species, multi-level,
multi-functional protection forest engineering system.
3. Use bank resources reasonably, and carry out coastal survey and evaluation, development and
utilization, and coastal protection planning.
Figure 7 The 2nd-class eco-vulnerability subzone in Shanghai
Figure 8 The 3rd-class eco-vulnerability subzone in Shanghai
4. Strengthen the comprehensive improvement of the seawater channel and the protection of the
ecological environment in the Yangtze River estuary.
5. Establish a standardized, efficient meteorological disaster contingency planning system, with
smooth information flows and a responsive, efficient emergency-handling mechanism, to improve
meteorological disaster emergency response capability.
4.2.3 The 3rd-class eco-vulnerable subzone
The 3rd-class eco-vulnerable subzone covers an area of 1,660.11 km², mainly in the southern zone of
Chongming Island, Baoshan, Pudong New District and Minhang District, downtown Shanghai and the
west of Jinshan District (Figure 8).
Main issues
1. The southern zone of Chongming Island is an old estuarine sand island area; although its altitude
above sea level is higher, it is still under threat from sea-level rise as many of its areas face the
sea.
2. Baoshan, Minhang District and Pudong New Area are not only important concentrations of
industry but also priority areas of groundwater extraction. They are heavily affected by human
interference, so pressure on resources and pollution are more serious.
3. The central area of Shanghai has high-density construction and a high population. It is under
heavy artificial stress, as demonstrated by the urban heat island effect. The impact of climate
warming on water resources and energy supply in the central area imposes a huge burden and also
poses a certain threat to human health.
4. The west of Jinshan District belongs to the landscape water area (class III water area) but also has
a relatively complex and fragile geological environment. Owing to the more serious land
subsidence caused by groundwater exploitation and its lower altitude above sea level, significant
human interference would affect groundwater extraction and water quality in the region.
Countermeasures and Strategies
1. Establish a sound early-warning system for extreme climate, and organize a specialized early-
warning group responsible for extreme weather forecasting in Shanghai.
2. Establish eco-industrial parks in industrial zones to recycle resources and use them more
efficiently. Use energy rationally, recycle energy and encourage the use of clean energy. Change
the energy structure and reduce emissions of sulfur dioxide and carbon dioxide.
3. Adjust the industrial structure to shift from heavy industries towards light industries, and
increase investment in and development of high-tech industry.
4. Develop an appropriate legal assessment system and assess the emissions of factories. Close
factories with high energy consumption, high pollution, high emissions or illegal sewage discharge.
5. Control excessive extraction of groundwater and land subsidence in the coastal areas, and
artificially recharge groundwater in cone-of-depression and land subsidence areas.
5 Conclusions
In conclusion, global climate change has become the largest factor affecting the sustainable
development of human society, especially in estuarine urban areas such as Shanghai. Min Xu et al.
(2009) predicted that temperatures in the Yangtze River Basin of China will increase by 1.5-2 °C by
2050. This paper is
just a preliminary study on regionalization of eco-vulnerability in Shanghai, and aims to provide the
scientific basis to develop strategies to address global change.
Acknowledgements
This study was financed by China`s Sci. & Tech. Supporting Plan Project (2008BAJ10B1) and
WWF Cooperation Foundation (2009-2011). The authors are grateful to our colleagues at the
Department of Environmental Science& Engineering the Center of Urban Eco-planning and
Design, Fudan University, and Shanghai Climate Center for their data supporting.
References
Downing T. E., Patwardhan A. (2004). Assessing vulnerability for climate adaptation [A]. In: Lim B., Spanger-
Siegfried E., Adaptation Policy Frameworks for Climate Change: Developing Strategies, Policies, and Measures[C].
Cambridge University Press, Cambridge, 2004.
Füssel H. M. (2007). Vulnerability: A generally applicable conceptual framework for climate change research.
Global Environmental Change, 17 (2), 155-167.
Füssel H. M., Klein R. J. T. (2006). Climate change vulnerability assessments: an evolution of conceptual thinking.
Climatic Change, 75 (3), 301-329.
IPCC (2001). Climate Change. Synthesis Report. A Contribution of Working Groups I, II, and III to the Third
Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK.
IPCC (2007). Climate Change. Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth
Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK.
Jialiang Xu. (2000). Comparison of Shanghai's characteristics and causes of two warming periods in the past hundred
years. Journal of Geography, 55 (4): 501-506.
Janssen M. A., Schoon M. L., Ke W., et al. (2006).Scholarly networks on resilience, vulnerability and adaptation within
the human dimensions of global environmental change. Global Environmental Change, 4:001.
Klein R. J. T., Nicholls R. J.(1999). Assessment of coastal vulnerability to climate change. Ambio, 28 (2), 182-187.
Karl, T.R., Melilla, J.M., Peterson, T.C. (2009). Global Climate Change Impacts in the United States [M], Cambridge
University Press, Cambridge.
Malone, T., and G. Yohe (1992). Towards a general method for analyzing regional impacts of global change. Global
Environmental Change. 2(2), 101-110.
Min Xu, Haide Ma, et al. (2009). Evaluation report on climate change vulnerability of Yangtze River Basin. Hydrological
Press, Beijing.
Parmesan C, Yohe G. (2003). A globally coherent fingerprint of climate change impacts across natural systems. Nature
421(6918), 37-42.
Roberto S. R.(2009). Learning to adapt to climate change in urban areas. A review of recent contributions. Current
Opinion in Environmental Sustainability 1, 201206.
Shanghai Statistics Bureau (2010). Statistic Year Book . Shanghai Statistic Press, Shanghai.
Smith B., Burton I., Klein R.J.T., et al.(1999).The science of adaptation: a framework for assessment. Mitigation and
Adaptation Strategies for Global Change, 4:199213.
VanMinnen J.G., Onigkeit J., Alcamo J. (2002). Critical climate change as an app roach to assess climate change
impacts in Europe: Development and application. Environmental Science and Policy 5, 335 -347.
Xiangrong Wang and Yuan Wang (2011). Evaluation on the vulnerability of climate change-A case study of Shanghai.
China Scientific Press, Beijing.
Wardekker J. A., de Jong A., Knoop J. M., et al. (2009). Operationalising a resilience approach to adapting an urban
delta to uncertain climate changes. Technological Forecasting & Social Change.



The Loughborough University TEmperature Network (LUTEN):
Rationale and analysis of stream temperature variations
Robert L. Wilby 1, Matthew F. Johnson 1 and Julia A. Toone 2

1 Department of Geography, Loughborough University, Loughborough, LE11 3TU, UK.
E-mail: r.l.wilby@lboro.ac.uk; m.f.johnson@lboro.ac.uk
2 Environment Agency, Trentside Office, Nottingham, NG2 5FA, UK
E-mail: julia.toone@environment-agency.gov.uk
Abstract
River water temperature (Tw) is a major determinant of ecological status. Anthropogenic climate
change is expected to increase Tw in the future, with implications for aquatic plant and animal
communities. It has been suggested that planting riparian woodlands along river corridors could
protect thermal refugia by shading the channel. This paper describes the Loughborough University
Temperature Network (LUTEN): an array of continuously monitored, paired air (Ta) and Tw sites
in the Rivers Dove and Manifold, English Peak District. The 33 sites have diverse channel and
riparian properties including morphology, sediment coarseness, water depth/width, shading,
distance from source, proximity of tributaries and drainage ditches, channel aspect, sinuosity, slope,
and roughness.
We report the findings of an analysis of the first full year of temperature data (March 2011 to
February 2012). First, daily mean Ta and Tw statistics were compiled for each site to gain a broad
understanding of spatial gradients within the data. Second, inter-site correlation coefficients were
computed for all pairs of sites using daily mean Ta then daily mean Tw. Third, daily Tw was
regressed against daily Ta at each site using linear and non-linear functions. Finally, sub-daily
variations in Ta and Tw were examined using hysteresis plots for selected sites, days and seasons.
The relationship between Tw and Ta is strongest in the lower sites of the Manifold where there are
relatively shallow channel gradients and little riparian shade compared with upstream sites. Sites
that are least sensitive to atmospheric heat exchange are found in the lower Dove where there is
strong buffering by tributary and natural spring flows. For example, in the 1.7 km reach between
Milldale and Dovedale (Pickering Tor) the temperature of the river fell by more than 6°C during the
hottest day in 2011. Sites immediately upstream of Dovedale experience the highest Tw in the
network and are strongly influenced by advected heat. Therefore, we propose that monitoring
should be extended to weirs, tributaries and springs to better understand the thermal regime of these
local hot and cool spots, as well as the effect of more remote heat sources.
Our longer-term objective is to develop generalized statistical models that can help predict the
extent to which active management of the riparian vegetation and source protection (for natural
springs) could delay the loss of thermal refugia in these rivers.
1 Introduction
The thermal regimes of freshwaters could be modified by rising air temperatures and changing
patterns of discharge under anthropogenic climate change (Bates et al., 2008; Caissie, 2006;
Mohseni et al., 1999; Webb and Nobilis, 2007; Webb et al., 2008). This has prompted calls for the
active management of surface-groundwater flows and riparian woodlands to create cool refugia for
threatened species (Hansen et al., 2003; Broadmeadow et al., 2009; Malcolm et al., 2008).
Conversely, it is known that changes in land cover, such as forest harvesting and removal of
riparian vegetation can exacerbate high water temperatures (Moore et al., 2005).
There is growing recognition that a new era of field monitoring and modelling is needed to test the
performance of adaptations to climate change (Wilby et al., 2010). [This is analogous to the field
campaigns of the 1970s and 1980s that resolved questions about the impacts of afforestation and acid rain
on upland catchments]. For example, the Environment Agency's project Keeping Rivers Cool is working with
charitable organisations to plant mosaics of trees and erect stock-proof fencing in three pilot
catchments (Wye, Hampshire Avon, and Tyne). Likewise, we seek to involve interested parties
such as river and fish trusts, landowners, other researchers, students and conservation managers by
placing the data and findings of our field experiments in the public domain.
This paper describes a dense array of air and surface water temperature sampling locations within
two rivers in the English Peak District. Preliminary analysis of other air and water temperature data
held by the Environment Agency for this region suggests rivers have warmed by up to ~0.2°C/year
since the mid-1990s but this partly reflects changes in observation times and brevity of the records
(Toone et al., 2011). However, the trend is consistent with the national picture of widespread and
rapid warming of rivers in England and Wales (Orr et al., 2012). We describe the rationale behind
our water temperature network and provide an early analysis of the first complete year of data.
2 Water temperature network
The Loughborough University TEmperature Network (LUTEN) was set up to investigate the
controls of water temperature at reach- and landscape-scales. Our overall objective is to develop
tools that predict the extent to which high river water temperatures could be moderated by
managing riparian vegetation cover and/or point discharges. This will be accomplished in three
steps. First, a general appraisal of water temperature variations in space and time linked to simple
indices of reach, riparian and landscape conditions the focus of this paper. Second, we will
explore different air-water temperature relationships and the behaviour of thermal refugia within the
river network. Third, we intend to use these empirical relationships to predict water temperatures at
un-gauged sites, with and without riparian cover.
As far as we are aware, LUTEN is the densest array of continuously monitored, paired air and water
temperature sites anywhere in the UK (Figure 1). Monitoring locations were selected from
approximately equidistant sites for representative river reaches, based on fluvial audit. The average
separation of each site is just 1.7 km. Tiny Tag thermistors were installed at each location: one
anchored to the river bed (Tw), the other at 2 metres elevation in shade nearby (Ta). All water
temperature sites are located on riffles in order to standardise the hydraulic conditions and to ensure
that water is well-mixed. However, it is recognised that higher local Tw may be recorded within
pools and backwaters. Instruments are checked for damage in situ and data downloaded every three
months. In all cases, the sampling interval is set to 15 minutes (to conserve battery life), from which
the daily mean, maximum, and minimum are derived.
The first set of probes was deployed in March 2011 and has been recording since. As Figure 2
shows, none of the series is 100% complete due to loss of the instrument(s), routine maintenance
and data download, or occasionally exposure to air (typically revealed by near identical Ta and Tw
series). To date, the most complete records are for D16 (downstream of Hartington Bridge) and D23
(Dovedale, Pickering Tor), plus M8 (Longnor) and M12 (Brund Mill) in the Manifold, where more
than 99% of data are captured. By May 2012 there were 20 pairs of thermistors operating in the
Dove, and 13 pairs in the Manifold.
Key to site codes and names:
D1 Source of Dove
D2 Colshaw Farm
D3 Tenterhill
D4 Hollinsclough
D5 Swallow Brook
D6 Stannery
D9 Glutton Bridge
D10 Beggars Bridge
D11 Crowdecote
D12 Pilsbury Hill
D13 Parks Barn
D15 Hartington Bridge
D16 Hartington Bridge
D17 Beresford Dale
D18 Wolfescote Dale
D20 Fishpond Plantation
D21 Milldale A
D22 Milldale B
D23 Pickering Tor
D24 Dovedale
M2 Thick Withins A
M3 Thick Withins B
M6 Harding's Booth
M8 Longnor Wood
M9 Over Boothlow
M10 Ludburn A
M11 Ludburn B
M12 Brund Mill A
M13 Brund Mill B
M14 Wall Brook
M15 Hayesgate
M16 Apes Tor
M17 Wetton Mill
Figure 1 Monitoring sites in the Rivers Dove and Manifold. Site details and summary data are available at:
http://maps.google.co.uk/maps/ms?msid=208096189439563058549.0004bb7456c7fd11438af&msa=0
In-channel habitat and riparian conditions were surveyed at each site during the summer and
autumn of 2010 as part of a fluvial audit commissioned by Natural England (Rice and Toone, 2010).
Information was collected on: distance from source, morphological features (pools and riffles), bed
material particle size, water depth/width, turbidity, amount of shade, proximity of tributaries and
drainage ditches, channel aspect, slope, sinuosity and roughness (Manning n). In addition, spot
conductivity measurements are now taken on every site visit.
Figure 2 Data downloaded for sites in the Dove (left panel) and Manifold (right panel) since March 2011.
3 Methods
Preliminary analyses of the first complete year of LUTEN data were undertaken following the
approach taken by Toone et al. (2011). First, summary statistics were compiled for each site based
on daily mean Ta and Tw. These provide a broad overview of spatial gradients within the data.
Second, inter-site correlation coefficients (r) were computed for all pairs of sites using daily mean Ta
and then daily mean Tw. The two sets of coefficients were then plotted against site separation distance (km)
to assess the level of spatial-autocorrelation in the data. This analysis also helps to identify those
sites that are behaving differently to their neighbours, and therefore merit greater attention (in terms
of quality assurance or process explanation).
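As a hedged illustration of this second step, the sketch below computes pairwise correlations of daily mean Tw and pairs each with an along-river separation distance taken from the distance-from-source values in Tables 1 and 2; the data layout, function name and site labels are assumptions rather than the actual LUTEN processing chain.

```python
# Sketch of the inter-site correlation step, assuming a pandas DataFrame of
# daily mean Tw (one column per site, DatetimeIndex) and a dict of distances
# from source (km) as listed in Tables 1 and 2. Names are illustrative.
import itertools
import pandas as pd

def correlation_vs_separation(daily_tw: pd.DataFrame, chainage_km: dict) -> pd.DataFrame:
    """Pearson r for every pair of sites, paired with their along-river separation."""
    rows = []
    for a, b in itertools.combinations(daily_tw.columns, 2):
        pair = daily_tw[[a, b]].dropna()               # days with data at both sites
        rows.append({
            "site_a": a,
            "site_b": b,
            "separation_km": abs(chainage_km[a] - chainage_km[b]),
            "r": pair[a].corr(pair[b]),                # Pearson correlation coefficient
        })
    return pd.DataFrame(rows)

# Illustrative use (distances from Table 1):
# result = correlation_vs_separation(tw_daily, {"D2": 1.8, "D4": 4.1, "D23": 29.3})
# result.plot.scatter(x="separation_km", y="r")
```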
Third, daily Tw was regressed against daily Ta at each site. We began by investigating spatial
variations in the linear regression intercept (α) and coefficient (β). The α parameter gives Tw when
Ta is zero and is assumed to reflect net groundwater heat flux to the river; the β parameter shows
the sensitivity of Tw to a unit change in Ta. The amount of explained variance (R²) indicates the
extent to which local Tw is predicted by local Ta (a proxy for radiant, sensible heat and other energy
exchanges). We also assess whether greater explanatory power can be achieved by more complex
transfer functions such as the three parameter s-shaped logistic function (Punzet et al., 2012).
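A minimal sketch of both site-level transfer functions is given below. The linear fit returns the intercept (α), coefficient (β) and R² described above; the S-shaped curve is a generic three-parameter logistic, which we assume is comparable in form to that of Punzet et al. (2012). Function and parameter names are illustrative, not the published code.

```python
# Minimal sketch of the two air-water transfer functions fitted at each site,
# assuming NumPy arrays of paired daily mean Ta and Tw.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def fit_linear(ta, tw):
    """Tw = alpha + beta * Ta; returns (alpha, beta, R^2)."""
    res = linregress(ta, tw)
    return res.intercept, res.slope, res.rvalue ** 2

def logistic(ta, mu, gamma, beta0):
    """S-shaped curve: mu = upper bound Tw, gamma = steepness, beta0 = inflection Ta."""
    return mu / (1.0 + np.exp(gamma * (beta0 - ta)))

def fit_logistic(ta, tw):
    """Least-squares fit of the three logistic parameters."""
    p0 = (float(tw.max()), 0.2, float(ta.mean()))      # rough starting values
    params, _ = curve_fit(logistic, ta, tw, p0=p0, maxfev=10000)
    return params                                       # mu, gamma, beta0
```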
Finally, sub-daily variations in Ta and Tw were examined using hysteresis plots for selected sites,
days and seasons. These graphically display site-specific lags between rising/falling Ta and Tw.
Particular attention is focused on behaviour during the hottest day in the LUTEN archive (26 June
2011) by calculating time-varying atmospheric sensible heat fluxes, advected heat and radiant heat
losses between the sites. Downstream changes in sub-daily Tw are determined by matching time-
stamped records that were offset to account for estimated travel times between the sites. These
times are based on water velocities derived from the Manning formula.
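As a rough sketch of this travel-time correction, the code below derives a mean velocity from the Manning formula and lags the upstream 15-minute record by the corresponding interval before differencing. The hydraulic radius, slope and roughness in the example are placeholders, not the surveyed values from the fluvial audit, and both series are assumed to share a 15-minute DatetimeIndex.

```python
# Rough sketch of the travel-time correction between neighbouring sites
# (Manning formula in SI units; hydraulic values below are placeholders).
import pandas as pd

def manning_velocity(hydraulic_radius_m: float, slope: float, n: float) -> float:
    """Mean velocity v = (1/n) * R^(2/3) * S^(1/2), in m/s."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

def downstream_change(tw_up: pd.Series, tw_down: pd.Series,
                      reach_length_m: float, velocity_ms: float) -> pd.Series:
    """Tw change between sites after lagging the upstream record by the
    estimated travel time (rounded to the 15-minute sampling interval)."""
    travel = pd.Timedelta(seconds=reach_length_m / velocity_ms).round("15min")
    return tw_down - tw_up.shift(freq=travel)            # positive = downstream warming

# Illustrative use with placeholder hydraulics:
# v = manning_velocity(0.3, 0.01, 0.06)                  # ~0.75 m/s
# dTw = downstream_change(tw["D21"], tw["D23"], 1700, v)
```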
4 Results
The following results refer to the first complete year of data in the LUTEN archive (March 2011 to
February 2012). This period was notable for the record-breaking heat-waves in April and October
and for rainfall that was only 65% of normal in some parts of the Midlands (Met Office, 2012).
Daily mean Ta and Tw across the network averaged 8.8°C and 9.5°C respectively (Tables 1 and 2).
This slight difference could be due to factors such as: errors from undetected exposures in the water
thermistor array; time lags between the atmospheric heat flux and fluvial response (mediated by soil
water and groundwater heat fluxes); or unspecified geothermal influences.
Table 1 River reach features and daily mean Tw regression model coefficients for sites in the River Dove.
The transition between Millstone Grit (D16, Hartington Bridge) and Carboniferous Limestone (D17,
Beresford Dale) is shown. Records with more than 10% missing data are shown in red.
Site | Distance (km) | Altitude (m) | Slope | Sinuosity | W:D ratio | D50 (mm) | Manning n | Conduct. (µS cm⁻¹) | Ta (°C) | Tw (°C) | R² | β | α | n
D1 0.4 394 0.127 1.06 290 9.16 8.71 0.86 0.50 4.15 300
D2 1.8 348 0.054 1.15 4.1 54.6 0.035 220 8.35 8.72 0.87 0.61 3.66 328
D3 2.8 308 0.049 1.12 6.8 78.9 0.080 187 8.96 8.94 0.91 0.62 3.34 354
D4 4.1 283 0.040 1.37 6.2 71.4 0.060 157 8.39 8.72 0.92 0.67 3.14 363
D5 4.8 285 1.33 210 9.04 8.99 0.94 0.71 2.59 272
D6 5.3 270 0.033 1.33 9.6 37.3 0.100 165 10.22 9.97 0.91 0.71 2.71 270
D9 7.7 254 0.025 1.56 5.1 35.9 0.070 180 8.73 9.44 0.91 0.70 3.37 348
D10 9.6 244 0.021 1.37 5.0 42.7 0.070 195 8.74 9.54 0.92 0.68 3.63 348
D11 11.8 230 1.22 272 8.95 9.77 0.89 0.61 4.28 348
D12 13.0 230 0.017 1.29 7.3 23.6 0.070 220 8.67 9.72 0.87 0.64 4.19 362
D13 14.3 227 0.015 1.25 9.0 19.7 0.040 255 8.95 9.98 0.87 0.69 3.80 343
D15 18.1 214 0.013 1.48 3.9 36.9 0.060 164 9.09 9.91 0.87 0.59 4.54 321
D16 19.0 214 0.012 1.48 8.6 9.5 0.060 273 9.06 9.90 0.85 0.58 4.67 361
D17 20.9 213 0.011 1.88 7.1 66.2 0.060 320 8.33 9.90 0.83 0.65 4.49 348
D18 22.5 205 1.33 8.78 9.81 0.81 0.65 4.08 339
D20 25.8 180 1.22 355 9.37 10.76 0.84 0.71 4.07 328
D21 27.6 163 0.010 1.52 9.7 0.060 308 9.27 10.64 0.84 0.76 3.59 335
D22 27.8 163 0.010 1.34 9.7 0.060 305 9.35 10.56 0.84 0.75 3.58 337
D23 29.3 153 0.010 1.26 7.9 48.5 0.050 565 8.63 9.14 0.86 0.39 5.75 361
D24 31.2 143 0.010 1.14 12.6 52.5 0.067 565 7.64 8.52 0.83 0.40 5.45 248
Table 2 As in Table 1 for sites in the Manifold
Site | Distance (km) | Altitude (m) | Slope | Sinuosity | W:D ratio | D50 (mm) | Manning n | Conduct. (µS cm⁻¹) | Ta (°C) | Tw (°C) | R² | β | α | n
M2 3.6 334 0.032 1.70 1.74 20.74 0.10 221 8.94 8.89 0.92 0.71 2.55 323
M3 3.9 329 0.015 1.28 7.80 76.11 0.10 232 8.90 9.22 0.91 0.75 2.51 322
M6 6.6 288 0.018 1.15 4.25 85.82 0.15 219 8.37 9.05 0.91 0.81 2.28 359
M8 8.2 269 0.014 1.17 10.22 80.17 0.09 221 8.32 9.28 0.91 0.79 2.68 366
M9 10.6 250 0.008 1.26 5.59 68.95 0.09 222 9.00 9.61 0.90 0.76 2.73 322
M10 12.1 230 1.07 192 7.70 8.48 0.90 0.76 2.61 210
M11 12.7 230 1.2 201 7.64 8.35 0.90 0.76 2.5 217
M12 13.6 228 0.005 1.67 7.36 78.34 0.12 220 8.82 9.61 0.89 0.84 2.24 354
M13 14.1 225 1.25 196 7.39 8.20 0.89 0.78 2.42 221
M14 15.2 224 0.000 1.39 5.36 58.70 0.00 240 9.22 10.24 0.88 0.84 2.49 366
M15 16.2 219 0.006 1.25 6.05 64.25 0.08 215 8.85 9.74 0.87 0.85 2.23 354
M16 18.3 209 0.004 1.62 7.83 67.46 0.06 210 8.89 9.93 0.88 0.84 2.42 356
M17 22.2 188 0.005 1.55 7.55 67.02 0.05 297 8.00 9.01 0.89 0.81 2.52 299
Another intriguing feature is the apparent lack of warming with decreasing elevation in both rivers.
For example, given the ~200 m vertical drop between the upper (D2) and lower (D23) Dove (and a
standard lapse rate of 0.6°C per 100 m) a 1.2°C temperature change would be expected, yet the
actual figure is less than 0.3°C. Local landscape and micro-meteorological factors are the most
plausible explanation. Site D2 lies in a deeply incised and sheltered valley, whereas D23 sits within a
wider and deeper gorge surrounded by heavily wooded slopes. The aspect of D2 is predominantly
northwest-southeast, whereas D23 is north-south.
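For reference, the expected difference implied by the Table 1 altitudes (348 m at D2, 153 m at D23) can be written as a one-line calculation:

```latex
\Delta T_{\mathrm{expected}} \approx (348 - 153)\,\mathrm{m} \times \frac{0.6\,^{\circ}\mathrm{C}}{100\,\mathrm{m}} \approx 1.2\,^{\circ}\mathrm{C}
```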
Daily mean Ta in the Dove ranged between -5.9°C (D1) and 21.2°C (D16), whereas Tw ranged
between 0.3°C (D5) and 19.3°C (D18). However, it is suspected that the highest Tw at D18 is due to
partial exposure of the thermistor on 4 July 2011. If this value is cropped, the true maximum daily
mean Tw detected in the Dove was 18.7°C at D20 and D21 on 27 June 2011, following maximum
daily mean Ta on 26 June 2011. Daily mean Ta in the Manifold ranged between -6.4°C (M3) and
20.1°C (M2) and Tw ranged between -0.1°C (M11, M12, M15) and 18.8°C (M16). The (just) below
freezing values may indicate partial exposure of the probes. As with the Dove, the date of maximum
daily mean Tw was 27 June 2011 following maximum daily mean Ta on 26 June 2011.
The spatial-autocorrelation analysis of daily temperatures shows a very high degree of homogeneity
in Ta within the Manifold (Figure 3, lower left). Even at a separation distance of 20 km the average
correlation in Ta between sites is r = 0.986. There is also a high level of association in Ta amongst
most of the sites in the Dove (Figure 3, upper left). However, sites with lower than expected
correlations with neighbouring records are D6, D12 and to a lesser extent D1. Overall, the
floodplain topography is more complex (due to terracing and entrenchment) in the Dove than the
Manifold and this is reflected by weaker spatial autocorrelation of Ta along this river corridor.
Figure 3 Spatial autocorrelation of daily mean Ta (left column) and Tw (right column) amongst sites within
the River Dove (upper panels) and Manifold (lower panels) during the year March 2011 to February 2012.
Spatial autocorrelation in daily mean Tw is also very high in the Manifold (Figure 3, lower right),
but much less so in the Dove (Figure 3, upper right). In the latter case, one site, D23 (Pickering Tor),
is associated with lower than expected correlations with neighbouring records (most notably
with D6, D16, D17, D21 and D22). This reach has markedly lower Tw and higher conductivities
than sites upstream (Table 1). In other words, this Tw series behaves unlike the other records,
making D23 a key site within the array.
There is a very strong association between Ta and Tw in both catchments (Figure 4). The linear
regression analysis reveals that within the Dove, 81-94% of the variance in daily mean Tw is
explained by daily mean Ta recorded at the same site (Table 1). The equivalent values for the
Manifold are 87-92% (Table 2). As reported by Toone et al. (2011), predictability is weakest in the
lower reaches of the Dove, downstream of the transition from Millstone Grit to Carboniferous
Limestone. Here it is assumed that the Tw regime is increasingly affected by natural spring flows as
well as by atmospheric heat fluxes and advection of heat from upstream. At sites D21 and D23 Tw
exceeds Ta for a significant fraction of the time, most notably in the winter half year (Figure 4). The
annual Tw regime is also notably damped when compared with sites upstream such as D4 and D11.
Figure 4 Relationships between daily mean Ta and Tw at selected sites in the Dove.
Overall, the sites most responsive to Ta are found in the middle and lower reaches of the Manifold (see
β-values in Tables 1 and 2). For example, at M15 (Hayesgate) Tw changes by 0.85°C for every
degree change in Ta (compared with 0.39°C per degree change in Ta at D23). The inferred winter
heating by groundwater (α) is less in the Manifold than the Dove: on average 2.5°C and 4.0°C
respectively. This is to be expected because all sites within the Manifold are located above the
elevation at which significant spring flows seep from the margins of the Millstone Grit.
Logistic regression models explain fractionally more variance than the linear function at the test
sites (Figures 5 and 6). For example, the logistic model explains 93% compared with 92% by the
linear model at D4. Logistic models based on daily mean Ta and Tw (Figure 5) are marginally
superior to those based on daily maximum Ta and Tw (Figure 6). The contrasting sensitivity to Ta is
evident in the gradient terms of both models for sites D21 and D23. However, because of its higher
parameter dimensionality, the logistic regression better replicates the inflection points at high and
low Ta.
Figure 5 Logistic regression of daily mean Ta and Tw for selected sites in the Dove.
Figure 6 As in Figure 5 but for daily maximum 15-minute Ta and Tw.
Sub-daily temperature data are also held in the LUTEN archive and reveal important site-specific
behaviours. For example, the hysteresis plots in Figure 7 reveal site-specific differences in initial Tw,
rates of change and maximum Tw on the rising limb; in the lag of peak Tw behind peak Ta (45 to 75
minutes); and in subsequent rates of cooling. Anticlockwise hysteresis is generally indicative of influences from physically
remote sources (in this case, later arrival of warm water from upstream). The area within the
hysteresis curve shows the amount of variance in temperature at each site and the extent to which
this varies seasonally (Figure 8). The effect of the exceptionally warm spell in April 2011 is clearly
seen in the plot for D4.
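One way to summarise such loops numerically is sketched below: the signed (shoelace) area of the closed Ta-Tw trajectory gives both the loop size and its sense of rotation (positive for anticlockwise), and the offset between the Ta and Tw maxima gives the peak lag. This is an illustrative calculation, not part of the published LUTEN analysis.

```python
# Illustrative summary of a daily Ta-Tw hysteresis loop, assuming NumPy arrays
# of the 15-minute Ta and Tw for one calendar day (96 samples).
import numpy as np

def hysteresis_loop_area(ta: np.ndarray, tw: np.ndarray) -> float:
    """Signed shoelace area of the closed Ta-Tw loop (degC^2);
    positive values indicate anticlockwise rotation, negative clockwise."""
    x = np.append(ta, ta[0])                 # close the loop back to the first sample
    y = np.append(tw, tw[0])
    return 0.5 * np.sum(x[:-1] * y[1:] - x[1:] * y[:-1])

def peak_lag_minutes(ta: np.ndarray, tw: np.ndarray, step_minutes: int = 15) -> int:
    """Lag of peak Tw behind peak Ta within the day, in minutes."""
    return (int(np.argmax(tw)) - int(np.argmax(ta))) * step_minutes
```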
Figure 7 Hysteresis plots based on 15 minute sampling of Ta and Tw at sites D4, D11, D21 and D23
beginning at 00:15hrs on 26 June 2011 and ending at 00:00hrs on 27 June 2011.
Figure 8 As in Figure 7 but for sites D4 and D24 on selected days in different seasons.
Sub-daily data can also be used to investigate advected heat between sites. Figure 9 shows the 15
minute Ta and Tw during the hottest day in 2011. The diurnal range in Ta and Tw is much greater at
D21 than D23 even though these sites are separated by only 1.7 km. However, the estimated
sensible heat flux at D23 is approximately twice that at D21 (~4 W m⁻² and ~2 W m⁻² respectively).
Despite the advection of warmer water from upstream and a higher heat exchange from atmosphere
to river, there is a marked drop in Tw between D21 and D23 (Figure 9, lower right). When taking
into account the travel time between the two sites (estimated to be ~30 minutes), by 18:30hrs the Tw
cooling exceeds 6°C at a time of day when the sensible heat flux is still strongly positive. This
contrasts with the downstream warming between D4 and D11, or between D11 and D21.
Figure 9 Fifteen minute Ta, Tw and sensible heat flux at sites D4, D11, D21 and D23 on 26 June 2011.
Changes in Tw between sites (having corrected for estimated travel times) are shown for the same day.
Three explanations for this behaviour merit further investigation. First, D23 is one of the most
deeply shaded reaches in the monitored network. Second, long-wave radiation from the water body
will be greater at D21 than at D23. Given the respective maximum 15-minute Tw (18.9°C and
13.6°C), the difference in back radiation estimated by the Stefan-Boltzmann law is ~29 W m⁻². This is an
order of magnitude greater than the estimated gain from sensible heat (see above).
Third, natural spring flows are known to enter the channel between the two sites. During summer
these are markedly cooler than the river water and contribute to reduced temperatures at this time
(the opposite applies in winter, when relatively warm ground waters raise temperatures; see Figure
4). The fluvial audit also notes minor tributaries and drainage ditches which potentially affect the
thermal regime. For instance, there are unmonitored discharges from Hall Dale/Hurts Wood and
other channels between D1 and D23.
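The back-radiation estimate given above can be checked with a short calculation, assuming a water surface emissivity of about 0.96 (an assumed value); with emissivity taken as 1 the difference is ~29 W m⁻², in line with the figure quoted.

```python
# Rough check of the quoted back-radiation difference using the
# Stefan-Boltzmann law; the emissivity value is an assumption.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.96     # typical water-surface emissivity (assumed)

def back_radiation_difference(tw_warm_c: float, tw_cool_c: float) -> float:
    """Difference in outgoing long-wave flux (W m^-2) between two water temperatures."""
    t_warm, t_cool = tw_warm_c + 273.15, tw_cool_c + 273.15   # convert to kelvin
    return EMISSIVITY * SIGMA * (t_warm ** 4 - t_cool ** 4)

print(back_radiation_difference(18.9, 13.6))   # ~28 W m^-2; ~29 W m^-2 with emissivity = 1
```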
5 Conclusions
This paper has explained the rationale behind the Loughborough University TEmperature Network
(LUTEN) which has been recording air (Ta) and water (Tw) temperatures every 15 minutes at 33
sites in the Rivers Dove and Manifold since March 2011. Such high fidelity is needed to understand
the complex spatial and temporal controls that shape the thermal regime at reach scales. The overall
objective is to develop tools that predict the extent to which increasing river water temperatures
could be delayed through active management of riparian vegetation cover and/or point discharges.
The first full year of temperature data is already providing important insights. The thermal regimes
of sites in the Manifold behave in a relatively homogeneous way compared with the Dove. Both Ta
and Tw are highly auto-correlated in space and time, so there is scope for redeployment of some
thermistors once the more subtle variations are understood. For example, it is not yet clear why
some sites (such as site M14) are markedly warmer than their neighbours. In this case, local Tw
may be strongly influenced by tributary drainage and heat flux from surrounding agricultural land.
The Tw-Ta relationships in the Dove are altogether more complicated. Beyond the geological divide
between the Millstone Grit and Limestone strata (sites D16/D17) the character of the thermal
regime changes abruptly. Upstream of this transition zone, Ta and Tw are highly correlated,
explaining more than 90% of the variance at some sites. Downstream sites (D20 to D22) are on
average warmer until an abrupt cooling at D23. Upstream of this point, natural springs and
tributary flows contribute to net cooling of Tw and reduce the influence of the atmospheric heat flux.
Again, there is a case for more intensive monitoring of these reaches by instrumenting major
tributaries, drainage channels and natural springs. This will be supplemented by analyses of the
Environment Agency discharge record at Dovedale to improve estimates of the rate of travel and
advected heat budget in the lower river. For instance, a sequence of weirs may be increasing the
residence time and hence heating of water passing beyond D17. Daily maximum 15-minute Tw
values at sites D20 to D22 are already approaching levels that are potentially harmful for trout
feeding and growth (Solomon, 2008). For example, the logistic regression model for D21 (Milldale)
predicts that Ta of ~26°C is sufficient to produce a local maximum Tw of >20°C. Regional warming due
to anthropogenic climate change will increase the likelihood of such high temperatures.
The next step in our analysis will be to generalise the logistic regression models such that Tw can be
predicted at sites that are not monitored. This involves establishing empirical relationships between
the three parameters of the model (i.e., upper bound temperature, steepest slope, and inflexion point
of the function) and terrain indices. Our preliminary assessment has already revealed that the slope
parameter is highly site specific. However, the data will need to be stratified in such a way that the
relative influence of in situ, advected, and landscape controls of the heat balance can be discerned.
This will help determine the extent to which active management of the riparian vegetation and
source protection (for natural springs) will delay the loss of thermal refugia in these rivers. One
point is clear: catchment-wide and reach-specific perspectives will be needed when it comes to the
long-term management of these cool spots.
Acknowledgements
The authors thank all the landowners who have kindly given us access to the rivers. The support of
the Wild Trout Trust and the Trent Rivers Trust is also gratefully acknowledged.
References
Broadmeadow, S., Jones, J.G., Langford, T.E.L., Shaw, P.J. and Nisbet, T. (2009). The influence of riparian shade on
lowland stream water temperatures in southern England and their viability for brown trout. River Research and
Applications, 26, 1-12.
Caissie, D. (2006). The thermal regime of rivers: a review. Freshwater Biology, 51, 1389-1406.
Hansen, L.J., Biringer, J.L. and Hoffman, J.R. (2003). Buying time: a user's manual for building resistance and
resilience to climate change in natural systems. WWF Climate Change Program, 244pp.
Malcolm, I.A., Soulsby, C., Hannah, D.M., Bacon, P.J., Youngson, A.F. and Tetzlaff D. (2008). The influence of
riparian woodland on stream temperatures: implications for the performance of juvenile salmonids. Hydrological
Processes, 22, 968-979.
Met Office (2012). UK annual weather summary. Weather, 67, 43.
Mohseni, O., Erickson, T.R. and Stefan, H.G. (1999). Sensitivity of stream temperatures in the United States to air
temperatures projected under a global warming scenario. Water Resources Research, 35, 3723-3733.
Moore, R.D., Spittlehouse, D.L. and Story, A. (2005). Riparian microclimate and stream temperature response to forest
harvesting: a review. Journal of the American Water Resources Association, 41, 813-834.
Orr, H.G., Simpson, G.L., des Clers, S., Watts, G., Hughes, M., Hannaford, J., Dunbar, M.J., Laizé, C., Wilby, R.L.,
Battarbee, R.W., Evans, E. and Phillips, H. (2012). Evidence of widespread and rapid warming of rivers. Hydrological
Processes, under revision.
Punzet, M., Voß, F., Voß, A., Teichert, E. and Bärlund, I. (2012). A global approach to assess the potential impact of
climate change on stream water temperatures and related in-stream first order decay rates. Journal of Hydrometeorology,
published online.
Rice, S. and Toone, J.A. (2010). Fluvial audit of the Upper Dove Catchment, Derbyshire and Staffordshire, UK. Natural
England Survey Report.
Solomon, D.J. (2008). The thermal biology of brown trout and Atlantic salmon: a literature review. Environment
Agency Southwest Region, 40pp.
Toone, J.A., Wilby, R.L. and Rice, S. (2011). Surface-water temperature variations and river corridor properties. Water
Quality: Current Trends and Expected Climate Change Impacts (Proceedings of symposium H04 held during IUGG2011
in Melbourne, Australia, July 2011). IAHS Publ. 348, 129-134.
Webb, B.W. and Nobilis, F. (2007). Long-term changes in river temperature and the influence of climate and
hydrological factors. Hydrological Sciences Journal, 52, 74-85.
Webb, B.W., Hannah, D.M., Moore, R.D., Brown, L.E. and Nobilis, F. (2008). Recent advances in stream and river
temperature research. Hydrological Processes, 22, 902-918.
Wilby, R.L., Orr, H., Watts, G., Battarbee, R.W., Berry, P.M., Chadd, R., Dugdale, S.J, Dunbar, M.J., Elliott, J.A.,
Extence, C., Hannah, D.M., Holmes, N., Johnson, A.C., Knights, B., Milner, N.J., Ormerod, S.J., Solomon, D., Timlett,
R., Whitehead, P.J. and Wood, P.J. (2010). Evidence needed to manage freshwater ecosystems in a changing climate:
turning adaptation principles into practice. Science of the Total Environment, 408, 4150-4164.