PRESIDENT OBAMA will have a hard time achieving his foreign policy goals until he
masters some key terms and better manages the expectations they convey. Given the furor that will surround the news of America's readiness to hold talks with Iran,
he could start with "engagement," one of the trickiest terms in the policy lexicon. The
Obama administration has used this term to contrast its approach with its predecessor's
resistance to talking with adversaries and troublemakers. His critics show that they misunderstand the
concept of engagement when they ridicule it as making nice with nasty or hostile regimes. Let's get a few things straight. Engagement in statecraft
is not about sweet talk. Nor is it based on the illusion that our problems with
rogue regimes can be solved if only we would talk to them. Engagement is not
normalization, and its goal is not improved relations. It is not akin to detente, working for rapprochement, or appeasement.
So how do you define an engagement strategy? It does require direct talks.
There is simply no better way to convey authoritative statements of position or
to hear responses. But establishing talks is just a first step. The goal of
engagement is to change the other country's perception of its own interests and realistic
options and, hence, to modify its policies and its behavior. Diplomatic engagement
is proven to work in the right circumstances. American diplomats have used it to change the
calculations and behavior of regimes as varied as the Soviet Union, South
Africa, Angola, Mozambique, Cuba, China, Libya and, intermittently, Syria.
There is no cookie-cutter formula for making it work, however. In southern Africa in the 1980s, we directed our
focus toward stemming violence between white-ruled South Africa and its black-ruled neighbors. This strategy put a priority on regional conflict management in order to stop
cross-border attacks and create better conditions for internal political change. The United States also engaged with the Cubans in an effort aimed at achieving independence for
Namibia (from South Africa) and at the removal of Cuban troops from Angola. In Mozambique, engagement meant building a constructive relationship with the United States,
restraining South African interference in Mozambique's internal conflicts and weaning the country from its Soviet alignment. More recently, the … No
matter what we say, the rogue regime may claim that engagement confers legitimacy. A more
consequential danger is that a successful engagement strategy may leave the target regime in place and even strengthened, an issue that troubled some critics of the Bush administration's
choices. Perhaps this is what frightens its critics the most. As the Obama team works to fend off accusations that it is rushing into Russian, Iranian, Syrian or even North
Korean arms, it will need to get the logic and definition of engagement right. In each case, we will need a clear-eyed assessment of what we are willing to offer in return for the
changed behavior we seek. Engagement diplomacy may be easier to understand if the Obama administration speaks clearly at home about what it really requires.
Scholars have limited the concept of engagement in a third way by unnecessarily restricting the scope of the
policy. In their evaluation of post-Cold War US engagement of China, Paul Papayoanou and Scott Kastner define
engagement as the attempt to integrate a target country into the international order through
promoting "increased trade and financial transactions." (n21) However, limiting engagement
policy to the increasing of economic interdependence leaves out many other issue areas that were
an integral part of the Clinton administration's China policy, including those in the diplomatic, military and
cultural arenas. Similarly, the US engagement of North Korea, as epitomized by the 1994 Agreed Framework pact, promises
eventual normalization of economic relations and the gradual normalization of diplomatic relations.(n22) Equating engagement with
economic contacts alone risks neglecting the importance and potential effectiveness of contacts in noneconomic issue areas.
Finally, some scholars risk gleaning only a partial and distorted insight into engagement by restrictively evaluating its effectiveness in
achieving only some of its professed objectives. Papayoanou and Kastner deny that they seek merely to examine the "security
implications" of the US engagement of China, though in a footnote, they admit that "[m]uch of the debate [over US policy toward
the PRC] centers around the effects of engagement versus containment on human rights in China."(n23) This approach violates a
cardinal tenet of statecraft analysis: the need to acknowledge multiple objectives in virtually all attempts to exercise inter-state
influence.(n24) Absent a comprehensive survey of the multiplicity of goals involved in any such attempt, it would be naive to accept
any verdict rendered concerning its overall merits.
In order to establish a more effective framework for dealing with unsavory regimes, I propose
that we define engagement as the attempt to influence the political behavior of a target state
through the comprehensive establishment and enhancement of contacts with that state across
multiple issue-areas (i.e. diplomatic, military, economic, cultural). The following is a brief list
of the specific forms that such contacts might include:
DIPLOMATIC CONTACTS
Summit meetings and other visits by the head of state and other senior government officials of
the sender state to the target state and vice-versa
MILITARY CONTACTS
Visits of senior military officials of the sender state to the target state and vice-versa
Arms transfers
Military aid and cooperation
Intelligence sharing
ECONOMIC CONTACTS
Foreign economic and humanitarian aid in the form of loans and/or grants
CULTURAL CONTACTS
Cultural treaties
This definition implies that three necessary conditions must hold for engagement to constitute an effective foreign policy
instrument. First, the overall magnitude of contacts between the sender and target states must initially be low. If two states are
already bound by dense contacts in multiple domains (i.e., are already in a highly interdependent relationship), engagement loses its
impact as an effective policy tool. Hence, one could not reasonably invoke the possibility of the US engaging Canada or Japan in
order to effect a change in either country's political behavior. Second, the material or prestige needs of the target state must be
significant, as engagement derives its power from the promise that it can fulfill those needs. The greater the needs of the target
state, the more amenable to engagement it is likely to be. For example, North Korea's receptivity to engagement by the US
dramatically increased in the wake of the demise of its chief patron, the Soviet Union, and the near-total collapse of its national
economy.(n28)
Third, the target state must perceive the engager and the international order it represents as a potential source of the material or
prestige resources it desires. This means that autarkic, revolutionary and unlimited regimes which eschew the norms and institutions
of the prevailing order, such as Stalin's Soviet Union or Hitler's Germany, will not be seduced by the potential benefits of
engagement.
Australia's vision of a decarbonised power supply is bold, but it reveals the conundrum at the heart of energy planning. The problem is not simply
taking carbon out of the electricity grid, a task the South Australian government has embraced as it reaches 27 per cent wind-generated power. The
future challenge is producing reliable power with low carbon emissions, as the
population increases and people continue to live in electricity-hungry cities.
Getting the future energy equation right is a moving target. The International Energy Agency forecasts an 80
per cent increase in world electricity demand to 2040, with an increase in total
energy demand (gas, coal, oil, renewables) of 37 per cent by 2040. And even with a massive
push for renewables, 75 per cent of the energy used globally will still be the
hydrocarbons of oil, gas and coal, which produce the highest carbon emissions.
The Bureau of Resources and Energy Economics (BREE) forecasts Australian energy usage to grow 42 per cent to 2050, with electricity generation
growing 30 per cent over the same period. But renewables such as solar and wind will not take over the generation task. Coal's share of total electricity
generation will remain stable at about 65 per cent to 2050, according to BREE. And wind and solar currently only comprise 20 per cent of Australia's
renewables generation: the bulk is from biomass, in particular from the sugar and timber industries. Observing the patterns of demand is the job of
Matt Zema, chief executive of the Australian Energy Market Operator. He says total electricity usage in 2009 was about 200,000 gigawatt hours, and it
has shrunk to 180,000 GWh. We are not expected to return to 2009 levels until at least 2020, and in 2035 the total won't rise much past 220,000. He
says the challenge is to forecast power patterns based on behaviour rather than the old certainties of demand and load. "Since 2005 we've seen a move
to decentralised power generation," Zema says. "We're moving away from huge, centralised power stations that were built from the 1960s onward and
now we move into a new phase."
Solar only just begun
Zema says the rise of solar PV panels on roofs has only just begun because the arrival of cheap
and effective battery storage will increase the uptake and the amount of power generated and used, from rooftops. "A few years ago, storage was
something happening in 10 years, perhaps. Now we can see that affordable storage is three to five years away. Technology will change our future
energy usage faster than other factors." Zema says the current forecasts are that coal will continue to provide most electrical power in Australia in 2040,
but that can't account for technology and consumer behaviour. This is because coal is Australia's cheapest and most "dispatchable" power source, but
storage technology might make some renewables dispatchable too. By 2035, AEMO forecasts that South Australia's PV rooftop panels will account for
28 per cent of underlying residential and commercial consumption, and in Queensland it will be just over 20 per cent coming from PV. When effective
storage is added, Zema says, it is consumer behaviour that drives the energy market, not the old metrics of demand and load. In South Australia, by the
end of 2025, PV users could be net generators to the grid, at certain times, Zema says, which means rooftop PV will be sufficient, on some days, to
meet the underlying consumption of the residential, commercial and industrial sectors during the middle of the day. Unhitching from
coal as a future energy source is directly reliant on plans for base-load power, says Ben Heard, a
director at ThinkClimate Consulting. He says if you take out the fluctuations and spikes of power
usage and the daily peaks and seasonal ups and downs, you end up with a base of daily and annual
power demand that must always be available. "When you build a power supply,
you have the base-load at the foundation," says Heard, also a doctoral candidate at University of Adelaide.
"You want this to be available 24/7 and so you use the cheapest and most
reliable way of doing it. And in Australia, that's coal-fired generation."
Nuclear tech under consideration
The other reliable base-load technology is nuclear, a technology now being actively considered for South Australia.
South Australia is something of a cautionary tale for green warriors running too quickly to a decarbonised future, Heard says. The night before he spoke
to The Australian Financial Review there had been a two-hour power outage in the state. "If you place too much reliance
on wind and solar, and retire your dirty [coal] base-load supply, you might get
away with it for a few years when demand is flat; but when the population and
cities start growing again, you're vulnerable." South Australia has the highest mix of renewables in Australia
(while demand is flat), but it also relies on a grid interconnector to bring coal-fired power from Victoria. Australia's energy future will have lower
emissions while still keeping its base-load power, the chairman of the Academy of Technological Science and Engineering (ATSE), Dr Bruce Godfrey says.
But he says the challenge won't be met by forcing a comparison "between apples and oranges". "We have to be careful of false comparisons," Godfrey
says. "We need base-load supply, and that comes from coal, gas and nuclear. Wind, solar,
tidal and wave are variable supplies. They have their place, and as they improve
they are gaining a bigger place in our grids. But you can't compare them with base-
load."
On August 14, 2003 a chain of events beginning with the loss of a few
power stations due to high energy load escalated, with help from a combination of human error and
tall trees, into the worst blackout in history. All told, 50 million people in the Midwest,
Northeast and Ontario, Canada, with a combined load of 61.8 gigawatts lost power for up to 4
days. [1] One of the most interesting, and worrying, points about this blackout is that a relatively small
number of failures in generating plants and a few transmission lines shorting
out by touching trees were able to set off a massive chain of transmission line
and generating failures across a huge portion of the electrical grid. This effect is
known as a cascading power transmission failure, and once it begins it is nearly
impossible to stop before it has eliminated almost all power transmission in a large area.
The 2003 Blackout originated in Ohio with the FirstEnergy power plants and transmission lines. [1] Under high load, more current
flows through a transmission line and some of the power transmitted is dissipated as heat. At high temperatures, the conductors in
a high-voltage transmission line expand, causing the line to sag more toward the ground. This effect is amplified on a hot day as less
heat can be dissipated to the air. FirstEnergy had failed to trim the trees along several of its power lines, and so when they sagged
under load, four lines, three of which were carrying 345 kV, contacted trees and tripped their circuit breakers. [1] These line failures,
combined with an ironic failure in the alarm system meant to notify FirstEnergy of line failures, resulted in a large drop in voltage
across the entire FirstEnergy power network. At this point, the low voltage and high current on lines that had not failed tripped the
circuit breaker on the 345 kV Sammis-Star line, setting off the cascade. [2]
To understand a cascading power failure, one must first understand the relationship between current, voltage and impedance on an
AC line. In general, Ohm's law for AC circuits states that V = IZ where V is voltage, I is current and Z is impedance. As load increases
on a line in the power system, either because more power is being used at the end, or because generators or other transmission
lines fail, the magnitude of impedance drops. Thus, if voltage is held constant by the generators, the amount of current will increase.
This is generally what happens under normal operation, and the lines are rated to function with amounts of current much higher
than what would be caused by lots of people using air conditioning or a few isolated line failures.
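The relationship just described can be sketched numerically. This is a minimal illustration treating impedance as a single magnitude under Ohm's law; the impedance and rating figures are hypothetical assumptions, not values from the blackout reports.

```python
# Ohm's law for AC circuits, V = I * Z, treated here with magnitudes only.
# All numbers are illustrative assumptions, not data from the 2003 blackout.

def line_current(voltage_v, impedance_ohms):
    """Current through a line when the generators hold voltage constant."""
    return voltage_v / impedance_ohms

VOLTAGE_V = 345_000.0        # a 345 kV line, as in the Sammis-Star example
RATED_CURRENT_A = 2_000.0    # hypothetical thermal rating

# Normal operation: high effective impedance, current well within rating.
print(line_current(VOLTAGE_V, 250.0))   # 1380.0 A

# Rising load (or failed neighboring lines) lowers the impedance magnitude,
# so current rises even though voltage is held constant by the generators.
print(line_current(VOLTAGE_V, 125.0))   # 2760.0 A, above the assumed rating
```

The point of the sketch is simply that with voltage pinned by the generators, halving the effective impedance doubles the current the line must carry.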
The second key point to understanding a cascade is the protection system used for transmission lines. Long transmission lines are
very expensive, so if there is a short-circuit, it is important to almost instantly isolate the line before high currents can do serious
damage. By far the most prevalent protection system in place is the impedance relay. [2] These devices measure
impedance along the line, and at the junction with lines to which it connects further up in the power grid network. The idea is that if
a line, or any of its neighboring lines shorts out (perhaps by touching a tree) the extremely high current from a short circuit to
ground will result in very low impedance. Again this is because, by Ohm's law, Z = V/I. The impedance relay then trips circuit
breakers to isolate the line from the grid and protect it.
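The protection logic described above reduces to a threshold test on measured impedance. A minimal sketch, assuming a single-zone relay with a hypothetical 50-ohm trip setting; real relays use multiple zones and timing, which this ignores.

```python
# Simplified impedance-relay model: trip when Z = V / I falls below a setting.
# The threshold and the measurements below are illustrative assumptions.

def measured_impedance(voltage_v, current_a):
    return voltage_v / current_a

def relay_trips(voltage_v, current_a, z_setting_ohms):
    """True if the relay would open the breaker to isolate the line."""
    return measured_impedance(voltage_v, current_a) < z_setting_ohms

Z_SETTING = 50.0  # ohms, hypothetical single-zone setting

# Normal load: measured impedance is high, so the relay stays closed.
assert not relay_trips(345_000.0, 1_500.0, Z_SETTING)   # Z = 230 ohms

# A short circuit drives current up and measured impedance down: trip.
# Note that a severe overload (voltage sag plus high current) produces the
# same low reading, which is exactly the ambiguity that enables a cascade.
assert relay_trips(345_000.0, 20_000.0, Z_SETTING)      # Z = 17.25 ohms
```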
On August 14, 2003, the generators are unable to adjust quickly to the change in
load, so voltage across the remaining lines also temporarily drops. At this point, current
has also increased because load has increased, so impedance is very low. The key point here is that the impedance relays cannot tell
the difference between the low impedance caused by a short circuit and that caused by the sudden drop in voltage and rise in
current from the failure of several lines and generators. [2] Thus, when FirstEnergy's lines and
generators failed, the impedance relay on the Sammis-Star line interpreted the
drop in impedance as a short circuit and promptly tripped its circuit breakers to
protect the line. At this point, it was impossible for any sort of intervention to
stop the cascade. [2]
Once one impedance relay trips simply because of a rise in load rather than by an actual short circuit, the cascading power failure
begins. This is essentially because the spike in current and drop in voltage that can cause a relay trip has reached a critical mass.
Once the relay trips, the line is isolated from the rest of the grid, so some other line must very rapidly take on the extra load from
the tripped relay. Since this rapid change in load was already enough to trip the first line, having it suddenly shifted to some other
line again produces low enough impedance to cause a relay trip. This isolates the second line from the grid and the whole process
repeats itself, rippling through the grid extremely quickly. In fact, once the Sammis-Star 345 kV line tripped in 2003, it took only a
little over 5 minutes for 50 million people to lose power. [2]
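The round-by-round dynamic just described can be illustrated with a toy redistribution model: trip every overloaded line, shift its load onto the survivors, and repeat. Equal load sharing and a single uniform limit are simplifying assumptions; this is not a model of the actual Eastern Interconnection.

```python
# Toy cascading-failure model. Illustrative only: real power flow follows
# network impedances, not equal sharing, and real limits differ per line.

def cascade(loads_mw, limit_mw):
    """Return the indices of lines that trip, in the order they fail."""
    active = dict(enumerate(loads_mw))
    tripped = []
    while True:
        over = [i for i, mw in active.items() if mw > limit_mw]
        if not over or len(over) == len(active):
            tripped.extend(over)       # final round: nothing left to absorb load
            break
        shed = sum(active.pop(i) for i in over)
        tripped.extend(over)
        for i in active:               # surviving lines pick up the shed load
            active[i] += shed / len(active)
    return tripped

# One small disturbance on a lightly loaded grid stays contained...
print(cascade([1050, 500, 500, 500, 500], limit_mw=1000))   # [0]

# ...but on a heavily loaded grid the redistribution overloads every
# surviving line and the whole system collapses.
print(cascade([1050, 900, 900, 900, 900], limit_mw=1000))   # [0, 1, 2, 3, 4]
```

The two runs show why the same initiating fault can be harmless or catastrophic: what matters is how much headroom the surviving lines have when the load shifts onto them.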
All told, most estimates of the full economic repercussions of the 2003 blackout
put the collective damage between $4 and $10 billion. [1] However, the more
startling number is the total cost per year to U.S. electricity consumers due to
power outages: $79 billion. [3] Though not nearly all of these outages can be
attributed to cascades, the effect is related. When load gets too high on some
subgrid in the power system, it is necessary to cease power supply to some
subset of customers in order to prevent the buildup to critical load levels that
can cause a cascade. [1] Though much has been done in terms of new regulations and standards to prevent blackouts
like that in 2003 from recurring, it is important to note that the system is necessarily … To make matters worse,
a cyber attack that can take out a civilian power grid, for example, could also cripple the
U.S. military.
The senator notes that the same power grids that supply cities and towns, stores and gas stations, cell towers and heart monitors also power every military base in our country. Although bases would be prepared to
weather a short power outage with backup diesel generators, within hours, not days, fuel supplies would run out, he said.
Which means military command and control centers could go dark. Radar systems that detect air threats to our
country would shut down completely. Communication between commanders and their troops
would also go silent. And many weapons systems would be left without either fuel or electric
power, said Senator Grassley. So in a few short hours or days, the mightiest military in the world would be
left scrambling to maintain base functions, he said. We contacted the Pentagon and officials confirmed the threat of a cyber attack is something very real. Top national security officials
say preventing a cyber attack and improving the nation's electric grids is among the most urgent
priorities of our country (source: Congressional Record). So how serious is the Pentagon taking all this? Enough to start, or end, a war over it, for sure (see video: Pentagon declares war on cyber attacks
http://www.youtube.com/watch?v=_kVQrp_D0kY&feature=relmfu ).
A cyber attack today against the US could very well be seen as an Act
of War and could be met with a full scale US military response. That could include the use of nuclear …
Above all, Abe has made several moves to strengthen Japan's most important strategic
relationship: its alliance with the United States. In April 2015, Tokyo and Washington
upgraded their ties for the first time since 1997, announcing that they would start
cooperating more closely on maritime security and regional stability. The two nations also agreed to
work together to deal with ambiguous security situations that fall short of formal conflict and to jointly respond to space threats and cyberthreats.
REMAKING ASIA
By slowly eliminating its restraints on security cooperation, by
deepening its relationship with the United States, and by emphasizing more muscular,
liberal rhetoric, Abe's Japan has positioned itself as a sort of anti-China in Asia and
beyond. Yet many of the other restrictions on Japan's military remain in place, and these will not be revoked anytime soon. Japan's
society would not allow its military to play a more normal role in dealing with foreign crises; the Japanese also remain highly wary of
entangling alliances. Yet many of Japan's elites, who are worried about the threats from China and North Korea and who fear that the
United States is distracted by crises in the Middle East and Ukraine, have embraced the country's new realism. Leading thinkers, including
the journalist Yoichi Funabashi, the former diplomat Kuni Miyake, the political scientist Koji Murata, and the former defense minister Satoshi
Morimoto, are among those writing and speaking about the need for a more muscular Japanese posture. Indeed, there is a growing
community of academics, policy analysts, and politicians who believe that Japan must do more to ensure its own security, as well as to help
support the global system that has protected it since the end of World War II. As Abe expands Japan's global role, his policies will include
new activities abroad and entail deeper security cooperation with existing partners. The more unstable the global environment becomes,
the more Japan will need to play a global role commensurate with its size and economic strength. That role should take advantage of
multilateral organizations, but it will, realistically, privilege Japan's security. After decades of stagnation in Japan's
foreign and security policies, the new posture will contribute to the maintenance of Asia's
liberal post-World War II order over the coming decade and beyond. Abe's policies, which build on
some of those of his predecessors, are a series of small yet interlinked steps that will enhance Japan's security, diplomacy, and economy.
In focusing primarily on stemming the growing threat from China, Abe is attempting a
tricky balance: to prevent the souring of relations between Beijing and Tokyo but also to
keep Asia's balance of power from tilting too far toward China.
Further suggestions to accelerate progress include (i) test sites for prototype projects
providing access to innovators from China, the United States, and other
countries; (ii) joint development of open-source architecture for major advanced
plant subsystems, such as a standards-based specification for reactor modules
of all types that would address general safety criteria, fuel lifetime,
transportability, and so on, as well as open-source codes for advanced reactors;
(iii) joint programs to develop, demonstrate, and license advanced non-light-
water reactors; (iv) agreement on a regulatory approach that encourages
technical innovation in safety assurance, as opposed to detailed prescriptive
specifications, as well as stage gates of approval rather than a single review that
can require hundreds of millions of dollars in preparation. Jointly funded
projects would be governed by the regulations of the host country.
The counterplan utilizes both countries' expertise and builds mutual trust, which solves
relations.
Dong Zhaohui, 3-27-2016, "U.S.-China cooperation on nuclear energy helps build trust in relations: expert,",
http://english.chinamil.com.cn/news-channels/pla-daily-commentary/2016-03/27/content_6978756.htm//KEN
WASHINGTON, March 26 (Xinhua) -- Nuclear energy cooperation between the United States and China
has yielded tremendous benefits for both countries and can contribute to trust in the larger
bilateral relationship, a U.S. nuclear energy expert told Xinhua. The United States and China
could further enhance cooperation on nuclear energy as there are vast
commercial opportunities for both countries and the world, Daniel Lipman, vice president of
Washington-based Nuclear Energy Institute (NEI), said in an interview ahead of the Nuclear Security Summit to be held in
Washington from March 31 to April 1. Bilateral nuclear energy cooperation "requires a strong
foundation of mutual respect and trust that shared technologies will be used
only for peaceful purposes," Lipman said, adding that it is "not something the United
States enters into lightly." Through extensive person-to-person and institutional
contacts, commercial nuclear trade can also share best practices on nuclear
safety, security and nonproliferation, the expert said.
Without nuclear power, needed climate change reduction becomes impossible.
Hansen and Kharecha 13
James Hansen, PhD in Physics from the University of Iowa; Currently works at the Earth Institute
as a Professor at Columbia University, Pushker Kharecha, NASA Goddard Institute for Space
Studies; Researcher at Columbia in Earth Science; PhDs in Geosciences and Astrobiology, "
Prevented Mortality and Greenhouse Gas Emissions from Historical and Projected Nuclear
Power" Environmental Science and Technology,
http://pubs.giss.nasa.gov/docs/2013/2013_Kharecha_kh05000e.pdf, March 13, 2013
reduction of global mortality and GHG emissions due to fossil fuel use. If the role of
nuclear power significantly declines in the next few decades, the International
Energy Agency asserts that achieving a target atmospheric GHG level of 450
ppm CO2-eq would require heroic achievements in the deployment of
emerging low-carbon technologies, which have yet to be proven. Countries that
rely heavily on nuclear power would find it particularly challenging and
significantly more costly to meet their targeted levels of emissions. 2 Our
analysis herein and a prior one7 strongly support this conclusion. Indeed, on the basis of combined
evidence from paleoclimate data, observed ongoing climate impacts, and the measured planetary energy imbalance, it appears
increasingly clear that the commonly discussed targets of 450 ppm and 2 C global temperature rise (above preindustrial levels) are
insufficient to avoid devastating climate impacts; we have suggested elsewhere that more appropriate targets are less than 350 ppm
and 1 C (refs 3 and 31–33). Aiming for these targets emphasizes the importance of
retaining and expanding the role of nuclear power, as well as energy efficiency improvements and
renewables, in the near-term global energy supply
Climate Adv
1. Just wanted to point out on the top that they don't have a single terminal
impact in the 1AC. So what if they cause sea rise? They have no evidence saying
the explicit impacts of it, which means you can't evaluate any impact; they are able to
weasel out in the 2AC.
The debate over climate change is horribly polarized. From the way it is conducted, you would think that only two
positions are possible: that the whole thing is a hoax or that catastrophe is inevitable. In fact there is room for lots of intermediate positions, including …
harm, let alone prove to be the greatest crisis facing humankind this century. After more than
25 years reporting and commenting on this topic for various media organizations, and having started out alarmed, that's where I have ended up. But it
is not just I that hold this view. I share it with a very large international organization, sponsored by the United Nations and supported by virtually all the
world's governments: the Intergovernmental Panel on Climate Change (IPCC) itself. The IPCC commissioned four different
models of what might happen to the world economy, society and technology in the 21st century and what each would mean
for the climate, given a certain assumption about the atmosphere's sensitivity to carbon dioxide. Three of the models show a
moderate, slow and mild warming, the hottest of which leaves the planet just 2 degrees Centigrade warmer than today
in 2081-2100. The coolest comes out just 0.8 degrees warmer. Now two degrees [above pre-industrial levels] is the threshold at which warming starts to
turn dangerous, according to the scientific consensus. That is to say, in three of the four scenarios considered by the IPCC, by the time my children's
children are elderly, the earth will still not have experienced any harmful warming, let alone catastrophe. But what about the fourth
scenario? This is known as RCP8.5, and it produces 3.5 degrees of warming in 2081-2100 [or 4.3 degrees above pre-industrial
levels]. Curious to know what assumptions lay behind this model, I decided to look up the original paper describing the creation of this scenario.
Frankly, I was gobsmacked. It is a world that is very, very implausible. For a start, this is a world of
continuously increasing global population so that there are 12 billion on the
planet. This is more than a billion more than the United Nations expects, and flies in the
face of the fact that the world population growth rate has been falling for 50 years and is on course
to reach zero (i.e., stable population) in around 2070. More people mean more emissions. Second, the world is assumed in the
RCP8.5 scenario to be burning an astonishing 10 times as much coal as today, producing 50% of its primary energy
from coal, compared with about 30% today. Indeed, because oil is assumed to have become scarce, a lot of liquid fuel would then be derived from coal.
Nuclear and renewable technologies contribute little, because of a slow pace of innovation and hence fossil fuel technologies continue to dominate
the primary energy portfolio over the entire time horizon of the RCP8.5 scenario. Energy efficiency has improved very little. These are
highly unlikely assumptions. With abundant natural gas displacing coal on a huge scale in the United
States today, with the price of solar power plummeting , with nuclear power experiencing a
revival, with gigantic methane-hydrate gas resources being discovered on the seabed, with
energy efficiency rocketing upwards, and with population growth rates continuing to
fall fast in virtually every country in the world, the one thing we can say about RCP8.5 is that it is very, very implausible. Notice, however, that even
so, it is not a world of catastrophic pain. The per capita income of the average human being in 2100 …
for more than 17 years. With these much more realistic estimates of sensitivity (known as transient climate response), even RCP8.5
cannot produce dangerous warming. It manages just 2.1C of warming by 2081-2100 [see table 3 in the report by Lewis
and Crok here]. That is to say, even
if you pile crazy assumption upon crazy assumption till you have an
edifice of vanishingly small probability, you cannot even manage to make climate change cause minor
damage in the time of our grandchildren, let alone catastrophe. That's not me saying this; it's the IPCC itself. But what strikes
me as truly fascinating about these scenarios is that they tell us that globalization, innovation and economic growth are
unambiguously good for the environment. At the other end of the scale from RCP8.5 is a much more cheerful scenario called
RCP2.6. In this happy world, climate change is not a problem at all in 2100, because carbon dioxide emissions have
plummeted thanks to the rapid development of cheap nuclear and solar, plus a surge in energy efficiency. The
RCP2.6 world is much, much richer. The average person has an income about 16 times today's in
real terms, so that most people are far richer than Americans are today. And it achieves this by free trade, massive globalization, and
lots of investment in new technology. All the things the green movement keeps saying it opposes because they will wreck the planet. The
answer to climate change is, and always has been, innovation. To worry now in 2014 about a very small, highly
implausible set of circumstances in 2100 that just might, if climate sensitivity is much higher than the evidence suggests, produce a marginal damage to
the world economy, makes no sense. Think of all the innovation that happened between 1914
and 2000. Do we really think there will be less in this century? As for how to deal with that small risk, well there are several possible options.
You could encourage innovation and trade. You could put a modest but growing tax on carbon to
nudge innovators in the right direction. You could offer prizes for low-carbon technologies. All of these might make a little
sense. But the one thing you should not do is pour public subsidy into supporting old-fashioned existing technologies that produce more carbon dioxide
per unit of energy even than coal (bio-energy), or into ones that produce expensive energy (existing solar), or that have very low energy density and so
require huge areas of land (wind).
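The "16 times today's income" figure in the RCP2.6 discussion implies a particular compound growth rate, which is easy to back out. A minimal sketch in Python, assuming the article's 2014 vantage point and a 2100 horizon (the scenario's own base year may differ):

```python
# Back out the annual real growth rate implied by "income 16 times today's
# by 2100". The 2014 start year is an assumption taken from the article's
# framing, not from the RCP2.6 scenario documentation itself.
start_year, end_year = 2014, 2100
income_multiple = 16.0

years = end_year - start_year                     # 86 years
growth_rate = income_multiple ** (1 / years) - 1  # solves (1 + r)^86 = 16

print(f"Implied real per-capita growth: {growth_rate:.2%} per year")
```

The result is roughly 3.3% per year, i.e., a sustained but historically plausible global growth rate rather than anything exotic.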
of the most respected academic journals are right, planet Earth could be on
course for global warming of more than seven degrees Celsius within a lifetime.
And that, according to one of the world's most renowned climatologists, could be
the last 784,000 years, on the left of the graph, followed by a projection to 2100
based on new calculations of the climate's sensitivity to greenhouse gases
(Friedrich et al., 2016). In a paper in the journal Science Advances, they said the actual range could be
between 4.78C and 7.36C by 2100, based on one set of calculations. Some have dismissed the idea that the
world would continue to burn fossil fuels despite obvious global warming, but emissions are still increasing despite a 1C rise in
average thermometer readings since the 1880s. And US President-elect Donald
Trump has said he will rip up
America's commitments to the fight against climate change. Professor Michael Mann,
of Penn State University in the US, who led research that produced the famous hockey stick graph showing how
humans were dramatically increasing the Earth's temperature, told The Independent the new paper
appeared "sound and the conclusions quite defensible". And it does indeed provide support
for the notion that a Donald Trump presidency could be "game over for the climate", he wrote in
an email. "By game over for the climate, I mean
game over for stabilizing warming below dangerous (ie greater than 2C) levels. If Trump makes good on his promises, and the US
pulls out of the Paris [climate] treaty, it is difficult to see a path forward to keeping warming below those levels." Greenpeace UK
said the new research was further evidence that urgent action was needed. Dr Doug Parr, the environmental
campaign group's chief scientist, said: "The worrying thing is the suggestion
climate sensitivity is higher [than thought] is not incompatible with higher
temperatures we have been seeing this year. If there is science backing that
up, that there's a higher sensitivity of the climate to greenhouse gases, that
puts at risk the prospect of keeping the globe at the Paris target of well below
2C. Anybody who understands the situation we find ourselves in would already have realised we are in an emergency
situation." Dr Tobias Friedrich, one of the authors of the paper, said: Our results
km2 per year (111) in tropical regions is partially balanced by a net forest gain
of 8,700 km2 per year in Europe (110). However, part of the net forest gain is the
result of new forest plantations, often with exotic species, which often have
lower biodiversity than natural forests (113). Fire plays a major role in many
regions in the conversion of forest to agriculture but also in maintaining open
landscapes. As expected, there is agreement between the spatial
distribution of areas of natural habitat being converted to agriculture and the
distribution of species affected by habitat loss (Figure 7a,b), including in Madagascar, some areas of
sub-Saharan Africa, Brazil's Atlantic Forest, the Middle East, and Southeast Asia. Forest loss in Southeast Asia is not well captured in
our land-use change map but has been reported in other studies (57). There are some regions where there is a high proportion of
species affected by habitat loss where most land-use change already occurred in the past (much of Europe), and regions where
species have been affected by habitat loss not captured in our analysis (e.g., the Sahara). River systems have been deeply altered by
impoundments and diversions to meet water, energy, and transportation needs of a growing human population (14). Today, there
are more than 45,000 large dams (>15 m in height) worldwide (14).
Dams have upstream impacts, where
lotic systems are changed into lentic systems, and downstream impacts, where
the timing, magnitude, and temperature of water flow are changed (45). Dams
are also responsible for the fragmentation of river systems, as they hamper or
even block the dispersal and migration of organisms (14). Furthermore, water
resource development by impoundments and diversions has high spatial
overlap with other pressures in freshwater ecosystems, such as pollution and
catchment disturbance by cropland (114). Other important habitat changes in
freshwater ecosystems include the loss of wetlands owing to drainage for
conversion to agriculture or urbanization, overextraction of groundwater (45),
and the excavation of river sand (115). Marine habitats are also being affected
by human activities, particularly by destructive fishing practices, such as
trawling and dynamiting (116). Coastal habitats and wetlands have been
affected mostly by urbanization, aquaculture development, and coastal
engineering works (15, 77). 5.2. Overexploitation Overexploitation is the major driver of
biodiversity loss in the oceans (2, 19). Capture fisheries production increased
for much of the twentieth century but has reached a plateau since the mid-1980s at around 70-80 million
tons annually, despite continuing increases in global fishing effort levels (117, 118). The global landings would have likely declined
except for the spatial expansion of the fishing effort toward deeper and further offshore waters. By the mid-1960s, most fully
exploited or overexploited fisheries were located in coastal areas of the Northern Hemisphere. By the 1980s,
fishing efforts
were having an impact on regions much farther away from the coast, in the
middle of the northern and southern Atlantic Oceans. One decade later, the spatial expansion of
the fisheries had reached much of the world's oceans, with only some parts of the Indian Ocean, the Pacific Ocean, and the Antarctic
ocean not having reached maximum historical catches (116). In terrestrial systems, hunting is a major
concern in tropical savannahs and forests (2). Large birds and mammals are
targeted for their meat and charismatic species for their ornaments and alleged
medicinal purposes (108, 111). Wild-meat harvest has been estimated at 67-164 thousand tons in the Brazilian Amazon
and 1-3.4 million tons in Central Africa (119). The impacts are particularly acute in Southeast Asia and Central Africa (111). A
connection has been established between the reduction of fish availability per
capita and the increase in hunting pressure of wild meat in West Africa (120).
Synergistic interactions between hunting and other drivers, such as land-use
change and disease, can also occur and cause local extinctions (106). 5.3. Pollution
Eutrophication and other ecosystem changes caused by pollution are major
drivers of biodiversity loss and alterations in both inland waters and coastal
systems (121). River nitrogen loads from point sources, such as domestic and industrial sewage, and nonpoint sources, such as
agriculture and atmospheric deposition, increased in most world regions from 1970 to 1995 but are starting to decline or are
projected to decline until 2030 in Europe and northern Asia (Russia) (122).
Lakes are particularly vulnerable
included in the assessment of species extinction risk (Figure 7c) and not directly
related to atmospheric nitrogen deposition (Figure 7d). 5.4. Introduction of Exotic Species and
Invasions One of the major trends in global biodiversity change is the increased
regimes, and impact other ecosystem services (129, 130). A particularly serious type of invasion is
epidemic disease. One example is chytridiomycosis, which has been decimating amphibians in many regions of the world and is a
leading cause of the global amphibian decline (131). Invasive species have also had important impacts on freshwater ecosystems,
where their incidence is correlated with human economic activity (132), and in marine and estuarine ecosystems due to ballast
water or hull fouling transported by ships (133). Still, many invasive species have had more moderate impacts on ecosystems (134),
and recently, some ecologists have called for a more embracing attitude toward exotic species, arguing that alien species should not
be a priori considered negative in an ecosystem but should be assessed objectively for their impacts (135, 136). Others have argued
for active translocation or assisted migration of species endangered by climate change (137), an approach that seems fraught with
peril on the basis of our historical experience of human introductions of exotic species, often with the best intentions. 5.5. Climate
Change Global mean surface temperature increased 0.74C from 1906 to 2005 and is expected to increase between 1.8C and 4C
during the twenty-first century, depending on the socio-economic scenario (138). Warming is spatially very heterogeneous as it is
largest in terrestrial systems and at high northern latitudes, with recent warming greater than 1.5C in some areas, and least
pronounced in the tropics, where many regions have warmed around 0.5C (Figure 7f). The impacts of climate
change are greatest for species with high vulnerability: those that have narrow climate niches, cannot shift
their ranges, or are unable to change their phenology, evolve their physiology, or behaviorally adapt to
the new conditions (93, 140). For instance, the limited ability of mountaintop species to
shift in elevation has been identified as a major climate vulnerability (92). For
amphibians, important future climate impacts have been projected in the northern Andes, parts of the Amazon, Central America,
southern and southeastern Europe, sub-Saharan tropical Africa, and Southeast Asia (140, 141). Surprisingly, this differs somewhat
from the recent spatial patterns of increased extinction risk owing to climate change (Figure 7e). In corals, most threatened and
climate change-susceptible species occur in Southeast Asia (140).
Climate change is also causing sea-level
argument here is the oft-quoted statement that the climate warmed by 1 F (0.6 C)
in the last 100 years AND that SL rose by 18 cm. Both parts of the statement are true; but the
second part does not necessarily follow from the first. The first clue that there might be
something amiss with the logic is hidden in the IPCC report itself. According to their compilation of data, the
contribution to SL rise of the past century comes mainly from three sources: (i)
thermal expansion of the warming ocean contributed about 4 cm; (ii) the melting of
continental glaciers, about 3.5 cm; and (iii) the polar regions, on the other hand,
produced a net lowering of SL, with most of this coming from the Antarctic. (The mechanism is intuitively easy
to understand but difficult to calculate: A warming ocean evaporates more water, and some of it rains out in the polar regions,
thus transferring water from the ocean to the polar ice caps.) The surprising result:
When one simply adds up
all these contributions (neglecting the large uncertainties), they account for
only about 20 percent of the observed rise of 18 cm. The climate warming since
1900 cannot be the cause of the SL rise; something is missing here. The second clue comes
from geological observations that SL has been rising for past centuries at about the same
rate as seen by tide gauges in the last 100 years. In other words, SL was rising even
during the cold Little Ice Age, from about 1400 to 1850. This provides further support for
the hypothesis that the observed global SL rise since 1900 is reasonably
independent of this century's temperature rise. The explanation for this riddle had been suspected
for some time, based on historic data of SL rise derived independently from measurements of coral growth [Fairbanks] and from
isotope determinations of ice volume [Shackleton]. But the picture was filled in only recently [Bindschadler 1998] through
estimates of the rate of melting of the West Antarctic Ice Sheet (WAIS), by tracing its shrinkage (through the receding position of
its grounding line, i.e., the line of contact of the ice sheet with the underlying continental mass) [Just published by Conway et al.
in Science, Oct. 8, 1999]. A quite independent measurement of the rate of release of melt water, using isotopes to identify the
melt, has just been reported and leads to a concordant result [Hohmann et al. 1999]. We can therefore describe the broad
scenario as follows:
The strong temperature increase that followed the peak of the last
ice age about 18,000 years ago has melted enough ice to raise global SL by 120
meters (360 feet). The rate of rise was quite high at first, controlled by the rapid melting away of the ice sheets covering
North America and the Eurasian land mass. These disappeared about 8000 years ago; but then, as SL rose, the WAIS continued to
melt, albeit at a lower rate -- and it is still melting at about this rate today. The principal conclusion is that this melting of the
WAIS will continue for another 7000 years or so, unless another ice age takes over before then.
And there is nothing
that we can do to stop this future sea level rise! It is as inevitable as the ocean
tides. Fortunately, coral reefs will continue to grow, as they have in the past, to
keep up with SL rise. The rest of us will just have to adapt, as our ancestors did
some 10,000 years ago. At least we are better equipped to deal with environmental changes. A final note: What
about the effects of human-induced global warming on SL rise? Will it really increase the rate above its natural value, as predicted
by the IPCC? We do have a handle on this question by observing what actually happened when the climate warmed sharply
between 1900 and 1940, before cooling between 1940 and 1975. The answer is quite surprising and could not have been derived
from theory or from mathematical models. The data show that
SL rise slowed down when the climate
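The budget arithmetic earlier in this card (4 cm thermal plus 3.5 cm glaciers against an 18 cm observed rise, with only about 20 percent accounted for) can be checked with a quick sketch. Note that the card never states the polar number explicitly, so it is inferred here from the quoted 20 percent figure:

```python
# Sea-level budget check for the figures quoted in the card above.
# The thermal and glacier contributions are taken from the card; the
# Antarctic (net lowering) term is NOT given there and is inferred from
# the card's claim that known sources cover only ~20% of the 18 cm rise.
observed_rise_cm = 18.0
thermal_cm = 4.0
glaciers_cm = 3.5

accounted_cm = 0.20 * observed_rise_cm  # ~20% of 18 cm
implied_polar_cm = accounted_cm - (thermal_cm + glaciers_cm)  # negative = net lowering

print(f"Accounted for: {accounted_cm:.1f} cm of {observed_rise_cm:.0f} cm observed")
print(f"Implied polar (mostly Antarctic) term: {implied_polar_cm:.1f} cm")
```

On these numbers the implied polar term is about -3.9 cm, which is why the card concludes that roughly 80% of the observed rise is left unexplained by twentieth-century warming.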
6. Sea level rise claims are exaggerated --- rises will be less than an inch and a half
Idso and Idso 07 Research Physicist with the U.S. Department of Agriculture's Agricultural
Research Service; Keith, Vice President of the Center for the Study of Carbon Dioxide and Global
Change with a PhD in Botany; and Craig, former Director of Environmental Science at Peabody
Energy in St. Louis, Missouri and is a member of the American Association for the Advancement
of Science, American Geophysical Union, American Meteorological Society, Arizona-Nevada
Academy of Sciences, Association of American Geographers, Ecological Society of America
[Separating Scientific Fact from Personal Opinion: A critique of the 26 April 2007 testimony of
James E. Hansen made to the Select Committee on Energy Independence and Global Warming of
the United States House of Representatives entitled "Dangerous Human-Made Interference with
Climate," http://co2science.org/education/reports/hansen/hansencritique.php]
A good perspective on this issue is provided in the 16 March 2007 issue of Science by Shepherd and Wingham (2007), who review
what is known about sea-level contributions arising from wastage of the Antarctic and Greenland Ice Sheets, focusing on the
results of 14 different satellite-based estimates of the imbalances of the polar ice sheets that have been derived since 1998.
These studies have been of three major types - standard mass budget analyses,
altimetry measurements of ice-sheet volume changes, and measurements of
the ice sheets' changing gravitational attraction - and they have yielded a
diversity of values, ranging from an implied sea-level rise of 1.0 mm/year to a
sea-level fall of 0.15 mm/year. Based on their evaluation of these diverse
findings, the two researchers come to the conclusion that the current "best estimate" of
the contribution of polar ice wastage to global sea level change is a rise of 0.35
millimeters per year, which over a century amounts to only 35 millimeters, or
less than an inch and a half. Yet even this small sea level rise may be
unrealistically large, for although two of Greenland's biggest outlet glaciers
doubled their mass-loss rates in 2004, causing many to claim that the
Greenland Ice Sheet was responding more rapidly to global warming than
expected, Howat et al. (2007) report that the glaciers' mass-loss rates "decreased in
2006 to near the previous rates." And these observations, in their words, "suggest that special care must be
taken in how mass-balance estimates are evaluated, particularly when extrapolating into the future, because short-term spikes
could yield erroneous long-term trends."
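The Shepherd and Wingham "best estimate" arithmetic quoted in this card is a one-line calculation, shown here as a quick check:

```python
# Check: 0.35 mm/year of polar ice wastage sustained for a century,
# as quoted from Shepherd and Wingham (2007) in the card above.
rate_mm_per_year = 0.35
years = 100

century_rise_mm = rate_mm_per_year * years  # 35 mm
century_rise_in = century_rise_mm / 25.4    # convert to inches

print(f"{century_rise_mm:.0f} mm over a century, about {century_rise_in:.2f} inches")
```

The century total comes to 35 mm, a little under 1.4 inches, which matches the card's "less than an inch and a half".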
7. Satellite monitoring, UN & WHO programs, vaccines, border control, and CDC
quarantines all check disease
8. The media exaggerates the risk
Lind, 11 (Michael, Policy Director of the Economic Growth Program at the New America
Foundation, March/April 2011, So Long, Chicken Little, Foreign Policy,
http://www.foreignpolicy.com/articles/2011/02/22/so_long_chicken_little?page=0,5) KB
There's nothing like a good plague to get journalists and pundits in a frenzy. Although
the threat of global pandemics is real, it's all too often exaggerated. In the last few years, the world
has experienced two such pandemics, the avian flu (H5N1) and swine flu (H1N1).
Both fell far short of the apocalyptic vision of a new Black Death cutting huge swaths of
mortality with its remorseless scythe. Out of a global population of more than 6 billion people, 8,768 are estimated to have died
from swine flu, 306 from avian flu. And yet
it was not just the BBC ominously informing us that
"the deadly swine flu cannot be contained." Like warnings about the proliferation of nuclear
weapons, the good done by mobilizing people to address the problem must be
Official data from the Xiaoting Health and Family Planning Commission only
includes rates of diabetes, psychosis, and hypertension, as required by the
national commission. The data shows no clear differences between local and national
data. Both the Xiaoting District government and the Yichang municipal
government said none of the residents had come to them voicing concerns
about cancer. Yu Wanlin, a retired leader in Xiamacao Village, told Sixth Tone
that in the past, local residents welcomed the arrival of heavy industry, and
people were less concerned about the environment. "It was a sign of
development," he said. In this part of Yichang, residents claim they can trace the
origin of their health problems to the arrival of one company in particular:
Hubei Yihua Chemical Industry Co. Ltd. Established in 1977, Yihua is the oldest
company in the area; it set up in an area that would later become part of the
Yichang Economic Development Zone and is a division of Hubei Yihua Group, a
state-run enterprise that owns the largest fertilizer manufacturing plants in
China, according to data from the Chemical Industry and Engineering Society of
China. In 2004, Yihua gave assurances that one of their then-soon-to-be-added
factories would be environment friendly.
2. The plan doesn't solve this advantage --- they don't mandate destruction of
the dams or changes to government-corporation law. They don't have an internal
link
3. Status Quo methods solve better --- they cause a feedback loop to the economy for
further sustainable development
Nature Conservancy 16 (The Nature Conservancy, Conserving the lands and waters on
which all life depends, China; Places We Protect: The Yangtze River,
http://www.nature.org/?intc=nature.tnav.logo) KBanuri
silver bullets --- no one right answer for everyone (not GMOs, not agroecology). People on the ground
should be able to choose the tools that work in their unique context. So what does this mean for someone interested in helping to
feed the world? In the public debate, I hear farmers saying that, if the world's going to eat, they have to drive down the price of food
by producing more of it. And I hear progressives saying that we already have plenty of food, we don't need more ag technology, all
we have to do is make sure everyone gets a fair share. These guideposts I've set out suggest that both are wrong. Here's why:
First, while its true that there would be enough food to go around if everyone
shared equitably, humanity has never been all that good at sharing. The idea
that we could have completely equitable distribution of food is much more
unrealistic than the idea that we could eliminate poverty. Second, 70 percent of
the world's poor depend, at least in part, on farming. Helping them produce more, and make a
little money, is a smart strategy for addressing poverty. But third, feeding the world only makes sense
if it's bottom-up rather than top-down. It doesn't help relieve poverty when
super-efficient farmers in the U.S. outcompete poor farmers in the global south.
Feeding the world is noble if it means that some of those poor farmers will become as rich as Americans and produce a lot more
food on a lot less land. But it's not at all noble if it means competing at an unfair advantage against the less fortunate. Here's what I
think we should be doing instead of repeating those talking points: Listening to poor farmers. When you listen to those farmers, and
the people who work closely with them, they say they want agricultural technology (like high-yielding seeds), affordable loans, and
education for their children. We should also listen when they say they'd rather do something other than farming. Even though
agriculture touches most of the poor people around the world, in some places the most effective ways of helping will have nothing
to do with farming. Investments in agriculture do have a good track record of improving the standard of living. Nevertheless, if the
goal is to improve conditions for humans and minimize our collective impact on the environment (I argue that the two are one and
the same), weve got to stay focused on reducing poverty instead of fixating on any one particular means of getting there. Fighting
poverty is complicated, but it's not all that complicated. We know enough to help, and even agree on a lot of it across partisan lines.
Also: It's a hell of a lot simpler than producing babies tuned for photosynthesis.
5. Cross-apply the Ridley evidence --- new technology in the Status Quo solves;
they don't assume every warrant. Make them prove how they uniquely solve
everything through their solvency mechanisms
6. Prefer our evidence --- best data on these questions --- that outweighs since it's
not biased by ideology.
Allouche 11 --- Jeremy Allouche, research fellow, water supply and sanitation @ Institute for
Development Studies, former professor at MIT, PhD in International Relations from the Graduate
Institute of International Studies [The sustainability and resilience of global water and food
systems: Political analysis of the interplay between security, resource scarcity, political systems
and global trade, Food Policy, Volume 36, Supplement 1, January 2011, Pages S3-S8, Science
Direct] KBanuri
The question of resource scarcity has led to many debates on whether scarcity (whether of food or
water) will lead to conflict and war. The underlying reasoning behind most of these discourses over food and
water wars comes from the Malthusian belief that there is an imbalance between the economic
availability of natural resources and population growth since while food production grows
linearly, population increases exponentially. Following this reasoning, neo-Malthusians claim that finite
natural resources place a strict limit on the growth of human population and aggregate consumption; if these
limits are exceeded, social breakdown, conflict and wars result. Nonetheless, it seems that most empirical
studies do not support any of these neo-Malthusian arguments. Technological change and greater inputs
of capital have dramatically increased labour productivity in agriculture. More generally, the neo-Malthusian view has suffered
because during the last two centuries humankind has breached many resource barriers that
seemed unchallengeable. Lessons from history: alarmist scenarios, resource wars and international relations In a so-called
age of uncertainty, a number of alarmist scenarios have linked the increasing use of water resources and
food insecurity with wars. The idea of water wars (perhaps more than food wars) is a dominant discourse
in the media (see for example Smith, 2009), NGOs (International Alert, 2007) and within international organizations (UNEP,
2007). In 2007, UN Secretary General Ban Ki-moon declared that water scarcity "threatens economic and
social gains and is a potent fuel for wars and conflict" (Lewis, 2007). Of course, this type of discourse has
an instrumental purpose; security and conflict are here used for raising water/food as key policy priorities at
the international level. In the Middle East, presidents, prime ministers and foreign ministers have also used this bellicose rhetoric.
Boutros Boutros-Ghali said: "The next war in the Middle East will be over water, not politics" (Boutros Boutros-Ghali in Butts, 1997, p.
65). The question is not whether the sharing of transboundary water sparks political tension and
alarmist declaration, but rather to what extent water has been a principal factor in international
conflicts. The evidence seems quite weak. Whether by president Sadat in Egypt or King Hussein in Jordan, none of
these declarations have been followed up by military action. The governance of transboundary water has gained increased attention
these last decades. This has a direct impact on the global food system as water allocation agreements determine the amount of
water that can be used for irrigated agriculture. The likelihood of conflicts over water is an important parameter to consider in
assessing the stability, sustainability and resilience of global food systems. None
of the various and extensive
databases on the causes of war show water as a casus belli. Using the International Crisis
Behavior (ICB) data set and supplementary data from the University of Alabama on water conflicts, Hewitt,
Wolf and Hammer found only seven disputes where water seems to have been at least a partial
cause for conflict (Wolf, 1998, p. 251). In fact, about 80% of the incidents relating to water were limited
purely to governmental rhetoric intended for the electorate (Otchet, 2001, p. 18). As shown in The Basins At Risk
(BAR) water event database, more than two-thirds of over 1800 water-related events fall on the
cooperative scale (Yoffe et al., 2003). Indeed, if one takes into account a much longer period, the
following figures clearly demonstrate this argument. According to studies by the United Nations
Food and Agriculture Organization (FAO), organized political bodies signed between the year 805 and 1984
more than 3600 water-related treaties, and approximately 300 treaties dealing with water
management or allocations in international basins have been negotiated since 1945 ( [FAO, 1978] and
[FAO, 1984]). The fear around water wars has been driven by a Malthusian outlook which equates
scarcity with violence, conflict and war. There is however no direct correlation between water
scarcity and transboundary conflict. Most specialists now tend to agree that the major issue is not
scarcity per se but rather the allocation of water resources between the different riparian states (see for example
[Allouche, 2005], [Allouche, 2007] and [Rouyer, 2000]). Water rich countries have been involved in a number of disputes with other
relatively water rich countries (see for example India/Pakistan or Brazil/Argentina). The
perception of each state's
estimated water needs really constitutes the core issue in transboundary water relations. Indeed,
whether this scarcity exists or not in reality, perceptions of the amount of available water
shapes people's attitudes towards the environment (Ohlsson, 1999). In fact, some water experts have
argued that scarcity drives the process of co-operation among riparians ([Dinar and Dinar, 2005] and [Brochmann and
Gleditsch, 2006]). In terms of international relations, the threat of water wars due to increasing scarcity
does not make much sense in the light of the recent historical record. Overall, the water war rationale
expects conflict to occur over water, and appears to suggest that violence is a viable means of securing national water supplies, an
argument which is highly contestable. The debates over the likely impacts of climate change have again popularised
the idea of water wars. The argument runs that climate change will precipitate worsening ecological conditions contributing
to resource scarcities, social breakdown, institutional failure, mass migrations and in turn cause greater political instability and
conflict ( [Brauch, 2002] and [Pervis and Busby, 2004]). In a report for the US Department of Defense, Schwartz and Randall (2003)
speculate about the consequences of a worst-case climate change scenario arguing that water shortages will lead to aggressive wars
(Schwartz and Randall, 2003, p. 15). Despite
growing concern that climate change will lead to instability
and violent conflict, the evidence base to substantiate the connections is thin ([Barnett and Adger,
2007] and [Kevane and Gray, 2008]).
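The Malthusian premise this card rebuts --- food production growing linearly while population (and hence food need) grows exponentially --- can be sketched numerically. The starting surplus and growth figures below are hypothetical, chosen only to show why the model predicts an eventual shortfall whatever the starting surplus:

```python
# Numerical sketch of the neo-Malthusian premise described in the card:
# linear food growth vs exponential growth in food need. All numbers are
# hypothetical illustrations, not data from the card.
food_units = 150.0       # hypothetical starting supply (1.5x current need)
food_growth = 2.0        # linear growth: +2 units per year
population_need = 100.0  # hypothetical current need
pop_growth_rate = 0.02   # exponential growth: 2% per year

year = 0
while food_units + food_growth * year >= population_need * (1 + pop_growth_rate) ** year:
    year += 1

print(f"Under these assumptions, need overtakes supply after {year} years")
```

Allouche's point is that this crossing never materialized historically precisely because the "linear food growth" assumption fails: technological change has repeatedly shifted the supply curve.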
7. Media hype over diseases detracts from health education and results in more
counterproductive measures
Belluz 15 Health reporter (Julia, Ebola doctor Craig Spencer says media's disease hype was
deadly, Vox, February 26, 2015, http://www.vox.com/2015/2/26/8114299/ebola-media). KB
Yesterday, I was on the phone with a Liberian man who survived the world's worst Ebola epidemic. I asked him to rate his fear of the
virus during the height of spread in his home city, Monrovia. When he knew little about the disease, he said, he was extremely
fearful, even preemptively pulling his children out of their classes before schools across the country shut down. But as he learned
more, his fears went away. "Ebola is simple," he reasoned, calmly. "Obey the rules and you won't get
infected." Then he said something interesting: "The media hype on Ebola was so much that the
fear of Ebola probably killed a lot of people." He was speaking from experience: his sister-in-law, who was three
months pregnant, died because no one would admit her to a hospital when she
was having problems with her pregnancy. Irrational fears about the virus, he
believes, caused many of the doctors and nurses to walk off the job in
Monrovia, and turn otherwise healthy patients like his beloved family member away. This fear, he said, was
entirely whipped up by the media who focused too much on conspiracy
theories and pseudoscience and not enough on educating the public about the
virus. He's not the first to observe that the overwrought reactions to this virus had damaging effects. Closer to home, Dr.
Craig Spencer, who became infamous for bowling with Ebola in New York, said much the same thing
in a new piece in the New England Journal of Medicine. He too blames the
media (and self-serving politicians) for stirring fear and hate, unnecessarily
vilifying returning humanitarians like himself despite the fact that we know from
science it would have been almost impossible for him to transmit the virus: After my
diagnosis, the media and politicians could have educated the public about Ebola. Instead, they spent hours
retracing my steps through New York and debating whether Ebola can be
transmitted through a bowling ball. Little attention was devoted to the fact that
the science of disease transmission and the experience in previous Ebola
outbreaks suggested that it was nearly impossible for me to have transmitted the
virus before I had a fever. The media sold hype with flashy headlines: "Ebola: 'The
ISIS of Biological Agents?'"; "Nurses in safety gear got Ebola, why wouldn't you?"; "Ebola in the air? A nightmare that
could happen" and fabricated stories about my personal life and the threat I posed to public health, abdicating
their responsibility for informing public opinion and influencing public policy. We the media and the public
need to absorb this Ebola lesson. It applies to every disease and health issue that
becomes a matter of public concern. We need to emphasize reason not fear, scientific
explanation not conspiracy theory, compassion not derision and hate. People's lives hang in
the balance.
Homer-Dixon provoked a great deal of controversy and concern with his claim that we are
on the threshold of an era in which armed conflicts will arise with increasing frequency as a result of
environmental change.[3] However, in the years since his warning, the search for
evidence behind this claim has provided little support. As Paul Diehl has remarked, the
many publications from the [Toronto] project have produced largely abstract conceptions.
Environmental degradation associated with development (e.g., soil erosion, deforestation and air and water pollution) has little or no significant role
in generating civil or international wars.[5] Detailed cross-national studies have
found only very weak relations between environmental degradation and either
international or domestic armed conflict.[6] In most studies that make an effort to measure the relative
impact of environmental and other causes, environmental factors emerge as less important in
determining the incidence of civil conflict than economic and political factors.[7]
For example, Wenche Hauge and Tanja Ellingsen, in the most comprehensive global test of the environmental-scarcity-leads-to-
violence hypothesis with recent data (1980-92), found that while deforestation, land degradation and
low freshwater availability were positively correlated with the incidence of civil
war and armed conflict, the magnitude of their effects was tiny. By themselves,
these factors raised the probability of civil war by 0.5 to under 1.5 percent.[8] These
factors did have a slightly higher impact on the probability of lesser kinds of armed conflict (causing increases in the chances of
such conflict by from 4 percent to 8 percent); but their influence paled compared to the impact of
such traditional risk factors as poverty, regime type and current and prior
political instability. In addition, Günther Baechler's extensive study of the relationships between environmental
change and violent conflict deliberately sought environmental causes for a wide range of violent conflict events, including authoritarian coups, revolutionary
wars, ethnic wars and genocides. However, after adjusting for the impact of living standards,
the study found that while environmental degradation could be a
DURHAM, N.C. - Leaks from carbon dioxide injected deep underground to help fight
climate change could bubble up into drinking water aquifers near the surface, driving up levels of contaminants
in the water tenfold or more in some places, according to a study by Duke University scientists. Carbon capture and storage (CCS)
from fossil fuel plants has many problems that constrain its ability to be even 10% of the
solution to the climate problem (as discussed here). One of the biggest near-term
problems is cost (see Harvard: Realistic first-generation CCS costs a whopping $150 per ton of CO2, or 20 cents per
kWh!). But public acceptance (aka NIMBY) is also a huge problem, one that is likely to
grow after the publication of this new study, Potential Impacts of Leakage
from Deep CO2 Geosequestration on Overlying Freshwater Aquifer (PDF here). What
kind of contaminants could bubble up into drinking water aquifers: Potentially dangerous uranium and barium increased
throughout the entire experiment in some samples. Here's more of the Duke release on the study that appears in the online
edition of the journal Environmental Science & Technology, at http://pubs.acs.org/doi/abs/10.1021/es102235w: "Storing carbon
dioxide deep below Earth's surface, a process known as geosequestration, is part of a suite of new carbon capture and storage (CCS)
technologies being developed by governments and industries worldwide to reduce the amount of greenhouse gas emissions
entering Earth's atmosphere. The still-evolving technologies are designed to capture and compress CO2 emissions at their source
(typically power plants and other industrial facilities) and transport the CO2 to locations where it can be injected far below the
Earth's surface for long-term storage. The U.S. Department of Energy, working with industry and academia, has begun seven
regional CCS projects. The fear of drinking water contamination from CO2 leaks is one
of several sticking points about CCS and has contributed to local opposition to
it," says Jackson, who directs Duke's Center on Global Change. "We examined the idea that
if CO2 leaked out slowly from deep formations, where might it negatively impact freshwater aquifers near the surface, and why."
Jackson and his colleague Mark G. Little collected core samples from four freshwater aquifers around the nation that overlie
potential CCS sites and incubated the samples in their lab at Duke for a year, with CO2 bubbling through them. "After a
year's exposure to the CO2, analysis of the samples showed that there are a
number of potential sites where CO2 leaks drive contaminants up tenfold or
more, in some cases to levels above the maximum contaminant loads set by the
EPA for potable water," Jackson explains. Three key factors (solid-phase metal
mobility, carbonate buffering capacity and redox state in the overlying
freshwater aquifer) were found to influence the risk of drinking water
contamination from underground carbon leaks. The study also identified four markers that scientists
can use to test for early warnings of potential carbon dioxide leaks. Along with changes in carbonate concentration and acidity of
the water, concentrations of manganese, iron and calcium could all be used as geochemical markers of a leak, as their concentrations
increase within two weeks of exposure to CO2, Jackson explains. The study was funded by the Department of Energy's National
Energy Technology Laboratory and Duke's Center on Global Change. The release notes: Based on incubations of core samples
from four drinking water aquifers, we found the potential for contamination is real, but there are ways to avoid or reduce the risk,
says Robert B. Jackson, Nicholas Professor of Global Environmental Change and professor of biology at Duke. Geologic criteria that
we identified can help determine locations around the country that should be monitored or avoided. I doubt that
[findings like these will stop CCS, but they may] limit the places where sequestration is practical, either because the geology is problematic or the
site is simply too close to the water supply of a large population.
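The Duke finding quoted above (contaminant levels driven up tenfold or more, in some cases past EPA limits) can be illustrated with a small screening sketch. Everything below is illustrative: the baseline concentrations are invented, and while the MCL values used (uranium 30 µg/L, barium 2,000 µg/L) match published EPA limits, they should be checked against the current EPA tables before being relied on.

```python
# Illustrative screen of aquifer samples against EPA maximum contaminant
# levels (MCLs) after a hypothetical tenfold enrichment from a CO2 leak.
# Baseline concentrations are invented; all values are micrograms per liter.

EPA_MCL_UG_PER_L = {"uranium": 30.0, "barium": 2000.0}

def flag_exceedances(baseline_ug_per_l, enrichment=10.0):
    """Return {metal: (enriched, limit)} for metals whose enriched
    concentration exceeds its MCL."""
    flagged = {}
    for metal, conc in baseline_ug_per_l.items():
        enriched = conc * enrichment
        limit = EPA_MCL_UG_PER_L[metal]
        if enriched > limit:
            flagged[metal] = (enriched, limit)
    return flagged

# Hypothetical baseline sample: 5 ug/L uranium, 150 ug/L barium.
print(flag_exceedances({"uranium": 5.0, "barium": 150.0}))
# -> {'uranium': (50.0, 30.0)}  (barium stays below its 2,000 ug/L limit)
```

The point the card is making falls out of the arithmetic: an aquifer can sit comfortably below every limit at baseline and still be pushed over one of them by a tenfold enrichment.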
At the time of writing it is far from clear whether China will move quickly on CCS
demonstration. Much depends on the funding question and wider international climate policy
negotiations. In addition, three other factors will have a significant influence on
the future of CCS in China: the location and adequacy of CO2 storage sites in
China; the development of a regulatory framework to manage the capture,
storage and transport of CO2; and the handling of Intellectual Property Rights
(IPR) for any joint CCS development initiatives with an international
dimension.
3. China says no to bilateral efforts, or status quo policies can solve
Findlay et al 09 (Matthew Findlay, Nick Mabey, Russell Marsh, Shinwei Ng, Shane Tomlinson. May 2009. Carbon Capture
and Storage in China: an E3G Report for Germanwatch, http://germanwatch.org/klima/ccs-china.pdf. The authors are scientists
and foreign policy analysts writing for Germanwatch and E3G // KB)
There is no obvious alternative to CCS as a means to reduce China's emissions from coal on the
scale required to avoid catastrophic climate change. China's official view on CCS is that
developed countries must take the lead in demonstrating CCS and provide a much
stronger framework of incentives for action in developing countries. China is involved in a
number of multilateral and bilateral CCS cooperation initiatives and there are
plans for some small-scale demonstration projects. Chinese companies see CCS as a
potential export opportunity, and China's Ministry of Science and Technology is developing a longer-
term CCS R&D strategy. However the issue of domestic CCS demonstration and
deployment remains sensitive and there is still a good deal of caution among
Chinese policy makers. This is partly because of the energy penalty:
installing CCS reduces a plant's energy efficiency by up to 8-14 percentage points.
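The quoted energy penalty translates directly into extra coal burned per unit of electricity delivered. The sketch below uses assumed numbers for illustration: a plant at 40% net efficiency losing 10 percentage points, inside the 8-14 point range the report cites.

```python
# Translating the CCS "energy penalty" into extra fuel burned per kWh.
# Assumed for illustration: a 40%-efficient coal plant losing 10
# percentage points of efficiency to CCS.

def extra_fuel_fraction(base_eff, penalty_points):
    """Fractional increase in fuel burned per kWh delivered once CCS
    lowers net efficiency by `penalty_points` percentage points."""
    ccs_eff = base_eff - penalty_points / 100.0
    return base_eff / ccs_eff - 1.0

# Dropping from 40% to 30% efficiency means burning about one-third
# more coal for the same electrical output.
print(round(extra_fuel_fraction(0.40, 10) * 100))  # -> 33
```

This is why the card flags the penalty as politically sensitive: a seemingly modest efficiency hit compounds into a large increase in coal consumption per delivered kilowatt-hour.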
Current CCS activity in China is focused on developing a number of small, standalone
demonstration projects to test different elements of the technology, with a particular focus on
pre-combustion (Integrated Gasification Combined Cycle - IGCC) options. Enhanced Oil
Recovery and Enhanced Coal Bed Methane Recovery are also of interest as they provide a
possibility of additional revenue which could offset concerns around the energy penalty. Post-
combustion is a more sensitive issue as this opens up wider questions around retrofitting
existing power stations (a potentially huge and costly undertaking).
the powerful fossil-fuel mindset, and let go of the false sense of optimism that CCS investments provide. We
also need to end the perception that CCS or any specific mix of technologies has the potential
to solve climate change. We need to divest from perpetuating a fossil-fuel
infrastructure, and instead invest in social and technical changes that will help us prepare to be more resilient in an
increasingly unstable and unpredictable future world.
Moreover, a leak rate from underground CO2 storage reservoirs of less than 1% per thousand years is
required for CCS to achieve the same climate benefits as renewable energy sources (3). Before embarking on
projects to inject enormous volumes of CO2 at numerous sites around the world, it is important to note that over time periods of just a few decades, modern seismic networks
have shown that earthquakes occur nearly everywhere in continental interiors. Fig. 1, Upper shows instrumentally
recorded earthquakes in the central and eastern United States and southeastern Canada. Fig. 1, Lower shows instrumentally recorded intraplate earthquakes in south and east Asia (4). The seismicity
catalogs are complete to magnitude (M) 3. The occurrence of these earthquakes means that nearly everywhere in
continental interiors a subset of the preexisting faults in the crust is potentially active in the current
stress field (5, 6). This is sometimes referred to as the critically stressed nature of the brittle crust (7). It should also be noted that despite the overall low rate of earthquake occurrence in
continental interiors, some of the most devastating earthquakes in history occurred in these regions. In eastern China, the M 7.8, 1976 Tangshan earthquake,
approximately 200 km east of Beijing, killed several hundred thousand people. In the central United States, three M 7+ earthquakes in 1811 and 1812 occurred in
the New Madrid seismic zone in southeast Missouri. Because of the critically stressed nature of the crust, fluid injection in deep
wells can trigger earthquakes when the injection increases pore pressure in the vicinity of
preexisting potentially active faults. The increased pore pressure reduces the frictional resistance to
fault slip, allowing elastic energy already stored in the surrounding rocks to be released in
earthquakes that would occur someday as the result of natural geologic processes (8). This effect was first documented in the 1960s in Denver, Colorado when injection into a 3-km-deep well at
the nearby Rocky Mountain Arsenal triggered earthquakes (9). Soon thereafter it was shown experimentally (10) at the Rangely oil field in western Colorado that earthquakes could be turned on and off by
varying the rate at which water was injected and thus modulating reservoir pressure. In 2011 alone, a number of small to moderate earthquakes in the United States seem to have been triggered by
injection of wastewater (11). These include earthquakes near Guy, Arkansas that occurred in February and March, where the largest earthquake was M 4.7. In the Trinidad/Raton area near the border of
Colorado and New Mexico, injection of produced water associated with coal bed methane production seems to have triggered a number of earthquakes, the largest being a M 5.3 event that occurred in
August. Earthquakes seem to have been triggered by wastewater injection near Youngstown, Ohio on Christmas Eve and New Years Eve, the largest of which was M 4.0. Although the risks associated with
wastewater injection are minimal and can be reduced even further with proper planning (11), the situation would be far more problematic if similar-sized earthquakes were triggered in formations intended
to sequester CO2 for hundreds to thousands of years. Deep borehole stress measurements confirm the critically stressed nature of the crust in continental interiors (12), in some cases at sites directly
relevant to the feasibility of large-scale CCS. For example, deep borehole stress measurements at the Mountaineer coal-burning power plant on the Ohio River in West Virginia indicate a severe limitation
on injection rates: in formations at depth, pore pressure increases would be expected to trigger slip on preexisting faults
if CO2 injection rates exceed approximately 1% of the 7 million tons of CO2 emitted by the Mountaineer plant each
year. Similarly, stress measurements at Teapot Dome, Wyoming, the US government-owned oil field where pilot CO2 injection projects have been considered, show that very small pressure buildups are
capable of triggering slip on some preexisting faults (14). Dam construction and water reservoir impoundment produce much smaller pore pressure changes at depth than are likely to occur with CO2
sequestration, but many have triggered earthquakes at various sites around the world (15) (red dots in Fig. 1). Except for the much smaller pore pressure increases at depth, reservoir-triggered earthquakes
are a good analog for the potential for seismicity to be triggered by CO2 injection. Both activities cause pore pressure increases that act over large areas and are persistent for long periods. Three reservoir
impoundments in eastern Canada (located in the ancient, stable core of the North American continent) triggered earthquakes as large as M 4.1 and M 5 at the two sites (Fig. 1), despite the fact that the
pore pressure increases at depth were extremely small. Triggered Earthquakes and Seal Integrity Our principal concern is not that injection associated with CCS projects is likely to trigger large
earthquakes; the problem is that even small to moderate earthquakes threaten the seal integrity of a CO2 repository. In parts of the world with good construction practices, it is unusual for earthquakes
smaller than approximately M 6 to cause significant human harm or property damage. Fig. 2 uses well established seismological relationships to show how the magnitude of an earthquake is related to the
size of the fault that slipped and the amount of fault slip that occurred (16). As shown, faults capable of producing M 6 earthquakes are at least tens of kilometers in extent. (The fault size indicated along
the abscissa is a lower bound of fault size as it refers to the size of the fault segment that slips in a given earthquake. The fault on which an earthquake occurs is larger than the part of the fault that slips in
an individual event.) In most cases, such faults should be easily identified during geophysical site characterization studies and thus should be avoided at any site chosen for a CO2 repository. (Faults in
crystalline basement rocks might be difficult to recognize in geophysical data. We assume, however, that any site chosen as a potential CO2 repository would be carefully selected, avoiding the possibility of
pressure changes in the CO2 repository from affecting faults in crystalline basement.) The problem is that site characterization studies can easily miss the much smaller faults associated with small to
moderate earthquakes. Although the ground shaking from small- to moderate-sized earthquakes is inconsequential, their impact on a CO2 repository would not be. Most of the geologic formations to be
used for long term storage of CO2 are likely to be at depths of approximately 2 kmdeep enough for there to be adequate sealing formations to isolate the CO2 from the biosphere but not so deep as to
encounter formations with very low permeability. Given large volumes of CO2 injected into selected formations for many
years, even a small earthquake producing several centimeters of slip would be capable of creating a permeable hydraulic pathway that could compromise the seal integrity of the
CO2 reservoir and potentially reach the near surface. Safe Sequestration It is important to emphasize that CCS can be valuable and useful for reducing
greenhouse gas emissions in specific situations. A good example is the injection of CO2 into the Utsira formation (19) at the Sleipner gas field in the North Sea, where a significant amount of CO2 is coproduced
with natural gas. After separating the CO2 from the produced gas, approximately 1 million tons of CO2 per year has been injected over the past 15 y without triggering seismicity. Assuming isolation from the near
surface, injection into highly porous and permeable reservoirs that are laterally extensive would produce small increases in pressure in response to CO2 injection. Moreover, weak, poorly cemented sandstones are
expected to deform slowly in response to applied geologic forces. In such reservoirs, the stresses relax over time, and such formations are not prone to faulting (20). In this regard, the Utsira formation is ideal for
CO2 sequestration. It is isolated from vertical migration by impermeable shale formations, and it is highly porous, permeable, laterally extensive, and weakly cemented. To contribute significantly to greenhouse
gas emission reductions (2), roughly 3,500 sites similar to the Utsira formation would have to be found at convenient locations around the world, assuming comparable injection rates of approximately 1 million
tons of CO2 per year. In fact, it would take approximately 85 such sites coming on line each year to reach a goal of storing approximately 1 billion tons of CO2 by midcentury. Clearly this is an extraordinarily
difficult, if not impossible, task if only highly porous and permeable and weakly cemented formations are to be used. Of course, rather than using potentially problematic geologic formations close to coal-burning
power plants for sequestration (as illustrated by the Mountaineer case study cited above), relatively ideal formations for CO2 storage could be sought on a regional basis to accommodate emissions from a number
of plants. One example of this is the potential use of the Mt. Simon sandstone in the Illinois basin. The Mt. Simon is porous, permeable, and regionally extensive. However, models of injection of 100 million tons of
CO2 per year for 40 y predict (21) increases in pore pressure of several megapascals over a region of 40,000 km2. The approximate area of significantly increased pore pressure resulting from injection is shown
as the blue-shaded area in Fig. 3, essentially adjacent to the Wabash fault zone, where a series of moderate natural earthquakes occurred in the spring of 2008, the largest being M 5.2. Paleoseismic data indicate
the occurrence of much larger nearby earthquakes (some greater than M 7) in the recent geologic past (22). Importantly, the 100 million ton annual CO2 injection rate used in the modeling only represents
approximately one seventh of the CO2 generated by the coal-burning power plants in the Ohio River Valley alone. Because of the need to carefully monitor CO2 repositories with observation wells, geophysical and
geochemical monitoring systems, etc., it is likely that most sites will have to be located on land or very near shore. Otherwise, highly porous reservoirs located offshore, like those adjacent to salt domes along the
US Gulf Coast, would be relatively ideal sites because salt formations are known to be excellent seals for hydrocarbons. Depleted oil and gas reservoirs are potentially suitable for CO2 storage for a variety of
reasons: an infrastructure of wells and pipelines exists, and there is a great deal of geologic and subsurface property data available to characterize the subsurface from decades of study. In addition, from an
earthquake-triggering perspective, depleted reservoirs are attractive because at the time injection of CO2 might start, the pore pressure would be below the value that existed before petroleum production. Thus,
there could be significant injection of CO2 before pressures increase to preproduction values, thereby reducing the potential for triggering earthquakes. There are a number of potential issues to consider before
using depleted oil and gas reservoirs for CO2 storage, the most important of which are capacity and geographic distribution. The reasons that there is such interest in using saline aquifers for CO2 storage is that
they are potentially well distributed with respect to likely sources of CO2, and they could presumably accommodate the enormous volumes of CO2 that need to be stored. If one were only to consider the United
States, storing the 2.1 billion tons of CO2 currently generated annually by coal burning power plants in depleted oil and gas reservoirs would require injection of CO2 at a rate of approximately 17 billion barrels per
year; a rate equivalent to eight times current US annual oil production and more than four times US peak annual oil production that occurred in the early 1970s. In addition, it is important to make sure that
production-related activities, such as water flooding during secondary recovery, did not compromise the seal capacity of the reservoirs. There also needs to be careful study of the wells in the depleted oil or gas
field to make sure that poorly cemented well casings, especially in older wells, will not be pathways for release of stored CO2 (23). Finally, there are likely to be complicated legal questions concerning ownership
and liability that will need to be worked out on a case-by-case basis. Although enhanced oil recovery (EOR) using CO2 (in which CO2 is injected to dissolve in oil and reduce its viscosity) would be a beneficial use of
CO2, it is important not to confuse this with CCS. In CCS the goal is to inject large quantities of CO2 into available pore space and store it there for hundreds to thousands of years. When CO2 is used for EOR, the
CO2 dissolved in the oil is separated and captured from produced oil and then reinjected. Thus, smaller volumes of CO2 are used, and the long-term storage capacity of the reservoir is not an issue. Many CCS
research projects are currently underway around the world. Much of this work involves characterization and testing of potential storage formations and includes a number of small-scale pilot injection projects.
Because the storage capacity/pressure build-up issue is critical to assess the potential for triggered seismicity, small-scale pilot injection projects do not reflect how pressures are likely to change (increase) once
full-scale injection is implemented. Moreover, even though limitations on pressure build-up are among the many factors that are evaluated when potential formations are considered as sequestration sites, this is
usually done in the context of not allowing pressures to exceed the pressure at which hydraulic fractures would be initiated in the storage formation or caprock. In the context of a critically stressed crust, slip on
preexisting, unidentified faults could trigger small- to moderate-sized earthquakes at pressures far below that at which hydraulic fractures would form. As mentioned above, sequences of small to moderate
earthquakes were apparently induced by injection of waste water near Guy, Arkansas, Trinidad, Colorado, and Youngstown, Ohio in 2011 and at the Dallas-Ft. Worth airport, Texas. Although these earthquakes
caused little damage, had events of this size occurred where CO2 was being injected, the impacts would have raised pressing and important questions: Had the seal
been breached? Was it still safe to leave previously injected CO2 in place? In summary, multiple lines of
evidence indicate that preexisting faults found in brittle rocks almost everywhere in the earth's crust
are subject to failure, often in response to very small increases in pore pressure. In light of the risk posed to a CO2 repository by
even small- to moderate-sized earthquakes, formations suitable for large-scale injection of CO2 must be carefully chosen. In addition to being well sealed by impermeable overlaying strata, they should also be
weakly cemented (so as not to fail through brittle faulting) and porous, permeable, and laterally extensive to accommodate large volumes of CO2 with minimal pressure increases. Thus, the issue is not whether
CO2 can be safely stored at a given site; the issue is whether the capacity exists for sufficient volumes of CO2 to be stored geologically for it to have the desired beneficial effect on climate change. In this
context, it must be recognized that large-scale CCS will be an extremely expensive and risky strategy for
achieving significant reductions in greenhouse gas emissions.
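The storage-scale arithmetic in the Zoback and Gorelick excerpt (roughly 17 billion barrels per year, 3,500 Utsira-scale sites, 85 new sites per year) can be reproduced as a back-of-envelope check. The CO2 density at reservoir conditions (~0.75 t/m³) and the barrel volume are assumed round numbers, not figures from the article.

```python
# Back-of-envelope check on the storage-scale figures in the excerpt.
# Assumed (not from the article): supercritical CO2 density at reservoir
# conditions of about 0.75 t/m3; one oil barrel = 0.158987 m3.

TONNES_PER_M3 = 0.75
M3_PER_BARREL = 0.158987

def annual_barrels(tonnes_co2):
    """Volume, in barrels, occupied by a given annual CO2 tonnage."""
    return tonnes_co2 / TONNES_PER_M3 / M3_PER_BARREL

# 2.1 billion tons/yr from US coal plants -> roughly 17-18 billion
# barrels/yr, in line with the ~17 billion figure quoted above.
print(annual_barrels(2.1e9) / 1e9)

# 3,500 Utsira-scale sites at ~1 Mt/yr each store ~3.5 Gt/yr, and at
# 85 new sites per year it takes about 41 years to build them all out.
print(3500 * 1e6 / 1e9, 3500 / 85)
```

Under these assumptions the quoted comparison holds: the required injection volume is several times current US oil production, which is the scale problem the authors emphasize.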